Image compression and conversion are handled by Web Workers running WebAssembly-compiled codecs from the jSquash library. When you select an image, we read it as an ArrayBuffer using the File API, transfer it to a background worker thread, decode it with the appropriate codec (JPEG, PNG, or WebP), and re-encode it with your chosen format and quality settings. The encoded result is sent back to the main thread as a Blob, from which we create an object URL for the download link and preview.
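The key detail in this hand-off is that the ArrayBuffer is *transferred*, not copied: ownership of the underlying bytes moves to the worker, and the sender's buffer is left detached. A minimal sketch of that semantics — the worker file name and message shape in the comments are illustrative, not this app's actual code:

```javascript
// Main-thread side of the hand-off (browser-only, shown as comments):
//
//   const worker = new Worker('codec.worker.js');          // hypothetical file
//   const buffer = await file.arrayBuffer();               // File API read
//   worker.postMessage({ buffer, format, quality }, [buffer]); // zero-copy transfer
//
// structuredClone accepts the same transfer list, so the detach behavior
// can be demonstrated outside a browser:
const buffer = new ArrayBuffer(16);
const moved = structuredClone(buffer, { transfer: [buffer] });

console.log(moved.byteLength);  // 16 — the bytes now live in the clone
console.log(buffer.byteLength); // 0  — the original is detached
```

Transferring avoids copying what may be a multi-megabyte image between threads; the trade-off is that the main thread can no longer touch the original buffer.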
Image cropping uses the Canvas API on the main thread. Your image is drawn onto an HTML canvas element, and when you define a crop region with the interactive handles, we use canvas.toBlob() to export just the selected portion. Aspect-ratio constraints, handle dragging, and overlay rendering are all handled in real time using pointer events and canvas drawing operations.
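The core of a crop export is pure geometry plus one drawImage call. A sketch under stated assumptions — `clampCrop` is an illustrative helper, not this app's actual code; the browser-only export step is shown in comments:

```javascript
// Illustrative helper: clamp a crop rectangle so it stays inside the image.
function clampCrop({ x, y, width, height }, imgWidth, imgHeight) {
  const w = Math.min(width, imgWidth);
  const h = Math.min(height, imgHeight);
  return {
    x: Math.min(Math.max(x, 0), imgWidth - w),
    y: Math.min(Math.max(y, 0), imgHeight - h),
    width: w,
    height: h,
  };
}

// In the browser, the clamped region would then be exported via a second
// canvas sized to the crop (sketch):
//
//   const out = document.createElement('canvas');
//   out.width = crop.width;
//   out.height = crop.height;
//   out.getContext('2d').drawImage(
//     img,
//     crop.x, crop.y, crop.width, crop.height, // source rect in the image
//     0, 0, crop.width, crop.height            // destination rect on the canvas
//   );
//   out.toBlob((blob) => { /* download link / preview */ }, 'image/png');

const crop = clampCrop({ x: -10, y: 50, width: 200, height: 200 }, 800, 600);
console.log(crop); // → { x: 0, y: 50, width: 200, height: 200 }
```

Clamping in one place keeps the drag handlers simple: they can report raw pointer deltas, and the export path never receives a rectangle that falls outside the image.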
Image merging also uses the Canvas API. We calculate the total canvas dimensions based on your chosen layout (horizontal, vertical, or grid), gap size, and the dimensions of each input image. Each image is drawn onto the canvas at its computed position, and the final result is exported as a PNG Blob.
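The dimension math for a merge is straightforward to sketch. The helper below is illustrative (not this app's actual code) and covers the horizontal case: total width is the sum of image widths plus the gaps between them, total height is the tallest image.

```javascript
// Illustrative layout helper for a horizontal merge.
// sizes: [{ width, height }, ...], gap: pixels between images.
function horizontalLayout(sizes, gap) {
  const width =
    sizes.reduce((sum, s) => sum + s.width, 0) + gap * (sizes.length - 1);
  const height = Math.max(...sizes.map((s) => s.height));
  let x = 0;
  const positions = sizes.map((s) => {
    const pos = { x, y: 0 };
    x += s.width + gap; // next image starts after this one plus the gap
    return pos;
  });
  return { width, height, positions };
}

// In the browser, each image would then be drawn at its slot (sketch):
//
//   const canvas = document.createElement('canvas');
//   canvas.width = layout.width;
//   canvas.height = layout.height;
//   const ctx = canvas.getContext('2d');
//   images.forEach((img, i) =>
//     ctx.drawImage(img, layout.positions[i].x, layout.positions[i].y));
//   canvas.toBlob((blob) => { /* PNG result */ }, 'image/png');

const layout = horizontalLayout(
  [{ width: 100, height: 80 }, { width: 150, height: 120 }],
  10
);
console.log(layout.width, layout.height); // 260 120
```

The vertical case is the transpose of this, and the grid case computes per-row and per-column maxima before placing images; the canvas-drawing step is identical for all three.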
The application itself is built with Next.js 16 and React and compiled as a fully static site. There is no backend server, no API endpoints, no database, and no server-side rendering. The HTML, CSS, and JavaScript files are served as static assets from a CDN, and after the initial page load, every interaction happens entirely within your browser. When you close the tab, all processed image data is released from memory.
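Next.js supports this deployment model through its static-export mode. A minimal configuration of the kind such a setup relies on — illustrative, since the project's actual config file is not shown here:

```javascript
// next.config.js — minimal static-export setup (illustrative sketch).
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',              // emit plain HTML/CSS/JS — no Node server needed
  images: { unoptimized: true }, // next/image's optimizer requires a server
};

module.exports = nextConfig;
```

With `output: 'export'`, `next build` writes the finished site to a directory of static files, which is what allows the whole application to be hosted on a CDN with no runtime backend.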