You are facing an XY, and even Z, problem here, but each part may have a useful answer, so let's dig in.
X. Do not use the canvas API to perform image format conversion.
The canvas API is lossy. Whatever you do, you will lose information from your original image: even if you pass it lossless images, the image drawn on the canvas will not be the same as the original.
If you pass an already lossy format like JPEG, it will even add information that was not in the original image: the compression artifacts are now part of the raw bitmap, and the export algorithm will treat them as information it should keep, probably making your file bigger than the JPEG you fed it with.
Not knowing your use case, it's a bit hard to give you the perfect advice, but generally, generate the different formats from the version closest to the raw image; by the time it's painted in a browser, you are already at least three steps too late.
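If you want to see this for yourself, here is a minimal sketch (the function name and URL are only illustrative) that fetches a JPEG, redraws it on a canvas, re-exports it and compares the sizes:
async function compareSizes(url) {
  // fetch the original file and decode it
  const originalBlob = await fetch(url).then(r => r.blob());
  const bmp = await createImageBitmap(originalBlob);
  // draw it on a canvas and re-encode it
  const canvas = document.createElement('canvas');
  canvas.width = bmp.width;
  canvas.height = bmp.height;
  canvas.getContext('2d').drawImage(bmp, 0, 0);
  const reEncoded = await new Promise(res => canvas.toBlob(res, 'image/png'));
  console.log('original JPEG:', originalBlob.size, 'bytes');
  console.log('canvas re-export:', reEncoded.size, 'bytes'); // generally much bigger
}
compareSizes('some-image.jpg'); // placeholder URL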
Now, if you do some processing on this image, you may indeed want to export the results.
But you probably don't need this Web Worker here.
Y. What takes the biggest blocking time in your description is most probably the synchronous toDataURL() call.
Instead of this historical mistake in the API, you should always use the asynchronous, and nonetheless more performant, toBlob() method. In 99% of cases you don't need a data URL anyway; almost everything you would do with a data URL can be done with a Blob directly.
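For instance, the common pattern of setting a data URL as an image's source can be replaced with a Blob and an object URL; a minimal sketch (preview here is a hypothetical <img> element):
// instead of the blocking
//   preview.src = canvas.toDataURL('image/webp');
// prefer the asynchronous
canvas.toBlob(blob => {
  preview.src = URL.createObjectURL(blob);
}, 'image/webp');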
Using this method, the only heavy synchronous operation left is the painting on the canvas, and unless you are downsizing some huge images, this should not take the 400 ms you mention.
But you can make it even better on newer browsers thanks to the createImageBitmap() method, which lets you prepare your image asynchronously so that its decoding is already complete and all that remains is really just a put-pixels operation:
large.onclick = e => process('https://upload.wikimedia.org/wikipedia/commons/c/cf/Black_hole_-_Messier_87.jpg');
medium.onclick = e => process('https://upload.wikimedia.org/wikipedia/commons/thumb/c/cf/Black_hole_-_Messier_87.jpg/1280px-Black_hole_-_Messier_87.jpg');
function process(url) {
  convertToWebp(url)
    .then(prepareDownload)
    .catch(console.error);
}
async function convertToWebp(url) {
  if (!supportWebpExport())
    console.warn("your browser doesn't support webp export, will default to png");

  let img = await loadImage(url);
  if (typeof window.createImageBitmap === 'function') {
    // decode the image ahead of time so drawImage() only has to copy pixels
    img = await createImageBitmap(img);
  }
  const ctx = get2DContext(img.width, img.height);
  console.time('only sync part');
  ctx.drawImage(img, 0, 0);
  console.timeEnd('only sync part');
  return new Promise((res, rej) => {
    ctx.canvas.toBlob(blob => {
      // toBlob passes null when the canvas cannot be exported
      blob ? res(blob) : rej(ctx.canvas);
    }, 'image/webp');
  });
}
// some helpers
function loadImage(url) {
  return new Promise((res, rej) => {
    const img = new Image();
    img.crossOrigin = 'anonymous';
    img.src = url;
    img.onload = e => res(img);
    img.onerror = rej;
  });
}
function get2DContext(width = 300, height = 150) {
  return Object.assign(
    document.createElement('canvas'),
    { width, height }
  ).getContext('2d');
}
function prepareDownload(blob) {
  const a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = 'image.' + blob.type.replace('image/', '');
  a.textContent = 'download';
  document.body.append(a);
}
function supportWebpExport() {
  return get2DContext(1, 1).canvas
    .toDataURL('image/webp')
    .indexOf('image/webp') > -1;
}
<button id="large">convert large image (7,416 × 4,320 pixels)</button>
<button id="medium">convert medium image (1,280 × 746 pixels)</button>
Z. To draw an image on an OffscreenCanvas from a Web Worker, you will need the createImageBitmap method mentioned above. Indeed, the ImageBitmap object this method produces is the only image source value accepted by drawImage() and texImage2D()(*) that is available in Workers (all the others being DOM elements).
This ImageBitmap is transferable, so you can generate it on the main thread and then send it to your Worker at no memory cost:
main.js
const img = new Image();
img.onload = e => {
  createImageBitmap(img).then(bmp => {
    // transfer it to your worker
    worker.postMessage(
      { image: bmp }, // the key to retrieve it in `event.data`
      [bmp]           // transfer it instead of copying it
    );
  });
};
img.src = url;
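On the Worker's side, the receiving end could look like this (a minimal sketch; the OffscreenCanvas setup is my assumption, adapt it to your own code):
worker.js
// set up an OffscreenCanvas to draw on (initial dimensions are placeholders)
const ctx = new OffscreenCanvas(300, 150).getContext('2d');
self.onmessage = e => {
  const bmp = e.data.image;       // the transferred ImageBitmap
  ctx.canvas.width = bmp.width;   // resize the canvas to the image
  ctx.canvas.height = bmp.height;
  ctx.drawImage(bmp, 0, 0);
  // from here you could e.g. call ctx.canvas.convertToBlob({ type: 'image/webp' })
};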
Another solution is to fetch your image's data from the Worker directly, and to generate the ImageBitmap from the fetched Blob:
worker.js
// inside an async function; ctx is the OffscreenCanvas 2D context created as above
const blob = await fetch(url).then(r => r.blob());
const img = await createImageBitmap(blob);
ctx.drawImage(img, 0, 0);
And note that if you get the original image in your main page as a Blob (e.g. from an <input type="file">), then don't even go through an HTMLImageElement, nor through fetching: send this Blob directly to the Worker and generate the ImageBitmap from it.
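That could look like this (a minimal sketch; fileInput is a hypothetical <input type="file"> element and worker an already created Worker):
main.js
fileInput.onchange = e => {
  const file = e.target.files[0];      // already a Blob
  worker.postMessage({ image: file }); // Blobs can be posted to a Worker directly
};
worker.js
self.onmessage = async e => {
  const bmp = await createImageBitmap(e.data.image);
  // draw it on the OffscreenCanvas as shown above
};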
*texImage2D() actually accepts more source image formats, such as TypedArrays and ImageData objects, but those TypedArrays have to represent the pixel data, just like an ImageData does, and to get that pixel data you probably need to have already drawn the image somewhere using one of the other image source formats.