HTML5 Canvas Resize (Downscale) Image High Quality?
I use HTML5 canvas elements to resize images in my browser. It turns out that the quality is very low. I found this: Disable Interpolation when Scaling a &lt;canvas&gt;, but it does not help to increase the quality.

Below is my CSS and JS code, as well as the image scaled with Photoshop and scaled with the canvas API.

What do I have to do to get optimal quality when scaling an image in the browser?

Note: I want to scale down a large image to a small one, modify color in a canvas and send the result from the canvas to the server.
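To make the goal concrete, here is a minimal sketch of that pipeline (not from the question): `fitWithin` is a hypothetical helper that computes the largest size fitting a bounding box while keeping the aspect ratio, and `/upload` and `large.jpg` are placeholder names. The browser-only part is guarded so the helper can be tested on its own.

```javascript
// Hypothetical helper: largest size that fits inside maxWidth x maxHeight
// while keeping the source aspect ratio.
function fitWithin(srcWidth, srcHeight, maxWidth, maxHeight) {
    var ratio = Math.min(maxWidth / srcWidth, maxHeight / srcHeight);
    return {
        width: Math.round(srcWidth * ratio),
        height: Math.round(srcHeight * ratio)
    };
}

// Browser-only part of the pipeline: scale, recolor, send to the server.
if (typeof document !== 'undefined') {
    var img = new Image();
    img.onload = function () {
        var size = fitWithin(img.width, img.height, 379, 500);
        var canvas = document.createElement('canvas');
        canvas.width = size.width;
        canvas.height = size.height;
        var ctx = canvas.getContext('2d');
        ctx.drawImage(img, 0, 0, size.width, size.height);
        // ... modify colors via ctx.getImageData / ctx.putImageData here ...
        // Then upload; '/upload' is a placeholder endpoint.
        canvas.toBlob(function (blob) {
            var form = new FormData();
            form.append('image', blob, 'scaled.png');
            fetch('/upload', { method: 'POST', body: form });
        }, 'image/png');
    };
    img.src = 'large.jpg'; // placeholder source
}
```

Note that the canvas element must be given the target dimensions explicitly; drawing into a default-sized (300x150) canvas is itself a common source of blurry results.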

CSS:

canvas, img {
    image-rendering: optimizeQuality;
    image-rendering: -moz-crisp-edges;
    image-rendering: -webkit-optimize-contrast;
    image-rendering: optimize-contrast;
    -ms-interpolation-mode: nearest-neighbor;
}

JS:

var $img = $('<img>');
var $originalCanvas = $('<canvas>');
$img.load(function() {


   var originalContext = $originalCanvas[0].getContext('2d');   
   originalContext.imageSmoothingEnabled = false;
   originalContext.webkitImageSmoothingEnabled = false;
   originalContext.mozImageSmoothingEnabled = false;
   originalContext.drawImage(this, 0, 0, 379, 500);
});

The image resized with Photoshop:

[image]

The image resized on canvas:

[image]

Edit:

I tried to do the downscaling in more than one step, as proposed in:

Resizing an image in an HTML5 canvas and Html5 canvas drawImage: how to apply antialiasing

This is the function I have used:

function resizeCanvasImage(img, canvas, maxWidth, maxHeight) {
    var imgWidth = img.width, 
        imgHeight = img.height;

    var ratio = 1, ratio1 = 1, ratio2 = 1;
    ratio1 = maxWidth / imgWidth;
    ratio2 = maxHeight / imgHeight;

    // Use the smaller ratio so the image best fits into the maxWidth x maxHeight box.
    if (ratio1 < ratio2) {
        ratio = ratio1;
    }
    else {
        ratio = ratio2;
    }

    var canvasContext = canvas.getContext("2d");
    var canvasCopy = document.createElement("canvas");
    var copyContext = canvasCopy.getContext("2d");
    var canvasCopy2 = document.createElement("canvas");
    var copyContext2 = canvasCopy2.getContext("2d");
    canvasCopy.width = imgWidth;
    canvasCopy.height = imgHeight;  
    copyContext.drawImage(img, 0, 0);

    // init
    canvasCopy2.width = imgWidth;
    canvasCopy2.height = imgHeight;        
    copyContext2.drawImage(canvasCopy, 0, 0, canvasCopy.width, canvasCopy.height, 0, 0, canvasCopy2.width, canvasCopy2.height);


    var rounds = 2;
    var roundRatio = ratio * rounds;
    for (var i = 1; i <= rounds; i++) {
        console.log("Step: "+i);

        // tmp
        canvasCopy.width = imgWidth * roundRatio / i;
        canvasCopy.height = imgHeight * roundRatio / i;

        copyContext.drawImage(canvasCopy2, 0, 0, canvasCopy2.width, canvasCopy2.height, 0, 0, canvasCopy.width, canvasCopy.height);

        // copy back
        canvasCopy2.width = imgWidth * roundRatio / i;
        canvasCopy2.height = imgHeight * roundRatio / i;
        copyContext2.drawImage(canvasCopy, 0, 0, canvasCopy.width, canvasCopy.height, 0, 0, canvasCopy2.width, canvasCopy2.height);

    } // end for


    // copy back to canvas
    canvas.width = imgWidth * roundRatio / rounds;
    canvas.height = imgHeight * roundRatio / rounds;
    canvasContext.drawImage(canvasCopy2, 0, 0, canvasCopy2.width, canvasCopy2.height, 0, 0, canvas.width, canvas.height);


}

Here is the result if I use a 2-step downsizing:

[image]

Here is the result if I use a 3-step downsizing:

[image]

Here is the result if I use a 4-step downsizing:

[image]

Here is the result if I use a 20-step downsizing:

[image]

Note: It turns out that from 1 step to 2 steps there is a large improvement in image quality, but the more steps you add to the process, the fuzzier the image becomes.

Is there a way to solve the problem that the image gets fuzzier the more steps you add?

Edit 2013-10-04: I tried GameAlchemist's algorithm. Here is the result compared to Photoshop.

Photoshop:

[image]

GameAlchemist's algorithm:

[image]

Liaoyang answered 20/9, 2013 at 17:48 Comment(25)
You might try incrementally scaling your image: #18761904 – Hectoliter
Possible duplicate of Html5 canvas drawImage: how to apply antialiasing. See if that works. If images are large and reduced to a small size, you will need to do it in steps (see example images in link) – Maigre
You've got image-rendering set to "optimizeSpeed". That sounds like it will tend to give an uglier resize. Have you tried setting it to "optimizeQuality"? – Augustaaugustan
You also would want to keep image-smoothing enabled for canvas. – Maigre
@Ken-AbdiasSoftware I want to have the best quality available. – Liaoyang
@ScottMermelstein Good point, but it does not solve the problem. – Liaoyang
@confile Turning off interpolation will make it worse. You want to keep that enabled. Look at the link I provided above. I show there how to use steps to scale down larger images and keep quality. And as Scott says, you want to prioritize quality over speed. – Maigre
@Ken-AbdiasSoftware I tried your approach, but the problem is that it gets worse the more rounds I use for the step-wise scaling. Any idea how to fix that? – Liaoyang
@Ken-AbdiasSoftware I updated my post. The image gets fuzzier the more rounds I use for downsizing. Any idea how to solve that? – Liaoyang
@ScottMermelstein I updated my post. The image gets fuzzier the more rounds I use for downsizing. Any idea how to solve that? – Liaoyang
@confile Did you leave smoothing on? Can you share the original image that you try to scale down? (e.g. imgur.com) – Maigre
@Ken-AbdiasSoftware Here is the image: imgur.com/DR94LKg Smoothing on: I did not set any CSS on the canvas or any imageSmoothingEnabled values. – Liaoyang
More steps are only needed if you need to reduce to a very small size from a large size. You seem to enlarge the image after it has been reduced - that won't work, nor will reducing in many small steps. See e.g. here, where reduced to a smaller size the result is good: jsfiddle.net/AbdiasSoftware/4skeg – Maigre
@Ken-AbdiasSoftware Can you please fix my code in an answer? – Liaoyang
Related: #2304190 – Palaeontography
@Palaeontography Your link does not help to get a better quality. – Liaoyang
Surely the chances of replicating the functionality of expensive professional photo editing software using HTML5 are pretty slim? You can probably get near(ish), but exactly as it works in Photoshop I'd imagine would be impossible! – Pfeffer
Yes, I think your expectations are a little high. There are a lot of steps involved in scaling an image while maintaining good quality, and even more for something like Photoshop's level of quality. – Louisalouisburg
Why use the canvas to resize images? Modern browsers all use bicubic interpolation — the same process used by Photoshop (if you're doing it right) — and they do it faster than the canvas process. Just specify the image size you want (use only one dimension, height or width, to resize proportionally). – Superfluous
@Superfluous Can you please post an answer with your idea? It sounds great. Can you also say something about browser support? – Liaoyang
Is this answer helpful? #2304190 – Appetite
People used to do several resizes in Photoshop too, claiming they got better results than a one-pass bicubic. I never saw much improvement to show for a 5-10X increase in CPU and labor costs, and I was a pro photographer before I took up programming. An N-step scale can avoid moiré patterns, but it's also usually a softer result: good for faces, bad for landscape/product shots. You don't know how the image will be displayed and what processing the viewer will apply anyway, so your efforts could make for a worse result. Regular re-sizing should be good enough for all but printed enlargements. – Hallagan
@Hallagan This does not help. Look at the first two images I have posted. The one resized with the canvas has such bad quality you cannot even use it for screen. – Liaoyang
@confile: They look about the same. Both are perfectly acceptable under most conditions. I doubt most people put as much focus or consideration on the images as you. Of the two random office mates I asked, both actually preferred the canvas re-sized image you bashed... – Hallagan
Why do I end up with such a larger file then? Going from 168Kb to 610MB?? – Squeak

Since your problem is to downscale your image, there is no point in talking about interpolation -which is about creating pixels-. The issue here is downsampling.

To downsample an image, we need to turn each square of p * p pixels in the original image into a single pixel in the destination image.

For performance reasons, browsers do a very simple downsampling: to build the smaller image, they just pick ONE pixel in the source and use its value for the destination, which 'forgets' some details and adds noise.

Yet there's an exception to that: since 2X image downsampling is very simple to compute (average 4 pixels to make one) and is used for retina/HiDPI displays, this case is handled properly -the browser does make use of 4 pixels to make one-.

BUT... if you apply a 2X downsampling several times, you'll face the issue that the successive rounding errors add too much noise.
What's worse, you won't always resize by a power of two, and resizing to the nearest power plus one last resize is very noisy.
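The repeated-halving idea can be sketched as follows; `halvingWidths` is a hypothetical helper (not from the answer) that lists the intermediate widths produced by halving until a full 2X step would overshoot, followed by one last exact resize:

```javascript
// Hypothetical helper: intermediate widths for a step-down resize plan.
// Halve while a full 2X step still lands above the target, then finish
// with one last (non-power-of-two) resize to the exact target width.
function halvingWidths(srcWidth, targetWidth) {
    var widths = [];
    var w = srcWidth;
    while (w / 2 > targetWidth) {
        w = Math.round(w / 2);
        widths.push(w);
    }
    if (w !== targetWidth) widths.push(targetWidth);
    return widths;
}

// In a browser, each entry would be one drawImage() pass through a
// temporary canvas; e.g. halvingWidths(1600, 200) -> [800, 400, 200]
```

As the answer explains, each pass except the 2X halvings introduces rounding error, which is why a pixel-perfect single pass is preferable.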

What you seek is a pixel-perfect downsampling, that is: a re-sampling of the image that takes all input pixels into account -whatever the scale-.
To do that we must compute, for each input pixel, its contribution to one, two, or four destination pixels, depending on whether the scaled projection of the input pixel is entirely inside a destination pixel, overlaps an X border, a Y border, or both.
(A scheme would be nice here, but I don't have one.)

Here's an example of canvas scale vs my pixel-perfect scale on a 1/3 scale of a wombat.

Notice that the picture might get scaled in your browser, and is JPEG-compressed by S.O.
Yet we see that there's much less noise, especially in the grass behind the wombat and in the branches on its right. The noise in the fur makes it more contrasted, but it looks like it has white hairs -unlike the source picture-.
The right image is less catchy but definitely nicer.

[image]

Here's the code to do the pixel-perfect downscaling:

fiddle result : http://jsfiddle.net/gamealchemist/r6aVp/embedded/result/
fiddle itself : http://jsfiddle.net/gamealchemist/r6aVp/

// scales the image by (float) scale < 1
// returns a canvas containing the scaled image.
function downScaleImage(img, scale) {
    var imgCV = document.createElement('canvas');
    imgCV.width = img.width;
    imgCV.height = img.height;
    var imgCtx = imgCV.getContext('2d');
    imgCtx.drawImage(img, 0, 0);
    return downScaleCanvas(imgCV, scale);
}

// scales the canvas by (float) scale < 1
// returns a new canvas containing the scaled image.
function downScaleCanvas(cv, scale) {
    if (!(scale < 1) || !(scale > 0)) throw new Error('scale must be a positive number < 1');
    var sqScale = scale * scale; // square scale = area of source pixel within target
    var sw = cv.width; // source image width
    var sh = cv.height; // source image height
    var tw = Math.floor(sw * scale); // target image width
    var th = Math.floor(sh * scale); // target image height
    var sx = 0, sy = 0, sIndex = 0; // source x,y, index within source array
    var tx = 0, ty = 0, yIndex = 0, tIndex = 0; // target x,y, x,y index within target array
    var tX = 0, tY = 0; // rounded tx, ty
    var w = 0, nw = 0, wx = 0, nwx = 0, wy = 0, nwy = 0; // weight / next weight x / y
    // weight is weight of current source point within target.
    // next weight is weight of current source point within next target's point.
    var crossX = false; // does scaled px cross its current px right border ?
    var crossY = false; // does scaled px cross its current px bottom border ?
    var sBuffer = cv.getContext('2d').getImageData(0, 0, sw, sh).data; // source buffer 8 bit rgba
    var tBuffer = new Float32Array(3 * tw * th); // target buffer Float32 rgb
    var sR = 0, sG = 0,  sB = 0; // source's current point r,g,b
    /* untested !
    var sA = 0;  //source alpha  */    

    for (sy = 0; sy < sh; sy++) {
        ty = sy * scale; // y src position within target
        tY = 0 | ty;     // rounded : target pixel's y
        yIndex = 3 * tY * tw;  // line index within target array
        crossY = (tY != (0 | ty + scale)); 
        if (crossY) { // if pixel is crossing botton target pixel
            wy = (tY + 1 - ty); // weight of point within target pixel
            nwy = (ty + scale - tY - 1); // ... within y+1 target pixel
        }
        for (sx = 0; sx < sw; sx++, sIndex += 4) {
            tx = sx * scale; // x src position within target
            tX = 0 |  tx;    // rounded : target pixel's x
            tIndex = yIndex + tX * 3; // target pixel index within target array
            crossX = (tX != (0 | tx + scale));
            if (crossX) { // if pixel is crossing target pixel's right
                wx = (tX + 1 - tx); // weight of point within target pixel
                nwx = (tx + scale - tX - 1); // ... within x+1 target pixel
            }
            sR = sBuffer[sIndex    ];   // retrieving r,g,b for curr src px.
            sG = sBuffer[sIndex + 1];
            sB = sBuffer[sIndex + 2];

            /* !! untested : handling alpha !!
               sA = sBuffer[sIndex + 3];
               if (!sA) continue;
               if (sA != 0xFF) {
                   sR = (sR * sA) >> 8;  // or use /256 instead ??
                   sG = (sG * sA) >> 8;
                   sB = (sB * sA) >> 8;
               }
            */
            if (!crossX && !crossY) { // pixel does not cross
                // just add components weighted by squared scale.
                tBuffer[tIndex    ] += sR * sqScale;
                tBuffer[tIndex + 1] += sG * sqScale;
                tBuffer[tIndex + 2] += sB * sqScale;
            } else if (crossX && !crossY) { // cross on X only
                w = wx * scale;
                // add weighted component for current px
                tBuffer[tIndex    ] += sR * w;
                tBuffer[tIndex + 1] += sG * w;
                tBuffer[tIndex + 2] += sB * w;
                // add weighted component for next (tX+1) px                
                nw = nwx * scale;
                tBuffer[tIndex + 3] += sR * nw;
                tBuffer[tIndex + 4] += sG * nw;
                tBuffer[tIndex + 5] += sB * nw;
            } else if (crossY && !crossX) { // cross on Y only
                w = wy * scale;
                // add weighted component for current px
                tBuffer[tIndex    ] += sR * w;
                tBuffer[tIndex + 1] += sG * w;
                tBuffer[tIndex + 2] += sB * w;
                // add weighted component for next (tY+1) px                
                nw = nwy * scale;
                tBuffer[tIndex + 3 * tw    ] += sR * nw;
                tBuffer[tIndex + 3 * tw + 1] += sG * nw;
                tBuffer[tIndex + 3 * tw + 2] += sB * nw;
            } else { // crosses both x and y : four target points involved
                // add weighted component for current px
                w = wx * wy;
                tBuffer[tIndex    ] += sR * w;
                tBuffer[tIndex + 1] += sG * w;
                tBuffer[tIndex + 2] += sB * w;
                // for tX + 1; tY px
                nw = nwx * wy;
                tBuffer[tIndex + 3] += sR * nw;
                tBuffer[tIndex + 4] += sG * nw;
                tBuffer[tIndex + 5] += sB * nw;
                // for tX ; tY + 1 px
                nw = wx * nwy;
                tBuffer[tIndex + 3 * tw    ] += sR * nw;
                tBuffer[tIndex + 3 * tw + 1] += sG * nw;
                tBuffer[tIndex + 3 * tw + 2] += sB * nw;
                // for tX + 1 ; tY +1 px
                nw = nwx * nwy;
                tBuffer[tIndex + 3 * tw + 3] += sR * nw;
                tBuffer[tIndex + 3 * tw + 4] += sG * nw;
                tBuffer[tIndex + 3 * tw + 5] += sB * nw;
            }
        } // end for sx 
    } // end for sy

    // create result canvas
    var resCV = document.createElement('canvas');
    resCV.width = tw;
    resCV.height = th;
    var resCtx = resCV.getContext('2d');
    var imgRes = resCtx.getImageData(0, 0, tw, th);
    var tByteBuffer = imgRes.data;
    // convert float32 array into a UInt8Clamped Array
    var pxIndex = 0; // pixel index within target
    for (sIndex = 0, tIndex = 0; pxIndex < tw * th; sIndex += 3, tIndex += 4, pxIndex++) {
        tByteBuffer[tIndex] = Math.ceil(tBuffer[sIndex]);
        tByteBuffer[tIndex + 1] = Math.ceil(tBuffer[sIndex + 1]);
        tByteBuffer[tIndex + 2] = Math.ceil(tBuffer[sIndex + 2]);
        tByteBuffer[tIndex + 3] = 255;
    }
    // writing result to canvas.
    resCtx.putImageData(imgRes, 0, 0);
    return resCV;
}

It is quite memory-hungry, since a float buffer is required to store the intermediate values of the destination image (-> counting the result canvas, we use 6 times the source image's memory in this algorithm).
It is also quite expensive, since each source pixel is processed whatever the destination size, and we have to pay for getImageData / putImageData, which are quite slow as well.
But there's no way to be faster than processing each source value in this case, and the situation is not that bad: for my 740 * 556 image of a wombat, processing takes between 30 and 40 ms.
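A usage sketch (not part of the answer): the target dimensions mirror the `tw`/`th` computation inside `downScaleCanvas`, and the browser-only part assumes the answer's `downScaleImage` is loaded; `wombat.jpg` is a placeholder source.

```javascript
// Mirrors the tw/th computation in downScaleCanvas above: the target
// dimensions are simply Math.floor(source * scale).
function targetSize(srcWidth, srcHeight, scale) {
    return {
        width: Math.floor(srcWidth * scale),
        height: Math.floor(srcHeight * scale)
    };
}

// Browser-only usage of the answer's downScaleImage():
if (typeof document !== 'undefined' && typeof downScaleImage === 'function') {
    var img = new Image();
    img.onload = function () {
        var scaled = downScaleImage(img, 1 / 3); // returns a new canvas
        document.body.appendChild(scaled);
    };
    img.src = 'wombat.jpg'; // placeholder source
}
```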

Epeirogeny answered 2/10, 2013 at 18:46 Comment(30)
Could it be faster if you scale the image before you put it in the canvas? – Liaoyang
I don't get it... it seems that's what I do. The buffer as well as the canvas I create (resCV) have the size of the scaled image. I think the only way to get it faster would be to use Bresenham-like integer computation. But 40ms is only slow for a video game (25 fps), not for a draw application. – Epeirogeny
Do you see any chance to make your algorithm faster while keeping the quality? – Liaoyang
I tried to round the buffer (last part of the algorithm) using 0 | instead of Math.ceil. It is a bit faster. But anyway there's quite some overhead with the get/putImageData and again, we cannot avoid processing each pixel. – Epeirogeny
Does it make any difference whether I use context.imageSmoothingEnabled or not? – Liaoyang
No, since this setting is for drawImage. Here, I only used get/putImageData, which are about 'raw' image processing. – Epeirogeny
This produces great quality downscaled images, but has issues with GIFs/PNGs that have opacity/transparency. The entire transparent area gets painted black. Any idea why? – Circumscribe
Well, I didn't handle opacity at all, so transparent == black for the moment. I edited to handle alpha, but I have no time to test it, so I leave it commented for now. Update me if you do test. – Epeirogeny
@Epeirogeny I've created a jsFiddle of your new script with the alpha stuff uncommented. It doesn't seem to work, as the completely transparent sections still get turned to solid black. If I use 256 on the alpha instead of 8, I get transparency on the curves, so that seems to be a better solution. jsfiddle.net/kpQyE – Circumscribe
Ok, thx for this. If you look at the end of downScaleCanvas's code, every pixel gets a 255 alpha, so everything is opaque. I'll have to think it through. Quick fix: consider #000 as transparent and set alpha to 0 in this case. Or have tBuffer be rgb'c' where c is the count of non-transparent pixels (you guess the end). The real solution seems to be interpolating the alpha just like the r,g,b, but I don't see clearly how, say, a 1/4th-alpha yellow mixes with a solid green: is the result solid or opaque? I don't know if I'll have time to look at this soon, so I'll just reply here for now. – Epeirogeny
I've played with this a bit, but have been unable to get things working as I'm not really sure how it exactly works. A temporary workaround for me was to draw a white background to the canvas before drawing the image, which will only work as long as the image is on a white background. I'll spend some more time trying to wrap my head around this and check back here if you have time to figure it out. Thanks! – Circumscribe
I've played around with this, but have been unable to figure it out. I was able to get jsfiddle.net/kpQyE/2 to work, but this is only successful when the scale is 0.50 (anything else results in a bad image). I don't have a clear understanding of how the algorithm actually works, so I'm doubtful I'll be able to figure it out on my own. Have you had any extra time to think on this @GameAlchemist? – Circumscribe
Ok, so I watched the code: you were very near the solution. Two mistakes: your indexes were off by one for tX+1 (they were +3,+4,+5,+6 instead of +4,+5,+6,+7), and changing line in rgba is a mul by 4, not 3. I just tested 4 random values to check (0.1, 0.15, 0.33, 0.8) and it seemed ok. Your updated fiddle is here: jsfiddle.net/gamealchemist/kpQyE/3 – Epeirogeny
@Epeirogeny Could you update your printed solution please! – Liaoyang
@Epeirogeny I thought that you had an improvement on your algorithm here jsfiddle.net/gamealchemist/kpQyE/3 so I wanted you to update your answer if it is relevant. – Liaoyang
Ok! In fact the different fiddles are just as much improvements as they are different versions (handling transparency or not, expecting a target size or a ratio, ...) so I'm quite puzzled about what I should do about it... – Epeirogeny
@Epeirogeny Is there a good book or some resource where I can find algorithms for image scaling and manipulation? – Liaoyang
@Epeirogeny I tried your algorithm on a PNG file with transparent parts but it failed. Could you please have a look at it: jsfiddle.net/confile/D7Lru – Liaoyang
Look at the console of your fiddle: your image does not come from the same domain, so you cannot get image data out of it. It is called a cross-origin issue (CORS). In a fiddle, the only solution is to include the image as base-64, thus slowing down the fiddle's run quite a bit. For the readings, I did studies in image processing and then kept updated from very many different sources... I could quote Graphics Gems, which I love a lot... dunno, sorry :-/ – Epeirogeny
@Epeirogeny Have you looked at the latest answer? Enric says that you should use floor in some cases. Is this true? If so, could you correct your answer please. – Liaoyang
@confile I did not see the post, no, thanks for telling me. I updated the post and the fiddle: jsfiddle.net/gamealchemist/kpQyE/14 – Epeirogeny
@Epeirogeny It is the post from Enric. – Liaoyang
Works excellently when you extend the Canvas prototype :) HTMLCanvasElement.prototype.downsize = function(scale) { var cv = this; ... code } – Secretarial
@GameAlchemist, thanks for the algorithm. Could you please also suggest how I can make it work when the image is scaled down without maintaining its aspect ratio? In that case, we would pass a scale object containing information about the scaling in the X and Y directions. – Girish
The algorithm posted here is awesome; it works exactly as advertised. This might be a bit too much to ask, but... I have an application that can zoom in and out, and as it does, it scales the images on the canvas. I would like a function that generates all of the necessary versions of the image for every zoom level in one pass, as opposed to having to call this function several times. Something like: var scaledImageArray = downScaleImageForScales(img, scalesArray), where the scales array is a sorted array of needed scales: [.9, .8, etc.]. Any advice would be most appreciated. – Cerussite
@AnthonyR You are on SO probably because you need something, so no worries. However, you should ask a new question instead, explaining your problem in detail and with better formatting. – Esprit
The way I see it, this could be modified to work in web workers using getImageData, which should be a significant improvement. – Esprit
I just want to know what the hell all this means... lol... like I can follow and read the code... but I'm not sure I get what the image data consists of etc... any resources you guys can share? Like to learn image data and manipulation? – Squeak
I've used this algorithm, and it seems to produce results close to Photoshop's with a very reasonable speed. To make it easier to use, you could wrap it in a self-calling function and expose the downScaleCanvas and downScaleImage functions on the window object. This also prevents someone from doing window.log2 = null; and breaking the code. – Kaltman
This gives the following error during one of the math calculations: SecurityError: The operation is insecure. – Buenabuenaventura

Fast canvas resample with good quality: http://jsfiddle.net/9g9Nv/442/

Update: version 2.0 (faster, web workers + transferable objects) - https://github.com/viliusle/Hermite-resize

/**
 * Hermite resize - fast image resize/resample using the Hermite filter. Single-CPU version!
 * 
 * @param {HTMLCanvasElement} canvas
 * @param {number} width
 * @param {number} height
 * @param {boolean} resize_canvas if true, the canvas will be resized. Optional.
 */
function resample_single(canvas, width, height, resize_canvas) {
    var width_source = canvas.width;
    var height_source = canvas.height;
    width = Math.round(width);
    height = Math.round(height);

    var ratio_w = width_source / width;
    var ratio_h = height_source / height;
    var ratio_w_half = Math.ceil(ratio_w / 2);
    var ratio_h_half = Math.ceil(ratio_h / 2);

    var ctx = canvas.getContext("2d");
    var img = ctx.getImageData(0, 0, width_source, height_source);
    var img2 = ctx.createImageData(width, height);
    var data = img.data;
    var data2 = img2.data;

    for (var j = 0; j < height; j++) {
        for (var i = 0; i < width; i++) {
            var x2 = (i + j * width) * 4;
            var weight = 0;
            var weights = 0;
            var weights_alpha = 0;
            var gx_r = 0;
            var gx_g = 0;
            var gx_b = 0;
            var gx_a = 0;
            var center_y = (j + 0.5) * ratio_h;
            var yy_start = Math.floor(j * ratio_h);
            var yy_stop = Math.ceil((j + 1) * ratio_h);
            for (var yy = yy_start; yy < yy_stop; yy++) {
                var dy = Math.abs(center_y - (yy + 0.5)) / ratio_h_half;
                var center_x = (i + 0.5) * ratio_w;
                var w0 = dy * dy; //pre-calc part of w
                var xx_start = Math.floor(i * ratio_w);
                var xx_stop = Math.ceil((i + 1) * ratio_w);
                for (var xx = xx_start; xx < xx_stop; xx++) {
                    var dx = Math.abs(center_x - (xx + 0.5)) / ratio_w_half;
                    var w = Math.sqrt(w0 + dx * dx);
                    if (w >= 1) {
                        //pixel too far
                        continue;
                    }
                    //hermite filter
                    weight = 2 * w * w * w - 3 * w * w + 1;
                    var pos_x = 4 * (xx + yy * width_source);
                    //alpha
                    gx_a += weight * data[pos_x + 3];
                    weights_alpha += weight;
                    //colors
                    if (data[pos_x + 3] < 255)
                        weight = weight * data[pos_x + 3] / 250;
                    gx_r += weight * data[pos_x];
                    gx_g += weight * data[pos_x + 1];
                    gx_b += weight * data[pos_x + 2];
                    weights += weight;
                }
            }
            data2[x2] = gx_r / weights;
            data2[x2 + 1] = gx_g / weights;
            data2[x2 + 2] = gx_b / weights;
            data2[x2 + 3] = gx_a / weights_alpha;
        }
    }
    //clear and resize canvas
    if (resize_canvas === true) {
        canvas.width = width;
        canvas.height = height;
    } else {
        ctx.clearRect(0, 0, width_source, height_source);
    }

    //draw
    ctx.putImageData(img2, 0, 0);
}
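A usage sketch (not part of the answer): the pure function below is just the Hermite basis used as the filter kernel in the loop above, and the browser-only part assumes `resample_single` is loaded; `source.jpg` is a placeholder source.

```javascript
// The filter kernel used above: Hermite basis h(w) = 2w^3 - 3w^2 + 1,
// applied to the normalized distance w in [0, 1). h(0) = 1, h(1) = 0.
function hermiteWeight(w) {
    return 2 * w * w * w - 3 * w * w + 1;
}

// Browser-only usage of resample_single(): draw the source image onto a
// canvas first, then resample in place.
if (typeof document !== 'undefined' && typeof resample_single === 'function') {
    var img = new Image();
    img.onload = function () {
        var canvas = document.createElement('canvas');
        canvas.width = img.width;
        canvas.height = img.height;
        canvas.getContext('2d').drawImage(img, 0, 0);
        resample_single(canvas, img.width / 2, img.height / 2, true);
        document.body.appendChild(canvas);
    };
    img.src = 'source.jpg'; // placeholder source
}
```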
Palaeontography answered 7/10, 2013 at 11:13 Comment(15)
I need the best quality. – Liaoyang
Fixed, I changed "good" to "best", is this ok now? :D On the other hand, if you want the best possible resample - use ImageMagick. – Palaeontography
@confile imgur.com was safe to use in jsfiddle, but did the admins do something wrong? You don't see good quality because your browser gives a CORS fatal error. (It cannot use images from remote sites.) – Palaeontography
Okay, you can use any other PNG image with transparent areas. Any idea on this? – Liaoyang
@confile You were right, in some cases transparent images had issues in sharp areas. I missed these cases in my test. Fixed the resize; also fixed remote image support in the fiddle: jsfiddle.net/9g9Nv/49 – Palaeontography
Fiddle with small improvements: jsfiddle.net/EugeneOZ/9g9Nv/65 , thank you, @ViliusL, very useful :) – Cainozoic
This only seems to work when downsampling by half. The jsfiddle doesn't seem to work when dividing by 4. For example, resample_hermite(canvas, W, H, W/4, H/4); – Crepuscular
@Crepuscular It looks like a nice bug at first... but after you generate the new width and height, use Math.round(); the function does not support float numbers. Example updated with a clear way to choose the ratio. But thank you for the notice. – Palaeontography
I've run into some trouble with this resample_hermite function: when I try to shrink an image too much, it bends the image and changes the color. There is a jsfiddle which I've modified to demonstrate this. Original fiddle. Broken fiddle. It seems this algorithm fails when you try to shrink too much. Anyone know why? – Highlight
@Highlight You sent the width in float format (319/2); the function expected an int. I updated the function to support floats; they will be rounded. Your updated example: jsfiddle.net/9g9Nv/92 - p.s. your canvas is smaller than the image. A better example would be: jsfiddle.net/9g9Nv/96 – Palaeontography
FYI: I'm using your algo in my open-source nodejs package: github.com/26medias/image-data – Groundsel
var gx_r = gx_g = gx_b = gx_a = 0; your gx_g, gx_b and gx_a variables are now in the global scope – Breaker
@Palaeontography Could you please guide me on how to rotate the canvas 90 degrees clockwise in this jsfiddle: jsfiddle.net/d8cwjb2e/1 – Masry
@Masry jsfiddle.net/j8qe9ot7 But you should not ask unrelated questions in comments; create a new separate question. – Palaeontography
This gives the following error during one of the math calculations: SecurityError: The operation is insecure. At var_pos_x = ... – Buenabuenaventura

Suggestion 1 - extend the process pipeline

You can use step-down as I describe in the links you refer to, but you appear to use it in the wrong way.

Step-down is not needed to scale images at ratios above 1:2 (typically, but not limited to). It is where you need to do a drastic down-scaling that you need to split it up in two (and rarely, more) steps, depending on the content of the image (in particular where high frequencies such as thin lines occur).

Every time you down-sample an image you will lose details and information. You cannot expect the resulting image to be as clear as the original.

If you then scale down the image in many steps, you will lose a lot of information in total and the result will be poor, as you already noticed.

Try with just one extra step, or at most two.
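A one-extra-step downscale can be sketched as follows. The helper names (`intermediateWidth`, `twoStepDownscale`) are hypothetical, and picking the geometric mean of source and target width as the intermediate size is an assumption of this sketch, not something this answer prescribes:

```javascript
// Hypothetical helper: a single intermediate width half-way (geometrically)
// between source and target, for a one-extra-step downscale.
function intermediateWidth(srcWidth, targetWidth) {
    return Math.round(Math.sqrt(srcWidth * targetWidth));
}

// Browser-only: two drawImage passes through a temporary canvas.
if (typeof document !== 'undefined') {
    var twoStepDownscale = function (img, targetW, targetH) {
        var midW = intermediateWidth(img.width, targetW);
        var midH = Math.round(img.height * midW / img.width);
        var tmp = document.createElement('canvas');
        tmp.width = midW;
        tmp.height = midH;
        tmp.getContext('2d').drawImage(img, 0, 0, midW, midH);
        var out = document.createElement('canvas');
        out.width = targetW;
        out.height = targetH;
        out.getContext('2d').drawImage(tmp, 0, 0, targetW, targetH);
        return out;
    };
}
```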

Convolutions

In the case of Photoshop, notice that it applies a convolution after the image has been re-sampled, such as sharpen. It's not just bi-cubic interpolation that takes place, so in order to fully emulate Photoshop we also need to add the steps Photoshop performs (with its default setup).

For this example I will use my original answer that you refer to in your post, but I have added a sharpen convolution to it to improve quality as a post-process (see demo at bottom).

Here is the code for adding a sharpen filter (it's based on a generic convolution filter - I put the weight matrix for sharpen inside it, as well as a mix factor to adjust the strength of the effect):

Usage:

sharpen(context, width, height, mixFactor);

The mixFactor is a value in the range [0.0, 1.0] that allows you to tone down the sharpen effect - rule of thumb: the smaller the size, the less of the effect is needed.

Function (based on this snippet):

function sharpen(ctx, w, h, mix) {

    var weights =  [0, -1, 0,  -1, 5, -1,  0, -1, 0], // 3x3 sharpen kernel
        katet = Math.round(Math.sqrt(weights.length)), // kernel side length (3)
        half = (katet * 0.5) |0,
        dstData = ctx.createImageData(w, h),
        dstBuff = dstData.data,
        srcBuff = ctx.getImageData(0, 0, w, h).data,
        x, y = h;

    while(y--) {

        x = w;

        while(x--) {

            var sy = y,
                sx = x,
                dstOff = (y * w + x) * 4,
                r = 0, g = 0, b = 0, a = 0;

            // accumulate the weighted sum of the 3x3 neighborhood
            for (var cy = 0; cy < katet; cy++) {
                for (var cx = 0; cx < katet; cx++) {

                    var scy = sy + cy - half;
                    var scx = sx + cx - half;

                    if (scy >= 0 && scy < h && scx >= 0 && scx < w) {

                        var srcOff = (scy * w + scx) * 4;
                        var wt = weights[cy * katet + cx];

                        r += srcBuff[srcOff] * wt;
                        g += srcBuff[srcOff + 1] * wt;
                        b += srcBuff[srcOff + 2] * wt;
                        a += srcBuff[srcOff + 3] * wt;
                    }
                }
            }

            // blend the convolved result with the original pixel
            dstBuff[dstOff] = r * mix + srcBuff[dstOff] * (1 - mix);
            dstBuff[dstOff + 1] = g * mix + srcBuff[dstOff + 1] * (1 - mix);
            dstBuff[dstOff + 2] = b * mix + srcBuff[dstOff + 2] * (1 - mix);
            dstBuff[dstOff + 3] = srcBuff[dstOff + 3]; // keep original alpha
        }
    }

    ctx.putImageData(dstData, 0, 0);
}

The result of using this combination will be:

ONLINE DEMO HERE

Result downsample and sharpen convolution

Depending on how much of the sharpening you want to blend in, you can get results ranging from the default "blurry" to very sharp:

Variations of sharpen

Suggestion 2 - low level algorithm implementation

If you want to get the best result quality-wise, you'll need to go low-level and consider implementing, for example, this brand-new algorithm.

See Interpolation-Dependent Image Downsampling (2011) from IEEE.
Here is a link to the paper in full (PDF).

There are no implementations of this algorithm in JavaScript that I know of at this time, so you're in for a handful if you want to throw yourself at this task.

The essence is (excerpts from the paper):

Abstract

An interpolation oriented adaptive down-sampling algorithm is proposed for low bit-rate image coding in this paper. Given an image, the proposed algorithm is able to obtain a low resolution image, from which a high quality image with the same resolution as the input image can be interpolated. Different from the traditional down-sampling algorithms, which are independent from the interpolation process, the proposed down-sampling algorithm hinges the down-sampling to the interpolation process. Consequently, the proposed down-sampling algorithm is able to maintain the original information of the input image to the largest extent. The down-sampled image is then fed into JPEG. A total variation (TV) based post processing is then applied to the decompressed low resolution image. Ultimately, the processed image is interpolated to maintain the original resolution of the input image. Experimental results verify that utilizing the downsampled image by the proposed algorithm, an interpolated image with much higher quality can be achieved. Besides, the proposed algorithm is able to achieve superior performance than JPEG for low bit rate image coding.

Snapshot from paper

(see provided link for all details, formulas etc.)

Maigre answered 7/10, 2013 at 22:20 Comment(3)
This is a great solution. I tried it on png files with transparent areas. Here is the result: jsfiddle.net/confile/5CD4N Do you have any idea what to do to make it work?Liaoyang
this is GENIUS! but please can you explain what exactly you're doing ? lol.. i'm totally wanting to know the ins and outs... maybe resources to learn?Squeak
@Carine that can be a bit much for a poor comment field :) but, scaling down resamples a group of pixels to average a new one representing that group. This is in effect a low-pass filter which introduce some blur overall. To compensate for the loss of sharpness simply apply a sharpening convolution. As the sharpening may be very pronounced we can mix it with the image instead so we can control the level of sharpening. Hope that gives some insight.Maigre
C
22

If you wish to use canvas only, the best result will come from multiple down-steps. But that's not good enough yet. For better quality you need a pure JS implementation. We just released pica - a high-speed downscaler with variable quality/speed. In short, it resizes a 1280*1024px image in ~0.1s, and a 5000*3000px image in 1s, at the highest quality (Lanczos filter with 3 lobes). Pica has a demo, where you can play with your images and quality levels, and even try it on mobile devices.

Pica does not have an unsharp mask yet, but that will be added very soon. That's much easier than implementing a high-speed convolution filter for resizing.

Contain answered 24/9, 2014 at 11:36 Comment(0)
S
16

Why use the canvas to resize images? Modern browsers all use bicubic interpolation — the same process used by Photoshop (if you're doing it right) — and they do it faster than the canvas process. Just specify the image size you want (use only one dimension, height or width, to resize proportionally).

This is supported by most browsers, including later versions of IE. Earlier versions may require browser-specific CSS.

A simple function (using jQuery) to resize an image would be like this:

function resizeImage(img, percentage) {
    var coeff = percentage/100,
        width = $(img).width(),
        height = $(img).height();

    return {"width": width*coeff, "height": height*coeff}           
}

Then just use the returned value to resize the image in one or both dimensions.

Obviously there are different refinements you could make, but this gets the job done.

Paste the following code into the console of this page and watch what happens to the gravatars:

function resizeImage(img, percentage) {
    var coeff = percentage/100,
        width = $(img).width(),
        height = $(img).height();

    return {"width": width*coeff, "height": height*coeff}           
}

$('.user-gravatar32 img').each(function(){
  var newDimensions = resizeImage( this, 150);
  this.style.width = newDimensions.width + "px";
  this.style.height = newDimensions.height + "px";
});
Superfluous answered 2/10, 2013 at 12:33 Comment(9)
Also note that if you only specify one dimension, the (modern) browser will automatically maintain the image's natural aspect ratio.Surovy
@Andre: I noted that in my first paragraph.Superfluous
@Andre: Also note that the one-dimension approach only works if the other dimension has not been specified. In other words, if the other dimension has been specified, you need to do both.Superfluous
Maybe he needs to send the resized image to a server.Ofelia
@Sergiu: Not necessary, but note that if you are going from a very small image to a very large one you're not going to get great results even from a server.Superfluous
@Superfluous I need to put the image in the canvas afterwards and send it to the server later on. I want to scale down a large image to a small one, modify color in a canvas and send the result to the server. What do you think I should do?Liaoyang
@confile: What does "afterwards" mean? After page load? I'll assume the latter. Scaling down from a big image to a small one is easy. Just feed the function I gave you (or a similar one) a percentage < 100 and the image will scale down. Since it will be using bicubic interpolation it will look good on the client. If the image needs to be uploaded to a server, though, the image will still have to be scaled on the server to the same percentage if it is to be stored at those dimensions.Superfluous
@Superfluous This is the problem. Showing a small image on the client is easy. img.width nad img.height is so trivial. I want to scale it down only once and not again on the server.Liaoyang
Using a downscaled image as a thumbnail still improves the performance a lot, especially on mobile devices. If you provide both thumbnail and full image, it reduces the traffic by only sending the full image when needed, and provides the option to render a screen/device-specific thumbnail without needing to handle and process that server-side.Homelike
L
10

Not the right answer for people who really need to resize the image itself, but just for shrinking the file size.

I had a problem with "directly from the camera" pictures, that my customers often uploaded in "uncompressed" JPEG.

It is not so well known that canvas supports (in most browsers as of 2017) changing the JPEG quality:

data = canvas.toDataURL('image/jpeg', 0.85); // quality in [0..1], default 0.92

With this trick I could reduce 4k x 3k pics of >10MB down to 1 or 2MB; of course, it depends on your needs.
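To check how much a given quality setting actually saves, you can estimate the decoded byte size of the resulting data URL directly from its string length (a hypothetical helper, not part of the canvas API):

```javascript
// Estimate the decoded byte size of a base64 data URL.
// Base64 encodes 3 bytes per 4 characters; trailing '=' padding
// characters carry no data.
function dataUrlByteSize(dataUrl) {
    var base64 = dataUrl.substring(dataUrl.indexOf(',') + 1);
    var padding = (base64.match(/=+$/) || [''])[0].length;
    return Math.floor(base64.length * 3 / 4) - padding;
}
```

Re-encode with decreasing quality until this estimate drops below your size budget.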

look here

Longcloth answered 16/8, 2017 at 17:55 Comment(0)
G
6

This is the improved Hermite resize filter that utilises 1 worker so that the window doesn't freeze.

https://github.com/calvintwr/blitz-hermite-resize

const blitz = Blitz.create()

/* Promise */
blitz({
    source: DOM Image/DOM Canvas/jQuery/DataURL/File,
    width: 400,
    height: 600
}).then(output => {
    // handle output
})catch(error => {
    // handle error
})

/* Await */
let resized = await blitz({...})

/* Old school callback */
const blitz = Blitz.create('callback')
blitz({...}, function(output) {
    // run your callback.
})
Gotten answered 20/9, 2014 at 8:40 Comment(0)
C
5

I found a solution that doesn't need to access the pixel data directly and loop through it to perform the downsampling. Depending on the size of the image, that can be very resource intensive, and it is better to use the browser's internal algorithms.

The drawImage() function uses a fast linear-interpolation resampling method. That works well as long as you are not resizing down to less than half the original size.

If you loop to resize by at most one half at a time, the results are quite good, and much faster than accessing pixel data.

This function downsamples by half at a time until reaching the desired size:

  function resize_image( src, dst, type, quality ) {
     var tmp = new Image(),
         canvas, context, cW, cH;

     type = type || 'image/jpeg';
     quality = quality || 0.92;

     cW = src.naturalWidth;
     cH = src.naturalHeight;

     tmp.src = src.src;
     tmp.onload = function() {

        canvas = document.createElement( 'canvas' );

        cW /= 2;
        cH /= 2;

        if ( cW < src.width ) cW = src.width;
        if ( cH < src.height ) cH = src.height;

        canvas.width = cW;
        canvas.height = cH;
        context = canvas.getContext( '2d' );
        context.drawImage( tmp, 0, 0, cW, cH );

        dst.src = canvas.toDataURL( type, quality );

        if ( cW <= src.width || cH <= src.height )
           return;

        tmp.src = dst.src;
     }

  }
  // The images sent as parameters can be in the DOM or be image objects
  resize_image( $( '#original' )[0], $( '#smaller' )[0] );
Cloddish answered 27/8, 2014 at 14:7 Comment(2)
Could you please post a jsfiddle and some resulting images?Liaoyang
In the link at the bottom you can find resulting images using this techniqueMerrimerriam
T
4

Here is a reusable Angular service for high quality image / canvas resizing: https://gist.github.com/fisch0920/37bac5e741eaec60e983

The service supports lanczos convolution and step-wise downscaling. The convolution approach is higher quality at the cost of being slower, whereas the step-wise downscaling approach produces reasonably antialiased results and is significantly faster.

Example usage:

angular.module('demo').controller('ExampleCtrl', function (imageService) {
  // EXAMPLE USAGE
  // NOTE: it's bad practice to access the DOM inside a controller, 
  // but this is just to show the example usage.

  // resize by lanczos-sinc filter
  imageService.resize($('#myimg')[0], 256, 256)
    .then(function (resizedImage) {
      // do something with resized image
    })

  // resize by stepping down image size in increments of 2x
  imageService.resizeStep($('#myimg')[0], 256, 256)
    .then(function (resizedImage) {
      // do something with resized image
    })
})
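For reference, the Lanczos convolution the service mentions is built on a windowed-sinc kernel; a minimal sketch of the kernel itself (3 lobes, written here for illustration, not taken from the gist):

```javascript
// Lanczos kernel with 'a' lobes: sinc(x) * sinc(x / a) for |x| < a, else 0,
// where sinc(x) = sin(pi*x) / (pi*x). Expanded, that is
// a * sin(pi*x) * sin(pi*x / a) / (pi*x)^2.
function lanczos(x, a) {
    a = a || 3;
    if (x === 0) return 1;
    if (Math.abs(x) >= a) return 0;
    var px = Math.PI * x;
    return a * Math.sin(px) * Math.sin(px / a) / (px * px);
}
```

A resampler weights each source pixel by lanczos(distance to the destination sample) and normalizes the weights; this is what makes the convolution approach slower but higher quality than stepped drawImage() calls.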
Thumbsdown answered 16/9, 2014 at 23:21 Comment(0)
W
2

Maybe you can try this, which I always use in my projects. This way you can get high quality not only for images, but for any other element on your canvas.

/* 
 * @param canvas => canvas object
 * @param rate => the pixel quality
 */
function setCanvasSize(canvas, rate) {
    const scaleRate = rate;
    canvas.width = window.innerWidth * scaleRate;
    canvas.height = window.innerHeight * scaleRate;
    canvas.style.width = window.innerWidth + 'px';
    canvas.style.height = window.innerHeight + 'px';
    canvas.getContext('2d').scale(scaleRate, scaleRate);
}
Wenda answered 25/12, 2018 at 9:26 Comment(0)
E
0

If you pass 1.0 instead of .85, you will get the maximum quality:

data = canvas.toDataURL('image/jpeg', 1.0);

You get a clearer and brighter image. Please check.

Engelhart answered 29/8, 2019 at 7:26 Comment(0)
C
0

I really try to avoid running through image data, especially on larger images. Thus I came up with a rather simple way to decently reduce image size, without any restrictions or limitations, using a few extra steps. This routine goes down to the lowest possible half-step above the desired target size. Then it scales up to twice the target size, and then halves again. It sounds funny at first, but the results are astoundingly good, and it gets there swiftly.

function resizeCanvas(canvas, newWidth, newHeight) {
  let ctx = canvas.getContext('2d');
  let buffer = document.createElement('canvas');
  buffer.width = ctx.canvas.width;
  buffer.height = ctx.canvas.height;
  let ctxBuf = buffer.getContext('2d');
  

  let scaleX = newWidth / ctx.canvas.width;
  let scaleY = newHeight / ctx.canvas.height;

  let scaler = Math.min(scaleX, scaleY);
  //see if target scale is less than half...
  if (scaler < 0.5) {
    //while loop in case target scale is less than quarter...
    while (scaler < 0.5) {
      ctxBuf.canvas.width = ctxBuf.canvas.width * 0.5;
      ctxBuf.canvas.height = ctxBuf.canvas.height * 0.5;
      ctxBuf.scale(0.5, 0.5);
      ctxBuf.drawImage(canvas, 0, 0);
      ctxBuf.setTransform(1, 0, 0, 1, 0, 0);
      ctx.canvas.width = ctxBuf.canvas.width;
      ctx.canvas.height = ctxBuf.canvas.height;
      ctx.drawImage(buffer, 0, 0);

      scaleX = newWidth / ctxBuf.canvas.width;
      scaleY = newHeight / ctxBuf.canvas.height;
      scaler = Math.min(scaleX, scaleY);
    }
    //only if the scaler is now larger than half, double target scale trick...
    if (scaler > 0.5) {
      scaleX *= 2.0;
      scaleY *= 2.0;
      ctxBuf.canvas.width = ctxBuf.canvas.width * scaleX;
      ctxBuf.canvas.height = ctxBuf.canvas.height * scaleY;
      ctxBuf.scale(scaleX, scaleY);
      ctxBuf.drawImage(canvas, 0, 0);
      ctxBuf.setTransform(1, 0, 0, 1, 0, 0);
      scaleX = 0.5;
      scaleY = 0.5;
    }
  } else
    ctxBuf.drawImage(canvas, 0, 0);

  //wrapping things up...
  ctx.canvas.width = newWidth;
  ctx.canvas.height = newHeight;
  ctx.scale(scaleX, scaleY);
  ctx.drawImage(buffer, 0, 0);
  ctx.setTransform(1, 0, 0, 1, 0, 0);
}
Chloramphenicol answered 12/7, 2020 at 0:48 Comment(0)
T
-1

context.scale(xScale, yScale)

<canvas id="c"></canvas>
<hr/>
<img id="i" />

<script>
var i = document.getElementById('i');

i.onload = function(){
    var width = this.naturalWidth,
        height = this.naturalHeight,
        canvas = document.getElementById('c'),
        ctx = canvas.getContext('2d');

    canvas.width = Math.floor(width / 2);
    canvas.height = Math.floor(height / 2);

    ctx.scale(0.5, 0.5);
    ctx.drawImage(this, 0, 0);
    ctx.rect(0,0,500,500);
    ctx.stroke();

    // restore original 1x1 scale
    ctx.scale(2, 2);
    ctx.rect(0,0,500,500);
    ctx.stroke();
};

i.src = 'https://static.md/b70a511140758c63f07b618da5137b5d.png';
</script>
Tuberculin answered 2/9, 2015 at 12:11 Comment(0)
E
-1

DEMO: Resizing images with JS and HTML Canvas Demo fiddle.

You will find 3 different methods to do the resize, which will help you understand how the code works and why.

https://jsfiddle.net/1b68eLdr/93089/

The full code of the demo, and the TypeScript method that you may want to use in your code, can be found in the GitHub project.

https://github.com/eyalc4/ts-image-resizer

This is the final code:

export class ImageTools {
base64ResizedImage: string = null;

constructor() {
}

ResizeImage(base64image: string, width: number = 1080, height: number = 1080) {
    let img = new Image();
    img.src = base64image;

    img.onload = () => {

        // Check if the image require resize at all
        if(img.height <= height && img.width <= width) {
            this.base64ResizedImage = base64image;

            // TODO: Call method to do something with the resize image
        }
        else {
            // Make sure the width and height preserve the original aspect ratio and adjust if needed
            if(img.height > img.width) {
                width = Math.floor(height * (img.width / img.height));
            }
            else {
                height = Math.floor(width * (img.height / img.width));
            }

            let resizingCanvas: HTMLCanvasElement = document.createElement('canvas');
            let resizingCanvasContext = resizingCanvas.getContext("2d");

            // Start with original image size
            resizingCanvas.width = img.width;
            resizingCanvas.height = img.height;


            // Draw the original image on the (temp) resizing canvas
            resizingCanvasContext.drawImage(img, 0, 0, resizingCanvas.width, resizingCanvas.height);

            let curImageDimensions = {
                width: Math.floor(img.width),
                height: Math.floor(img.height)
            };

            let halfImageDimensions = {
                width: null,
                height: null
            };

            // Quickly reduce the size by 50% each time in few iterations until the size is less then
            // 2x time the target size - the motivation for it, is to reduce the aliasing that would have been
            // created with direct reduction of very big image to small image
            while (curImageDimensions.width * 0.5 > width) {
                // Reduce the resizing canvas by half and refresh the image
                halfImageDimensions.width = Math.floor(curImageDimensions.width * 0.5);
                halfImageDimensions.height = Math.floor(curImageDimensions.height * 0.5);

                resizingCanvasContext.drawImage(resizingCanvas, 0, 0, curImageDimensions.width, curImageDimensions.height,
                    0, 0, halfImageDimensions.width, halfImageDimensions.height);

                curImageDimensions.width = halfImageDimensions.width;
                curImageDimensions.height = halfImageDimensions.height;
            }

            // Now do final resize for the resizingCanvas to meet the dimension requirments
            // directly to the output canvas, that will output the final image
            let outputCanvas: HTMLCanvasElement = document.createElement('canvas');
            let outputCanvasContext = outputCanvas.getContext("2d");

            outputCanvas.width = width;
            outputCanvas.height = height;

            outputCanvasContext.drawImage(resizingCanvas, 0, 0, curImageDimensions.width, curImageDimensions.height,
                0, 0, width, height);

            // output the canvas pixels as an image. params: format, quality
            this.base64ResizedImage = outputCanvas.toDataURL('image/jpeg', 0.85);

            // TODO: Call method to do something with the resize image
        }
    };
}}
Eileen answered 1/1, 2019 at 9:56 Comment(0)
