How can I translate an image with subpixel accuracy?
I have a system that requires moving an image on the screen. I am currently using a png and just placing it at the desired screen coordinates.

Because of a combination of the screen resolution and the required frame rate, some frames are identical because the image has not yet moved a full pixel. Unfortunately, the resolution of the screen is not negotiable.

I have a general understanding of how sub-pixel rendering works to smooth out edges but I have been unable to find a resource (if it exists) as to how I can use shading to translate an image by less than a single pixel.

Ideally, this would be usable with any image but if it was only possible with a simple shape like a circle or a ring, that would also be acceptable.

Jayme answered 14/9, 2011 at 13:44 Comment(0)
Sub-pixel interpolation is relatively simple. Typically you apply what amounts to an all-pass filter with a constant phase shift, where the phase shift corresponds to the required sub-pixel image shift. Depending on the required image quality you might use e.g. a 5 point Lanczos or other windowed sinc function and then apply this in one or both axes depending on whether you want an X shift or a Y shift or both.

E.g. for a 0.5 pixel shift the coefficients might be [ 0.06645, 0.18965, 0.27713, 0.27713, 0.18965 ]. (Note that the coefficients are normalised, i.e. their sum is equal to 1.0.)

To generate a horizontal shift you would convolve these coefficients with the pixels from x - 2 to x + 2, e.g.

const float kCoeffs[5] = { 0.06645f, 0.18965f, 0.27713f, 0.27713f, 0.18965f };

for (int y = 0; y < height; ++y)         // for each row
    for (int x = 2; x < width - 2; ++x)  // for each col (apart from 2 pixel border)
    {
        float p = 0.0f;                  // convolve pixel with Lanczos coeffs

        for (int dx = -2; dx <= 2; ++dx)
            p += in[y][x + dx] * kCoeffs[dx + 2];

        out[y][x] = p;                   // store interpolated pixel
    }
Patti answered 14/9, 2011 at 13:48 Comment(6)
I'll test this out and let you know how it goes. Thanks – Jayme
I hope those coeffs work OK - I just worked them out quickly and haven't tested them - refer to the Wikipedia page for the Lanczos formula. – Patti
The coefficients should add up to 1.0 so you don't need to scale the result. Naturally the coefficients will change based on the amount of sub-pixel offset you require, so hard-coding them as in the example isn't going to turn out well. Simpler formulas like the bicubic should also work in this application. – Baillieu
@Mark: yes, good point - the coeffs need to be normalised. Use of Lanczos versus bicubic etc. will depend on requirements for output image quality. – Patti
Paul, the difference between Lanczos and bicubic won't be terribly apparent as long as resizing isn't involved. Just IMO. – Baillieu
@Mark: probably true if it's just for display purposes. – Patti
Conceptually, the operation is very simple. First you scale up the image (using any interpolation method you like), then you translate the result, and finally you subsample back down to the original image size.

The scale factor depends on the sub-pixel precision you want. To translate by 0.5 pixels, scale up the original image by a factor of 2 and translate the result by 1 pixel; to translate by 0.25 pixels, scale up by a factor of 4, and so on.

Note that this approach is inefficient: when you scale up, you compute pixel values that are simply dropped when you subsample back to the original image size. The implementation in Paul's answer is more efficient.

Vitellus answered 29/11, 2018 at 16:45 Comment(0)
