Bilinear interpolation - DirectX vs. GDI+

I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code.

A simple test case that renders to a viewport matching the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff) - but as soon as the image is zoomed (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1/256 (occasionally 2/256). I suspect this is due to interpolation differences.

Question: For a DX9 ortho projection (and identity world space), with a camera perpendicular and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting?

For this (magnification) case, the DX9 code is using this (MinFilter and MipFilter are set similarly):
Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear);

and the GDI+ path is using: g.InterpolationMode = InterpolationMode.Bilinear;
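For reference, the GDI+ side amounts to something like the following sketch (placeholder names, and destRect assumed anchored at the origin - this is not the app's actual code):

    using System.Drawing;
    using System.Drawing.Drawing2D;

    static class GdiPlusPath
    {
        // Hypothetical sketch of the GDI+ render path: fill a larger
        // destination rectangle from the source bitmap with bilinear filtering.
        public static void RenderMagnified(Graphics g, Bitmap src, Rectangle destRect)
        {
            g.InterpolationMode = InterpolationMode.Bilinear;
            using (var brush = new TextureBrush(src, WrapMode.Clamp))
            {
                // Scale the brush so one copy of the texture spans destRect.
                brush.ScaleTransform((float)destRect.Width / src.Width,
                                     (float)destRect.Height / src.Height);
                g.FillRectangle(brush, destRect);
            }
        }
    }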

I thought that "bilinear interpolation" was a fairly specific filter definition, but then I noticed that GDI+ has another option, "HighQualityBilinear", which I've tried with no difference - which makes sense given its description: "added prefiltering for shrinking".

Followup Question: Is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not?

Clarification: The images I'm using are opaque grayscale (R=G=B, A=1) using Format32bppPArgb.

Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.) - and this question generally applies to comparing the output of "equivalent" bilinear interpolated output images across any two of these. Thanks!

Flavoring answered 11/3, 2011 at 16:8 Comment(17)
I don't get it. How can you expect pixel-perfect values when you use interpolation? The point of interpolation is to change pixel values. If your point is that DX and GDI+ don't interpolate the same way: no they don't. Different code.Benedetto
Different code certainly, but as the definition of bilinear interpolation is a simple algorithm (en.wikipedia.org/wiki/Bilinear_interpolation) to find the weighted contributions of adjacent pixels, why should they generate different results? The results should be equivalent (within floating point rounding error, at least) if they are using the same formula. I'm looking for any specific knowledge on how the formulas may differ between DX9 and GDI+.Flavoring
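For concreteness, that textbook formula for one grayscale channel looks like the sketch below - an illustrative reconstruction from the Wikipedia definition, not the actual code of either API (the array layout, edge clamping, and final rounding are assumptions):

    using System;

    static class Bilinear
    {
        // (x, y) are texel-space coordinates, assumed >= 0.
        public static byte Sample(byte[,] img, double x, double y)
        {
            int x0 = (int)Math.Floor(x), y0 = (int)Math.Floor(y);
            int x1 = Math.Min(x0 + 1, img.GetLength(1) - 1);
            int y1 = Math.Min(y0 + 1, img.GetLength(0) - 1);
            double fx = x - x0, fy = y - y0;   // fractional weights in [0, 1)

            // Weighted blend of the four neighboring texels.
            double v = img[y0, x0] * (1 - fx) * (1 - fy)
                     + img[y0, x1] * fx * (1 - fy)
                     + img[y1, x0] * (1 - fx) * fy
                     + img[y1, x1] * fx * fy;

            // The only real freedom left to an implementation: the precision
            // of fx/fy and how this value is rounded back to 8 bits.
            return (byte)Math.Round(v);
        }
    }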
Because this kind of code is optimized to be quick before being accurate. You'd have to call Microsoft Support if you want to find somebody with the specific knowledge. I only know that GDI+'s interpolation is a bit noisy, taking advantage of the human eye not being able to observe small differences in order to gain speed. That really mattered 10 years ago.Benedetto
Good point - posted at MSDN Forums as well. I'd hope that they would keep any error term well below 1/512 in GDI+, but perhaps not.Flavoring
I haven't gotten a very good answer from MSDN ( social.msdn.microsoft.com/Forums/en-US/winforms/thread/… ), so starting a bounty here.Flavoring
@holtavolt, @John Nicholas's answer brings out a good point. GDI color components (RGBA) are implemented in 8-bit bytes. DX implements color components as floats (0.0-1.0). When converting a float value back to 8-bit bytes for final display on the screen, there can easily be a 1-bit rounding difference. Using floats to interpolate is much more precise than interpolating with bytes, but it all depends on which way the rounding goes. E.g., standard banker's rounding rounds a value of exactly 0.5 to the nearest even result. Byte interpolation (GDI) always rounds in the same direction.Tanner
I'm specifying the RGB data (it's grayscale) in the same manner for both APIs: GDI+ = PixelFormat.Format32bppPArgb, DX9 = Format.X8R8G8B8 - unless you mean DX is converting my 8 bits/channel to float internally? Banker's rounding would indeed explain much if that's the case.Flavoring
@holtavolt, that's correct. You got it. DX internally represents EVERYTHING as floats. X8R8G8B8 is just the display color depth. It supports displays with different color depths. It just converts a color (with float components) into an int color of the required number of bits. I know this definitely since I used to write D3D programs before.Tanner
Makes sense to me - thanks for the info. I'll try your idea re. reference driver - if this is the case (i.e. where GDI+ and DX diverge), I should see a difference there as well.Flavoring
@holtavolt, eagerly awaiting your results... Just totally curious! :-) Just a follow-on note to bankers rounding -- it is round to nearest even integer and is the default IEEE rounding method.Tanner
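To see how much the rounding convention alone matters, here's a contrived demo with illustrative numbers - not GDI+'s or any driver's actual arithmetic. Interpolating halfway between two adjacent gray levels lands exactly on x.5, so the convention chosen decides the final byte:

    using System;

    // Contrived demo: the same midpoint blend under three rounding conventions.
    // It only shows that the final float-to-byte step alone can account for a
    // 1/256 difference.
    class RoundingDemo
    {
        static void Main()
        {
            byte a = 10, b = 11;
            double v = a * 0.5 + b * 0.5;   // exactly 10.5

            Console.WriteLine(Math.Round(v, MidpointRounding.ToEven));        // 10 (IEEE default, "banker's")
            Console.WriteLine(Math.Round(v, MidpointRounding.AwayFromZero));  // 11
            Console.WriteLine((byte)v);                                       // 10 (plain truncation)
        }
    }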
@holtavolt, did you get around to doing the reference driver?Tanner
Not yet - had some other priorities come up, but will report my findings. I also found out about the PIX tool, which I'll be using to see if I can find more detail about internal DX stages: msdn.microsoft.com/en-us/library/ee417062(v=VS.85).aspxFlavoring
@Stephen - as you predicted, reference driver vs. HW path are not pixel-perfect as well.Flavoring
@holtavolt, you'll probably need to set the HW path exactly the same as the reference renderer. It is surprisingly difficult to do. There are certain things like FSAA or multisampling that a GPU may do by default (controlled via a panel app) that the reference renderer won't do. Also, some GPU control panels override application settings on interpolation modes. So if one side has FSAA and the other side doesn't, you won't have pixel-perfection either, although in that case you'll find much larger differences than one bit.Tanner
@Stephen - the difference is a single value - here's the histogram of the difference via ImageMagick: Compositing differences of testOrigin512-MDX.png with testOrigin512-MDXREF.png: 245803: (0, 0, 0, 0) #000000000000 black; 16341: (257, 257, 257, 0) #010101010101 rgb(1,1,1). (This is just a grayscale bitmap.) Based on this result, the input here, and MS's own docs re. validating HW drivers vs. the reference driver using PIX, I'm now convinced that implementation differences and rounding behavior account for this.Flavoring
@holtavolt, interesting. You're comparing DX (with GPU) vs. DX with reference renderer? Strange, since #010101010101 should be rgb(5,5,5), not rgb(1,1,1)...Tanner
@Stephen - correct on the compare. I believe the #0101/257 ==> rgb(1,...) is some ImageMagick representation issue, since this is only a 24bpp png.Flavoring

DirectX runs mostly on the GPU, and DX9 may be running shaders. GDI+ runs on completely different algorithms. I don't think it is reasonable to expect the two to come up with exactly matching pixel output.

I'd expect DX9 to have better quality than GDI+, which is only a modest step up from the old GDI. GDI+ has long been understood to have trouble with anti-aliasing lines and with preserving quality in image scaling (which seems to be your problem). To get something similar in quality to latest-generation GPU texture processing, you'll need to move to WPF graphics, which gives quality similar to DX.

WPF also uses the GPU (if available) and falls back to software rendering (if no GPU), so the output of the GPU and software paths is reasonably close.

EDIT: Although this has been picked as the answer, it is only an early attempt to explain and doesn't really address the true reason. The reader is referred to discussions laid out in the comments to the question and to the answers instead.

Tanner answered 11/3, 2011 at 16:14 Comment(8)
Same reply as I sent to Hans applies here. Can you point to any specific issues with GDI+ image scaling that would explain this (e.g., a documented defect in their Bilinear mode)? I can't find anything. The DX9 code is not using shaders, and I'm pretty sure I'll see the same results using the reference (software) path, but I'll try that.Flavoring
Bounty is still open - anyone know specifically why DX9 generates a different bilinear interp result than GDI+? Pointer to source or definitive reference gets the bounty.Flavoring
@holtavolt, have you tried it with the software reference renderer in DX9? I'd suppose that if you're using GPU hardware filtering, it may depend on optimizations made by the GPU hardware -- e.g. to improve performance in places where nobody is likely to notice.Tanner
Just out of curiosity, can you post the images that show the differences?Tanner
DirectX, at least in 9, isn't using shaders to handle much. It does run on the GPU instead of the CPU, and the GPU is free to handle filtering in the most optimal way for its architecture.Wageworker
@Stephen - I'll try to get some sample images together I can post and use the reference renderer, hadn't thought of that.Flavoring
GDI vertices are being specified with float precision. All the info I can find re. D3D indicates it also uses an almost-IEEE-754 float type internally (this is for D3D10, not 9 - so maybe that's a change): msdn.microsoft.com/en-us/library/cc308050(v=vs.85).aspxFlavoring
@holtavolt, I am not talking about vertices. I am talking about color representation. AFAIK, GDI uses a 24-bit or 32-bit representation of color, while DX uses 4 floats. Interpolating the colors with floats vs. ints will give you the 1-bit rounding diff.Tanner

Why do you make the assumption that they use the same formula?

Even if they do use the same formula, and you accept that the implementations are different, would you expect the output to be the same?

At the end of the day, the code is designed to work with perception, not to be mathematically precise - although you can get that precision with CUDA if you want.

Rather than being surprised that you get different results, I would be very surprised if you got pixel-perfect matches.

The way they represent colour is different... I know for a fact NVIDIA uses a float (maybe a double) to represent colour, whereas GDI uses an int, I believe.

http://en.wikipedia.org/wiki/GPGPU

In DX9, shader model 2.0 appears, which is when the implementation of colour switched from ints to 24- and 32-bit floats.

Try comparing ATI/AMD rendering to NVIDIA rendering and you can clearly see that colour is very different. I first noticed this in Quake 2... the difference between the two cards was staggering - of course that is due to a great many things, not least of which is their bilinear interpolation implementation.

EDIT: The info about how the colour is specified appeared after I answered. Anyway, I think the datatypes used to store it will be different no matter how you specify it. Moreover, the implementation of float is likely to be different. I may be wrong, but I'm pretty sure that C# implements float differently from the C compiler that NVIDIA uses. (And that assumes GDI+ doesn't just convert the float into the equivalent int...)

Even if I am wrong about that, I would generally hold it to be exceptional for two different implementations of an algorithm to be identical. They are optimised for speed; as a result, the differences in optimisation will translate directly into differences in image quality, since that speed comes from different approaches to cutting corners/approximation.
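As a concrete illustration of the int-vs-float point (contrived numbers; neither snippet is either API's real code, and the 1/256 weight granularity is an assumption, not a documented GDI+ or DX9 constant), the same blend done in double precision and in 8-bit fixed point can land on different sides of a rounding boundary:

    using System;

    // Contrived demo of the datatype difference discussed above: one lerp in
    // double precision, the same lerp in 8-bit fixed point.
    class FixedVsFloatLerp
    {
        static void Main()
        {
            byte a = 0, b = 200;
            double t = 1.0 / 3.0;

            // Full float precision, rounded at the end: round(66.67) = 67.
            byte viaFloat = (byte)Math.Round(a + (b - a) * t);

            // 8-bit fixed point: the weight snaps to 85/256, and integer math
            // truncates: (200 * 85) >> 8 = 66.
            int w = (int)(t * 256);
            byte viaFixed = (byte)((a * (256 - w) + b * w) >> 8);

            Console.WriteLine("float: {0}, fixed: {1}", viaFloat, viaFixed);  // 67 vs 66
        }
    }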

Chaparro answered 29/3, 2011 at 15:52 Comment(11)
+1 for bringing up the float/byte color-representation issue. The one-bit difference may be simply rounding differences.Tanner
I added a clarifying point - these are grayscale images (R=G=B)Flavoring
@John - the color elements are being specified identically (GDI+ = PixelFormat.Format32bppPArgb, DX9 = Format.X8R8G8B8)Flavoring
Well, I'm glad I wasted my time trying to find you concrete reasons, only to not get awarded the answer.Chaparro
@holtavolt, it is true. @John Nicholas was the one who came up with the color representation issue -- which at this point of time seems the most likely reason. I believe you can still change the bounty award to him, which I think he deserves.Tanner
Hmm - I see that's so on review (although you clarified greatly). John - I'll see if SO moderators can reassign, as I cannot.Flavoring
Reassignment isn't possible. I've started another bounty that I'll give to you, John, once the 24 hour bounty assignment timer is done.Flavoring
Actually, it is I who should be giving back the bounty. Gosh this is getting complicated :-) Not sure why I didn't think of this before. @holtavolt, when you've assigned your bounty to @John Nicholas, I'll start a bounty and assign it back to you. Not sure whether I can award a bounty on a question raised by you back to you though...Tanner
@holtavolt, can you ask the SO moderators whether it is possible for me to start a bounty on a question asked by you and then award the bounty back to you -- the owner of the question?Tanner
@stephen - your help was bounty-worthy as well, so please keep it - thanks!Flavoring
@holtavolt, well if you insist... :-DTanner

There are two possibilities for round-off differences. The first is obvious, when the RGB value is calculated as a fraction of the values on either side. The second is more subtle, when calculating the ratio to use when determining the fraction between the two points.

I'd actually be very surprised if two different implementations of the algorithm were pixel-for-pixel identical; there are so many places for a +/-1 difference to occur. Without the exact implementation details of both methods, it's impossible to be more precise than that.
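To make both sources concrete, here's a contrived sketch (the 1/256 sub-texel snap is an assumed, hardware-style precision, not a documented GDI+ or DX9 value) where trimming the ratio's precision alone moves the final byte:

    using System;

    // Contrived demo of the two round-off sources named above: (2) the
    // precision of the interpolation ratio, then (1) rounding the blended
    // value back to 8 bits.
    class TwoRoundoffSources
    {
        static void Main()
        {
            byte a = 20, b = 220;

            double exactT   = 0.3;                               // full-precision ratio
            double snappedT = Math.Floor(exactT * 256) / 256.0;  // 76/256 ~ 0.29688

            byte v1 = (byte)Math.Round(a + (b - a) * exactT);    // round(80.0)   = 80
            byte v2 = (byte)Math.Round(a + (b - a) * snappedT);  // round(79.375) = 79

            Console.WriteLine("{0} vs {1}", v1, v2);             // 80 vs 79
        }
    }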

Avowal answered 1/4, 2011 at 4:31 Comment(0)
