I don't understand how the DPI parameters of the RenderTargetBitmap constructor are meant to be used.
Quite often I see code that deals with non-standard DPI settings by adjusting the image size based on the DPI like this:
new RenderTargetBitmap ((int) (width/96d*dpi), (int) (height/96d*dpi), dpi, dpi, PixelFormats.Pbgra32);
Examples:
http://blogs.msdn.com/b/jaimer/archive/2009/07/03/rendertargetbitmap-tips.aspx (via web.archive.org)
ContentControl + RenderTargetBitmap + empty image
http://social.msdn.microsoft.com/forums/en-US/wpf/thread/984da366-33d3-4fd3-b4bd-4782971785f8/
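To make that first variant concrete, here is a sketch of how I read it when written out in full. `element` and the way `dpi` is obtained are placeholders of mine, not from any of the linked posts; note that the cast has to wrap the whole expression, otherwise the division truncates before the DPI multiplication is applied:

```csharp
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static class DpiRendering
{
    // Sketch: render a laid-out FrameworkElement at the monitor DPI so the
    // bitmap gets one device pixel per screen pixel. 'dpi' is assumed to be
    // the system DPI, e.g. 120 on a 125% setup.
    public static RenderTargetBitmap RenderAtDpi(FrameworkElement element, double dpi)
    {
        double width = element.ActualWidth;   // device-independent units (1/96 inch)
        double height = element.ActualHeight;

        // Cast the whole expression: (int)(width / 96d) * dpi would truncate
        // before scaling and yield the wrong pixel size.
        int pixelWidth = (int)Math.Ceiling(width / 96d * dpi);
        int pixelHeight = (int)Math.Ceiling(height / 96d * dpi);

        var rtb = new RenderTargetBitmap(pixelWidth, pixelHeight, dpi, dpi,
                                         PixelFormats.Pbgra32);
        rtb.Render(element);
        return rtb;
    }
}
```

If I understand it correctly, because the DPI passed to the constructor matches the scale factor baked into the pixel dimensions, the resulting bitmap reports the same Width/Height in device-independent units as the original element.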
Other times the DPI seems to be hard-coded, either to 96, like so:
new RenderTargetBitmap (width, height, 96, 96, PixelFormats.Pbgra32);
Examples:
http://www.ericsink.com/wpf3d/3_Bitmap.html
http://dedjo.blogspot.com/2008/05/performance-appliance-of.html
How to render a checked CheckBox (Aero theme) to a RenderTargetBitmap?
or to different values:
new RenderTargetBitmap (width, height, 120, 96, PixelFormats.Pbgra32);
Examples:
http://msdn.microsoft.com/en-us/library/system.windows.media.imaging.rendertargetbitmap.aspx
When would you do the one and when the other?
Am I right in saying that you should always adjust the size of the bitmap (as in the first example) if you later want to display the image at the same size as the control inside your app?
And that you should probably use a fixed DPI of 96 when saving to a file, or, if you do use a different DPI, leave the width and height unadjusted?
Say I want to save the image of a control to a file at a fixed pixel size. Would I then still set the DPI, or just use the default of 96?
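For the save-to-file case, this is the pattern I have in mind (a sketch; `element`, `width`, `height`, and `path` are my own placeholders, with `width`/`height` being the pixel dimensions I want in the file):

```csharp
using System.IO;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static class ControlSnapshot
{
    // Sketch: render 'element' into a bitmap of exactly width x height pixels
    // and save it as a PNG. At 96 DPI one device-independent unit equals one
    // pixel, so no size adjustment should be needed.
    public static void SaveToPng(FrameworkElement element, int width, int height,
                                 string path)
    {
        var rtb = new RenderTargetBitmap(width, height, 96, 96,
                                         PixelFormats.Pbgra32);
        rtb.Render(element);

        var encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(rtb));
        using (var stream = File.Create(path))
            encoder.Save(stream);
    }
}
```

My assumption here is that at 96 DPI the PNG's pixel size equals the requested width/height exactly, and is this why the hard-coded-96 examples above don't scale the dimensions?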