Depending on how accurate you want to be, this can get difficult.
There are many approaches. It sounds like you're trying to draw to a printer-sized bitmap and then shrink it down. The steps to do that are (a code sketch follows the list):
- Create a DC (or better yet, an IC--Information Context) for the printer.
- Query the printer DC to find out the resolution, page size, physical offsets, etc.
- Create a DC for the window/screen.
- Create a compatible DC (the memory DC).
- Create a compatible bitmap for the window/screen, but the size should be the pixel size of the printer page. (The problem with this approach is that this is a HUGE bitmap and it can fail.)
- Select the compatible bitmap into the memory DC.
- Draw to the memory DC, using the same coordinates you would use if drawing to the actual printer. (When you select fonts, make sure you scale them to the printer's logical inch, not the screen's logical inch.)
- StretchBlt the memory DC to the window, which will scale down the entire image. You might want to experiment with the stretch mode to see what works best for the kind of image you're going to display.
- Release all the resources.
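Here's a minimal sketch of those steps, assuming a valid window handle and printer name; error handling is mostly elided, and the sample text and font are placeholders:

```cpp
#include <windows.h>

void DrawPagePreview(HWND hwnd, const wchar_t* printerName)
{
    // An IC is enough to query capabilities without the cost of a full DC.
    HDC printerIC = CreateICW(L"WINSPOOL", printerName, nullptr, nullptr);
    if (!printerIC) return;
    const int pageW = GetDeviceCaps(printerIC, HORZRES);      // page size in printer pixels
    const int pageH = GetDeviceCaps(printerIC, VERTRES);
    const int printerDpiY = GetDeviceCaps(printerIC, LOGPIXELSY);
    DeleteDC(printerIC);

    HDC screenDC = GetDC(hwnd);
    HDC memDC = CreateCompatibleDC(screenDC);

    // Compatible with the *screen*, but sized to the *printer* page.
    // This is the HUGE allocation that can fail.
    HBITMAP bmp = CreateCompatibleBitmap(screenDC, pageW, pageH);
    if (bmp) {
        HGDIOBJ oldBmp = SelectObject(memDC, bmp);
        RECT page = { 0, 0, pageW, pageH };
        FillRect(memDC, &page, (HBRUSH)GetStockObject(WHITE_BRUSH));

        // Draw using printer coordinates. Note the font height is scaled
        // to the printer's logical inch, not the screen's.
        LOGFONTW lf = {};
        lf.lfHeight = -MulDiv(12, printerDpiY, 72);   // 12-point text
        lstrcpynW(lf.lfFaceName, L"Times New Roman", LF_FACESIZE);
        HFONT font = CreateFontIndirectW(&lf);
        HGDIOBJ oldFont = SelectObject(memDC, font);
        const wchar_t* sample = L"Sample line of text";
        TextOutW(memDC, pageW / 10, pageH / 10, sample, lstrlenW(sample));
        SelectObject(memDC, oldFont);
        DeleteObject(font);

        // Shrink the whole page down to the client area.
        RECT client;
        GetClientRect(hwnd, &client);
        SetStretchBltMode(screenDC, HALFTONE);      // the mode worth experimenting with
        SetBrushOrgEx(screenDC, 0, 0, nullptr);     // required after setting HALFTONE
        StretchBlt(screenDC, 0, 0, client.right, client.bottom,
                   memDC, 0, 0, pageW, pageH, SRCCOPY);

        SelectObject(memDC, oldBmp);
        DeleteObject(bmp);
    }
    DeleteDC(memDC);
    ReleaseDC(hwnd, screenDC);
}
```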
But before you head in that direction, consider the alternatives. This approach involves allocating a HUGE off-screen bitmap. This can fail on resource-poor computers. Even if it doesn't, you might be starving other apps.
The metafile approach given in another answer is a good choice for many applications. I'd start with this.
Another approach is to figure out all the sizes in some fictional high-resolution unit. For example, assume everything is in 1000ths of an inch. Then your drawing routines would scale this imaginary unit to the actual dpi used by the target device.
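A sketch of that conversion, with 1000ths of an inch as the imaginary unit:

```cpp
#include <windows.h>

inline int ToDevicePixels(int milliInches, int deviceDpi)
{
    // MulDiv rounds and avoids overflow in the intermediate product.
    return MulDiv(milliInches, deviceDpi, 1000);
}

// Usage: a 1-inch box is 1000 units no matter where it's drawn.
//   int dpi = GetDeviceCaps(hdc, LOGPIXELSX);
//   int widthPx = ToDevicePixels(1000, dpi);  // 96 on screen, 600 on a 600-dpi printer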
The problem with this last approach (and possibly the metafile one) is that GDI fonts don't scale perfectly linearly. The widths of individual characters are tweaked depending on the target resolution. On a high-resolution device (like a 300+ dpi laser printer), this tweaking is minimal. But on a 96-dpi screen, the tweaks can add up to a significant error over the length of a line. So text in your preview window might appear out-of-proportion (typically wider) than it does on the printed page.
Thus the hardcore approach is to measure text in the printer context, and measure again in the screen context, and adjust for the discrepancy. For example (using made-up numbers), you might measure the width of some text in the printer context, and it comes out to 900 printer pixels. Suppose the ratio of printer pixels to screen pixels is 3:1. You'd expect the same text on the screen to be 300 screen pixels wide. But you measure in the screen context and you get a value like 325 screen pixels. When you draw to the screen, you'll have to somehow make the text 25 pixels narrower. You can cram the characters closer together, or choose a slightly smaller font and then stretch them out.
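A sketch of that adjustment, using SetTextCharacterExtra to absorb the discrepancy (the helper name and ratio parameter are mine, and the fonts are assumed to be already selected into both DCs):

```cpp
#include <windows.h>

void DrawTextAdjusted(HDC screenDC, HDC printerDC, int x, int y,
                      const wchar_t* text, double printerToScreen /* e.g. 1.0 / 3.0 */)
{
    int len = lstrlenW(text);
    SIZE printerExt, screenExt;
    GetTextExtentPoint32W(printerDC, text, len, &printerExt);   // e.g. 900
    GetTextExtentPoint32W(screenDC, text, len, &screenExt);     // e.g. 325

    // Width the screen text *should* have, per the printer metrics.
    int expected = (int)(printerExt.cx * printerToScreen + 0.5); // e.g. 300
    int error = screenExt.cx - expected;                         // e.g. +25

    // Spread the error across the inter-character gaps (approximate:
    // GDI adds the extra after every character cell).
    int extra = (len > 1) ? -error / (len - 1) : 0;
    int oldExtra = SetTextCharacterExtra(screenDC, extra);
    TextOutW(screenDC, x, y, text, len);
    SetTextCharacterExtra(screenDC, oldExtra);
}
```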
The hardcore approach involves more complexity. You might, for example, try to detect font substitutions made by the printer driver and match them as closely as you can with the available screen fonts.
I've had good luck with a hybrid of the big-bitmap and the hardcore approaches. Instead of making a giant bitmap for the whole page, I make one just large enough for a line of text. Then I draw at printer size to the offscreen bitmap and StretchBlt it down to screen size. This eliminates dealing with the size discrepancy at the cost of a slight degradation in font quality. It's suitable for actual print preview, but you wouldn't want to build a WYSIWYG editor like that. The one-line bitmap is small enough to make this practical.
The good news is only text is hard. All other drawing is a simple scaling of coordinates and sizes.
I've not used GDI+ much, but I think it did away with non-linear font scaling. So if you're using GDI+, you should just have to scale your coordinates. The drawback is that I don't think the font quality on GDI+ is as good.
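If you do go the GDI+ route, a sketch of the idea, assuming GdiplusStartup has already run and that a single uniform scale maps printer coordinates to the screen:

```cpp
#include <windows.h>
#include <gdiplus.h>

void DrawPreviewGdiPlus(HDC screenDC, float printerToScreen /* e.g. 96.0f / 600.0f */)
{
    Gdiplus::Graphics g(screenDC);
    // The transform applies to all subsequent drawing, including text,
    // so the same drawing code can serve both screen and printer.
    g.ScaleTransform(printerToScreen, printerToScreen);
    // ... draw in printer coordinates ...
}
```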
And finally, if you're a native app on Vista or later, make sure you've marked your process as "DPI-aware". Otherwise, if the user is on a high-DPI screen, Windows will lie to you and claim that the resolution is only 96 dpi and then do a fuzzy up-scaling of whatever you draw. This degrades the visual quality and can make debugging your print preview even more complicated. Since so many programs don't adapt well to higher DPI screens, Microsoft added "high DPI scaling" by default starting in Vista.
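The application manifest is the preferred way to declare this, but a sketch of the programmatic opt-in (Vista and later), which must run before any windows are created:

```cpp
#include <windows.h>

int WINAPI wWinMain(HINSTANCE, HINSTANCE, PWSTR, int)
{
    SetProcessDPIAware();   // report the true DPI instead of a virtualized 96
    // ... register the window class, create windows, run the message loop ...
    return 0;
}
```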
Edited to Add
Another caveat: If you select an HFONT into the memory DC with the printer-sized bitmap, it's possible that you get a different font than what you would get when selecting that same HFONT into the actual printer DC. That's because some printer drivers will substitute common fonts with built-in ones. For example, some PostScript printers will substitute an internal PostScript font for certain common TrueType fonts.
You can first select the HFONT into the printer IC, then use GDI functions like GetTextFace, GetTextMetrics, and maybe GetOutlineTextMetrics to find out about the actual font selected. Then you can create a new LOGFONT to try to more closely match what the printer would use, turn that into an HFONT, and select that into your memory DC. This is the mark of a really good implementation.
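A sketch of that matching step (the helper name is mine; a thorough version would also consult GetOutlineTextMetrics):

```cpp
#include <windows.h>

HFONT MatchPrinterFont(HDC printerIC, HFONT hFont)
{
    // See what the driver actually selects for this font.
    HGDIOBJ old = SelectObject(printerIC, hFont);
    wchar_t face[LF_FACESIZE];
    GetTextFaceW(printerIC, LF_FACESIZE, face);
    TEXTMETRICW tm;
    GetTextMetricsW(printerIC, &tm);
    SelectObject(printerIC, old);

    // Start from the original LOGFONT so height, weight, etc. carry over,
    // but name the face the printer reported.
    LOGFONTW lf;
    GetObjectW(hFont, sizeof(lf), &lf);
    lstrcpynW(lf.lfFaceName, face, LF_FACESIZE);
    lf.lfPitchAndFamily = tm.tmPitchAndFamily;
    return CreateFontIndirectW(&lf);   // caller must DeleteObject this
}
```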
Another Edit
I've recently written new code that uses enhanced meta files, and that works really well, at least for TrueType and OpenType fonts when there's no font substitution. This eliminates all the work I described above trying to create a screen font that is a scaled match for the printer font. You can just run through your normal printing code and print to an enhanced meta file DC as though it's the printer DC.
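A sketch of that flow, where PrintPage stands in for your existing page-drawing routine:

```cpp
#include <windows.h>

void PreviewViaEmf(HWND hwnd, HDC printerDC)
{
    // The frame rectangle is in 0.01-mm units covering the full page.
    RECT frame = { 0, 0,
                   GetDeviceCaps(printerDC, HORZSIZE) * 100,   // mm -> 0.01 mm
                   GetDeviceCaps(printerDC, VERTSIZE) * 100 };
    HDC emfDC = CreateEnhMetaFileW(printerDC, nullptr, &frame, nullptr);

    // Run the normal printing code against emfDC as if it were the printer.
    // PrintPage(emfDC);   // hypothetical: your existing per-page routine

    HENHMETAFILE emf = CloseEnhMetaFile(emfDC);

    // Play back into the client area; a real implementation would compute
    // a destination rect that preserves the page's aspect ratio.
    HDC screenDC = GetDC(hwnd);
    RECT client;
    GetClientRect(hwnd, &client);
    PlayEnhMetaFile(screenDC, emf, &client);
    ReleaseDC(hwnd, screenDC);
    DeleteEnhMetaFile(emf);
}
```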