This question is about how browsers render an entire page as tiled images (it is not about rendering images within pages). I'm most interested in the memory costs.
It is my understanding that a browser such as Chrome will lay out the entire page but render sections of it as needed, in small square tiles. When the user scrolls the page, only tiles that do not yet exist are rendered. Tile generation typically happens on a background thread, but this question is not concerned with threading.
So the question is, what is the total memory usage of this approach?
Let's assume the screen is 1024x768 and a tile is 64x64 pixels, so the screen is 16x12 tiles. Further, I'm assuming 32 bits per pixel for each tile, that Direct2D is the rendering platform, and that a Direct2D SwapChainPanel is used for performance.
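To make the assumed sizes concrete, here is the arithmetic those numbers imply (plain C++; the constant names are just mine):

```cpp
// Sizes implied by the assumptions: 1024x768 screen, 64x64 tiles, 32 bpp (4 bytes/pixel).
constexpr int kScreenW = 1024, kScreenH = 768;
constexpr int kTileSide = 64;
constexpr int kBytesPerPixel = 4;

constexpr int kTilesX = kScreenW / kTileSide;            // 16
constexpr int kTilesY = kScreenH / kTileSide;            // 12
constexpr int kTilesPerScreen = kTilesX * kTilesY;       // 192 tiles per screenful

constexpr int kBytesPerTile   = kTileSide * kTileSide * kBytesPerPixel;  // 16,384 B  (~16 KB)
constexpr int kBytesPerScreen = kScreenW * kScreenH * kBytesPerPixel;    // 3,145,728 B (~3 MB)
```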
During a given render cycle, only a fraction of the 16x12 tiles is likely to need rendering, but usually more than one. Therefore (see the sketch after this list):
1. It seems to me that a scratch bitmap of 1024x768 is the most convenient target to render the currently invalid tiles into.
2. The now-valid portions are then copied onto individual 64x64 tile bitmaps, for use in the next step and in future render cycles.
3. The final bitmap to be rendered is composed by blitting the appropriate tiles, some produced in earlier render cycles and some in this one. This final bitmap is also 1024x768.
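Concretely, here is a minimal sketch of those three steps as I picture them in Direct2D. It assumes a device context `dc`; the function name `RenderCycle`, the `TileCoord` struct, and the `scratch`/`finalTarget`/`tiles` parameters are purely illustrative names of my own, not any browser's actual code:

```cpp
#include <d2d1_1.h>
#include <d2d1helper.h>
#include <vector>

// One dirty tile, identified by its position in the 16x12 grid.
struct TileCoord { UINT col; UINT row; };

static const UINT kTileSide = 64;

// One render cycle: (1) render invalid tiles into a full-screen scratch bitmap,
// (2) copy each 64x64 region into its persistent tile bitmap,
// (3) compose the frame by blitting every tile into the final target.
HRESULT RenderCycle(ID2D1DeviceContext* dc,
                    ID2D1Bitmap1* scratch,              // 1024x768, D2D1_BITMAP_OPTIONS_TARGET
                    ID2D1Bitmap1* finalTarget,          // the second 1024x768 bitmap from step 3
                    ID2D1Bitmap1* tiles[12][16],        // persistent 64x64 tile bitmaps
                    const std::vector<TileCoord>& dirty)
{
    // Step 1: draw the page content covering the dirty tiles into the scratch bitmap.
    dc->SetTarget(scratch);
    dc->BeginDraw();
    // ... issue drawing commands for the invalidated regions here ...
    HRESULT hr = dc->EndDraw();
    if (FAILED(hr)) return hr;

    // Step 2: copy each freshly rendered 64x64 region into its tile bitmap.
    for (const TileCoord& t : dirty)
    {
        D2D1_POINT_2U dst = D2D1::Point2U(0, 0);
        D2D1_RECT_U src = D2D1::RectU(t.col * kTileSide, t.row * kTileSide,
                                      (t.col + 1) * kTileSide, (t.row + 1) * kTileSide);
        hr = tiles[t.row][t.col]->CopyFromBitmap(&dst, scratch, &src);
        if (FAILED(hr)) return hr;
    }

    // Step 3: blit all 16x12 tiles (old and newly refreshed) into the final bitmap.
    dc->SetTarget(finalTarget);
    dc->BeginDraw();
    for (UINT row = 0; row < 12; ++row)
        for (UINT col = 0; col < 16; ++col)
        {
            D2D1_RECT_F dest = D2D1::RectF(
                static_cast<FLOAT>(col * kTileSide),
                static_cast<FLOAT>(row * kTileSide),
                static_cast<FLOAT>((col + 1) * kTileSide),
                static_cast<FLOAT>((row + 1) * kTileSide));
            dc->DrawBitmap(tiles[row][col], &dest, 1.0f,
                           D2D1_INTERPOLATION_MODE_NEAREST_NEIGHBOR, nullptr, nullptr);
        }
    return dc->EndDraw();
}
```

(The nearest-neighbour interpolation mode is only there to keep the 1:1 tile blits pixel-exact; whether `finalTarget` needs to exist at all is the second question below.)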
Thus, it appears that two 32bpp bitmaps of the full screen size (1024x768) are needed in addition to the tiles.
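Adding that up under the stated assumptions (and ignoring whatever the driver or GPU may allocate on top, which is part of what I'm asking):

```cpp
// Main-memory footprint of one screenful: two full-screen 32bpp bitmaps
// (scratch + final) plus all 16x12 cached tiles.
constexpr size_t kScreenBitmap = 1024 * 768 * 4;                  // ~3 MB each
constexpr size_t kTileCache    = (16 * 12) * (64 * 64 * 4);       // 192 tiles, ~3 MB total
constexpr size_t kTotal        = 2 * kScreenBitmap + kTileCache;  // ~9 MB
```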
Questions:
- Do browsers in fact use 32 bits per pixel, or something lower?
- Is step 3 above needed, or is there a way to render the tiles directly without composing a final bitmap?
- Are there any additional main-memory allocations that I may have missed (e.g. allocations made on my behalf by the GPU driver)?
The number of intermediate copies is a subtlety that requires careful thought, so I'd really appreciate precise answers. No speculation, please.