There are many questions on OpenGL font rendering, many of which are answered with texture atlases (fast, but wrong) or string-textures (fixed-text only).
However, those approaches are poor and appear to be years out of date (what about using shaders to do this better/faster?). For OpenGL 4.1 there's this excellent question looking at "what should you use today?":
What is state-of-the-art for text rendering in OpenGL as of version 4.1?
So, what should we be using on iOS GL ES 2 today?
I'm disappointed that there appears to be no open-source (or even commercial) solution. I know a lot of teams suck it up and spend weeks of dev time re-inventing this wheel, gradually learning how to kern and space, etc. (ugh) - but there must be a better way than re-writing the whole of "fonts" from scratch?
As far as I can see, there are two parts to this:
- How do we render text using a font?
- How do we display the output?
For 1 (how to render), Apple provides MANY ways to get the "correct" rendered output - but the "easy" ones don't support OpenGL (maybe some of the others do - e.g. is there a simple way to map CoreText output to OpenGL?).
For 2 (how to display), we have shaders, we have VBOs, we have glyph-textures, we have lookup-textures, and other techniques (e.g. the OpenGL 4.1 stuff linked above?) - a minimal ES 2 shader sketch for this half is below.
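For the display half, the baseline on ES 2 seems to be one textured quad per string (or per glyph) and a trivial shader pair, with the text living in the texture's alpha channel. A minimal sketch, assuming nothing beyond stock GLSL ES; the attribute/uniform names are my own placeholders, not any particular library's:

```c
/* Vertex shader: pass through a 2D position and texture coordinate. */
static const char *kTextVertexShader =
    "attribute vec2 a_position;                              \n"
    "attribute vec2 a_texCoord;                              \n"
    "uniform   mat4 u_mvp;                                   \n"
    "varying   vec2 v_texCoord;                              \n"
    "void main() {                                           \n"
    "    v_texCoord  = a_texCoord;                           \n"
    "    gl_Position = u_mvp * vec4(a_position, 0.0, 1.0);   \n"
    "}                                                       \n";

/* Fragment shader: sample the glyph texture's alpha and tint it. */
static const char *kTextFragmentShader =
    "precision mediump float;                                \n"
    "uniform sampler2D u_glyphTexture;                       \n"
    "uniform vec4      u_textColour;                         \n"
    "varying vec2      v_texCoord;                           \n"
    "void main() {                                           \n"
    "    float a = texture2D(u_glyphTexture, v_texCoord).a;  \n"
    "    gl_FragColor = vec4(u_textColour.rgb, u_textColour.a * a); \n"
    "}                                                       \n";
```

Drawn with blending enabled (glEnable(GL_BLEND) plus glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)), the same shader works for either approach below: one big quad for a pre-rendered string, or many small quads indexing into an atlas.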
Here are the two common OpenGL approaches I know of:
- Texture atlas (render all glyphs once, then render 1 x textured quad per character, from the shared texture)
- This is wrong, unless you're using a 1980s-era "bitmap font" (and even then, a texture atlas requires more work than it may seem, if you need it to be correct for non-trivial fonts)
- (fonts aren't "a collection of glyphs"; there's a vast amount of positioning, layout, wrapping, spacing, kerning, styling, colouring, weighting, etc. Texture atlases fail at this)
- Fixed string (use any Apple class to render correctly, then screenshot the backing image-data, and upload as a texture)
- In human terms, this is fast. In frame-rendering terms, it is very, very slow. If you do this with a lot of changing text, your frame rate goes through the floor
- Technically, it's mostly correct (not entirely: you lose some information this way), but hugely inefficient - a rough sketch of this render-and-upload path follows this list
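Here's roughly what I mean by the "fixed string" path, as a sketch: let Core Text do the correct layout into an offscreen CGBitmapContext, then hand the backing pixels to GL. The function name UploadStringTexture and the hard-coded Helvetica are illustrative assumptions, not an Apple API:

```c
#include <CoreGraphics/CoreGraphics.h>
#include <CoreText/CoreText.h>
#include <OpenGLES/ES2/gl.h>

/* Render a single line of text into an offscreen bitmap and upload it as a
   GL texture. Deliberately simplistic: fixed font, crude baseline, no
   wrapping, no error checking. */
GLuint UploadStringTexture(CFStringRef string, CGFloat fontSize,
                           size_t width, size_t height)
{
    /* 1. Offscreen RGBA bitmap context for Core Graphics to draw into. */
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             rgb, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(rgb);

    /* 2. Let Core Text lay out and draw the string (kerning etc. included). */
    CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica"), fontSize, NULL);
    CFStringRef keys[]   = { kCTFontAttributeName };
    CFTypeRef   values[] = { font };
    CFDictionaryRef attrs = CFDictionaryCreate(NULL,
        (const void **)keys, (const void **)values, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFAttributedStringRef attrString = CFAttributedStringCreate(NULL, string, attrs);
    CTLineRef line = CTLineCreateWithAttributedString(attrString);
    CGContextSetTextPosition(ctx, 0.0, fontSize);   /* crude baseline placement */
    CTLineDraw(line, ctx);

    /* 3. Upload the bitmap's backing store as a texture (NPOT-safe settings). */
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(ctx));

    CFRelease(line); CFRelease(attrString); CFRelease(attrs); CFRelease(font);
    CGContextRelease(ctx);
    return tex;
}
```

The rendering is as correct as Core Text makes it, but redoing this (bitmap + upload) every time the text changes is exactly where the frame rate dies.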
I've also come across the following, and heard both good and bad things about them:
- Imagination/PowerVR "Print3D" (link broken) (from the guys that manufacture the GPU! But their site has moved/removed the text rendering page)
- FreeType (requires pre-processing, interpretation, lots of code, extra libraries?)
- ...and/or FTGL http://sourceforge.net/projects/ftgl/ (rumors: slow? buggy? not updated in a long time?)
- Font-Stash http://digestingduck.blogspot.co.uk/2009/08/font-stash.html (high quality, but very slow?)
Within Apple's own OS / standard libraries, I know of several sources of text rendering. NB: I have used most of these in detail on 2D rendering projects; my statements about them producing different rendering output are based on direct experience
- CoreGraphics with NSString
- Simplest of all: render "into a CGRect"
- Seems to be a slightly faster version of the "fixed string" approach people recommend (even though you'd expect it to be much the same)
- UILabel and UITextArea with plain text
- NB: they are NOT the same! Slight differences in how they render the same text
- NSAttributedString, rendered to one of the above
- Again: renders differently (the differences I know of are fairly subtle and classified as "bugs"; there are various SO questions about this)
- CATextLayer
- A hybrid between iOS fonts and old C rendering. Uses the "not fully" toll-free-bridged CTFont / UIFont, which reveals some more rendering differences / strangeness
- CoreText
- ... the ultimate solution? But a beast of its own... (a small framesetter sketch follows this list)
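For what it's worth, the part of Core Text I'd expect to need for "real" text is the framesetter, which handles wrapping, kerning, ligatures and bidi inside an arbitrary rectangle. A minimal sketch (the function name DrawWrappedText is mine; the CGContext would be an offscreen bitmap if the pixels are headed for a GL texture, as in the earlier sketch):

```c
#include <CoreGraphics/CoreGraphics.h>
#include <CoreText/CoreText.h>

/* Lay out an attributed string inside 'bounds' and draw it into 'ctx',
   letting Core Text handle line breaking, kerning, ligatures, bidi, etc. */
void DrawWrappedText(CGContextRef ctx, CFAttributedStringRef text, CGRect bounds)
{
    CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(text);

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddRect(path, NULL, bounds);

    /* CFRangeMake(0, 0) means "as much of the string as fits in the path". */
    CTFrameRef frame = CTFramesetterCreateFrame(framesetter,
                                                CFRangeMake(0, 0), path, NULL);
    CTFrameDraw(frame, ctx);

    CFRelease(frame);
    CGPathRelease(path);
    CFRelease(framesetter);
}
```

That only covers the "render" half; getting the result on screen is still one of the display approaches above (upload the whole bitmap as a texture, or walk the CTLine/CTRun glyphs into an atlas of your own).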