Text/font rendering in OpenGLES 2 (iOS - CoreText?) - options and best practice?

There are many questions on OpenGL font rendering; many of them are answered with texture atlases (fast, but wrong) or string-textures (fixed-text only).

However, those approaches are poor and appear to be years out of date (what about using shaders to do this better/faster?). For OpenGL 4.1 there's this excellent question looking at "what should you use today?":

What is state-of-the-art for text rendering in OpenGL as of version 4.1?

So, what should we be using on iOS GL ES 2 today?

I'm disappointed that there appears to be no open-source (or even commercial) solution. I know a lot of teams suck it up and spend weeks of dev time re-inventing this wheel, gradually learning how to kern and space etc. (ugh) - but there must be a better way than re-implementing the whole of "fonts" from scratch?


As far as I can see, there are two parts to this:

  1. How do we render text using a font?
  2. How do we display the output?

For 1 (how to render), Apple provides MANY ways to get the "correct" rendered output - but the "easy" ones don't support OpenGL (maybe some of the others do - e.g. is there a simple way to map CoreText output to OpenGL?).

For 2 (how to display), we have shaders, we have VBOs, we have glyph-textures, we have lookup-textures, and other techniques (e.g. the OpenGL 4.1 stuff linked above?)

Here are the two common OpenGL approaches I know of:

  1. Texture atlas (render all glyphs once, then render 1 x textured quad per character, from the shared texture)
    1. This is wrong, unless you're using a 1980s-era "bitmap font" (and even then: a texture atlas requires more work than it may seem, if you need it correct for non-trivial fonts)
    2. (fonts aren't "a collection of glyphs": there's a vast amount of positioning, layout, wrapping, spacing, kerning, styling, colouring, weighting, etc. Texture atlases fail)
  2. Fixed string (use any Apple class to render correctly, then screenshot the backing image-data, and upload as a texture)
    1. In human terms, this is fast. In frame-rendering, this is very, very slow. If you do this with a lot of changing text, your frame rate goes through the floor
    2. Technically, it's mostly correct (not entirely: you lose some information this way) but hugely inefficient

I've also seen the following, but heard both good and bad things about each:

  1. Imagination/PowerVR "Print3D" (link broken) (from the guys that manufacture the GPU! But their site has moved/removed the text rendering page)
  2. FreeType (requires pre-processing, interpretation, lots of code, extra libraries?)
  3. ...and/or FTGL http://sourceforge.net/projects/ftgl/ (rumors: slow? buggy? not updated in a long time?)
  4. Font-Stash http://digestingduck.blogspot.co.uk/2009/08/font-stash.html (high quality, but very slow?)

Within Apple's own OS / standard libraries, I know of several sources of text rendering. NB: I have used most of these in detail on 2D rendering projects; my statements about them producing different rendered output are based on direct experience.

  1. CoreGraphics with NSString
    1. Simplest of all: render "into a CGRect"
    2. Seems to be a slightly faster version of the "fixed string" approach people recommend (even though you'd expect it to be much the same)
  2. UILabel and UITextArea with plain text
    1. NB: they are NOT the same! Slight differences in how they render the same text
  3. NSAttributedString, rendered to one of the above
    1. Again: renders differently (the differences I know of are fairly subtle and classified as "bugs", various SO questions about this)
  4. CATextLayer
    1. A hybrid between iOS fonts and old C rendering. Uses the "not fully" toll-free-bridged CTFont / UIFont, which reveals some more rendering differences / strangeness
  5. CoreText
    1. ... the ultimate solution? But a beast of its own...
Twigg answered 1/9, 2013 at 10:31 Comment(5)
Has any progress been made on this? I am following in your footsteps, but ...Gladstone
I've got it working, with a few bugs. I handle some very fancy fonts and multi-letter glyphs correctly, but there are a couple of small glitches still. I'm writing it up as blog posts step by step, but the text layout part won't be finished until Xmas 2013 or spring 2014, sorry. Blog posts here t-machine.org/index.php/2013/09/08/opengl-es-2-basic-drawingTwigg
Yeah, would be keen to hear about your text layout bit. Has it worked as well as it should thus far?Entomostracan
It's a lot of code. And I've fixed some of Apple's bugs in CoreText :). But I've still got some bugs (e.g. some characters/glyphs, at certain point-sizes, are 2-3 pixels away from true :(). I'm considering giving access to the git repo as donation-ware ($50 or something), b/c researching + developing this is taking so much time!Twigg
non-ES version: #8848399Zoarah

I did some more experimenting, and it seems that CoreText might make for a perfect solution when combined with a texture atlas and Valve's signed-distance textures (which can turn a bitmap glyph into a resolution-independent hi-res texture).

...but I don't have it working yet, still experimenting.
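
For reference, the shader half of Valve's technique is small. Here is a minimal GLSL ES 2 fragment-shader sketch - the uniform/varying names are my own invention, and it assumes the atlas stores the distance field in its alpha channel and that the GL_OES_standard_derivatives extension is available for fwidth():

#extension GL_OES_standard_derivatives : enable
precision mediump float;

uniform sampler2D u_atlas;   // hypothetical uniform: the SDF glyph atlas
uniform vec4 u_textColor;    // hypothetical uniform: the text colour
varying vec2 v_texCoord;

void main() {
    // 0.5 marks the glyph outline in a Valve-style distance field
    float dist = texture2D(u_atlas, v_texCoord).a;
    // fwidth() gives a screen-space band for antialiasing the edge
    float aa = fwidth(dist);
    float alpha = smoothstep(0.5 - aa, 0.5 + aa, dist);
    gl_FragColor = vec4(u_textColor.rgb, u_textColor.a * alpha);
}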


UPDATE: Apple's docs say they give you access to everything except the final detail: which glyph + glyph layout to render (you can get the line layout, and the number of glyphs, but not the glyph itself, according to docs). For no apparent reason, this core piece of info is apparently missing from CoreText (if so, that makes CT almost worthless. I'm still hunting to see if I can find a way to get the actual glyphs + per-glyph data)


UPDATE 2: I now have this working properly with Apple's CT (but no signed-distance textures), but it ends up as 3 class files, 10 data structures, about 300 lines of code, plus the OpenGL code to render it. Too much for an SO answer :(.

The short answer is: yes, you can do it, and it works, if you:

  1. Create a CTFramesetter
  2. Create CTFrame for a theoretical 2D frame
  3. Create a CGContext that you'll convert to a GL texture
  4. Go through glyph-by-glyph, allowing Apple to render to the CGContext
  5. Each time Apple renders a glyph, calculate the bounding box (this is HARD), and save it somewhere
  6. And save the unique glyph-ID (this will be different for e.g. "o", "f", and "of" (one glyph!))
  7. Finally, send your CGContext up to GL as a texture

When you render, use the list of glyph-IDs that Apple created, and for each one use the saved info, and the texture, to render quads with texture-co-ords that pull individual glyphs out of the texture you uploaded.
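
For anyone attempting the same, here is a minimal Swift sketch of the glyph-walking part (steps 4-6). The CoreText calls are real; the GlyphInfo type and the function name are my own, and it skips the hard bounding-box cases (diacritics, negative bearings) that make step 5 HARD:

import CoreGraphics
import CoreText
import Foundation

struct GlyphInfo {
    let glyphID: CGGlyph    // stable within a font; a ligature like "of" gets its own ID
    let textureRect: CGRect // where this glyph landed in the texture you upload
}

func collectGlyphs(from frame: CTFrame, drawingInto context: CGContext) -> [GlyphInfo] {
    var result: [GlyphInfo] = []
    let lines = CTFrameGetLines(frame) as NSArray as! [CTLine]
    var origins = [CGPoint](repeating: .zero, count: lines.count)
    CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &origins)

    for (i, line) in lines.enumerated() {
        for runAny in CTLineGetGlyphRuns(line) as NSArray {
            let run = runAny as! CTRun
            let count = CTRunGetGlyphCount(run)
            var glyphs = [CGGlyph](repeating: 0, count: count)
            var positions = [CGPoint](repeating: .zero, count: count)
            CTRunGetGlyphs(run, CFRangeMake(0, 0), &glyphs)
            CTRunGetPositions(run, CFRangeMake(0, 0), &positions)

            // Each run carries its own font (style runs can differ within a line)
            let attrs = CTRunGetAttributes(run) as NSDictionary
            let font = attrs[kCTFontAttributeName] as! CTFont

            var boxes = [CGRect](repeating: .zero, count: count)
            CTFontGetBoundingRectsForGlyphs(font, .default, glyphs, &boxes, count)

            for j in 0..<count {
                // Glyph position in frame coordinates = line origin + run-relative offset
                var p = CGPoint(x: origins[i].x + positions[j].x,
                                y: origins[i].y + positions[j].y)
                CTFontDrawGlyphs(font, [glyphs[j]], &p, 1, context)
                result.append(GlyphInfo(glyphID: glyphs[j],
                                        textureRect: boxes[j].offsetBy(dx: p.x, dy: p.y)))
            }
        }
    }
    return result
}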

This works, it's fast, it works with all fonts, it gets all font layout and kerning correct, etc.
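
The quad-building side, under the same assumptions (the interleaved x, y, u, v layout and the atlasSize parameter are my choices, not part of the original):

// Two triangles per glyph. Because the whole laid-out frame was rendered into
// one texture, a glyph's atlas rect doubles as its on-screen position in
// frame coordinates; apply your own transform on top when drawing.
func buildQuads(for glyphs: [GlyphInfo], atlasSize: CGSize) -> [Float] {
    var verts: [Float] = []
    verts.reserveCapacity(glyphs.count * 24)
    for g in glyphs {
        let r = g.textureRect
        let u0 = Float(r.minX / atlasSize.width),  u1 = Float(r.maxX / atlasSize.width)
        let v0 = Float(r.minY / atlasSize.height), v1 = Float(r.maxY / atlasSize.height)
        let x0 = Float(r.minX), x1 = Float(r.maxX)
        let y0 = Float(r.minY), y1 = Float(r.maxY)
        verts += [x0, y0, u0, v0,  x1, y0, u1, v0,  x0, y1, u0, v1,   // triangle 1
                  x1, y0, u1, v0,  x1, y1, u1, v1,  x0, y1, u0, v1]   // triangle 2
    }
    return verts
}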

Twigg answered 2/9, 2013 at 20:10 Comment(5)
I'm looking for something just like this. If you don't mind open sourcing it, it sounds like it would be a great Github project!Primatology
@Primatology sorry, no - this has cost me vast amounts of unpaid time (more than a month of full-time work). If I release anything, it would be commercial. More likely I'd consult for specific companies/projects - cheaper for them, and less hassle for me (I don't want to run a website / company / etc). Sorry!Twigg
Alright, no problem! For anyone else looking for a direction, I think my company may build our system off of FreeType which seems like it won't be too much work and is cross platform.Primatology
Sometimes open source projects are made to actually help people. If you don't want to release something commercially, then help others who are researching the same thing.Amboise
@ПетърПетров your attitude stinks; you're replying to an answer where I gave a step-by-step algorithm for others to follow by insulting me for not helping people. Keep doing that and you'll teach people to stop replying to StackOverflow - to stop "helping people" as you put it. Don't be selfish, don't be greedy: if you want 3 months of coding, don't demand it for free.Twigg

1.

Create an attributed string with NSMutableAttributedString.

import CoreText
import GLKit
import UIKit

let mabstring = NSMutableAttributedString(string: "This is a test of characterAttribute.")
mabstring.beginEditing()
// Identity transform (0-degree rotation) for the font matrix
var matrix = CGAffineTransform(rotationAngle: CGFloat(GLKMathDegreesToRadians(0)))
let font = CTFontCreateWithName("Georgia" as CFString, 40, &matrix)
// Apply the font to the first four characters only
mabstring.addAttribute(kCTFontAttributeName as String, value: font, range: NSRange(location: 0, length: 4))
// kCTStrokeWidthAttributeName expects a CFNumber
var number: Int8 = 2
let kdl = CFNumberCreate(kCFAllocatorDefault, .sInt8Type, &number)!
mabstring.addAttribute(kCTStrokeWidthAttributeName as String, value: kdl, range: NSRange(location: 0, length: mabstring.length))
mabstring.endEditing()

2.

Create a CTFrame. The rect is computed from mabstring by CTFramesetterSuggestFrameSizeWithConstraints (shown below).

let framesetter = CTFramesetterCreateWithAttributedString(mabstring)
// Ask CoreText how large the frame needs to be; the 1024pt width here is an
// arbitrary example constraint, not a required value
let constraints = CGSize(width: 1024, height: CGFloat.greatestFiniteMagnitude)
let size = CTFramesetterSuggestFrameSizeWithConstraints(framesetter, CFRangeMake(0, 0), nil, constraints, nil)
let rect = CGRect(origin: .zero, size: size)
let path = CGMutablePath()
path.addRect(rect)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)

3.

Create a bitmap context.

let imageWidth = Int(rect.width)
let imageHeight = Int(rect.height)
// RGBA, 8 bits per component, premultiplied alpha
var rawData = [UInt8](repeating: 0, count: imageWidth * imageHeight * 4)
let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let bitsPerComponent = 8
let bytesPerRow = imageWidth * 4
// NB: &rawData is only guaranteed valid for the duration of this call; for
// production code, wrap the context creation and drawing in
// rawData.withUnsafeMutableBytes { ... }
let context = CGContext(data: &rawData, width: imageWidth, height: imageHeight, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: rgbColorSpace, bitmapInfo: bitmapInfo.rawValue)!

4.

Draw the CTFrame into the bitmap context.

CTFrameDraw(frame, context)

Now we have the raw pixel data in rawData. Creating an OpenGL texture, an MTLTexture, or a UIImage from rawData all work.


Example: to an OpenGL texture (see also: Convert an UIImage in a texture).

Set up your texture:

GLuint textureID;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &textureID);

glBindTexture(GL_TEXTURE_2D, textureID);
// Without mipmaps, the default GL_NEAREST_MIPMAP_LINEAR min filter leaves the
// texture incomplete (it samples black), so set non-mipmap filters explicitly
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// width/height/textureData correspond to imageWidth/imageHeight/rawData above
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);

Or, to an MTLTexture:

// needs: import Metal
// mipmapped is false because only mip level 0 is filled below
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: imageWidth, height: imageHeight, mipmapped: false)
let device = MTLCreateSystemDefaultDevice()!
let texture = device.makeTexture(descriptor: textureDescriptor)!
let region = MTLRegionMake2D(0, 0, imageWidth, imageHeight)
texture.replace(region: region, mipmapLevel: 0, withBytes: &rawData, bytesPerRow: bytesPerRow)

Or, to a UIImage:

let providerRef = CGDataProvider(data: NSData(bytes: &rawData, length: rawData.count))
let renderingIntent = CGColorRenderingIntent.defaultIntent
let imageRef = CGImage(width: imageWidth, height: imageHeight, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: bytesPerRow, space: rgbColorSpace, bitmapInfo: bitmapInfo, provider: providerRef!, decode: nil, shouldInterpolate: false, intent: renderingIntent)!
let image = UIImage(cgImage: imageRef)
Reiser answered 23/1, 2017 at 3:16 Comment(0)

I know this post is old, but I came across it while trying to do exactly this in my application. In my search, I came across this sample project:

http://metalbyexample.com/rendering-text-in-metal-with-signed-distance-fields/

It is a great implementation of CoreText text rendering using the techniques of texture atlasing and signed distance fields (the sample itself targets Metal, but the approach carries straight over to OpenGL). It has greatly helped me achieve the results I wanted. Hope this helps someone else.

Lacquer answered 23/8, 2017 at 19:35 Comment(0)
