Everything You Never Wanted to Know about Unicode Normalization
Canonical Normalization
Unicode includes multiple ways to encode some characters, most notably accented characters. Canonical normalization changes the code points into a canonical encoding form. The resulting code points should appear identical to the original ones, barring any bugs in the fonts or rendering engine.
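As an illustration (a minimal sketch in Python, using the standard unicodedata module): the character é can be stored as the single code point U+00E9 or as U+0065 plus the combining accent U+0301, and the two only compare equal after normalization.

```python
import unicodedata

precomposed = "\u00e9"    # é as a single precomposed code point
decomposed = "e\u0301"    # e followed by a combining acute accent

print(precomposed == decomposed)    # False: different code point sequences
print(unicodedata.normalize("NFC", precomposed) ==
      unicodedata.normalize("NFC", decomposed))    # True: same canonical form
```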
When to Use
Because the results appear identical, it is always safe to apply canonical normalization to a string before storing or displaying it, as long as you can tolerate the result not being bit-for-bit identical to the input.
Canonical normalization comes in two forms: NFD and NFC. The two are equivalent in the sense that one can convert between them without loss. Comparing two strings under NFC will always give the same result as comparing them under NFD.
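Here is a small sketch of that equivalence in Python (the sample strings are my own):

```python
import unicodedata

a = "caf\u00e9"     # café with a precomposed é
b = "cafe\u0301"    # café with a combining accent

# The two comparisons always agree:
print(unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b))  # True
print(unicodedata.normalize("NFD", a) == unicodedata.normalize("NFD", b))  # True

# Converting between the forms is lossless:
roundtrip = unicodedata.normalize("NFC", unicodedata.normalize("NFD", a))
print(roundtrip == unicodedata.normalize("NFC", a))  # True
```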
NFD
NFD has the characters fully expanded out. This is the faster normalization form to calculate, but it results in more code points (i.e. it uses more space).
If you just want to compare two strings that are not already normalized, this is the preferred normalization form unless you know you need compatibility normalization.
NFC
NFC recombines code points when possible after running the NFD algorithm. This takes a little longer, but results in shorter strings.
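To make the space trade-off concrete, a quick sketch in Python (the sample character is my choice):

```python
import unicodedata

s = "\u1ec7"    # ệ: e with circumflex and dot below, as one code point
print(len(unicodedata.normalize("NFD", s)))    # 3: e plus two combining marks
print(len(unicodedata.normalize("NFC", s)))    # 1: recombined into one code point
```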
Compatibility Normalization
Unicode also includes many characters that really do not belong, but were used in legacy character sets. Unicode added these to allow text in those character sets to be processed as Unicode, and then be converted back without loss.
Compatibility normalization converts these to the corresponding sequence of "real" characters, and also performs canonical normalization. The results of compatibility normalization may not appear identical to the originals.
Characters that include formatting information are replaced with ones that do not. For example, the character ⁹ gets converted to 9. Others don't involve formatting differences; for example, the roman numeral character Ⅸ is converted to the regular letters IX.
Obviously, once this transformation has been performed, it is no longer possible to losslessly convert back to the original character set.
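Those two examples can be reproduced in a couple of lines of Python:

```python
import unicodedata

print(unicodedata.normalize("NFKC", "\u2079"))    # superscript ⁹ -> plain "9"
print(unicodedata.normalize("NFKC", "\u2168"))    # roman numeral Ⅸ -> "IX"

# The conversion is lossy: nothing distinguishes a converted "9"
# from one that was typed as "9" in the first place.
```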
When to Use
The Unicode Consortium suggests thinking of compatibility normalization like a ToUpperCase transform. It is something that may be useful in some circumstances, but you should not just apply it willy-nilly.
An excellent use case would be a search engine, since you would probably want a search for 9 to match ⁹.
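As a minimal sketch of that idea, here is a hypothetical search_key helper (the name and the casefold() step are my additions, not anything from the Unicode standard):

```python
import unicodedata

def search_key(text: str) -> str:
    # Hypothetical helper: apply the same folding to indexed text
    # and to queries so compatibility variants match each other.
    return unicodedata.normalize("NFKC", text).casefold()

print(search_key("\u2079") == search_key("9"))    # True: a search for 9 matches ⁹
```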
One thing you should probably not do is show the user the result of applying compatibility normalization.
NFKC/NFKD
Compatibility normalization comes in two forms, NFKD and NFKC. They have the same relationship to each other as NFD and NFC do.
Any string in NFKC is inherently also in NFC, and the same holds for NFKD and NFD. Thus NFKD(x) = NFD(NFKC(x)), NFKC(x) = NFC(NFKD(x)), etc.
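These identities are easy to spot-check in Python (the sample string is mine):

```python
import unicodedata

def nf(form, s):
    return unicodedata.normalize(form, s)

x = "caf\u00e9 \u2168"    # mixes an accented letter with a compatibility character

print(nf("NFKD", x) == nf("NFD", nf("NFKC", x)))    # True
print(nf("NFKC", x) == nf("NFC", nf("NFKD", x)))    # True
```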
Conclusion
If in doubt, go with canonical normalization. Choose NFC or NFD based on the applicable space/speed trade-off, or based on what is required by something you are interoperating with.