Is UTF-8 an injective mapping?

We write a C++ application and need to know this:

Is the UTF-8 text encoding an injective mapping from bytes to characters, meaning that every single character (letter, ...) is encoded in only one way? So that, e.g., the letter 'Ž' cannot be encoded as, say, both 3231 and 32119.

Mario answered 13/11, 2011 at 20:53 Comment(2)
Each UTF character is an encoded Unicode "code point". Each code point has an associated glyph (like the 'Ž'). But Unicode also allows the combination of glyphs. So the simple answer is no. But each "code point" is supposed to be unique.Exhaustive
just to get our terminology straight: in unicode, any character corresponds to a single codepoint by definition (but there are non-character codepoints); then, there are user-perceived characters, also known as grapheme clusters, and these are what we generally mean when talking about characters in a non-unicode context; many grapheme clusters can be represented by multiple character sequences, eg á can be encoded as both U+0061 U+0301 and U+00E1 - that's where normalization comes in...Hughs

That depends very much on what you consider a "letter".

UTF-8 is basically just a tiny piece of Unicode.

Basically there are at least three levels: bytes, code points and grapheme clusters. A code point can be encoded in one or more bytes, according to a certain encoding, like UTF-8, UTF-16 or UTF-32. This encoding is unique (because all alternative ways are declared invalid). However, a code point does not always correspond to a complete character, because there are so-called combining characters. Such combining characters follow the base character and, as their name says, are combined with it. For example, there's the combining character U+0308 COMBINING DIAERESIS, which puts a diaeresis (¨) above the preceding letter. So if it follows e.g. an a (U+0061 LATIN SMALL LETTER A), the result is an ä. However, there's also a single code point for the letter ä (U+00E4 LATIN SMALL LETTER A WITH DIAERESIS), so this means that the code point sequences U+0061 U+0308 and U+00E4 describe the same letter.

So, each code point has a single valid UTF-8 encoding (e.g. U+0061 is "\141", U+0308 is "\314\210" and U+00E4 is "\303\244"), but the letter ä is encoded both by the code point sequence U+0061 U+0308, i.e. in UTF-8 the byte sequence "\141\314\210", and by the single code point U+00E4, i.e. the byte sequence "\303\244".

What's worse is that since the Unicode makers decided that combining characters follow the base character instead of preceding it, you cannot know whether your grapheme cluster is complete until you've seen the next code point (if it is not a combining code point, your letter is finished).

Liquesce answered 13/11, 2011 at 21:12 Comment(12)
Great answer. How many combining letters can follow the base letter? Only one or zero?Mario
+1 Gotta bookmark this as required reading before trying to get unicode right.Glace
@James: as many as you wish. I think proper Hangul can have up to three, also Devanagari may need several. Zalgo texts have many.Ellerd
you're actually missing two important levels: characters (not every codepoint corresponds to a character) and grapheme clusters (user-perceived characters, which are conceptually different from their representations as glyphs); see unicode.org/glossary for a more detailed list of terminology...Hughs
@James: I'm not aware of any limit. For example, a sequence mentioned on the Unicode FAQ page unicode.org/faq/char_combmark.html#12 is U+0061, U+0328, U+0301 for some Navajo letter, therefore it's definitely not only 0 or 1.Liquesce
@Christoph: I already suspected that I missed some levels (Unicode is complex), that's what I said "at least". Thanks for pointing out the missing levels and for the link.Liquesce
@celtschk: the Unicode glossary is sometimes a bit confusing because it also list meanings in non-Unicode contexts or fails to make connections clear - eg, the entries on user-perceived characters and grapheme clusters don't mention any connection between these two terms, and you'll have to look at actual standard documents (quote from unicode.org/reports/tr29 : user-perceived characters are approximated by what is called a grapheme cluster, which can be determined programmatically.) to see what is going onHughs
I'd like to disagree about your "three levels". If anything, they should be "bytes, codepoints, characters, [glyphs]". A glyph is a graphical entity that the font chooses, depending on the character that's to be displayed. Fonts can have any number of glyphs, often several different stylistic variants, for the same character. The font's character map provides a reverse lookup table. Check out OpenType's GSUB tables for an example.Duero
I now changed "Glyphs" to "Grapheme clusters" because that's what I actually meant. Sorry for the wrong terminology.Liquesce
There are no bytes, and there are no characters. There are only code points and extended grapheme clusters. There is no limit to the number of Grapheme_Extend code points that can follow a Grapheme_Base code point. Don’t be so sure you know how these things order: consider code points with the Logical_Order_Exception property. Finally, the grapheme cluster U+000D U+000A has no combining characters, and yet is two code points. Many multi-codepoint grapheme clusters contain no combining characters. There are infinitely many grapheme clusters that have no single-code-point representation. Normalize or die.Jenks
@tchrist: Of course there are bytes. Or what else would you call the 8-bit units of UTF8?Liquesce
@celtschk: formally they are code units. So actually there are four levels.Tidemark

Valid UTF-8 indeed encodes each character uniquely. However, there are so-called overlong sequences which conform to the general encoding scheme, but are invalid by definition as only the shortest sequence may be used to encode a character.

For example, there's a derivative of UTF-8 called modified UTF-8 which encodes NUL as the overlong sequence 0xC0 0x80 instead of 0x00 to get an encoding compatible with null-terminated strings.

If you're asking about grapheme clusters (ie user-perceived characters) instead of characters, then even valid UTF-8 is ambiguous. However, Unicode defines several different normalization forms, and if you restrict yourself to normalized strings, then UTF-8 is indeed injective.

Somewhat off-topic: Here's some ASCII art I came up with to help visualize the different concepts of character. Vertically separated are the human, abstract and machine level. Feel free to come up with better names...

                         [user-perceived characters]<-+
                                      ^               |
                                      |               |
                                      v               |
            [characters] <-> [grapheme clusters]      |
                 ^                    ^               |
                 |                    |               |
                 v                    v               |
[bytes] <-> [codepoints]           [glyphs]<----------+

To get back on topic: This graph also shows where the possible problems may crop up when using bytes to compare abstract strings. In particular (assuming UTF-8), the programmer needs to make sure that

  • the byte sequence is valid, ie doesn't contain overlong sequences or encode non-character codepoints
  • the character sequence is normalized so equivalent grapheme clusters have a unique representation
Hughs answered 13/11, 2011 at 21:11 Comment(0)

First you need some terminology:

  • Letter: (abstract concept, not in Unicode) some letter or symbol you want to represent.
  • Codepoint: a number associated with a Unicode character.
  • Grapheme cluster: a sequence of Unicode codepoints that correspond to a single letter, e.g: a + ́ for the letter á.
  • Glyph: (concept at the level of fonts, not in Unicode): a graphical representation of a letter.

Each codepoint (e.g: U+1F4A9) gets a unique representation as bytes in UTF-8 (e.g: 0xF0 0x9F 0x92 0xA9).

Some letters can be represented in several different ways as codepoints (i.e: as different grapheme clusters). e.g: á can be represented as a single codepoint á (LATIN SMALL LETTER A WITH ACUTE), or it can be represented as the codepoint for a (LATIN SMALL LETTER A) + the codepoint for ́ (COMBINING ACUTE ACCENT). Unicode has several canonical normalization forms to deal with this (e.g: NFC, Normalization Form C, composes each letter into as few codepoints as possible, while NFD, Normalization Form D, fully decomposes it).

And then, there are also ligatures (e.g: ) and some other presentation-related variations of a letter (e.g: superscripts, no-break spaces, letters with different shapes at different places of a word, ...). Some of these are in Unicode to permit lossless roundtrip conversion to and from legacy character sets. Unicode has compatibility normalization forms (NFKC and NFKD) to deal with this.

Ellerd answered 13/11, 2011 at 21:13 Comment(4)
Glyphs and characters aren't the same thing. Check out OpenType's stylistic alternates for a vivid example.Duero
I have no idea what a glyph is. Unicode defines code points, and it defines extended grapheme clusters. Anything else is slippery wishy-washy talk that does not correspond to something in the standard — and so should not be used.Jenks
ninjalj Barely. The K norm forms are not about singletons. In fact, any singleton like the one you mentioned is destroyed by any normalization form.Jenks
@tchrist: is it at least passable now, or should I delete it altogether?Ellerd

Yes. UTF-8 is just a standard way to encode Unicode characters. It was made so that there is only one way to encode each of the Unicode characters.

A bit off-topic: it might be useful to know that some characters are very similar in look (to humans), but they are still different - for instance there is a sign in Cyrillic that looks very similar to '/'.

Daukas answered 13/11, 2011 at 20:55 Comment(5)
You mean like space and non-breaking space?Mario
Not sure if there is a character for non-breaking space in Unicode (I guess there is), otherwise: yes, exactly so.Daukas
Unicode doesn’t have characters. It has code points, and it has extended grapheme clusters. Consider simple things like ë and ẍ. Neither is a code point; both are extended grapheme clusters. The first has a normalization form that requires only a single code point, but the second does not.Jenks
There are multiple non-breaking spaces, in fact. U+FEFF (the BOM) is also a zero-width non-breaking space.Wina
@tchrist: I stand corrected. However, for practical purposes my answer is correct - and others seem to have posted more in-depth answers by now. :)Daukas

Yes, sort of. If used properly, each Unicode code point should only be encoded one way in UTF-8, but that's partly because of the requirement that only the shortest applicable UTF-8 byte sequence may be used for any character.

The encoding scheme itself, however, could encode many characters in more than one way if not for this requirement -- and though not proper, there are some cases where this is done.

For example, 'Z' could be encoded as 0x5a or as {0xc1, 0x9a} (among others), though only 0x5a is considered correct because it is the shortest sequence.

Quest answered 13/11, 2011 at 21:10 Comment(1)
ummmm {0xa1, 0x9a} is a tuple of 2 continuation bytes in UTF-8, so it couldn't possibly be remotely valid UTF-8. Overly long sequences still require the leading byte be above \277, or 0xBF (it's so much more natural to describe UTF-8 by octals than by hex)Elenor

© 2022 - 2024 — McMap. All rights reserved.