How can char represent a Unicode character above U+FFFF in Java?
How can I display a Unicode Character above U+FFFF using char in Java?

I need something like this (if it were valid):

char u = '\u+10FFFF';
Massproduce answered 23/3, 2012 at 6:23 Comment(1)
Take a look at this document. You can't physically put more than 0xFFFF into a char, though. – Galloglass

You can't do it with a single char (which holds a UTF-16 code unit), but you can use a String:

// This represents U+10FFFF
String x = "\udbff\udfff";

Alternatively:

String y = new StringBuilder().appendCodePoint(0x10ffff).toString();

That is a surrogate pair (two UTF-16 code units which combine to form a single Unicode code point beyond the Basic Multilingual Plane). Of course, you need whatever's going to display your data to cope with it too...
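One way to confirm that the pair really encodes a single code point is to compare the char count with the code-point count (a small sketch using the standard String code-point methods):

```java
public class SurrogateDemo {
    public static void main(String[] args) {
        // U+10FFFF written as a UTF-16 surrogate pair
        String x = "\udbff\udfff";
        System.out.println(x.length());                      // 2 (two UTF-16 code units)
        System.out.println(x.codePointCount(0, x.length())); // 1 (one code point)
        System.out.println(Integer.toHexString(x.codePointAt(0))); // 10ffff
    }
}
```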

Coxcomb answered 23/3, 2012 at 6:26 Comment(0)

Instead of using StringBuilder, you can also use a method found directly in the Character class: toChars(), which has the following spec:

Converts the specified character (Unicode code point) to
its UTF-16 representation stored in a char array.

So you don't need to know exactly what the surrogate pairs look like, and you can use the code point directly. An example then looks as follows:

int ch = 0x10FFFF;
String s = new String(Character.toChars(ch));

Note that the datatype for the code point is int and not char.
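A round trip back from the resulting String shows the relationship between the int code point and its two-char representation (a small sketch; codePointAt and charCount are standard String/Character methods):

```java
public class ToCharsDemo {
    public static void main(String[] args) {
        int ch = 0x10FFFF;
        String s = new String(Character.toChars(ch));
        // Reading the code point back yields the original int value
        System.out.println(s.codePointAt(0) == ch); // true
        // charCount reports how many char values the code point needs
        System.out.println(Character.charCount(ch)); // 2
    }
}
```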

Shirashirah answered 23/10, 2016 at 15:56 Comment(0)

Unicode characters can take more than two bytes, which in general cannot be held in a single char.
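To illustrate (a small sketch using standard library calls): U+10FFFF occupies four bytes in UTF-8 and two 16-bit code units in UTF-16, so it cannot fit into one char:

```java
import java.nio.charset.StandardCharsets;

public class ByteDemo {
    public static void main(String[] args) {
        String s = new String(Character.toChars(0x10FFFF));
        // In UTF-8 this single code point takes 4 bytes...
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 4
        // ...and in UTF-16 it takes two 16-bit code units
        System.out.println(s.length()); // 2
    }
}
```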

Fairyfairyland answered 23/3, 2012 at 6:27 Comment(1)
Note: a char in Java is 2 bytes. – Galloglass

Source

The char data type is based on the original Unicode specification, which defined characters as fixed-width 16-bit entities. The range of legal code points is now U+0000 to U+10FFFF, known as Unicode scalar value.

The set of characters from U+0000 to U+FFFF is sometimes referred to as the Basic Multilingual Plane (BMP). Characters whose code points are greater than U+FFFF are called supplementary characters. The Java 2 platform uses the UTF-16 representation in char arrays and in the String and StringBuffer classes. In this representation, supplementary characters are represented as a pair of char values, the first from the high-surrogates range, (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).

A char value, therefore, represents Basic Multilingual Plane (BMP) code points, including the surrogate code points, or code units of the UTF-16 encoding. An int value represents all Unicode code points, including supplementary code points. The lower (least significant) 21 bits of int are used to represent Unicode code points and the upper (most significant) 11 bits must be zero. Unless otherwise specified, the behavior with respect to supplementary characters and surrogate char values is as follows:

  • The methods that only accept a char value cannot support supplementary characters. They treat char values from the surrogate ranges as undefined characters. For example, Character.isLetter('\uD840') returns false, even though this specific value, if followed by any low-surrogate value in a string, would represent a letter.

  • The methods that accept an int value support all Unicode characters, including supplementary characters. For example, Character.isLetter(0x2F81A) returns true because the code point value represents a letter (a CJK ideograph).

In the J2SE API documentation, Unicode code point is used for character values in the range between U+0000 and U+10FFFF, and Unicode code unit is used for 16-bit char values that are code units of the UTF-16 encoding.
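The two bullet points above can be reproduced directly (a minimal sketch reusing the exact examples from the quoted documentation):

```java
public class IsLetterDemo {
    public static void main(String[] args) {
        // char overload: a lone high surrogate is treated as an undefined character
        System.out.println(Character.isLetter('\uD840')); // false
        // int overload: the full code point is classified correctly (a CJK ideograph)
        System.out.println(Character.isLetter(0x2F81A));  // true
    }
}
```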

Hypaethral answered 23/3, 2012 at 6:47 Comment(0)
