What is Java's internal representation for String? Modified UTF-8? UTF-16?

I searched for Java's internal representation of String, but I found two sources that both look reliable yet contradict each other.

One is:

http://www.codeguru.com/cpp/misc/misc/multi-lingualsupport/article.php/c10451

and it says:

Java uses UTF-16 for the internal text representation and supports a non-standard modification of UTF-8 for string serialization.

The other is:

http://en.wikipedia.org/wiki/UTF-8#Modified_UTF-8

and it says:

Tcl also uses the same modified UTF-8[25] as Java for internal representation of Unicode data, but uses strict CESU-8 for external data.

Modified UTF-8? Or UTF-16? Which one is correct? And how many bytes does Java use for a char in memory?

Please let me know which one is correct and how many bytes it uses.

Fieldsman answered 14/3, 2012 at 9:26 Comment(5)
#4655750, this might answer your question. – Derringer
What Java uses and what the JVM uses in-memory don't have to be the same. See my answer. – Prestigious
Your main source of (official) information about Java should be java.sun.com, not Stack Overflow! – Goeselt
@CarlosHeuberger You're definitely right! Thanks for the advice :-) – Fieldsman
Beware that the Java language specification explicitly doesn't define how strings are stored when in use, just that they are immutable (and there are some hints that they may be interned). So any answer should explicitly list the runtime, and since most of them do not, they are all tosh. – Soften

Java uses UTF-16 for the internal text representation

The representation for String, StringBuilder, etc. in Java is UTF-16.

https://docs.oracle.com/javase/8/docs/technotes/guides/intl/overview.html

How is text represented in the Java platform?

The Java programming language is based on the Unicode character set, and several libraries implement the Unicode standard. The primitive data type char in the Java programming language is an unsigned 16-bit integer that can represent a Unicode code point in the range U+0000 to U+FFFF, or the code units of UTF-16. The various types and classes in the Java platform that represent character sequences - char[], implementations of java.lang.CharSequence (such as the String class), and implementations of java.text.CharacterIterator - are UTF-16 sequences.

At the JVM level, if you are using -XX:+UseCompressedStrings (which is the default for some updates of Java 6), the actual in-memory representation can be 8-bit ISO-8859-1, but only for strings which do not need UTF-16 encoding.

http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html

and supports a non-standard modification of UTF-8 for string serialization.

Serialized Strings use modified UTF-8 by default.
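To illustrate (a minimal sketch; DataOutputStream.writeUTF uses the same modified UTF-8 encoding that the serialization format is based on): standard UTF-8 would encode U+0000 as a single zero byte, whereas modified UTF-8 uses the two-byte sequence C0 80, so encoded strings never contain an embedded NUL.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ModifiedUtf8Demo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        // writeUTF emits a 2-byte length prefix followed by the modified UTF-8 bytes
        new DataOutputStream(buffer).writeUTF("\u0000");
        for (byte b : buffer.toByteArray()) {
            System.out.printf("%02X ", b); // prints: 00 02 C0 80
        }
    }
}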

And how many bytes does Java use for a char in memory?

A char is always two bytes, if you ignore the need for padding in an Object.

Note: a code point (which can be greater than 65535) can use one or two chars, i.e. 2 or 4 bytes.
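For example (a minimal sketch): a supplementary code point such as U+1F600 is stored as a surrogate pair, i.e. two chars:

public class CodePointDemo {
    public static void main(String[] args) {
        String s = "a\uD83D\uDE00"; // "a" followed by U+1F600 (a supplementary code point)

        System.out.println(s.length());                      // 3 chars (UTF-16 code units), 6 bytes
        System.out.println(s.codePointCount(0, s.length())); // 2 code points
        System.out.println(Character.charCount(0x1F600));    // 2 chars, i.e. 4 bytes for this code point
    }
}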

Prestigious answered 14/3, 2012 at 9:35 Comment(7)
Java serialization (and class files) use modified CESU-8 though, which is a modified UTF-8. – Lanai
New URL: docs.oracle.com/javase/8/docs/api/java/lang/String.html Note: Java 9 should be out next year. ;) – Prestigious
Can you elaborate on the alignment issues? – Alfi
@KorayTugay Good question. This was 3 years ago, but I think I was referring to padding in an object. Adding one char field could add up to 8 bytes with padding / object alignment. – Prestigious
What endianness is used for the UTF-16? Also, you should mention that a Java char only supports BMP code points. – Recoverable
@Recoverable The endianness is whatever is native to the processor. Generally little-endian, but it should almost never matter. – Prestigious
This answer is outdated. Generally you should not presume to know what the internal representation looks like. If this answer is to be kept, it should be updated with the specific runtime or runtimes for which this is the case. – Soften

You can confirm the following by looking at the source code of the relevant version of the java.lang.String class in OpenJDK. (For some really old versions of Java, String was partly implemented in native code. That source code is not publicly available.)

Prior to Java 9, the standard in-memory representation for a Java String is UTF-16 code-units held in a char[].

With Java 6 update 21 and later, there was a non-standard option (-XX:+UseCompressedStrings) to enable compressed strings. This feature was removed in Java 7.

For Java 9 and later, the implementation of String has been changed to use a compact representation by default. The java command documentation now says this:

-XX:-CompactStrings

Disables the Compact Strings feature. By default, this option is enabled. When this option is enabled, Java Strings containing only single-byte characters are internally represented and stored as single-byte-per-character Strings using ISO-8859-1 / Latin-1 encoding. This reduces, by 50%, the amount of space required for Strings containing only single-byte characters. For Java Strings containing at least one multibyte character: these are represented and stored as 2 bytes per character using UTF-16 encoding. Disabling the Compact Strings feature forces the use of UTF-16 encoding as the internal representation for all Java Strings.
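To make the 1-byte-vs-2-bytes arithmetic concrete, here is a minimal sketch that approximates the internal storage sizes by re-encoding through the public API (the internal array itself is not accessible):

import java.nio.charset.StandardCharsets;

public class CompactSizesDemo {
    public static void main(String[] args) {
        String latin = "hello";       // Latin-1-only: stored as 1 byte per char when compact
        String mixed = "hello\u65E5"; // contains U+65E5, so it is stored as UTF-16

        System.out.println(latin.getBytes(StandardCharsets.ISO_8859_1).length); // 5
        System.out.println(latin.getBytes(StandardCharsets.UTF_16LE).length);   // 10
        System.out.println(mixed.getBytes(StandardCharsets.UTF_16LE).length);   // 12
    }
}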


Note that none of the classical, "compressed", or "compact" string schemes ever used UTF-8 encoding as the String representation. Modified UTF-8 is used in other contexts, e.g. in class files and in the object serialization format.

To answer your specific questions:

Modified UTF-8? Or UTF-16? Which one is correct?

Either UTF-16 or an adaptive representation that depends on the actual data; see above.

And how many bytes does Java use for a char in memory?

A single char uses 2 bytes. There might be some "wastage" due to possible padding, depending on the context.

A char[] is 2 bytes per character plus the object header (typically 12 bytes including the array length) padded to (typically) a multiple of 8 bytes.
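If you want to check these numbers on your own JVM, one option is the OpenJDK JOL tool. A sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath (exact figures depend on the JVM and its settings, e.g. compressed oops):

import org.openjdk.jol.info.ClassLayout;
import org.openjdk.jol.info.GraphLayout;

public class FootprintDemo {
    public static void main(String[] args) {
        // Object header + array length + 4 chars (8 bytes), padded to a multiple of 8
        System.out.println(ClassLayout.parseInstance(new char[4]).toPrintable());

        // Total retained size of a String, including its internal array
        System.out.println(GraphLayout.parseInstance("hello").totalSize());
    }
}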

Please let me know which one is correct and how many bytes it uses.

If we are talking about a String now, it is not possible to give a general answer. It will depend on the Java version and hardware platform, as well as the String length and (in some cases) what the characters are. Indeed, for some versions of Java it even depends on how you created the String.


Having said all of the above, the API model for String is that it is both a sequence of UTF-16 code-units and a sequence of Unicode code-points. As a Java programmer, you should be able to ignore everything that happens "under the hood". The internal String representation is (should be!) irrelevant.
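Both views are exposed directly by the API. A minimal sketch using the CharSequence stream methods added in Java 8:

public class TwoViewsDemo {
    public static void main(String[] args) {
        String s = "a\uD83D\uDE00"; // "a" + U+1F600

        // View 1: UTF-16 code units (the two surrogates appear individually)
        s.chars().forEach(u -> System.out.printf("unit:  %04X%n", u));

        // View 2: Unicode code points (the surrogate pair is combined into one value)
        s.codePoints().forEach(cp -> System.out.printf("point: %04X%n", cp));
    }
}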

Dancette answered 14/3, 2012 at 9:30 Comment(0)

UTF-16.

From http://java.sun.com/javase/technologies/core/basic/intl/faq.jsp :

How is text represented in the Java platform?

The Java programming language is based on the Unicode character set, and several libraries implement the Unicode standard. The primitive data type char in the Java programming language is an unsigned 16-bit integer that can represent a Unicode code point in the range U+0000 to U+FFFF, or the code units of UTF-16. The various types and classes in the Java platform that represent character sequences - char[], implementations of java.lang.CharSequence (such as the String class), and implementations of java.text.CharacterIterator - are UTF-16 sequences.
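A tiny sketch of the "unsigned 16-bit integer" point from the quote above:

public class CharWidthDemo {
    public static void main(String[] args) {
        char max = '\uFFFF';                // the largest char value
        System.out.println((int) max);      // 65535, i.e. the unsigned 16-bit range
        System.out.println(Character.SIZE); // 16 bits = 2 bytes per char
    }
}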

Roseanneroseate answered 14/3, 2012 at 9:31 Comment(2)
The FAQ that is linked in this answer no longer exists. The closest I can find is this: docs.oracle.com/javase/8/docs/technotes/guides/intl/… But note that if you carefully parse both the quoted text and the link I found, neither actually says what the internal String representation is. (They say that a String represents a char sequence, but that isn't the same thing.) In fact, for recent Java implementations, the default implementation of String uses a byte[] rather than a char[] internally. You can check the OpenJDK source code to see. – Dancette
While the documentation doesn't explicitly guarantee it, programmers will expect charAt to be O(1). This largely rules out the use of a variable-width encoding like UTF-8, though it does allow an implementation that switches between multiple different fixed-width encodings. – Lindsaylindsey

The size of a char is 2 bytes.

Therefore, I would say that Java uses UTF-16 for its internal String representation.

Retirement answered 14/3, 2012 at 9:30 Comment(4)
@tchrist How? How can a character in Java be 4 bytes? – Alfi
@KorayTugay Unicode characters (code points) are values between 0 and 0x10FFFF. – Nicotiana
@Nicotiana Java will treat a 4-byte Unicode character as 2 Java chars. Please see: tugay.biz/2016/07/stringlength-method-may-fool-you.html – Alfi
In fact, your inference is incorrect. Recent implementations do not (always) use UTF-16 for internal String representations. – Dancette

As of 2023, see JEP 254: Compact Strings https://openjdk.org/jeps/254

Before JDK 9 it was UTF-16 in a char value[]: 2 bytes per char (supplementary code points outside the BMP, such as emoji, take two chars, i.e. 4 bytes).

Since JDK 9 it is UTF-8 byte[]
e.g. 1 byte for ASCII/Latin, 2 bytes for Áá Àà Ăă Ắắ Ằằ Ẵẵ (letters with diacritics), 4 bytes for Asian (Chinese, Japanese 日本)
It is still possible to disable the Compact Strings feature with -XX:-CompactStrings;
see the documentation for the java command: https://docs.oracle.com/en/java/javase/17/docs/specs/man/java.html#advanced-runtime-options-for-java

and the article https://howtodoinjava.com/java9/compact-strings/


String class BEFORE Java 9

Prior to Java 9, string data was stored as an array of chars. This required 16 bits for each char.

public final class String
    implements java.io.Serializable, Comparable<String>, CharSequence {

    // The value is used for character storage.
    private final char value[];

}

String class AFTER Java 9

Starting with Java 9, strings are internally represented using a byte array, together with a flag field ('coder') that identifies the encoding.

public final class String
    implements java.io.Serializable, Comparable<String>, CharSequence {

    /** The value is used for character storage. */
    @Stable
    private final byte[] value;

    /**
     * The identifier of the encoding used to encode the bytes in
     * {@code value}. The supported values in this implementation are
     *
     * LATIN1
     * UTF16
     *
     * @implNote This field is trusted by the VM, and is a subject to
     * constant folding if String instance is constant. Overwriting this
     * field after construction will cause problems.
     */
    private final byte coder;

}
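
If you want to inspect that flag on a recent OpenJDK build, here is a diagnostic sketch (the coder field is a private OpenJDK implementation detail; on JDK 16 and later this requires opening the package, e.g. --add-opens java.base/java.lang=ALL-UNNAMED):

import java.lang.reflect.Field;

public class CoderPeek {
    public static void main(String[] args) throws Exception {
        Field coder = String.class.getDeclaredField("coder");
        coder.setAccessible(true); // needs --add-opens java.base/java.lang=ALL-UNNAMED

        // Values assume Compact Strings is enabled (the default)
        System.out.println(coder.getByte("hello")); // 0 = LATIN1
        System.out.println(coder.getByte("日本"));  // 1 = UTF16
    }
}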
Pennyroyal answered 14/4, 2023 at 11:17 Comment(3)
"Since JDK 9 it is UTF-8 byte[]" this is not correct. Java 9 and newer do not use UTF-8 encoding internally. Since Java 9, String uses Latin1 if possible, and falls back to UTF-16 if there are any non-Latin1 characters in the String data.Hume
Latin1 is subset of UTF-8. "falls back to UTF-16" is IMHO incorrect: it is UTF-8 being used, and it takes 2 or more bytes for non Latin1 characters. It is not possible to mix UTF-16 with other encodings in one String, that would be some other non standard thing.Pennyroyal
Latin1 is not a subset of UTF-8, they are two completely incompatible encodings. Java does not use UTF-8 for Strings, and does not mix encodings – each string has a flag that indicates whether it's in Latin1 or UTF-16.Imponderable

Java stores strings internally as UTF-16 and uses 2 bytes for each character.

Iconology answered 14/3, 2012 at 9:31 Comment(6)
This answer is incorrect. Because Java uses UTF-16, each Unicode character is either 2 bytes or 4 bytes. – Nicotiana
@Nicotiana How can a UTF-16 encoding end up as 4 bytes? Isn't UTF-16 always 2 bytes? – Alfi
@KorayTugay No, UTF-16 is either 2 bytes or 4 bytes. It is a variable-width encoding just like UTF-8. Only the obsolete UCS-2 is always 2 bytes, and that's long dead. – Nicotiana
The code unit of UTF-16 is always 2 bytes, but a character itself needs 1 or 2 code units, hence 2 or 4 bytes. – Phoebe
@LudovicKuty A "character" is a rendering- and language-specific concept: it can take a large number of code points to compose a single character, so a character can take up hundreds of bytes. So it's more like "the code point itself, in UTF-16, needs 2 or 4 bytes". Try an internet search for "unicode composition". You generally only care about "characters" (at what code point a character begins, or how many characters are in a string) if you're building a UI framework or implementing rendering logic. – Savitt
Yes, the code point, my bad. The notion of a character is quite abstract in the Unicode standard (if I remember correctly). – Phoebe

Java is available in 18 international languages and follows the Unicode character set, which contains all of the characters available in those 18 languages (65,536 characters). Java uses UTF-16, so the size of a char in Java is 2 bytes.

Trapeziform answered 14/3, 2012 at 10:29 Comment(2)
The size of a Unicode character in Java varies between 2 bytes and 4 bytes, depending on whether we're in plane 0 or not. – Nicotiana
A char is 2 bytes, but a character (without the typewriter font) is 2 or 4 bytes, as @Nicotiana mentioned. – Phoebe
