The answer is that in UTF-8, ASCII takes just 1 byte per character, but in general most Western languages, English included, use the occasional character that requires 2 bytes, so actual percentages vary. Languages written in the Greek and Cyrillic scripts require at least 2 bytes per character of their own script when encoded in UTF-8.
Common Eastern characters require 3 bytes in UTF-8 but only 2 in UTF-16. Note, however, that “uncommon” Eastern characters, those outside the Basic Multilingual Plane, require 4 bytes in UTF-8 and UTF-16 alike.
3 is indeed only 50% greater than 2. But that is for a single code point only. It does not apply to an entire file.
The actual percentage is impossible to state with precision, because you do not know whether the balance of code points lies down in the 1- or 2-byte UTF-8 range or up in the 3- or 4-byte range. If there is white space in the Asian text, each space is only one byte of UTF-8, and yet a costly 2 bytes of UTF-16.
These things do vary. You can only get precise numbers on precise text, not on general text. Code points in Asian text take 1, 2, 3, or 4 bytes of UTF-8, while in UTF-16 they variously require 2 or 4 bytes apiece.
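If you want to verify the per-code-point arithmetic yourself, here is a minimal sketch in Python; the sample characters are my own choices, not anything from the measurements below:

```python
# Per-code-point byte costs in UTF-8 vs. UTF-16.
# "utf-16-le" avoids counting the 2-byte BOM that plain "utf-16" prepends.
for ch in "aωд東𠀋":                    # ASCII, Greek, Cyrillic, common CJK, rare CJK
    print(ch,
          len(ch.encode("utf-8")),      # 1, 2, 2, 3, 4 bytes respectively
          len(ch.encode("utf-16-le")))  # 2, 2, 2, 2, 4 bytes respectively
```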
Case Study
Compare the various languages’ Wikipedia pages on Tokyo to see what I mean. Even in Eastern languages, there is still plenty of ASCII going on. This alone makes your figures fluctuate. Consider:
| Paras | Lines | Words | Graphs |  Chars |  UTF16 |  UTF8 | 8:16 | 16:8 | Language             |
|------:|------:|------:|-------:|-------:|-------:|------:|-----:|-----:|:---------------------|
|   519 |  1525 |  6300 |  43120 |  43147 |  86296 | 44023 |  51% | 196% | English              |
|   343 |   728 |  1202 |   8623 |   8650 |  17302 |  9173 |  53% | 189% | Welsh                |
|   541 |  1722 |  9013 |  57377 |  57404 | 114810 | 59345 |  52% | 193% | Spanish              |
|   529 |  1712 |  9690 |  63871 |  63898 | 127798 | 67016 |  52% | 191% | French               |
|   321 |   837 |  2442 |  18999 |  19026 |  38054 | 21148 |  56% | 180% | Hungarian            |
|   202 |   464 |   976 |   7140 |   7167 |  14336 | 11848 |  83% | 121% | Greek                |
|   348 |   937 |  2938 |  21439 |  21467 |  42936 | 36585 |  85% | 117% | Russian              |
|   355 |   788 |   613 |   6439 |   6466 |  12934 | 13754 | 106% |  94% | Chinese, simplified  |
|   209 |   419 |   243 |   2163 |   2190 |   4382 |  3331 |  76% | 132% | Chinese, traditional |
|   461 |  1127 |  1030 |  25341 |  25368 |  50738 | 65636 | 129% |  77% | Japanese             |
|   410 |   925 |  2955 |  13942 |  13969 |  27940 | 29561 | 106% |  95% | Korean               |
Each of those is the Tokyo Wikipedia page saved as text, not as HTML. All text is in NFC, not in NFD. The meaning of each of the columns is as follows:
- Paras is the number of blank-line-separated text spans.
- Lines is the number of linebreak-separated text spans.
- Words is the number of whitespace-separated text spans.
- Graphs is the number of Unicode extended grapheme clusters, sometimes loosely called glyphs. These are the user-visible characters.
- Chars is the number of Unicode code points. These are, or should be, programmer-visible characters.
- UTF16 is how many bytes the file takes up when stored as UTF-16.
- UTF8 is how many bytes the file takes up when stored as UTF-8.
- 8:16 is the ratio of UTF-8 size to UTF-16 size, expressed as a percentage.
- 16:8 is the ratio of UTF-16 size to UTF-8 size, expressed as a percentage.
- Language is which version of the Tokyo page we’re talking about here.
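For the curious, here is a rough sketch of how columns like these can be reproduced, assuming a UTF-8 text file named tokyo.txt (a hypothetical name) and the third-party regex module, which supplies \X for extended grapheme clusters:

```python
import regex  # third-party; pip install regex

with open("tokyo.txt", encoding="utf-8") as f:  # hypothetical filename
    text = f.read()

paras  = len([p for p in text.split("\n\n") if p.strip()])  # blank-line-separated
lines  = len(text.splitlines())                             # linebreak-separated
words  = len(text.split())                                  # whitespace-separated
graphs = len(regex.findall(r"\X", text))                    # extended grapheme clusters
chars  = len(text)                                          # code points
utf8   = len(text.encode("utf-8"))
utf16  = len(text.encode("utf-16-le"))  # "utf-16" would prepend a 2-byte BOM

print(paras, lines, words, graphs, chars, utf16, utf8,
      format(utf8 / utf16, ".0%"), format(utf16 / utf8, ".0%"))
```

A file saved with a UTF-16 BOM carries 2 extra bytes, which may account for tiny discrepancies against byte counts computed in memory; the ratios are essentially unaffected.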
I’ve grouped the languages into Western Latin, Western non-Latin, and Eastern. Observations:
Western languages that use the Latin script suffer terribly upon conversion from UTF-8 to UTF-16, with English expanding the most, by 96%, and Hungarian the least, by 80%. All are huge.
Western languages that do not use the Latin script still suffer, but only by 17% (Russian) to 21% (Greek).
Eastern languages DO NOT SUFFER in UTF-8 the way everyone claims that they do! Behold:
- In Korean and in (simplified) Chinese, you get only 6% bigger in UTF-8 than in UTF-16.
- In Japanese, you get only 29% bigger in UTF-8 than in UTF-16.
- The traditional Chinese actually got smaller in UTF-8 than in UTF-16! In fact, UTF-16 costs 32% more than UTF-8 for this sample. If you look at the Lines and Words columns, it looks as though this might be due to white-space usage; a toy demonstration follows this list.
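Here is that toy demonstration, a minimal sketch with sample strings of my own devising (not the Wikipedia data), showing how a little ASCII white space tilts the balance toward UTF-8 in CJK text:

```python
dense  = "東京都は日本の首都"         # 9 CJK characters, no ASCII
spaced = "東京 都 は 日本 の 首都"    # the same 9 characters plus 5 ASCII spaces
for s in (dense, spaced):
    u8  = len(s.encode("utf-8"))      # 3 bytes per CJK char, 1 per space
    u16 = len(s.encode("utf-16-le"))  # 2 bytes per char, spaces included
    print(format(u8 / u16, ".0%"))    # prints 150%, then 114%
```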
I hope that answers your question. There is simply no +50% to +100% size increase for Eastern languages when encoded in UTF-8 compared with the same texts encoded in UTF-16. You only ever see numbers like that when measuring individual code points in isolation, which is a completely unreasonable metric.