Why would I use a Unicode Signature Byte-Order-Mark (BOM)?

Are these obsolete? They seem like the worst idea ever -- embed something in the contents of your file that no one can see, but that impacts the file's functionality. I don't understand why I would want one.

Flugelhorn answered 25/6, 2009 at 19:6 Comment(0)

They're necessary in some cases, yes, because there are both little-endian and big-endian implementations of UTF-16.

When reading an unknown UTF-16 file, how can you tell which of the two is used? The only solution is to place some kind of easily identifiable marker in the file, which can never be mistaken for anything else, regardless of the endian-ness used.

That's what the BOM does.

And do you need one? Only if you're 1) using a UTF encoding where endianness is an issue (it matters for UTF-16, but UTF-8 always looks the same regardless of endianness), and 2) the file is going to be shared with external applications.

If your own app is the only one that's going to read and write the file, you can omit the BOM, and simply decide once and for all which endianness you're going to use. But if another application has to read the file, it won't know the endianness in advance, so adding the BOM might be a good idea.
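
To make the endianness point concrete, here is a minimal Python sketch (mine, not the answer's): the same text produces different bytes in UTF-16LE and UTF-16BE, and the generic "utf-16" codec prepends a BOM so a decoder can work the byte order out on its own.

```python
text = "hi"

le = text.encode("utf-16-le")   # b'h\x00i\x00' -- little-endian, no BOM
be = text.encode("utf-16-be")   # b'\x00h\x00i' -- big-endian, no BOM

# Without a BOM, a reader handed either sequence has to guess the byte order.
# The generic "utf-16" codec writes a BOM first, so the decoder can tell:
with_bom = text.encode("utf-16")
print(with_bom[:2])               # b'\xff\xfe' or b'\xfe\xff', depending on the platform
print(with_bom.decode("utf-16"))  # 'hi' -- the BOM is consumed, not part of the text
```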

Lacewing answered 25/6, 2009 at 19:13 Comment(2)
A BOM is not needed in UTF-8. It screws things up. Imagine cat file1 file2 file3 > file123.Glassman
@tchrist: Well, yes. They're not needed in 7-bit ASCII or in JPG images either. That's why my answer quite explicitly talked about UTF-16.Lacewing

Some excerpts from the UTF and BOM FAQ from the Unicode Consortium may be helpful.

Q: What is a BOM?

A: A byte order mark (BOM) consists of the character code U+FEFF at the beginning of a data stream, where it can be used as a signature defining the byte order and encoding form, primarily of unmarked plaintext files. Under some higher level protocols, use of a BOM may be mandatory (or prohibited) in the Unicode data stream defined in that protocol. (Emphasis mine.)

I wouldn't exactly say the byte-order mark is embedded in the data. Rather, it prefixes the data. The character is only a byte-order mark when it's the first thing in the data stream. Anywhere else, and it's the zero-width non-breaking space. Unicode-aware programs that don't honor the byte-order mark aren't really harmed by its presence anyway since the character is invisible, and a word-joiner at the start of a block of text just joins the next character to nothing, so it has no effect.

Q: Where is a BOM useful?

A: A BOM is useful at the beginning of files that are typed as text, but for which it is not known whether they are in big or little endian format—it can also serve as a hint indicating that the file is in Unicode, as opposed to a legacy encoding, and furthermore, it can act as a signature for the specific encoding form used.

So, you'd want a BOM when your program is capable of handling multiple encodings of Unicode. How else will your program know which encoding to use when interpreting its input?

Q: When a BOM is used, is it only in 16-bit Unicode text?

A: No, a BOM can be used as a signature no matter how the Unicode text is transformed: UTF-16, UTF-8, UTF-7, etc. The exact bytes comprising the BOM will be whatever the Unicode character U+FEFF is converted into by that transformation format. In that form, the BOM serves to indicate both that it is a Unicode file, and which of the formats it is in.

That's probably the case where the BOM is used most frequently today. It distinguishes UTF-8-encoded text from any other encodings; it's not really marking the order of the bytes since UTF-8 only has one order.
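
Since the FAQ describes the BOM as a signature for the specific encoding form, a sniffing routine along the following lines is one way to put it to use. This is an illustrative Python sketch of my own: the codecs BOM constants are standard, but the sniff_encoding helper is hypothetical.

```python
import codecs

# Map a leading BOM to a codec that will consume it on decode. The UTF-32
# BOMs begin with the same bytes as the UTF-16 ones, so the longer prefixes
# must be checked first.
_BOMS = [
    (codecs.BOM_UTF32_LE, "utf-32"),
    (codecs.BOM_UTF32_BE, "utf-32"),
    (codecs.BOM_UTF8,     "utf-8-sig"),
    (codecs.BOM_UTF16_LE, "utf-16"),
    (codecs.BOM_UTF16_BE, "utf-16"),
]

def sniff_encoding(prefix: bytes, default: str = "utf-8") -> str:
    """Guess a codec from the BOM at the start of `prefix`, else fall back to `default`."""
    for bom, codec in _BOMS:
        if prefix.startswith(bom):
            return codec
    return default
```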

If you're designing your own protocol or data format, you're not required to use a BOM. Another question from the FAQ touches on that:

Q: How do I tag data that does not interpret U+FEFF as a BOM?

A: Use the tag UTF-16BE to indicate big-endian UTF-16 text, and UTF-16LE to indicate little-endian UTF-16 text. If you do use a BOM, tag the text as simply UTF-16.

It mentions the concept of tagging your data's format. That means specifying the format out-of-band from the data itself. That's great if such a facility is available to you, but it's often not, especially when older systems are being retrofitted for Unicode.
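
The tagged-versus-BOM distinction happens to map neatly onto Python's codec names, which makes for a rough illustration (my example, not part of the FAQ): the utf-16-le and utf-16-be codecs behave like the UTF-16LE / UTF-16BE tags and never write or consume a BOM, while the plain utf-16 codec does.

```python
tagged = "text".encode("utf-16-be")   # tagged as UTF-16BE: no BOM is written
untagged = "text".encode("utf-16")    # plain UTF-16: a BOM is written first

# In tagged data a leading U+FEFF is NOT a byte order mark; it decodes as a
# zero-width no-break space and stays in the text.
print(repr(("\ufeff" + "text").encode("utf-16-be").decode("utf-16-be")))  # '\ufefftext'
print(repr(untagged.decode("utf-16")))                                    # 'text' (BOM consumed)
```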

Rew answered 25/6, 2009 at 19:55 Comment(0)

The BOM signifies which Unicode encoding the file is in. Without this distinction, a Unicode reader would not know how to read the file.

However, UTF-8 doesn't require a BOM.

Check out the Wikipedia article.

Grazier answered 25/6, 2009 at 19:13 Comment(5)
The byte-order mark does not indicate Unicode version. Unicode is at version 5.1 right now, with 5.2 under beta review, but the BOM remains unchanged.Rew
@Rob I meant encoding (UTF-8, 16, 32, etc. along with endianness). I didn't mean 5.1, 5.2, etc. I changed my answer to reflect that.Grazier
I suspect that Wikipedia article is simply biased toward *nix folks. The problems they cite probably stem from software that blindly treats UTF-8 as ANSI and hopes for the best. Sort of ethnocentric if you ask me. This could be an advantage of using a BOM: software that doesn't recognize the UTF-8 BOM won't work when assuming the encoding is ANSI.Tijerina
@Bob: Why would cat need to care what the encoding is?Status
ANSI? Why would *nix software treat anything as ANSI? That's a Microsoft-ism meaning "one of several 8-bit extensions of ASCII, but you have to guess which one". *nix software is much more likely to assume either ASCII or UTF-8 without a BOM, the use of which is discouraged by the Unicode Consortium.Qp

As you tagged this with UTF-8, I'm going to say you don't need a BOM. Byte order marks are only useful for UTF-16 and UTF-32, as they tell the computer whether the file is big endian or little endian. Some text editors may use the byte order mark to decide what encoding the document uses, but this is not part of the Unicode standard.

Tanganyika answered 25/6, 2009 at 19:27 Comment(0)

The "BOM" is a holdover from the early days of Unicode when it was assumed that using Unicode would mean using 16-bit characters. It is completely pointless in an encoding like UTF-8 which has only one byte order. The choice of U+FEFF is also suboptimal for UTF-32, because it cannot distinguish between all possible middle-endian byte orders (to do so would require a BOM encoded with 4 different bytes).

The only reason you'd use one is when sending UTF-16 or UTF-32 data between platforms with different byte orders, but (1) most people use UTF-8 anyway, and (2) the MIME charset parameter provides a better mechanism.
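
As a side note on why U+FEFF can serve as a byte-order marker at all, here is a small Python illustration of my own: read with the wrong byte order it comes out as U+FFFE, which is a noncharacter, so a decoder can tell that the order was wrong.

```python
bom_be = "\ufeff".encode("utf-16-be")   # b'\xfe\xff'
wrong = bom_be.decode("utf-16-le")      # decoded with the wrong byte order
print(hex(ord(wrong)))                  # 0xfffe -- a noncharacter, never a legitimate BOM

print("\ufeff".encode("utf-32-be"))     # b'\x00\x00\xfe\xff' -- the UTF-32 BOM, with two zero bytes
```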

Status answered 14/8, 2010 at 22:32 Comment(0)

Just as the UTF-16 and UTF-32 BOMs tell whether the content is in big-endian or little-endian format (and also that the content is Unicode), the UTF-8 BOM classifies the file as UTF-8 encoded. Without the UTF-8 BOM, how can you know whether it is an ANSI file or a UTF-8 encoded file? The UTF-8 BOM doesn't tell endianness, of course, because UTF-8 is always a byte stream, but it does tell whether the content is UTF-8 encoded Unicode or ANSI. You can of course scan for valid UTF-8 sequences, but in my opinion it is easier to check the first three bytes of the file.
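
A sketch of the three-byte check this answer describes, in Python; cp1252 is only an assumed stand-in for whatever "ANSI" code page applies on a given system.

```python
import codecs

def read_text(path: str) -> str:
    with open(path, "rb") as f:
        raw = f.read()
    if raw[:3] == codecs.BOM_UTF8:       # b'\xef\xbb\xbf'
        return raw[3:].decode("utf-8")   # UTF-8 BOM found: decode the rest as UTF-8
    return raw.decode("cp1252")          # otherwise assume a legacy "ANSI" code page
```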

Cormack answered 3/3, 2016 at 10:53 Comment(0)

UTF-16 and UTF-32 can be written in both big-endian and little-endian forms. You could try to heuristically determine the endianness by analysing the result of treating the file as either, but to save you all that bother, the BOM can tell you right away.

UTF-8 doesn't really need a BOM though, as you decode it byte by byte.

Alphaalphabet answered 25/6, 2009 at 19:14 Comment(0)

Regardless of whether you use these yourself when creating text files, it's probably worthwhile to be aware of them when you read text files, i.e. to detect and skip (and ideally handle accordingly) the BOM at the beginning of the file. I've run into a few files that had one, and it caused me some issues initially until I figured out what was going on.
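
For what it's worth, in Python the detect-and-skip step for UTF-8 input can be delegated to the utf-8-sig codec, which strips a leading BOM if one is present and otherwise behaves like plain UTF-8 (the file name below is just a placeholder).

```python
with open("input.txt", encoding="utf-8-sig") as f:  # "input.txt" is a placeholder
    text = f.read()  # never starts with '\ufeff', whether or not the file had a BOM
```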

Hexateuch answered 8/11, 2011 at 18:24 Comment(0)
