To understand why sync-safe integers are used, it helps to know a little about the format of MP3 data and how a media player plays an MP3 file. MP3 data is stored in a file as a series of frames. Each frame contains a short chunk of audio encoded in MP3 format, plus some metadata about the frame itself. Each MP3 frame begins with 11 bits (sometimes 12) all set to 1. This is called the sync, and it's the pattern a media player looks for when attempting to play an MP3 file or stream. When the player finds this 11-bit sequence, it knows it has found an MP3 frame that can be decoded and played back.
See: www.id3.org/mp3Frame
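Just to illustrate what "looking for the sync" means, here's a simplified sketch of my own (not how any particular player does it; a real decoder also validates the rest of the frame header). It scans a buffer for a 0xFF byte followed by a byte whose top 3 bits are set, which together make 11 consecutive 1 bits:

#include <stdint.h>
#include <stddef.h>

/* Scan a buffer for an MP3 frame sync: 11 consecutive set bits, i.e. a
   0xFF byte followed by a byte whose top 3 bits are set. Returns the
   offset of the first candidate sync, or -1 if none was found. */
long find_frame_sync( const uint8_t* buf, size_t len )
{
    for ( size_t i = 0; i + 1 < len; ++i )
    {
        if ( buf[i] == 0xFF && ( buf[i + 1] & 0xE0 ) == 0xE0 )
            return (long)i;
    }
    return -1;
}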
As you know, an ID3 tag contains data about the track as a whole. An ID3 tag -- in version 2.x and later -- is located at the beginning of the file, or can even be embedded in an MP3 stream (though this is rarely done). The header of an ID3 tag contains a 32-bit size field, which indicates how many bytes are in the tag. The maximum value an unsigned 32-bit integer can hold is 0xFFFFFFFF. If we wrote 0xFFFFFFFF into the size field, we'd be claiming a really big tag (impractically big, in fact), and, more importantly, the size field would contain long runs of 1 bits. When the player scans the file or stream for the 11-bit sync of an MP3 frame, it can run into those bytes in the ID3 tag header instead, mistake them for a frame sync, and try to play the tag as audio. This usually doesn't sound as good, depending on your musical tastes. The solution is an integer format that can never contain a run of 11 set bits: each byte of a sync-safe integer uses only its low 7 bits, and the high bit of every byte is always 0, so a 32-bit sync-safe integer actually holds a 28-bit value. Hence the sync-safe integer format.
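To see what that looks like in practice, here's a rough sketch of the encoding direction (the function name is my own invention); it packs a 28-bit value into four bytes, 7 bits per byte:

/* Pack a 28-bit value into 4 sync-safe bytes: only the low 7 bits of each
   byte are used, so the high bit of every byte is always 0 and no run of
   11 set bits can ever appear. */
void ID3_int_to_sync_safe( uint32_t value, uint8_t* sync_safe )
{
    sync_safe[0] = ( value >> 21 ) & 0x7F;
    sync_safe[1] = ( value >> 14 ) & 0x7F;
    sync_safe[2] = ( value >> 7 ) & 0x7F;
    sync_safe[3] = value & 0x7F;
}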
Going the other way, a sync-safe integer can be converted back to an ordinary integer in C/C++ using something like the following:
#include <stdint.h>

/* Convert a 4-byte sync-safe integer (7 bits per byte, high bit of each
   byte always 0) into a normal 28-bit unsigned value. */
uint32_t ID3_sync_safe_to_int( const uint8_t* sync_safe )
{
    uint32_t byte0 = sync_safe[0];
    uint32_t byte1 = sync_safe[1];
    uint32_t byte2 = sync_safe[2];
    uint32_t byte3 = sync_safe[3];

    return ( byte0 << 21 ) | ( byte1 << 14 ) | ( byte2 << 7 ) | byte3;
}
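For example, the size field starts at byte 6 of the 10-byte ID3v2 tag header, so once you've read the header you could decode the tag size with something like this (a sketch, with error handling omitted):

uint8_t header[10];
/* ... read the first 10 bytes of the file into header and check
   that it starts with "ID3" ... */
uint32_t tag_size = ID3_sync_safe_to_int( &header[6] );
/* tag_size is the number of bytes in the tag, excluding the 10-byte header */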
Hope this helps.