How do you get to the limits of 8060 bytes per row and 8000 bytes per (varchar, nvarchar) value?

Coming from my question "8078 bytes in 8060 B datapage (SQL Server)?", where it was explained to me how to derive 8078 bytes of data per page in MS SQL Server.

If I calculate the number of bytes per page used for data storage (without overhead) for a single row with one column of a fixed-size type in a non-indexed table (as per the MSDN article Estimating the Size of a Heap), then I come to 8087 bytes (per page).

How do I get to the limits of 8060 bytes per row (mentioned in my other question's answers) and 8000 bytes per (varchar, nvarchar) value without buying and studying 1000+ page books?

I am certainly missing something in storage allocation: the fewer chunks to manage, the more overhead...
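For reference, the arithmetic that article prescribes can be written out directly. The following T-SQL is just a sketch of the heap-estimation formula applied to a hypothetical table with a single char(8000) column (an illustrative case, not necessarily the exact table I measured):

    -- Heap-estimation arithmetic from "Estimating the Size of a Heap",
    -- for a hypothetical table with one char(8000) column and no variable-length columns
    DECLARE @Num_Cols int, @Fixed_Data_Size int, @Var_Data_Size int,
            @Null_Bitmap int, @Row_Size int;
    SET @Num_Cols        = 1;
    SET @Fixed_Data_Size = 8000;
    SET @Var_Data_Size   = 0;
    SET @Null_Bitmap     = 2 + (@Num_Cols + 7) / 8;                                -- = 3
    SET @Row_Size        = @Fixed_Data_Size + @Var_Data_Size + @Null_Bitmap + 4;   -- = 8007
    SELECT Rows_Per_Page = 8096 / (@Row_Size + 2);                                 -- = 1 row fits per page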

Quinquennial answered 25/9, 2010 at 8:20 Comment(0)

Inside the Storage Engine: Anatomy of a record

This is for SQL Server 2005

  • record header
    • 4 bytes long
    • two bytes of record metadata (record type)
    • two bytes pointing forward in the record to the NULL bitmap
  • fixed length portion of the record, containing the columns storing data types that have fixed lengths (e.g. bigint, char(10), datetime)
  • NULL bitmap
    • two bytes for count of columns in the record
    • variable number of bytes to store one bit per column in the record, regardless of whether the column is nullable or not (this is different and simpler than SQL Server 2000 which had one bit per nullable column only)
    • this allows an optimization when reading columns that are NULL
  • variable-length column offset array
    • two bytes for the count of variable-length columns
    • two bytes per variable length column, giving the offset to the end of the column value
  • versioning tag
    • this is in SQL Server 2005 only and is a 14-byte structure that contains a timestamp plus a pointer into the version store in tempdb
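If you want to see these pieces on a real record, the undocumented DBCC IND and DBCC PAGE commands will dump them. A rough sketch, assuming a scratch database where you can create a throwaway table (all names below are made up):

    CREATE TABLE dbo.RecordAnatomyDemo (c1 char(10) NOT NULL, c2 varchar(20) NULL);
    INSERT dbo.RecordAnatomyDemo VALUES ('fixed', 'variable');

    DBCC TRACEON(3604);                                      -- route DBCC output to the client
    DBCC IND('YourScratchDb', 'dbo.RecordAnatomyDemo', -1);  -- lists the table's pages; note a data page's PagePID
    -- then: DBCC PAGE('YourScratchDb', 1, <that PagePID>, 3)
    -- the dump shows the record header, NULL bitmap and variable-length offset array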

So, for one char(8000)

  • 4 bytes (record header)
  • 8000 fixed length
  • 3 null bitmap
  • 2 bytes to count variable-length
  • 14 timestamp
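Total = 4 + 8000 + 3 + 2 + 14 = 8023 bytes, comfortably inside the 8060-byte row limit.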

However, if you had 40 varchar(200) columns

  • 4 bytes (record header)
  • 0 fixed length
  • 7 null bitmap (2-byte column count + 5 bytes to hold one bit for each of the 40 columns)
  • 2 bytes to count variable-length
  • 202 x 40 = 8080 (200 bytes of data plus a 2-byte offset entry per column)
  • 14 timestamp

Total = 4 + 0 + 7 + 2 + 8080 + 14 = 8107 bytes, which is over the 8060-byte limit, so you get a warning when you create this table.
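To try it yourself, a sketch along these lines builds that 40-column table with dynamic SQL (the table name is made up; whether the classic 8060-byte warning appears depends on the version, because SQL Server 2005 and later can also push varchar data to row-overflow pages):

    -- Generates: CREATE TABLE dbo.WideDemo with columns c1..c40, each varchar(200) NULL
    DECLARE @sql nvarchar(max);
    SET @sql = N'CREATE TABLE dbo.WideDemo (';
    SELECT @sql = @sql + N'c' + CAST(n AS nvarchar(10)) + N' varchar(200) NULL,'
    FROM (SELECT TOP (40) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
          FROM sys.all_objects) AS cols;
    SET @sql = LEFT(@sql, LEN(@sql) - 1) + N');';
    EXEC (@sql);   -- maximum possible row size works out well above 8060 bytes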

I would not get too hung up on it: this information has no practical day-to-day value.

Piet answered 25/9, 2010 at 9:44 Comment(1)
This has lots of practical day-to-day value if you work in data warehousing. Working out how many rows of a billion-row fact table fit in a page is an exercise you will have to do to calculate storage growth. – Lupitalupo
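For that kind of scoping you do not have to work the record layout out by hand; a sketch like the following (dbo.FactSales is a placeholder table name) reads the average record size from sys.dm_db_index_physical_stats and derives an approximate rows-per-page figure:

    -- Approximate rows per page for an existing table; DETAILED mode populates
    -- the record-size columns (dbo.FactSales is a hypothetical name)
    SELECT index_id,
           page_count,
           record_count,
           avg_record_size_in_bytes,
           8096 / (avg_record_size_in_bytes + 2) AS approx_rows_per_page
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.FactSales'),
                                        NULL, NULL, 'DETAILED')
    WHERE index_level = 0;   -- leaf level, i.e. the actual data rows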

I'm not totally sure what you're asking, but if your question is "why is the maximum row length 8060 bytes and not some other number?", or "why does NVARCHAR have a maximum length of 8000 bytes?" then the short answer is, who knows? (And as gbn said, who cares?)

All MSSQL limits - data type sizes, maximum row size, columns per table, columns per index etc. - are just design decisions by Microsoft. I'm quite sure that there is a huge amount of engineering documentation behind the decisions, but most people don't have access to that information.

If you're interested in specific details of what the limits are, how row size is calculated etc. then reading the MSSQL documentation is the best place to start.

Platas answered 25/9, 2010 at 14:12 Comment(0)
