Why are there gaps in my IDENTITY column values? [duplicate]

I have a problem.

My primary key ID column (IDENTITY) is configured to auto-increment (type: int). But when I insert a new row, the new id is not consecutive. What is happening? Is there any solution?

EDITED:

[...]
[id] [int] IDENTITY(1,1) NOT NULL,
[...]
CONSTRAINT [PK_Medida] PRIMARY KEY CLUSTERED
(
[id] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
Capsulate answered 1/2, 2013 at 8:4 Comment(6)
Show your table structure.Occultism
Please tell us why the gaps are a problem for you.Predict
This is like asking "My dog is barking. What is the problem?". It is a Dog and it is supposed to bark. That is its natural behaviour. It is an Identity column. It is just an abstract number and not guaranteed to be consecutive. That is its natural behaviour. What exactly is the problem with it not being consecutive?Disbelieve
The gaps are a problem because I think the id value will eventually go out of range. My calculations say this table will get very, very long (billions of rows).Capsulate
Then INT is definitely not the right datatype.Disbelieve
In addition to the answers here, you might find it useful to review the many other questions about this behaviourDecimal

Do not expect identities to be consecutive. There are many scenarios that can leave gaps. Treat the identity as an abstract number and do not attach any business meaning to it.

Malaria answered 1/2, 2013 at 8:7 Comment(7)
That is correct, but I expect my table to get very long, and I fear that the id will eventually go out of range.Capsulate
In MySQL the auto-increment id is always consecutive; why not in SQL Server?Capsulate
In MySQL it can also leave gaps. The main reason is rollbacks of inserts; they cause gaps. A straightforward insert/commit/insert/commit will create a dense ID series without gaps, but gaps can always appear. If you have so many gaps that you worry about exhausting the 4-byte signed int address space, then you must be doing something wrong in your code to leave such gaps. Ultimately, use a bigint instead of int if you're really concerned.Malaria
The "jump" between the previous id and the recent id is very big. My previous id is 1444 and the new one is 2433. A "jump" of almost 1000!Capsulate
That indicates about 1000 inserts that were rolled back. There are about a million ways this can happen, and it is not a big deal. E.g. you tested some batch insert and stopped/aborted it in the debugger. The question is how this works in a real use scenario. I don't expect your main use case is rollbacks.Malaria
btw, here is the SQL Server fiddle illustrating the very same: sqlfiddle.com/#!6/8c000/1/0 , thanks no_nameMalaria
The problem was the "rollbacks".Capsulate

The identity property on a column does not guarantee the following:

Uniqueness of the value – Uniqueness must be enforced by using a PRIMARY KEY or UNIQUE constraint or UNIQUE index.

Consecutive values within a transaction – A transaction inserting multiple rows is not guaranteed to get consecutive values for the rows because other concurrent inserts might occur on the table. If values must be consecutive then the transaction should use an exclusive lock on the table or use the SERIALIZABLE isolation level.
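For illustration, a minimal sketch of the lock-based approach just described (table and column names are hypothetical):

BEGIN TRANSACTION;

-- TABLOCKX takes an exclusive table lock for the duration of the transaction,
-- so no concurrent insert can interleave its identity values with ours.
INSERT INTO dbo.Medida WITH (TABLOCKX) (valor)
VALUES (1), (2), (3);

COMMIT TRANSACTION;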

Consecutive values after server restart or other failures – SQL Server might cache identity values for performance reasons and some of the assigned values can be lost during a database failure or server restart. This can result in gaps in the identity values upon insert. If gaps are not acceptable then the application should use a sequence generator with the NOCACHE option or use its own mechanism to generate key values.
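As a sketch of the sequence-based alternative mentioned above (object names are illustrative):

-- A sequence created with NO CACHE persists every value it hands out,
-- so an unexpected shutdown cannot lose a cached block of numbers.
-- (Rollbacks and failed inserts can still burn values, though.)
CREATE SEQUENCE dbo.MedidaSeq
    AS int
    START WITH 1
    INCREMENT BY 1
    NO CACHE;

-- Use it in place of the IDENTITY property:
INSERT INTO dbo.Medida (id, valor)
VALUES (NEXT VALUE FOR dbo.MedidaSeq, 42);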

Reuse of values – For a given identity property with specific seed/increment, the identity values are not reused by the engine. If a particular insert statement fails or if the insert statement is rolled back then the consumed identity values are lost and will not be generated again. This can result in gaps when the subsequent identity values are generated.

Also,

If an identity column exists for a table with frequent deletions, gaps can occur between identity values. If this is a concern, do not use the IDENTITY property. However, to make sure that no gaps have been created or to fill an existing gap, evaluate the existing identity values before explicitly entering one with SET IDENTITY_INSERT ON.
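As a sketch, explicitly filling a known gap this way (the gap value 1500 and the column names are made up for illustration):

-- Temporarily allow explicit values in the identity column.
SET IDENTITY_INSERT dbo.Medida ON;

-- An explicit column list is required while IDENTITY_INSERT is ON.
INSERT INTO dbo.Medida (id, valor)
VALUES (1500, 42);

SET IDENTITY_INSERT dbo.Medida OFF;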

Also, check the Identity Column properties and verify the Identity Increment value. It should be 1.

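The seed and increment can also be checked from the catalog instead of the designer:

-- Inspect the identity definition and the last value handed out.
SELECT OBJECT_NAME(object_id) AS table_name,
       name                   AS column_name,
       seed_value,
       increment_value,
       last_value
FROM sys.identity_columns
WHERE OBJECT_NAME(object_id) = 'Medida';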

Afford answered 1/2, 2013 at 8:10 Comment(1)
+1, but I'm personally seeing reuse (resulting in primary key violations). I've never inserted a row by hand. I've never deleted a row, ever. All insertions are done by one process within a transaction (though the process is sometimes killed in the middle). It just stopped working all of a sudden, sigh.Shang

Gaps occur when:

  1. records are deleted.
  2. an error occurs while attempting to insert a new record (e.g. a NOT NULL constraint violation); the identity value is skipped regardless (see the sketch below).
  3. somebody has inserted rows with explicit values (e.g. via the IDENTITY_INSERT option).
  4. the increment value is more than 1.
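
A minimal repro of case 2, using a throwaway temp table (a sketch):

CREATE TABLE #gap_demo (
    id  int IDENTITY(1,1) PRIMARY KEY,
    val int NOT NULL
);

INSERT INTO #gap_demo (val) VALUES (1);        -- gets id = 1

BEGIN TRY
    -- Violates the NOT NULL constraint; the insert fails,
    -- but identity value 2 is consumed anyway.
    INSERT INTO #gap_demo (val) VALUES (NULL);
END TRY
BEGIN CATCH
    PRINT 'insert failed; identity value 2 was still burned';
END CATCH;

INSERT INTO #gap_demo (val) VALUES (3);        -- gets id = 3, leaving a gap at 2

SELECT id, val FROM #gap_demo;                 -- returns (1, 1) and (3, 3)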
Sverdlovsk answered 5/3, 2015 at 3:42 Comment(2)
5. 'Default Value or Binding' is set for a field (e.g. getdate()), causing the field to update after row creation.Incisor
For me it was that the database logs were full and Entity Framework was throwing DbUpdateExceptions.Biogeography

You can avoid this error either by checking for the expected error condition before executing the insert statement, or by using a transaction so that the statement is rolled back if there is any error. Hope it helps.

Inearth answered 27/8, 2015 at 18:58 Comment(1)
Rolling back an insert on a table that has an identity column does NOT roll back the identity value to the previous value. Using transactions only saves the identity value IF you do not perform the insert. Once the INSERT is issued, that identity value is consumed UNLESS you explicitly tell SQL Server to reseed back to the number you just wasted.Sevier
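For completeness, the reseed the comment refers to looks like this (a sketch; only safe if you are sure no other session has inserted rows in the meantime):

-- Rewind the identity counter; the next insert will get 1445.
DBCC CHECKIDENT ('dbo.Medida', RESEED, 1444);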

The auto-ID of a deleted row is not reused by a newly inserted row. I'm not able to give you a solution for this, but this is the behavior.

Wouter

Spitter answered 1/2, 2013 at 8:7 Comment(1)
I have not deleted any rows.Capsulate

Short Version

It was fixed in SQL Server 2017—but you have to opt-in to the fix.

Long Version

This was a "won't fix" bug introduced in SQL Server 2012.

It was fixed in SQL Server 2017, but you have to manually enable the fix in your database:

ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE=OFF;

You can check the current setting in your database using:

SELECT * FROM sys.database_scoped_configurations WHERE name = 'IDENTITY_CACHE'

IDENTITY_CACHE = { ON | OFF }

Applies to: SQL Server (Starting with SQL Server 2017 (14.x)), Azure SQL Database and Azure SQL Managed Instance

Enables or disables identity cache at the database level. The default is ON. Identity caching is used to improve INSERT performance on tables with identity columns. To avoid gaps in the values of an identity column in cases where the server restarts unexpectedly or fails over to a secondary server, disable the IDENTITY_CACHE option. This option is similar to the existing Trace Flag 272, except that it can be set at the database level rather than only at the server level.

This restores the pre-SQL Server 2012 behavior.
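On SQL Server 2012 through 2016, where the database-scoped option does not exist, the server-level equivalent is the trace flag mentioned in the documentation above:

-- Enable trace flag 272 globally (-1) to disable identity caching
-- server-wide; add -T272 to the startup parameters to persist it.
DBCC TRACEON (272, -1);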

And then the bellyaching

You will then hear a small, but vocal, group:

But you could still have gaps!

We would have the exact same number of gaps if we used a SEQUENCE with NO CACHE and an increment of 1, which is the exact same number of gaps we had before SQL Server 2012: zero.

There is technically a performance penalty, but the cost of generating a new identity value is lost in the noise of performance metrics. You will not experience any performance issues by returning to the SQL Server 6.5, 7, 2000, 2005, and 2008 default. Identity caching was only introduced in 2012 to solve a problem no one was experiencing, and as a result it created thousands of problems that everyone has to Google and solve over and over again. It's fine to turn it off; it's a micro-optimization that should never have been turned on.

Kenakenaf answered 17/7, 2023 at 14:6 Comment(0)

I have a table with a few constraints that I expect to cause a massive number of insert statements to fail. This was causing huge gaps in my identity column, which is defined as IDENTITY(1,1).

The solution I devised was to create a staging table without the ID column but with all of the other columns of the table. I then put a trigger on the staging table so that after an insert succeeds there, the record is transferred into the actual table with the identity column. This way the ID reservation happens at a different time, and all of the values end up grouped together.

I'm aware that this seems a little inefficient, but it's worked very well for me thus far.
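
A minimal sketch of that pattern (all object names hypothetical; dbo.Medida is assumed to be the real table with the identity column):

-- Staging table: same payload columns, but no identity column,
-- so inserts that fail its constraints burn no identity values.
CREATE TABLE dbo.Medida_Staging (
    valor int NOT NULL
);
GO

-- Only rows that survived the staging constraints reach the real
-- table, so identity values are reserved in one contiguous batch.
CREATE TRIGGER trg_Medida_Staging ON dbo.Medida_Staging
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.Medida (valor)
    SELECT valor FROM inserted;

    -- Simplistic cleanup; assumes a single writer at a time.
    DELETE FROM dbo.Medida_Staging;
END;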

Demarcusdemaria answered 24/1, 2016 at 16:15 Comment(0)
