Delete statement in SQL is very slow

I have statements like this that are timing out:

DELETE FROM [table] WHERE [COL] IN ( '1', '2', '6', '12', '24', '7', '3', '5')

I tried doing one at a time like this:

DELETE FROM [table] WHERE [COL] IN ( '1' )

and so far it's at 22 minutes and still going.

The table has 260,000 rows in it and has four columns.

Does anyone have any ideas why this would be so slow and how to speed it up? I do have a non-unique, non-clustered index on the [COL] that I'm filtering on in the WHERE clause. I'm using SQL Server 2008 R2.

Update: I have no triggers on the table.

Wigley answered 5/6, 2012 at 16:39 Comment(12)
what happens if you do where [col] = '1'?Palette
have you checked the execution plan on the query to see if it shows anything?Magnolia
DELETE statements are generally slow because of the log. TRUNCATE is faster, but you cannot use TRUNCATE in this situation. I don't have any further clues.Russian
possible duplicate of Is IN with the multiple of the same terms slower?Magnolia
There must be a trigger or something that deadlocks.Pasco
how is this question any different from the one you posted a few minutes ago? #10901738Magnolia
@bluefeet Because they are asking very different things. In that question I ask whether including the same term in an IN statement multiple times is slower than including it once. In this question I ask for ideas on why deleting in general might be so slow, even when I use just one term.Wigley
@AlexGitelman I did try that and it is just as slow.Wigley
If your table is referenced by FKs from other tables, your query plan will reveal the cost of checking those constraints. If you don't have indexes on the FKs, you will get a table scan when the constraint is checked. Not a big deal if the tables are small, but if you have a big one (or a couple) in there, that could be what is causing your delete to be slow. Check the QP (query plan).Lyford
I had the same issue. I deleted first couple of rows (using https://mcmap.net/q/157152/-how-to-delete-the-top-1000-rows-from-a-table-using-sql-server-2008) to get query execution plan, and it suggested creation of 2 non-clustered indexes, which fixed my issue.Arsonist
Possible duplicate of Optimizing Delete on SQL ServerTalapoin
I tried to delete 200 rows out of a total of 300 and it was taking too long. I found that another table with 70,000,000 rows referenced the 300-row table with a foreign key. Once I deleted that foreign key, the delete was instant.Benally

Things that can cause a delete to be slow:

  • deleting a lot of records
  • many indexes
  • missing indexes on foreign keys in child tables (thank you to @CesarAlvaradoDiaz for mentioning this in the comments; see the query sketch below this list)
  • deadlocks and blocking
  • triggers
  • cascade delete (those ten parent records you are deleting could mean millions of child records getting deleted)
  • transaction log needing to grow
  • many foreign keys to check

So your choices are to find out what is blocking and fix it or run the deletes in off hours when they won't be interfering with the normal production load. You can run the delete in batches (useful if you have triggers, cascade delete, or a large number of records). You can drop and recreate the indexes (best if you can do that in off hours too).
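
For the missing-FK-index bullet, here is a sketch of a catalog query that can surface foreign key columns in child tables with no index at all. It is a rough check only: it verifies that the column appears in some index, not that it is the index's leading key column.

SELECT fk.name AS foreign_key,
       OBJECT_NAME(fkc.parent_object_id) AS child_table,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS child_column
FROM sys.foreign_keys fk
JOIN sys.foreign_key_columns fkc
    ON fkc.constraint_object_id = fk.object_id
WHERE NOT EXISTS (SELECT 1
                  FROM sys.index_columns ic
                  WHERE ic.object_id = fkc.parent_object_id
                    AND ic.column_id = fkc.parent_column_id);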

Spain answered 5/6, 2012 at 17:9 Comment(5)
I had a case when I was trying to delete from a table containing ~1 million rows but it took forever. Querying the rows with select * was fast, but the delete was insanely slow. Then I realized there was a foreign key to the table from another one with 2 billion(!) rows. Of course, the FK column was not indexed. Solution: dropped the FK, deleted the rows, recreated the FK. Recreating the FK still took some time, but it was much faster.Maryrosemarys
Important: if you have foreign keys, these should be indexed in your tables.Abstractionist
Is it good practice to disable triggers while deleting a huge number of records from a table?Porterfield
@Aviator, only if your delete script handles whatever the trigger handled. Otherwise you are likely going to cause a massive data integrity problem.Spain
A little more detail on the foreign key issue, because I missed it at first. Don't just think about the foreign key constraints that this table has on others. Think about the foreign keys that other tables have on this one. So, if you run \d mytable, look under "Referenced by". My issue was that I had a huge table referencing the one I was trying to delete from, and the column the other table was using needed an index.Amigo
  1. Disable CONSTRAINT

    ALTER TABLE [TableName] NOCHECK CONSTRAINT ALL;

  2. Disable Index

    ALTER INDEX ALL ON [TableName] DISABLE;

  3. Rebuild Index

    ALTER INDEX ALL ON [TableName] REBUILD;

  4. Enable CONSTRAINT

    ALTER TABLE [TableName] CHECK CONSTRAINT ALL;

  5. Delete again
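
Note: step 4 re-enables the constraints without re-validating existing rows, which leaves the foreign keys marked as not trusted (see the comments below). To re-enable and re-validate in one statement, standard T-SQL offers:

    ALTER TABLE [TableName] WITH CHECK CHECK CONSTRAINT ALL;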

Bertha answered 10/7, 2015 at 3:49 Comment(12)
This worked, but an explanation would have been nice.Odrick
One drawback is that after alter index all on mytable disable, most queries stop working: "The query processor is unable to produce a plan". They seem to work again once the indexes have been rebuilt and the constraints re-enabled.Unreal
When disabling the indexes on the table, all foreign keys pointing to the table will be disabled too. This could be the explanation for the speedup. Afterwards, you can select referrer, fk from foreign_keys where is_disabled = 1, or check the warning messages which were produced when disabling the indexes. Then for each affected child table, alter table mychild with check check constraint all. This might find some child rows that now need deleting!Unreal
You may also end up with FKs marked 'not trusted', so select referrer, fk from foreign_keys where is_not_trusted = 1 and fix those too.Unreal
For me, rebuilding the indexes on the Primary Keys that were being used in clustered index searches gave me a massive speed improvement. I had a tonne of fragmentation on the PKs.Bucket
WARNING: Rebuilding indexes in a BIG table (2 million+) takes a LOT of time (hours...). In my case it was better just to do as in @Andomar's answer.Nowadays
You made my day. KudosStopover
Good one. Thank you.Dah
This one finally did the trick for meLovato
Be very, very careful using this: afterwards, all tables that had a foreign key to this table had their foreign keys turned off! You have to turn them back on manually for each table.Lovato
This worked for me, and I didn't notice any foreign keys being disabled.Reginiaregiomontanus
This works by dropping the check on the FK and marking the FK as not trusted. You can generate statements to enable the checks again: SELECT 'ALTER TABLE ' + OBJECT_NAME(parent_object_id) + ' WITH CHECK CHECK CONSTRAINT ' + name AS [SQL to Execute] FROM sys.foreign_keys WHERE is_not_trusted = 1 --AND OBJECT_NAME(parent_object_id) = 'tableToCheck' ORDER BY OBJECT_NAME(parent_object_id)Catamenia

Deleting a lot of rows can be very slow. Try to delete a few at a time, like:

delete top (10) from YourTable where col in ('1','2','3','4')

while @@rowcount > 0
begin
    delete top (10) from YourTable where col in ('1','2','3','4')
end
Furred answered 5/6, 2012 at 16:48 Comment(2)
This actually worked! My only explanation is that it doesn't need to write a large log. Please let me know if you have any other explanations.Alderman
This affects how much memory is granted to the operation, which is good. Too much of a memory grant can cause problems for other queries on the server too due to extra RESOURCE_SEMAPHORE waits. My execution plan had a warning saying The query memory grant detected "ExcessiveGrant", which may impact the reliability. The "Estimated Number of Rows" was really high also. Adding the "top 10" made it estimate the number of rows to be 10, so the memory grant was much lower. Since I was deleting based on PK, I could also accomplish this by hard-coding an upper and lower limit to the ID.Chair

In my case the database statistics had become corrupt. The statement

delete from tablename where col1 = 'v1' 

was taking 30 seconds even though there were no matching records, but

delete from tablename where col1 = 'rubbish'

ran instantly.

Running

update statistics tablename

fixed the issue.
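
If a plain statistics update doesn't resolve it, UPDATE STATISTICS also accepts a WITH FULLSCAN option (standard T-SQL; same table name as above) that rebuilds the statistics from every row instead of a sample:

update statistics tablename with fullscan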

Dacoity answered 11/10, 2018 at 13:36 Comment(1)
Worked for me too!Apple

Preventive Action

Check with SQL Profiler for the root cause of this issue. There may be triggers causing the delay in execution; it can be anything. Don't forget to select the database name and object name while starting the trace, to exclude scanning unnecessary queries:

Database Name Filtering

Table/Stored Procedure/Trigger Name Filtering

Corrective Action

As you said, your table contains 260,000 records and the IN predicate contains eight values. Each value in the IN list may end up being checked against all 260,000 rows. Instead, it should be an inner join, like below:

Delete K From YourTable1 K
Inner Join YourTable2 T on T.id = K.id

Insert the IN predicate values into a temporary table or table variable first.
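
A minimal sketch of that approach, using the table, column, and values from the question (the #ids temp table name is made up):

create table #ids (id varchar(10) primary key);

insert into #ids (id)
values ('1'), ('2'), ('6'), ('12'), ('24'), ('7'), ('3'), ('5');

delete t
from [table] t
inner join #ids i on i.id = t.[COL];

drop table #ids;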

Gaitskell answered 5/6, 2012 at 16:55 Comment(0)

If the table you are deleting from has BEFORE/AFTER DELETE triggers, something in there could be causing your delay.

Additionally, if you have foreign keys referencing that table, additional UPDATEs or DELETEs may be occurring.
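
A couple of quick checks along those lines (dbo.YourTable is a placeholder for your actual table name):

-- triggers defined on the table
select name, is_disabled
from sys.triggers
where parent_id = OBJECT_ID('dbo.YourTable');

-- foreign keys in other tables that reference this one
select name, OBJECT_NAME(parent_object_id) as referencing_table
from sys.foreign_keys
where referenced_object_id = OBJECT_ID('dbo.YourTable');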

Atalanta answered 5/6, 2012 at 16:45 Comment(0)

It's possible that other tables have FK constraints referencing your [table]. The DB then needs to check those tables to maintain referential integrity. Even if you have all the indexes needed for those FKs, check how many of them there are.

I had a situation where NHibernate had incorrectly created duplicate FKs on the same columns, but with different names (which SQL Server allows). That drastically slowed down the DELETE statement.
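
A sketch of a query to spot that situation, grouping foreign key columns by child table and column and flagging columns that carry more than one FK (this simple version assumes single-column keys):

select OBJECT_NAME(fkc.parent_object_id) as child_table,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) as child_column,
       COUNT(*) as fk_count
from sys.foreign_key_columns fkc
group by fkc.parent_object_id, fkc.parent_column_id
having COUNT(*) > 1;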

Kaycekaycee answered 7/12, 2015 at 9:6 Comment(1)
This should have been a comment, not an answer. With a bit more rep, you will be able to post comments.Haag

Check the execution plan of this delete statement. Have a look at whether an index seek is used. Also, what is the data type of col?

If you are using the wrong data type, change the delete statement (like from '1' to 1, or to N'1').

If an index scan is used, consider using a query hint.
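
For example (assuming the table and column names from the question), matching the literal's type to the column's declared type avoids an implicit conversion that can turn an index seek into a scan:

-- if [COL] is an int column
DELETE FROM [table] WHERE [COL] = 1

-- if [COL] is nvarchar, use a Unicode literal
DELETE FROM [table] WHERE [COL] = N'1'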

Farcy answered 6/6, 2012 at 8:26 Comment(1)
In my case, an index was missing. Adding the index solved the problem.Instancy

If you're deleting all the records in the table rather than a select few it may be much faster to just drop and recreate the table.
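
If no foreign keys reference the table, TRUNCATE TABLE (also mentioned in the question's comments) is usually the lighter option, since it deallocates whole pages instead of logging individual row deletes ([table] is the placeholder name from the question):

TRUNCATE TABLE [table]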

Selfsealing answered 20/3, 2017 at 21:13 Comment(0)

Is [COL] really a character field that's holding numbers, or can you get rid of the single-quotes around the values? @Alex is right that IN is slower than =, so if you can do this, you'll be better off:

DELETE FROM [table] WHERE [COL] = '1'

But better still is using numbers rather than strings to find the rows (SQL likes numbers):

 DELETE FROM [table] WHERE [COL] = 1

Maybe try:

 DELETE FROM [table] WHERE CAST([COL] AS INT) = 1

In either event, make sure you have an index on column [COL] so the delete can seek instead of scanning the whole table.

Astronomical answered 5/6, 2012 at 17:59 Comment(3)
Unfortunately the column holds letters and numbers and I do have an index =/.Wigley
I'm not sure if it will help, but one thing we do here is put an "IsActive" column into all of our important tables, allowing us to update a field rather than delete rows. Handy for auditing and less messy in terms of rebuilding indexes on deletion. Of course, all subsequent views/queries/procs/functions have to include "WHERE ISACTIVE = 1".Astronomical
Casting all values in the database to an int may actually hurt performance significantly. It may also prevent SQL Server from using the index. So if your proposal has any effect, it's most probably worse.Subscribe

I found this article really helpful for troubleshooting this kind of issue:

https://support.microsoft.com/en-us/kb/224453

This was a case of waitresource KEY: 16:72057595075231744 (ab74b4daaf17)

-- First, find the SPID (session ID); check Status, Open_tran, Lastwaittype,
-- waittype, waittime and, most importantly, Waitresource
select * from sys.sysprocesses where spid = 57

-- the first number in the waitresource is the database id
select * from sys.databases where database_id = 16

-- the second number is the hobt_id; use it to find the partition and its object
select * from sys.partitions where hobt_id = 72057595075231744

select * from sys.objects where object_id = 2105058535
Prague answered 22/10, 2015 at 20:27 Comment(0)

After inspecting an SSIS package (because a SQL Server was executing commands really slowly) that had been set up at a client of ours four or five years before the time of writing, I found that it ran the tasks below:

1) Insert data from an XML file into a table called [ImportBarcodes].

2) Run a MERGE command against another target table, using the above table as the source.

3) Run delete from [ImportBarcodes] to clear the table of the rows that were inserted after the XML file was read by the package.

After a quick inspection, all statements (SELECT, UPDATE, DELETE etc.) on the ImportBarcodes table, which held only 1 row, took about 2 minutes to execute.

Extended Events showed a whole lot of PAGEIOLATCH_EX wait notifications.

No indexes were present on the table and no triggers were registered.

Upon close inspection of the properties of the table, in the Storage tab under the General section, the Data Space field showed more than 6 gigabytes of space allocated in pages.

What happened:

The package had run for a good portion of each day for the last 4 years, inserting and deleting data in the table and leaving unused pages behind without freeing them up.

That was the main reason for the wait events captured by the Extended Events session and the slowly executing commands against the table.

Running ALTER TABLE ImportBarcodes REBUILD fixed the issue by freeing up all the unused space. TRUNCATE TABLE ImportBarcodes does a similar thing, with the only difference that it also deletes all the data.
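
For reference, the two fixes as statements (table name as given in this answer):

ALTER TABLE ImportBarcodes REBUILD;

-- or, if every row in the staging table can be discarded:
TRUNCATE TABLE ImportBarcodes;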

Moll answered 26/7, 2019 at 12:59 Comment(0)

An older topic, but one still relevant. Another issue occurs when an index has become fragmented to the extent that it is more of a problem than a help. In such a case, the answer is to rebuild (or drop and recreate) the index and issue the delete statement again.
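
A sketch of checking fragmentation and rebuilding (dbo.YourTable and IX_YourTable_Col are placeholder names):

-- check fragmentation of the table's indexes
select i.name, ips.avg_fragmentation_in_percent
from sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED') ips
join sys.indexes i
    on i.object_id = ips.object_id and i.index_id = ips.index_id;

-- rebuild a heavily fragmented index
ALTER INDEX IX_YourTable_Col ON dbo.YourTable REBUILD;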

Deadpan answered 3/8, 2020 at 14:31 Comment(1)
In my case, just updating statistics on the tables that I was trying to delete records from (as well as joined tables) did it. I.e.: update statistics MyTableCinda

As an extension to Andomar's answer above, I had a scenario where the first 700,000,000 records (of ~1.2 billion) processed very quickly, with chunks of 25,000 records processing per second (roughly). But then it started taking 15 minutes to do a batch of 25,000. I reduced the chunk size to 5,000 records and it went back to its previous speed. I'm not certain what internal threshold I hit, but the fix was to reduce the batch size further to regain the speed.

Arpeggio answered 12/10, 2020 at 19:20 Comment(0)

Open CMD and run these commands:

NET STOP MSSQLSERVER
NET START MSSQLSERVER

This will restart the SQL Server instance. Try running your delete command again afterwards.

I have these commands in a batch script and run them from time to time when I encounter problems like this. A normal PC restart is not the same, so restarting the instance is the most effective approach if you are having issues like this with your SQL Server.

Trilobate answered 5/2, 2018 at 3:52 Comment(2)
This doesn't answer the original question, or if it does, you're not explaining why this would help.Consumptive
It is so not a solution, especially in a production environment; it is the kind of thing not to do. Also, how will this increase the speed of deletion?Somerville
