I have a large data table with 10 million records in it.
What is the best way to run this query?
DELETE LargeTable WHERE readTime < dateadd(MONTH,-7,GETDATE())
If you are deleting all the rows in the table, the simplest option is to truncate it, something like:
TRUNCATE TABLE LargeTable
GO
TRUNCATE TABLE simply empties the table; you cannot use a WHERE clause to limit the rows being deleted, and no triggers will be fired. It is also minimally logged, which is why it is so much faster than DELETE.
On the other hand, if you are deleting more than 80-90 percent of the data, say you have a total of 11 million rows and want to delete 10 million, another way would be to insert the 1 million rows you want to keep into a staging table, truncate this large table, and then insert those 1 million rows back.
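A minimal sketch of that keep-and-truncate approach, assuming a staging table named LargeTable_Keep (an illustrative name, not from the question) and the cutoff from the question:
-- Copy the ~1 million rows you want to KEEP into a staging table
SELECT *
INTO LargeTable_Keep
FROM LargeTable
WHERE readTime >= DATEADD(MONTH, -7, GETDATE());

-- Empty the large table (minimally logged)
TRUNCATE TABLE LargeTable;

-- Put the kept rows back; if LargeTable has an IDENTITY column you will need
-- an explicit column list wrapped in SET IDENTITY_INSERT LargeTable ON/OFF
INSERT INTO LargeTable
SELECT * FROM LargeTable_Keep;

DROP TABLE LargeTable_Keep;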
Or, if permissions, views, or other objects that have this large table as their underlying table would not be affected by dropping it, you can copy this relatively small number of rows into another table, drop the large table, create another table with the same schema, and import the rows back into the ex-large table.
One last option I can think of is to change your database's recovery model to SIMPLE and then delete rows in smaller batches using a WHILE loop, something like this:
DECLARE @Deleted_Rows INT;
SET @Deleted_Rows = 1;
WHILE (@Deleted_Rows > 0)
BEGIN
-- Delete some small number of rows at a time
DELETE TOP (10000) LargeTable
WHERE readTime < dateadd(MONTH,-7,GETDATE())
SET @Deleted_Rows = @@ROWCOUNT;
END
And don't forget to change the recovery model back to FULL. Switching models breaks the transaction log backup chain, so you have to take a full (or differential) backup afterwards for the change to be fully effective and for log backups to resume.
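If you go this route, here is a minimal sketch of the full sequence; the database name MyDb and the backup path are placeholders for your own:
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- ... run the batched delete loop shown above ...

ALTER DATABASE MyDb SET RECOVERY FULL;

-- Switching recovery models breaks the log backup chain;
-- a full backup restarts it
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak';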
Is there an optimal solution for the unknown case?
That's the dream, isn't it? Unfortunately you cannot cure every disease with one pill; I have suggested some possible solutions for different scenarios. There is no silver bullet here, unfortunately. – Michellemichels
You could also batch on a key range, combining the where readTime < dateadd(...) filter with pk > pk_start AND pk <= pk_end conditions. – Mentality
@M.Ali's answer is right, but also keep in mind that the logs can grow a lot if you don't commit the transaction after each chunk and perform a checkpoint. This is how I would do it, taking this article http://sqlperformance.com/2013/03/io-subsystem/chunk-deletes as a reference, with performance tests and graphs:
DECLARE @Deleted_Rows INT;
SET @Deleted_Rows = 1;
WHILE (@Deleted_Rows > 0)
BEGIN
BEGIN TRANSACTION
-- Delete some small number of rows at a time
DELETE TOP (10000) LargeTable
WHERE readTime < dateadd(MONTH,-7,GETDATE())
SET @Deleted_Rows = @@ROWCOUNT;
COMMIT TRANSACTION
CHECKPOINT -- for simple recovery model
END
Without the COMMIT TRANSACTION and CHECKPOINT the logs are still growing. Thanks for making this clear. – Chandra
You may want to compare @Deleted_Rows to 10000, or you might end up with a near-infinite loop due to it indefinitely deleting small sets of data. With WHILE (@Deleted_Rows = 10000), as soon as there isn't a full "page" of data to delete it will stop. In your implementation, WHILE (@Deleted_Rows > 0), the while-loop will execute again even if it only deleted one row, and the next execution might also find a row or two to delete, resulting in an infinite loop. – Argueta
To avoid that, you can compute the cutoff date once, before the loop, instead of re-evaluating dateadd(MONTH,-7,GETDATE()) inside the WHILE loop itself. – Argueta
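For reference, a minimal sketch of that suggestion; it is M.Ali's loop with the cutoff hoisted into a variable (the name @Cutoff is illustrative):
DECLARE @Cutoff datetime = DATEADD(MONTH, -7, GETDATE()); -- computed once
DECLARE @Deleted_Rows INT = 1;
WHILE (@Deleted_Rows > 0)
BEGIN
    -- the cutoff no longer moves forward while the loop runs,
    -- so the loop is guaranteed to terminate
    DELETE TOP (10000) LargeTable
    WHERE readTime < @Cutoff;
    SET @Deleted_Rows = @@ROWCOUNT;
END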
You can also use GO plus the number of times you want the same batch to execute. Note that GO is a batch separator understood by SSMS and sqlcmd, not a T-SQL statement, so it only works in those tools:
DELETE TOP (10000) [TARGETDATABASE].[SCHEMA].[TARGETTABLE]
WHERE readTime < dateadd(MONTH,-1,GETDATE());
-- how many times you want the query to repeat
GO 100
How is GO xx supposed to work? I get a "Could not find stored procedure ''" error. Without the GO command it works fine, though. – Err
@Francisco Goldenstein, just a minor correction. The COMMIT must be used after you set the variable; otherwise the WHILE will be executed just once:
DECLARE @Deleted_Rows INT;
SET @Deleted_Rows = 1;
WHILE (@Deleted_Rows > 0)
BEGIN
BEGIN TRANSACTION
-- Delete some small number of rows at a time
DELETE TOP (10000) LargeTable
WHERE readTime < dateadd(MONTH,-7,GETDATE())
SET @Deleted_Rows = @@ROWCOUNT;
COMMIT TRANSACTION
CHECKPOINT -- for simple recovery model
END
This variation of M.Ali's is working fine for me. It deletes some, clears the log and repeats. I'm watching the log grow, drop and start over.
DECLARE @Deleted_Rows INT;
SET @Deleted_Rows = 1;
WHILE (@Deleted_Rows > 0)
BEGIN
-- Delete some small number of rows at a time
delete top (100000) from InstallLog where DateTime between '2014-12-01' and '2015-02-01'
SET @Deleted_Rows = @@ROWCOUNT;
dbcc shrinkfile (MobiControlDB_log,0,truncateonly);
END
I tweaked the # of rows to delete at a time, and also the WHERE clause. Works like a charm! – Epos
If you are willing (and able) to implement partitioning, it is an effective technique for removing large quantities of data with very little run-time overhead. Not cost-effective for a one-off exercise, though.
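A hedged sketch of the idea, assuming LargeTable is partitioned on readTime via a partition scheme PS_ReadTime and that partition 1 holds only rows older than the cutoff; every name and column here is illustrative, and the switch target must match the source table's schema and indexes exactly:
-- Empty table with an identical schema, on the same partition scheme
CREATE TABLE LargeTable_Purge (
    readTime datetime NOT NULL,
    payload  varchar(100) NULL
) ON PS_ReadTime (readTime);

-- Metadata-only operation: the old rows move out instantly
ALTER TABLE LargeTable SWITCH PARTITION 1 TO LargeTable_Purge PARTITION 1;

-- Deallocating the switched-out rows is far cheaper than a row-by-row DELETE
DROP TABLE LargeTable_Purge;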
I was able to delete 19 million rows from my table of 21 million rows in a matter of minutes. Here is my approach.
If you have an auto-incrementing primary key on this table, then you can make use of it (a sketch follows the steps below):
1. Get the minimum value of the primary key among the rows you want to keep, i.e. where readTime >= dateadd(MONTH,-7,GETDATE()). (Add an index on readTime if one is not already present; it will be dropped along with the table in step 3 anyway.) Let's store it in a variable, min_primary.
2. Insert all the rows having primary key >= min_primary into a staging table (a memory-optimized table, if the number of rows is not large).
3. Drop the large table.
4. Recreate the table and copy all the rows from the staging table back into the main table.
5. Drop the staging table.
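A hedged sketch of those steps, assuming the primary key is an IDENTITY column named id and that id order tracks readTime order (both assumptions, not stated in the answer):
DECLARE @min_primary bigint;

-- Step 1: first key of the rows to keep
SELECT @min_primary = MIN(id)
FROM LargeTable
WHERE readTime >= DATEADD(MONTH, -7, GETDATE());

-- Step 2: stash the rows to keep
SELECT *
INTO LargeTable_Staging
FROM LargeTable
WHERE id >= @min_primary;

-- Step 3: drop the large table
DROP TABLE LargeTable;

-- Steps 4-5 depend on your original DDL:
-- CREATE TABLE LargeTable (...);  -- recreate with the original schema
-- INSERT INTO LargeTable SELECT * FROM LargeTable_Staging;
-- DROP TABLE LargeTable_Staging;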
Shorter syntax:
SELECT 1 -- prime @@ROWCOUNT so the loop is entered
WHILE (@@ROWCOUNT > 0)
BEGIN
DELETE TOP (10000) LargeTable
WHERE readTime < dateadd(MONTH,-7,GETDATE())
END
You can delete small batches using a while loop, something like this:
DELETE TOP (10000) LargeTable
WHERE readTime < dateadd(MONTH,-7,GETDATE())
WHILE @@ROWCOUNT > 0
BEGIN
DELETE TOP (10000) LargeTable
WHERE readTime < dateadd(MONTH,-7,GETDATE())
END
If you are using SQL Server 2016 or higher, and your table has partitions created on the column you are trying to delete by (for example, a Timestamp column), then you can use a newer form of TRUNCATE TABLE to delete data partition by partition:
TRUNCATE TABLE table_name WITH ( PARTITIONS ( { <partition_number> | <range> } [ , ...n ] ) )
This deletes the data in the selected partition(s) only, and it is the most efficient way to delete data from part of a table: it is a minimally logged metadata operation rather than a row-by-row delete, so it runs about as fast as a regular truncate, but without removing all of the data in the table.
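For example (the partition numbers are illustrative and must map to partitions that hold only data you want gone):
-- Truncate partitions 1 and 2, plus the range 4 through 6, in one statement
TRUNCATE TABLE LargeTable WITH (PARTITIONS (1, 2, 4 TO 6));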
The drawback is that if your table is not set up with partitions, you need to go old school: delete the data with a regular approach, and then recreate the table with partitions so that you can do this in the future (which is what I did; I added the partition creation and deletion into the insertion procedure itself). I had a table with 500 million rows, so partitioning was the only option to get deletion time down.
For more details, refer to the links below: https://learn.microsoft.com/en-us/sql/t-sql/statements/truncate-table-transact-sql?view=sql-server-2017
SQL server 2016 Truncate table with partitions
Below is what I did first to delete the data, before I could recreate the table with partitions holding the required data. This query runs for days, during the specified time window, until the data is deleted.
:connect <<ServerName>> -- requires SQLCMD mode in SSMS
use <<DatabaseName>>
SET NOCOUNT ON;
DECLARE @Deleted_Rows INT;
DECLARE @loopnum INT;
DECLARE @msg varchar(100);
DECLARE @FlagDate datetime;
SET @FlagDate = getdate() - 31;
SET @Deleted_Rows = 1;
SET @loopnum = 1;
/*while (getdate() < convert(datetime,'2018-11-08 14:00:00.000',120))
BEGIN
RAISERROR( 'WAIT for START' ,0,1) WITH NOWAIT
WAITFOR DELAY '00:10:00'
END*/
RAISERROR( 'STARTING PURGE' ,0,1) WITH NOWAIT
WHILE (1=1)
BEGIN
WHILE (@Deleted_Rows > 0 AND (datepart(hh, getdate() ) >= 12 AND datepart(hh, getdate() ) <= 20)) -- (getdate() < convert(datetime,'2018-11-08 19:00:00.000',120) )
BEGIN
-- Delete some small number of rows at a time
DELETE TOP (500000) dbo.<<table_name>>
WHERE timestamp_column < convert(datetime, @FlagDate,102)
SET @Deleted_Rows = @@ROWCOUNT;
WAITFOR DELAY '00:00:01'
select @msg = 'ROWCOUNT ' + convert(varchar(20), @Deleted_Rows);
set @loopnum = @loopnum + 1
if @loopnum > 1000
begin
begin try
DBCC SHRINKFILE (N'<<databasename>>_log' , 0, TRUNCATEONLY)
RAISERROR( @msg ,0,1) WITH NOWAIT
end try
begin catch
RAISERROR( 'DBCC SHRINK' ,0,1) WITH NOWAIT
end catch
set @loopnum = 1
end
END
IF @Deleted_Rows = 0 BREAK; -- all qualifying rows are gone; exit instead of idling forever
WAITFOR DELAY '00:10:00'
END
select getdate()
Another approach, using SET ROWCOUNT (note that SET ROWCOUNT is deprecated for INSERT, UPDATE, and DELETE statements; Microsoft recommends DELETE TOP instead):
SET ROWCOUNT 1000 -- Buffer
DECLARE @DATE AS DATETIME = dateadd(MONTH,-7,GETDATE())
DELETE LargeTable WHERE readTime < @DATE
WHILE @@ROWCOUNT > 0
BEGIN
DELETE LargeTable WHERE readTime < @DATE
END
SET ROWCOUNT 0
Optional: you cannot truly disable the transaction log, but switching the database to the SIMPLE recovery model lets the log be truncated automatically at each checkpoint. Remember to switch back to FULL and take a full backup afterwards, as noted in the accepted answer.
ALTER DATABASE dbname SET RECOVERY SIMPLE;
If you want to avoid an explicit loop, you can use a GOTO statement to delete a large amount of records with SQL Server. For example:
IsRepeat:
DELETE TOP (10000)
FROM <TableName>
IF @@ROWCOUNT > 0
GOTO IsRepeat
This way you can delete a large amount of data using smaller-sized deletes.
Let me know if you need more information.
This question is a little old, but I just stumbled onto it looking for assistance. The fastest way to delete a whole bunch of rows, while keeping some, is to create a script that:
1. Creates a temp table (I used a table variable)
2. Selects the rows to keep into the temp table
3. Truncates the target table
4. Inserts the kept rows back into the target table
I always test first by selecting the rows in the @tmpSaveTable and rolling back the transaction. I just did 17 million rows in a couple of seconds.
Begin tran
DECLARE @tmpSaveTable table (
...your columns, types, etc. go here )
INSERT @tmpSaveTable (columns here)
SELECT (appropriate columns from target here)
FROM SourceTable
WHERE (which rows to save)
-- appropriate place to test w/ select from @tmpSaveTable
TRUNCATE TABLE SourceTable
INSERT SourceTable (columns)
SELECT (all values) FROM @tmpSaveTable
--Rollback Tran  -- while testing
Commit Tran
If you want to delete most of the records in a table with a large number of rows but keep some of them, you can save the required records in a similar table, truncate the main table, and then copy the saved records back into the main table.