How can I remove duplicate rows?
M

43

1375

I need to remove duplicate rows from a fairly large SQL Server table (i.e. 300,000+ rows).

The rows, of course, will not be perfect duplicates because of the existence of the RowID identity field.

MyTable

RowID int not null identity(1,1) primary key,
Col1 varchar(20) not null,
Col2 varchar(2048) not null,
Col3 tinyint not null

How can I do this?

Machinist answered 20/8, 2008 at 21:51 Comment(3)
Quick tip for PostgreSQL users reading this (lots, going by how often it's linked to): Pg doesn't expose CTE terms as updatable views so you can't DELETE FROM a CTE term directly. See https://mcmap.net/q/46349/-postgresql-with-delete-quot-relation-does-not-exists-quot/398670Minded
@CraigRinger the same is true for Sybase - I have collected the remaining solutions here (should be valid for PG and others, too: https://mcmap.net/q/46350/-how-to-delete-duplicate-rows-in-sybase-when-you-have-no-unique-key/1855801 (just replace the ROWID() function by the RowID column, if any)Nickola
Just to add a caveat here. When running any de-duplication process, always double check what you are deleting first! This is one of those areas where it is very common to accidentally delete good data.Actinopod
W
1193

Assuming no nulls, you GROUP BY the unique columns and SELECT the MIN (or MAX) RowId as the row to keep. Then just delete everything that didn't match a kept RowId:

DELETE MyTable
FROM MyTable
LEFT OUTER JOIN (
   SELECT MIN(RowId) as RowId, Col1, Col2, Col3 
   FROM MyTable 
   GROUP BY Col1, Col2, Col3
) as KeepRows ON
   MyTable.RowId = KeepRows.RowId
WHERE
   KeepRows.RowId IS NULL

In case you have a GUID instead of an integer, you can replace

MIN(RowId)

with

CONVERT(uniqueidentifier, MIN(CONVERT(char(36), MyGuidColumn)))
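
Several of the comments below suggest that a NOT EXISTS formulation can perform at least as well as the outer join; here is a minimal sketch of that variant, assuming the same table definition as the question:

-- Sketch only: keeps the lowest RowId per (Col1, Col2, Col3) group,
-- mirroring the keep-the-MIN logic above but written with NOT EXISTS.
DELETE mt
FROM MyTable AS mt
WHERE NOT EXISTS (
    SELECT 1
    FROM (
        SELECT MIN(RowId) AS RowId
        FROM MyTable
        GROUP BY Col1, Col2, Col3
    ) AS KeepRows
    WHERE KeepRows.RowId = mt.RowId
)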
Wheelock answered 20/8, 2008 at 22:0 Comment(32)
Would this work as well? DELETE FROM MyTable WHERE RowId NOT IN (SELECT MIN(RowId) FROM MyTable GROUP BY Col1, Col2, Col3);Elis
Awesome solution! Seems for PostgreSQL you need one more subquery, like in gist.github.com/754805Skolnik
@Georg: I think it would. Your solution is shorter and clearer. Not so sure about performance, maybe it is equivalent to Mark's, but with really big tables I would probably stick to LEFT JOIN.Peipeiffer
@Andriy: Isn't SQL supposed to choose the fastest possible algorithm no matter how the SQL query is structured?Elis
@Georg: If you say so, sir. :) Honestly, it's only here on SO that I've started to take notice about such issues, like differently structured queries resulting in the same actual algorithm or quite the other way when the queries seemingly differ very slightly. From what I've learned so far, I would rather agree with you. It's just that the LEFT JOIN version seems (no more than that) to me more optimisable.Peipeiffer
@Andriy - In SQL Server LEFT JOIN is less efficient than NOT EXISTS sqlinthewild.co.za/index.php/2010/03/23/… The same site also compares NOT IN vs NOT EXISTS. sqlinthewild.co.za/index.php/2010/02/18/not-exists-vs-not-in Out of the 3 I think NOT EXISTS performs best. All three will generate a plan with a self join though that can be avoided.Septuagesima
@Martin: Very interesting, thanks. And I'm going to make some tests similar to those described, only with self-derived tables, as more applicable to this here question.Peipeiffer
@Martin, @Georg: So, I've made a small test. A big table was created and populated as described here: sqlinthewild.co.za/index.php/2010/03/23/… Two SELECTs then were produced, one using the LEFT JOIN + WHERE IS NULL technique, the other using the NOT IN one. Then I proceeded with the execution plans, and guess what? The query costs were 18% for LEFT JOIN against 82% for NOT IN, a big surprise to me. I might have done something I shouldn't have or vice versa, which, if true, I would really like to know.Peipeiffer
Coming very late I know, but sqlinthewild.co.za/index.php/2010/02/18/not-exists-vs-not-in. If the columns were nullable, NOT IN behaves differently and performs terribly. It's why I recommend NOT EXISTS.Thumb
I have searched for such a simple solution for more than half an hour now. I have come across solutions with DELETE TOP(n) using cursors, and no solution is close to yours. Your solution is lean and swift, and does exactly what is expected of it. Thanks for sharing such great knowledge! =)Markova
Use CONVERT(uniqueidentifier, MAX(CONVERT(char(36), MyGuidColumn))) if you have a GUID instead of an integer.Enervate
Amazing how complicated this is given how common this problem must be - have worked on several projects where this kind of thing was needed. Core SQL is really crying out for a simpler way of doing this, especially given the ratings and number of comments this question and others like it have.Ovarian
@GeorgSchölly has provided an elegant answer. I've used it on a table where a PHP bug of mine created duplicate rows.Quit
As far as I know, RowId does not exist in SQL Server; it's an Oracle feature, and the question is tagged as sql-server. Am I right?Immense
@Immense - In this case RowId is just the name of a column. It has no special meaning. There is no direct equivalent to Oracle's RowId in SQL Server.Septuagesima
Sorry but why is DELETE MyTable FROM MyTable correct syntax? I don't see putting the table name right after the DELETE as an option in the documentation here. Sorry if this is obvious to others; I'm a newbie to SQL just trying to learn. More importantly than why does it work: what is the difference between including the name of the table there or not?Scotch
@GeorgSchölly - your suggestion does not appear to work in MySQL unfortunately. I think it complains about it being a cyclical query.Shumate
@GeorgSchölly: this statement also works in SQLite. Thank you!Shirberg
@MarkBrackett: Thanks, your query helped me too, but it just does not work for one of my big tables (about 2,000,000 rows), so I chose an index, but it still takes a long time and in the end nothing happens! I don't know if the problem is my join, choosing the index by mistake, or something else?Hinda
@Georg's solution errors: You can't specify target table 'products' for update in FROM clauseDoorjamb
@GeorgSchölly, your query returns this error: "#1093 - You can't specify target table MyTable for update in FROM clause"Mori
One thing to keep in mind is that if your table is active (i.e. inserting new entries all the time), then it is better to run this query with a restricted time period which ends just before the current time, since the self join might result in a mismatch if the new rows are read by the outer query but not the sub-query. In that case non-duplicate rows could be deleted.Lammers
@Georg: for a table with very many rows, where only very few are duplicates that ought to be deleted, inverting the query to reduce the number of IN parameters can make the query much faster: DELETE FROM myTable WHERE id IN ( SELECT id FROM myTable EXCEPT (SELECT MIN(id) id FROM myTable GROUP BY col1, col2, col3));Manakin
@Scotch - see FROM table_source (the T-SQL extension which allows FROM and JOIN in a DELETE) and FROM table_alias (the FROM is optional); the first MyTable is table_alias, the second is table_source.Wheelock
Here is @MarkBrackett's answer for Postgres: delete from MyTable where not exists (select 1 from (select min(RowId) as RowId from MyTable group by Col1, Col2, Col3) as KeepRows where MyTable.RowId = KeepRows.RowId); It is much too slow for 7M rows.Cosher
@MarkBrackett Is your 9/20/15 comment saying that your answer only works in T-SQL? That would be good for skimmers to know.Bakki
@StefanSteiger what is uniqueidentifier in your comment? the second CONVERT is understandable, but the first one I do not understand. could you explain? CONVERT(uniqueidentifier, MIN(CONVERT(char(36), MyGuidColumn)))Wrier
@MarkBrackett what is exactly uniqueidentifier in CONVERT(uniqueidentifier, MIN(CONVERT(char(36), MyGuidColumn))) ? is it a column name or a data type, or something else?Wrier
@ericyoung: uniqueidentifier is an v4-uuid. That's a data-type. The first convert is because min cannot be applied to datatype uniqueidentifier.Enervate
Using the suggestion from @GeorgSchölly I got an error about specifying the table in the FROM clause that I'm trying to update. To fix it, I had to modify the query with a subselect in the subselect: DELETE FROM MyTable WHERE RowId NOT IN (SELECT MIN(RowId) FROM (SELECT * FROM MyTable) AS MyTableSubselect GROUP BY Col1, Col2, Col3);Farrow
If the goal is to get only one row of the duplicated rows in a table, why not just write part of the syntax above? Like below: select col1,col2,col3 into MyNewTable from MyTable group by col1,col2,col3? Is this causing some problem I don't see?Country
@Country - you'd lose the id then (from OP: "The rows, of course, will not be perfect duplicates because of the existence of the RowID identity field")Wheelock
S
794

Another possible way of doing this is

; 

--Ensure that any immediately preceding statement is terminated with a semicolon above
WITH cte
     AS (SELECT ROW_NUMBER() OVER (PARTITION BY Col1, Col2, Col3 
                                       ORDER BY ( SELECT 0)) RN
         FROM   #MyTable)
DELETE FROM cte
WHERE  RN > 1;

I am using ORDER BY (SELECT 0) above as it is arbitrary which row to preserve in the event of a tie.

To preserve the latest one in RowID order, for example, you could use ORDER BY RowID DESC.
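
A minimal sketch of that variant (identical to the statement above except for the ORDER BY):

WITH cte
     AS (SELECT ROW_NUMBER() OVER (PARTITION BY Col1, Col2, Col3 
                                       ORDER BY RowID DESC) RN
         FROM   #MyTable)
DELETE FROM cte
WHERE  RN > 1;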

Execution Plans

The execution plan for this is often simpler and more efficient than that in the accepted answer as it does not require the self join.

This is not always the case however. One place where the GROUP BY solution might be preferred is situations where a hash aggregate would be chosen in preference to a stream aggregate.

The ROW_NUMBER solution will always give pretty much the same plan whereas the GROUP BY strategy is more flexible.

Factors which might favour the hash aggregate approach would be

  • No useful index on the partitioning columns
  • relatively fewer groups with relatively more duplicates in each group

In extreme versions of this second case (very few groups, each with many duplicates), one could also consider simply inserting the rows to keep into a new table, then TRUNCATE-ing the original and copying them back, to minimise logging compared to deleting a very high proportion of the rows.
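
A rough sketch of that route, assuming the table from the question and no foreign keys referencing it (TRUNCATE would otherwise fail):

-- Copy one representative row per group into a temp table.
SELECT MIN(RowId) AS RowId, Col1, Col2, Col3
INTO   #KeepRows
FROM   MyTable
GROUP  BY Col1, Col2, Col3;

TRUNCATE TABLE MyTable;

-- RowId is an identity column, so re-insert the original values explicitly.
SET IDENTITY_INSERT MyTable ON;
INSERT INTO MyTable (RowId, Col1, Col2, Col3)
SELECT RowId, Col1, Col2, Col3 FROM #KeepRows;
SET IDENTITY_INSERT MyTable OFF;

DROP TABLE #KeepRows;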

Septuagesima answered 29/9, 2010 at 14:52 Comment(17)
If I may add: The accepted answer doesn't work with tables that use a uniqueidentifier. This one is much simpler and works perfectly on any table. Thanks Martin.Reciprocal
This is the only solution that is workable on my large table (30M rows). Wish I could give it more than +1Carlton
This is such an awesome answer! It worked even when I had removed the old PK before I realised there were duplicates. +100Wendling
I suggest asking and then answering this question (with this answer) on DBA.SE. Then we can add it to our list of canonical answers.Augustinaaugustine
Does anyone know how I could return the number of duplicate records in this same query while also deleting them? I believe, while using the WITH statement, you can only reference the temporary cte once, correct?Ide
@Ide - If you just want to know how many rows were deleted, look in the messages tab in SSMS for the rows-affected message; for more complicated needs look at the OUTPUT clause.Septuagesima
Unlike the accepted answer, this also worked on a table that had no key (RowId) to compare on.Numbing
It has an equivalent syntax: delete t from (select ROW_NUMBER() OVER (PARTITION BY name ORDER BY (SELECT 0)) as rn from @table) t where rn > 1Xuanxunit
Great solution as can also be used on tables with a compound primary key.Cogitation
Just FYI, this article on CodeProject works as well: http://www.codeproject.com/Articles/157977/Remove-Duplicate-Rows-from-a-Table-in-SQL-ServerMarion
@Marion Already mentioned in this answer. Unless you are stuck on SQL Server 2000 that seems unnecessarily cumbersome and inefficient compared with ROW_NUMBER though.Septuagesima
This one doesn't work on all SQL server versions, on the other handElswick
Can anyone explain how the DELETE statement on the common table expression (CTE) is able to delete the rows in the temporary table #MyTable?Transom
@Transom - The same way that deleting rows from a view works. The CTE needs to meet the criteria for updatable views so that the Database Engine must be able to unambiguously trace modifications from the view definition to one base table.Septuagesima
Here is @MartinSmith's answer for Postgres: with cte as (select id, row_number() over (partition by Col1 order by id) as rn from MyTable) delete from MyTable where id in (select id from cte where rn > 1); This will leave the lowest primary key and delete the rest (order by id).Cosher
@Elswick it works on versions 2005, 2008, 2008 R2, 2012, 2014, 2016, 2017, 2019. You need to go back to 2000 to find a version this does not work on.Septuagesima
@jeppoo1 - The CTE in the answer here is only referencing a single base table. You should ask a new question about your case and whether it can be written in a way that avoids this error.Septuagesima
G
159

There's a good article on removing duplicates on the Microsoft Support site. It's pretty conservative - they have you do everything in separate steps - but it should work well against large tables.

I've used self-joins to do this in the past, although it could probably be prettied up with a HAVING clause:

DELETE dupes
FROM MyTable dupes, MyTable fullTable
WHERE dupes.dupField = fullTable.dupField 
AND dupes.secondDupField = fullTable.secondDupField 
AND dupes.uniqueField > fullTable.uniqueField
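
One possible reading of that HAVING suggestion (a sketch of mine, not code from the linked article): restrict the join to the groups that actually contain duplicates, then delete everything but the lowest uniqueField in each of those groups.

DELETE dupes
FROM MyTable dupes
INNER JOIN (
    -- only the groups that actually contain duplicates
    SELECT dupField, secondDupField, MIN(uniqueField) AS keepId
    FROM MyTable
    GROUP BY dupField, secondDupField
    HAVING COUNT(*) > 1
) dupeGroups
    ON  dupes.dupField = dupeGroups.dupField
    AND dupes.secondDupField = dupeGroups.secondDupField
    AND dupes.uniqueField > dupeGroups.keepId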
Geter answered 20/8, 2008 at 21:53 Comment(3)
perfect! i found this is the most efficient way to remove duplicate rows on my old mariadb version 10.1.xx. thank you!Pharyngology
Much simpler and easier to understand!Listen
I have one doubt: in your SQL query, why are you not using the 'FROM' keyword after 'DELETE'? I have seen FROM in many other solutions.Foremost
P
103

The following query is useful to delete duplicate rows. The table in this example has ID as an identity column and the columns which have duplicate data are Column1, Column2 and Column3.

DELETE FROM TableName
WHERE  ID NOT IN (SELECT MAX(ID)
                  FROM   TableName
                  GROUP  BY Column1,
                            Column2,
                            Column3
                  /*Even if ID is not null-able SQL Server treats MAX(ID) as potentially
                    nullable. Because of semantics of NOT IN (NULL) including the clause
                    below can simplify the plan*/
                  HAVING MAX(ID) IS NOT NULL) 

The following script shows the usage of GROUP BY, HAVING, and ORDER BY in one query, and returns the duplicated column values with their counts.

SELECT YourColumnName,
       COUNT(*) TotalCount
FROM   YourTableName
GROUP  BY YourColumnName
HAVING COUNT(*) > 1
ORDER  BY COUNT(*) DESC 
Pachyderm answered 23/11, 2011 at 15:32 Comment(4)
MySQL error with the first script 'You can't specify target table 'TableName' for update in FROM clause'Sulcus
Apart from the error D.Rosado already reported, your first query is also very slow. The corresponding SELECT query took on my setup +- 20 times longer than the accepted answer.Stormy
@Stormy - The question is tagged SQL Server not MySQL. The syntax is fine in SQL Server. Also MySQL is notoriously bad at optimising sub queries see for example here. This answer is fine in SQL Server. In fact NOT IN often performs better than OUTER JOIN ... NULL. I would add a HAVING MAX(ID) IS NOT NULL to the query though even though semantically it ought not be necessary as that can improve the plan example of that hereSeptuagesima
Works great in PostgreSQL 8.4.Addlebrained
C
75
delete t1
from table t1, table t2
where t1.columnA = t2.columnA
and t1.rowid>t2.rowid

Postgres:

delete
from table t1
using table t2
where t1.columnA = t2.columnA
and t1.rowid > t2.rowid
Chimera answered 30/9, 2010 at 2:35 Comment(4)
Why post a Postgres solution on a SQL Server question?Vermiculate
@Lankymart Because postgres users are coming here too. Look at the score of this answer.Marbut
I've seen this in some popular SQL questions, as in here, here and here. The OP got his answer and everyone else got some help too. No problem IMHO.Marbut
In one query you are using 'From' after Delete and in one you're not using 'From'; what's the logic?Foremost
M
47
DELETE LU 
FROM   (SELECT *, 
               Row_number() 
                 OVER ( 
                   partition BY col1, col2, col3 
                   ORDER BY rowid DESC) [Row] 
        FROM   mytable) LU 
WHERE  [row] > 1 
Melleta answered 21/5, 2014 at 7:54 Comment(1)
I get this message on azure SQL DW: A FROM clause is currently not supported in a DELETE statement.Allargando
P
42

This will delete the duplicate rows, keeping only the first row of each group.

DELETE
FROM
    Mytable
WHERE
    RowID NOT IN (
        SELECT
            MIN(RowID)
        FROM
            Mytable
        GROUP BY
            Col1,
            Col2,
            Col3
    )

Refer (http://www.codeproject.com/Articles/157977/Remove-Duplicate-Rows-from-a-Table-in-SQL-Server)

Pint answered 10/9, 2013 at 13:7 Comment(1)
For mysql it will give error: Error Code: 1093. You can't specify target table 'Mytable' for update in FROM clause. but this small change will work for mysql: DELETE FROM Mytable WHERE RowID NOT IN ( SELECT ID FROM (SELECT MIN(RowID) AS ID FROM Mytable GROUP BY Col1,Col2,Col3) AS TEMP)Dichlamydeous
M
38

I would prefer a CTE for deleting duplicate rows from a SQL Server table.

Strongly recommend this article.

By keeping original

WITH CTE AS
(
SELECT *,ROW_NUMBER() OVER (PARTITION BY col1,col2,col3 ORDER BY col1,col2,col3) AS RN
FROM MyTable
)
 
DELETE FROM CTE WHERE RN<>1

Without keeping original

WITH CTE AS
(SELECT *,R=RANK() OVER (ORDER BY col1,col2,col3)
FROM MyTable)
 
DELETE CTE
WHERE R IN (SELECT R FROM CTE GROUP BY R HAVING COUNT(*)>1)
Maryleemarylin answered 19/5, 2015 at 14:35 Comment(1)
In one query you are using 'from' after delete and in another 'from' is not there; what is this? I am confused.Foremost
O
30

To Fetch Duplicate Rows:

SELECT
name, email, COUNT(*)
FROM 
users
GROUP BY
name, email
HAVING COUNT(*) > 1

To Delete the Duplicate Rows:

DELETE users 
WHERE rowid NOT IN 
(SELECT MIN(rowid)
FROM users
GROUP BY name, email);      
Orva answered 29/12, 2016 at 10:31 Comment(2)
For MySQL users, note that first of all it has to be DELETE FROM, second, it won't work, because you can't SELECT from the same table you're DELETEing from. In MySQL this blasts off MySQL error 1093.Enos
I think is is much more reasonable than the rather esotheric accepted answer using DELETE FROM ... LEFT OUTER JOIN that also does not work on some systems (e.g. SQL Server). If you run into the limitation stated above, you can always save the results of your selection into a temporary TABLE variable: DECLARE @idsToKeep TABLE(rowid INT); and then INSERT INTO @idsToKeep(rowid) SELECT MIN... GROUP BY ... followed by DELETE users WHERE rowid NOT IN (SELECT rowid FROM @idsToKeep);Mesoderm
G
24

Quick and dirty way to delete exact duplicate rows (for small tables):

select  distinct * into t2 from t1;
delete from t1;
insert into t1 select *  from t2;
drop table t2;
Gifferd answered 5/2, 2013 at 21:44 Comment(2)
Note that the question actually specifies non-exact duplication (due to the row id).Inshrine
You also have to deal with identity (key) columns using set identity_insert t1 on.Beeves
S
21

I prefer the subquery / HAVING COUNT(*) > 1 solution to the inner join because I found it easier to read, and it was very easy to turn into a SELECT statement to verify what would be deleted before you run it.

--DELETE FROM table1 
--WHERE id IN ( 
     SELECT MIN(id) FROM table1 
     GROUP BY col1, col2, col3 
     -- could add a WHERE clause here to further filter
     HAVING count(*) > 1
--)
Sweatband answered 1/3, 2014 at 7:40 Comment(6)
Doesn't it delete all the records that show up in the inner query. We need to remove only duplicates and preserve the original.Orelle
You're only returning the one with the lowest id, based on the min(id) in the select clause.Sweatband
Yes, but the question was not asking about how to return the rows that are to be deleted, but it is asking about how to delete the rows that are duplicates. Can you elaborate on how I can delete the rows that the query has returned?Orelle
Uncomment out the first, second, and last lines of the query.Sweatband
This won't clean up all duplicates. If you have 3 rows that are duplicates, it will only select the row with the MIN(id), and delete that one, leaving two rows left that are duplicates.Cosher
Nevertheless, I ended up using this statement repeated over & over again, so that it would actually make progress instead of having the connection timing out or the computer go to sleep. I changed it to MAX(id) to eliminate the latter duplicates, and added LIMIT 1000000 to the inner query so it wouldn't have to scan the whole table. This showed progress much quicker than the other answers, which would seem to hang for hours. After the table was pruned to a manageable size, then you can finish with the other queries. Tip: make sure col1/col2/col3 has indices for group by.Cosher
N
17
SELECT  DISTINCT *
      INTO tempdb.dbo.tmpTable
FROM myTable

TRUNCATE TABLE myTable
INSERT INTO myTable SELECT * FROM tempdb.dbo.tmpTable
DROP TABLE tempdb.dbo.tmpTable
Noenoel answered 10/10, 2012 at 11:17 Comment(1)
Truncating won't work if you have foreign key references to myTable.Multinuclear
C
15

I thought I'd share my solution since it works under special circumstances. In my case the table with duplicate values did not have a foreign key (because the values were duplicated from another db).

begin transaction
-- create temp table with identical structure as source table
Select * Into #temp From tableName Where 1 = 2

-- insert distinct values into temp
insert into #temp 
select distinct * 
from  tableName

-- delete from source
delete from tableName 

-- insert into source from temp
insert into tableName 
select * 
from #temp

rollback transaction
-- if this works, change rollback to commit and execute again to keep you changes!!

PS: When working on things like this I always use a transaction. This not only ensures everything is executed as a whole, but also allows me to test without risking anything. But of course you should take a backup anyway, just to be sure...

Corkage answered 27/1, 2014 at 12:20 Comment(0)
N
14

Using a CTE. The idea is to join on one or more columns that form a duplicate record and then remove whichever you like:

;with cte as (
    select 
        min(PrimaryKey) as PrimaryKey,
        UniqueColumn1,
        UniqueColumn2
    from dbo.DuplicatesTable 
    group by
        UniqueColumn1, UniqueColumn2
    having count(*) > 1
)
delete d
from dbo.DuplicatesTable d 
inner join cte on 
    d.PrimaryKey > cte.PrimaryKey and
    d.UniqueColumn1 = cte.UniqueColumn1 and 
    d.UniqueColumn2 = cte.UniqueColumn2;
Nne answered 13/11, 2014 at 16:20 Comment(1)
I think you're missing an AND in your JOIN.Asyllabic
T
14

This query showed very good performance for me:

DELETE tbl
FROM
    MyTable tbl
WHERE
    EXISTS (
        SELECT
            *
        FROM
            MyTable tbl2
        WHERE
            tbl2.SameValue = tbl.SameValue
        AND tbl.IdUniqueValue < tbl2.IdUniqueValue
    )

It deleted 1M rows in a little more than 30 seconds from a table of 2M (50% duplicates).

Termination answered 10/12, 2014 at 19:36 Comment(0)
P
13

Yet another easy solution can be found at the link pasted here. This one is easy to grasp and seems to be effective for most similar problems. It is for SQL Server, but the concept used is more than acceptable.

Here are the relevant portions from the linked page:

Consider this data:

EMPLOYEE_ID ATTENDANCE_DATE
A001    2011-01-01
A001    2011-01-01
A002    2011-01-01
A002    2011-01-01
A002    2011-01-01
A003    2011-01-01

So how can we delete those duplicate data?

First, insert an identity column in that table by using the following code:

ALTER TABLE dbo.ATTENDANCE ADD AUTOID INT IDENTITY(1,1)  

Use the following code to resolve it:

DELETE FROM dbo.ATTENDANCE WHERE AUTOID NOT IN (SELECT MIN(AUTOID)
    FROM dbo.ATTENDANCE GROUP BY EMPLOYEE_ID,ATTENDANCE_DATE) 
Pileup answered 6/8, 2013 at 17:14 Comment(3)
"Easy to grasp", "seems to be effective", but not a word about what the method consists in. Just imagine that the link becomes invalid, what use would then be to know that the method was easy to grasp and effective? Please consider adding essential parts of the method's description into your post, otherwise this is not an answer.Peipeiffer
This method is useful for tables where you don't yet have an identity defined. Often you need to get rid of duplicates in order to define the primary key!Actinopod
@JeffDavis - The ROW_NUMBER version works fine for that case without needing to go to the lengths of adding a new column before you begin.Septuagesima
D
13

This is the easiest way to delete duplicate records

 DELETE FROM tblemp WHERE id IN 
 (
  SELECT MIN(id) FROM tblemp
   GROUP BY  title HAVING COUNT(id)>1
 )
Dropout answered 28/9, 2016 at 5:26 Comment(1)
Why is anyone upvoting this? If you have more than two of the same id this WON'T work. Instead write: delete from tblemp where id not in (select min(id) from tblemp group by title)Criminality
M
12

Use this

WITH tblTemp as
(
SELECT ROW_NUMBER() Over(PARTITION BY Name,Department ORDER BY Name)
   As RowNumber,* FROM <table_name>
)
DELETE FROM tblTemp where RowNumber >1
Melone answered 23/7, 2015 at 11:42 Comment(0)
C
11

Here is another good article on removing duplicates.

It discusses why it's hard: "SQL is based on relational algebra, and duplicates cannot occur in relational algebra, because duplicates are not allowed in a set."

It covers the temp table solution, and two MySQL examples.

In the future, are you going to prevent it at the database level, or from an application perspective? I would suggest the database level, because your database should be responsible for maintaining referential integrity; developers will just cause problems ;)
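
If you go the database-level route, a minimal sketch of the constraint that prevents future duplicates, assuming the de-duplicating columns fit within SQL Server's index key size limit (the question's varchar(2048) column may not, as a later answer points out):

-- Run this after the existing duplicates have been removed;
-- any new duplicate (Col1, Col2, Col3) combination will then be rejected.
-- The constraint name is illustrative.
ALTER TABLE MyTable
ADD CONSTRAINT UQ_MyTable_Col1_Col2_Col3 UNIQUE (Col1, Col2, Col3);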

Condorcet answered 20/8, 2008 at 21:58 Comment(1)
SQL is based on multi-sets. But even if it were based on sets, these two tuples (1, a) & (2, a) are different.Overweight
S
11

Oh sure. Use a temp table. If you want a single, not-very-performant statement that "works" you can go with:

DELETE FROM MyTable WHERE NOT RowID IN
    (SELECT 
        (SELECT TOP 1 RowID FROM MyTable mt2 
        WHERE mt2.Col1 = mt.Col1 
        AND mt2.Col2 = mt.Col2 
        AND mt2.Col3 = mt.Col3) 
    FROM MyTable mt)

Basically, for each row in the table, the sub-select finds the top RowID of all rows that are exactly like the row under consideration. So you end up with a list of RowIDs that represent the "original" non-duplicated rows.

Slew answered 20/8, 2008 at 22:27 Comment(0)
E
11

I had a table where I needed to preserve non-duplicate rows. I'm not sure on the speed or efficiency.

DELETE FROM myTable WHERE RowID IN (
  SELECT MIN(RowID) AS IDNo FROM myTable
  GROUP BY Col1, Col2, Col3
  HAVING COUNT(*) = 2 )
Exactly answered 11/12, 2009 at 13:47 Comment(2)
This assumes that there is at most 1 duplicate.Septuagesima
Why not HAVING COUNT(*) > 1?Erosive
P
10

Another way is to create a new table with the same fields and a unique index, then move all the data from the old table to the new table. SQL Server automatically ignores the duplicate values (there is also an option controlling what to do when a duplicate value is found: ignore, interrupt, etc.), so we end up with the same table without duplicate rows. If you don't want the unique index, you can drop it after the data transfer.

Especially for larger tables you may use DTS (an SSIS package to import/export data) in order to transfer all the data rapidly to your new uniquely indexed table. For 7 million rows it takes just a few minutes.
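
A sketch of that idea for the question's table; MyTable_dedup is an illustrative name, and it assumes the duplicate-defining columns fit within the index key size limit (IGNORE_DUP_KEY = ON is the "ignore" option mentioned above):

-- Empty copy of the source table's structure (RowID keeps its identity property).
SELECT RowID, Col1, Col2, Col3
INTO   MyTable_dedup
FROM   MyTable
WHERE  1 = 0;

-- Rows whose key already exists in the index are silently skipped on insert.
CREATE UNIQUE INDEX UX_MyTable_dedup
ON MyTable_dedup (Col1, Col2, Col3)
WITH (IGNORE_DUP_KEY = ON);

SET IDENTITY_INSERT MyTable_dedup ON;
INSERT INTO MyTable_dedup (RowID, Col1, Col2, Col3)
SELECT RowID, Col1, Col2, Col3 FROM MyTable;
SET IDENTITY_INSERT MyTable_dedup OFF;

-- Verify the data, then rename or swap MyTable_dedup in place of MyTable.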

Pulchia answered 18/9, 2013 at 6:36 Comment(0)
C
9
  1. Create new blank table with the same structure

  2. Execute query like this

    INSERT INTO tc_category1
    SELECT *
    FROM tc_category
    GROUP BY category_id, application_id
    HAVING count(*) > 1
    
  3. Then execute this query

    INSERT INTO tc_category1
    SELECT *
    FROM tc_category
    GROUP BY category_id, application_id
    HAVING count(*) = 1
    
Capitalize answered 8/5, 2009 at 13:6 Comment(0)
P
9

Using the query below we can delete duplicate records based on a single column or multiple columns. The query below deletes based on two columns. The table name is testing and the column names are empno and empname.

DELETE FROM testing WHERE empno not IN (SELECT empno FROM (SELECT empno, ROW_NUMBER() OVER (PARTITION BY empno ORDER BY empno) 
AS [ItemNumber] FROM testing) a WHERE ItemNumber > 1)
or empname not in
(select empname from (select empname,row_number() over(PARTITION BY empno ORDER BY empno) 
AS [ItemNumber] FROM testing) a WHERE ItemNumber > 1)
Pyuria answered 8/2, 2012 at 12:6 Comment(0)
N
8

Another way of doing this:

DELETE A
FROM   TABLE A,
       TABLE B
WHERE  A.COL1 = B.COL1
       AND A.COL2 = B.COL2
       AND A.UNIQUEFIELD > B.UNIQUEFIELD 
Nighthawk answered 2/2, 2016 at 6:59 Comment(1)
What's different to this existing answer from Aug 20 2008? - https://mcmap.net/q/45332/-how-can-i-remove-duplicate-rowsVermiculate
C
7

From the application level (unfortunately). I agree that the proper way to prevent duplication is at the database level through the use of a unique index, but in SQL Server 2005, an index is allowed to be only 900 bytes, and my varchar(2048) field blows that away.

I dunno how well it would perform, but I think you could write a trigger to enforce this, even if you couldn't do it directly with an index. Something like:

-- given a table stories(story_id int not null primary key, story varchar(max) not null)
CREATE TRIGGER prevent_plagiarism 
ON stories 
after INSERT, UPDATE 
AS 
    DECLARE @cnt AS INT 

    SELECT @cnt = Count(*) 
    FROM   stories 
           INNER JOIN inserted 
                   ON ( stories.story = inserted.story 
                        AND stories.story_id != inserted.story_id ) 

    IF @cnt > 0 
      BEGIN 
          RAISERROR('plagiarism detected',16,1) 

          ROLLBACK TRANSACTION 
      END 

Also, varchar(2048) sounds fishy to me (some things in life are 2048 bytes, but it's pretty uncommon); should it really not be varchar(max)?

Curtate answered 20/8, 2008 at 22:53 Comment(0)
D
7

I would mention this approach as well since it can be helpful, and it works in all SQL Server versions. Pretty often there are only one or two duplicates, and the IDs and count of duplicates are known. In this case:

SET ROWCOUNT 1 -- or set to number of rows to be deleted
delete from myTable where RowId = DuplicatedID
SET ROWCOUNT 0
Dinny answered 30/1, 2013 at 19:45 Comment(0)
R
7
DELETE
FROM
    table_name T1
WHERE
    rowid > (
        SELECT
            min(rowid)
        FROM
            table_name T2
        WHERE
            T1.column_name = T2.column_name
    );
Richardricharda answered 3/10, 2013 at 6:18 Comment(1)
Hi Teena, you have missed the table alias name T1 after the DELETE keyword; otherwise it will throw a syntax exception.Proceleusmatic
S
6
CREATE TABLE car(Id int identity(1,1), PersonId int, CarId int)

INSERT INTO car(PersonId,CarId)
VALUES(1,2),(1,3),(1,2),(2,4)

--SELECT * FROM car

;WITH CTE as(
SELECT ROW_NUMBER() over (PARTITION BY personid,carid order by personid,carid) as rn,Id,PersonID,CarId from car)

DELETE FROM car where Id in(SELECT Id FROM CTE WHERE rn>1)
Sexagenarian answered 11/7, 2012 at 11:46 Comment(0)
D
6
DELETE
FROM MyTable
WHERE NOT EXISTS (
              SELECT 1
              FROM (SELECT MIN(RowID) AS RowID
                    FROM MyTable
                    GROUP BY Col1, Col2, Col3) AS KeepRows
              WHERE KeepRows.RowID = MyTable.RowID
             );
Disappear answered 2/1, 2014 at 15:27 Comment(0)
K
6

If you want to preview the rows you are about to remove and keep control over which of the duplicate rows to keep, see http://developer.azurewebsites.net/2014/09/better-sql-group-by-find-duplicate-data/

with MYCTE as (
  SELECT ROW_NUMBER() OVER (
    PARTITION BY DuplicateKey1
                ,DuplicateKey2 -- optional
    ORDER BY CreatedAt -- the first row among duplicates will be kept, other rows will be removed
  ) RN
  FROM MyTable
)
DELETE FROM MYCTE
WHERE RN > 1
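
A sketch of the preview step described above: the same partitioning and ordering, but as a SELECT, so you can inspect the rows that would be removed before running the DELETE.

with MYCTE as (
  SELECT *, ROW_NUMBER() OVER (
    PARTITION BY DuplicateKey1
                ,DuplicateKey2 -- optional
    ORDER BY CreatedAt
  ) RN
  FROM MyTable
)
SELECT *
FROM MYCTE
WHERE RN > 1 -- the rows the DELETE above would remove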
Kobold answered 1/1, 2015 at 15:32 Comment(1)
DELETE u1 FROM users u1 JOIN users u2 WHERE u1.id > u2.id AND u1.email=u2.emailZirconia
R
1
alter table MyTable add sno int identity(1,1)
    delete from MyTable where sno in
    (
    select sno from (
    select *,
    RANK() OVER ( PARTITION BY RowID,Col3 ORDER BY sno DESC )rank
    From MyTable
    )T
    where rank>1
    )

    alter table MyTable 
    drop  column sno
Risorgimento answered 16/12, 2015 at 6:11 Comment(0)
R
1

Sometimes a soft delete mechanism is used where a date is recorded to indicate the deleted date. In this case an UPDATE statement may be used to update this field based on duplicate entries.

UPDATE MY_TABLE
   SET DELETED = getDate()
 WHERE TABLE_ID IN (
    SELECT x.TABLE_ID
      FROM MY_TABLE x
      JOIN (SELECT min(TABLE_ID) id, COL_1, COL_2, COL_3
              FROM MY_TABLE d
             GROUP BY d.COL_1, d.COL_2, d.COL_3
            HAVING count(*) > 1) AS d ON d.COL_1 = x.COL_1
                                     AND d.COL_2 = x.COL_2
                                     AND d.COL_3 = x.COL_3
                                     AND d.TABLE_ID <> x.TABLE_ID
             /*WHERE x.COL_4 <> 'D' -- Additional filter*/)

This method has served me well for fairly moderate tables containing ~30 million rows with high and low amounts of duplications.

Roslyn answered 7/6, 2016 at 1:29 Comment(0)
B
1

I know that this question has already been answered, but I've created a pretty useful stored procedure which will create a dynamic delete statement for a table's duplicates:

    CREATE PROCEDURE sp_DeleteDuplicate @tableName varchar(100), @DebugMode int =1
AS 
BEGIN
SET NOCOUNT ON;

IF(OBJECT_ID('tempdb..#tableMatrix') is not null) DROP TABLE #tableMatrix;

SELECT ROW_NUMBER() OVER(ORDER BY name) as rn,name into #tableMatrix FROM sys.columns where [object_id] = object_id(@tableName) ORDER BY name

DECLARE @MaxRow int = (SELECT MAX(rn) from #tableMatrix)
IF(@MaxRow is null)
    RAISERROR  ('I wasn''t able to find any columns for this table!',16,1)
ELSE 
    BEGIN
DECLARE @i int =1 
DECLARE @Columns Varchar(max) ='';

WHILE (@i <= @MaxRow)
BEGIN 
    SET @Columns=@Columns+(SELECT '['+name+'],' from #tableMatrix where rn = @i)

    SET @i = @i+1;
END

---DELETE LAST comma
SET @Columns = LEFT(@Columns,LEN(@Columns)-1)

DECLARE @Sql nvarchar(max) = '
WITH cteRowsToDelte
     AS (
SELECT ROW_NUMBER() OVER (PARTITION BY '+@Columns+' ORDER BY ( SELECT 0)) as rowNumber,* FROM '+@tableName
+')

DELETE FROM cteRowsToDelte
WHERE  rowNumber > 1;
'
SET NOCOUNT OFF;
    IF(@DebugMode = 1)
       SELECT @Sql
    ELSE
       EXEC sp_executesql @Sql
    END
END

So if you create a table like this:

IF(OBJECT_ID('MyLitleTable') is not null)
    DROP TABLE MyLitleTable 


CREATE TABLE MyLitleTable
(
    A Varchar(10),
    B money,
    C int
)
---------------------------------------------------------

    INSERT INTO MyLitleTable VALUES
    ('ABC',100,1),
    ('ABC',100,1), -- only this row should be deleted
    ('ABC',101,1),
    ('ABC',100,2),
    ('ABCD',100,1)

    -----------------------------------------------------------

     exec sp_DeleteDuplicate 'MyLitleTable',0

It will delete all duplicates from your table. If you run it without the second parameter it will return a SQL statement to run.

If you need to exclude any of the columns, just run it in debug mode, get the code, and modify it however you like.

Bernabernadene answered 13/4, 2017 at 8:49 Comment(0)
V
1

If all the columns in the duplicate rows are the same, then the query below can be used to delete the duplicate records.

SELECT DISTINCT * INTO #TemNewTable FROM #OriginalTable
TRUNCATE TABLE #OriginalTable
INSERT INTO #OriginalTable SELECT * FROM #TemNewTable
DROP TABLE #TemNewTable
Velleman answered 29/10, 2018 at 14:20 Comment(0)
M
1

For the table structure

MyTable

RowID int not null identity(1,1) primary key,
Col1 varchar(20) not null,
Col2 varchar(2048) not null,
Col3 tinyint not null

The query for removing duplicates:

DELETE t1
FROM MyTable t1
INNER JOIN MyTable t2
  ON t1.RowID > t2.RowID
 AND t1.Col1 = t2.Col1
 AND t1.Col2 = t2.Col2
 AND t1.Col3 = t2.Col3;

I am assuming that RowID is a kind of auto-increment column and the rest of the columns have duplicate values.

Marcelline answered 6/8, 2020 at 4:1 Comment(0)
I
0

Now let's look at an elasticalsearch table which has duplicated rows, and where Id is the unique field. If we keep one Id per group, defined by some grouping criteria, then we can delete the other rows outside that choice. My example below shows these criteria.

Many cases in this thread are in a similar state to mine. Just change the target grouping criteria according to your case when deleting repeated (duplicated) rows.

DELETE 
FROM elasticalsearch
WHERE Id NOT IN 
               (SELECT min(Id)
                     FROM elasticalsearch
                     GROUP BY FirmId,FilterSearchString
                     ) 

cheers

Iced answered 11/1, 2016 at 20:31 Comment(2)
Could you explain how/why your code works? That'll enable the OP and others to understand and apply your methods (where applicable) elsewhere. Code-only answers are discouraged and liable to be deleted. — During reviewColo
OK, I explained my answer, Wai Ha Lee, even though the code shows all the details.Iced
E
0

I think this would be helpful. Here, ROW_NUMBER() OVER(PARTITION BY res1.Title ORDER BY res1.Id) as num has been used to differentiate the duplicate rows.

DELETE res2
FROM
(SELECT res1.*,ROW_NUMBER() OVER(PARTITION BY res1.Title ORDER BY res1.Id)as num
 FROM 
(select * from [dbo].[tbl_countries])as res1
)as res2
WHERE res2.num > 1
Erskine answered 10/6, 2018 at 9:27 Comment(1)
Can you describe what makes your answer different from this one?Hydrolysis
E
0

Another way to remove duplicates based on two columns

I found this query easier to read and to adapt.

DELETE 
FROM 
 TABLE_NAME 
 WHERE FIRST_COLUMNS 
 IN( 
       SELECT * FROM 
           ( SELECT MIN(FIRST_COLUMNS) 
             FROM TABLE_NAME 
             GROUP BY 
                      FIRST_COLUMNS,
                      SECOND_COLUMNS 
             HAVING COUNT(FIRST_COLUMNS) > 1 
            ) temp 
   )

Note: it's a good idea to simulate the query before you run it.


Evanevander answered 5/3, 2021 at 20:23 Comment(0)
S
0

First you can select the minimum RowIds using MIN() and GROUP BY. We will keep these rows.

   SELECT MIN(RowId) as RowId
   FROM MyTable 
   GROUP BY Col1, Col2, Col3

Then delete the RowIds that are not in the selected minimum RowIds, using

DELETE FROM MyTable WHERE RowId Not IN()

Final query:

DELETE FROM MyTable WHERE RowId Not IN(

    SELECT MIN(RowId) as RowId
    FROM MyTable 
    GROUP BY Col1, Col2, Col3
)

You can also check my answer in SQL Fiddle

Shivers answered 18/9, 2021 at 19:2 Comment(0)
O
-1

A very simple way to delete duplicate rows of a table in PostgreSQL:

DELETE FROM table1 a
USING table1 b
WHERE a.id < b.id
AND a.column1 = b.column1
AND a.column2 = b.column2;
Osuna answered 30/4, 2021 at 19:24 Comment(0)
Z
-1

Delete duplicate records

The greater-than (>) operator in this case deletes all records except the first record.

DELETE u1 FROM users u1 JOIN users u2 WHERE u1.id > u2.id AND u1.email=u2.email

The less-than (<) operator in this case deletes all records except the last record.

DELETE u1 FROM users u1 JOIN users u2 WHERE u1.id < u2.id AND u1.email=u2.email

Zirconia answered 17/6, 2022 at 12:21 Comment(0)
P
-1

Create another table that will consist of original values:

CREATE TABLE table2 AS SELECT *, COUNT(*) FROM table1 GROUP BY name HAVING COUNT (*) > 0
Paymar answered 7/10, 2022 at 22:9 Comment(0)
