How to skip row when importing bad MySQL dump

Given bad mysqldump that causes error on import:

namtar backups # mysql -p < 2010-12-01.sql
Enter password: 
ERROR 1062 (23000) at line 8020: Duplicate entry 'l�he?' for key 'wrd_txt'

Is there an easy way to tell import to just skip given row and continue?

(Yes, I know I can manually edit the file or parse the output, but it's not very convenient.)

Bank answered 1/10, 2011 at 18:51 Comment(2)
I was also having this issue. I created a new database and then imported the data from the .sql file into the newly created database. It worked for me.Mikamikado
In my case, triggers were causing the duplicate entry error during import.Jame
62

If you can make the dump again you could add --insert-ignore to the command-line when dumping.

Or you can try using the mysqlimport command with --force, which will continue even if it encounters MySQL errors.

Centennial answered 1/10, 2011 at 18:56 Comment(2)
I cannot create the export again... and how do I use mysqlimport against the output of mysqldump --all-databases?Bank
I just tried using --force with mysql and it causes the insertion to stop when the error occurs. mysqlimport doesn't work with an --all-databases dump, so that's out of the question as well. The easiest way would be to edit the file, not to remove the offending line(s) which could be tedious, but to turn the "INSERT" commands into "INSERT IGNORE" commands. A simple find/replace should suffice. I realise it's not an ideal solution, but it's better than having to dig through the file replacing offending inserts manually when you find out about them.Centennial
46

mysql -f -p < 2010-12-01.sql

The -f (force) option is the operative one here; it worked for me.

Swadeshi answered 5/4, 2012 at 13:35 Comment(1)
In my case, I restored an .sql file into MySQL Cluster 5.5.29-7.2.10, and -f worked there too. It lists all the ERROR lines, and can also skip errors such as ERROR 1528 (HY000) at line 22: Failed to create LOGFILE GROUP, reusing the LOGFILE already created.Dissogeny
29

Following the advice from jmlsteele's answer and comment, here's how to turn the inserts into INSERT IGNORE on the fly.

If you're importing from an sql file:

sed -e "s/^INSERT INTO/INSERT IGNORE INTO/" < 2010-12-01.sql | mysql -p

If you're importing from a gz file, just pipe the output from gunzip into sed instead of using the file input:

gunzip < 2010-12-01.sql.gz | sed -e "s/^INSERT INTO/INSERT IGNORE INTO/" | mysql -p
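
A quick sanity check of the substitution on a single sample line (the table name t is made up for the demo, not taken from the original dump):

```shell
# Feed one sample INSERT through the same sed filter
printf 'INSERT INTO t VALUES (1);\n' \
  | sed -e "s/^INSERT INTO/INSERT IGNORE INTO/"
# prints: INSERT IGNORE INTO t VALUES (1);
```

The ^ anchor limits the replacement to line-leading INSERTs, so occurrences of the phrase inside quoted text-field data are left alone.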
Furmark answered 3/7, 2013 at 16:11 Comment(4)
Wouldn't this also replace "INSERT INTO" texts inside text fields?Bank
@Bank - It would only change the first INSERT INTO; any additional ones on the same line would not be changed. As far as I'm aware, any time you'd have "INSERT INTO" in a text field it would be following an INSERT INTO statement -- but I changed the answer to make sure it's only replaced when it's at the beginning of a line.Furmark
A dump can have multiple INSERT statements (even thousands or more) per row; in that case the solution wouldn't affect the statements after the first. Maybe a more complex regex could achieve the result, substituting only the INSERT INTOs that are not inside text fields.Constitutionally
It COULD have multiple statements per row, but only if produced by hand, which limits the effective size of the file and the amount of data stored. In the most common case, the dump is produced by mysqldump or a similar utility and contains at most one INSERT statement per line.Inadvisable
4

The other options are certainly viable, but another solution is to simply edit the .sql file obtained from mysqldump.

Change:

INSERT INTO table_name ...

TO

INSERT IGNORE INTO table_name ...
Hudak answered 17/3, 2017 at 14:1 Comment(1)
@Bank the solution you linked makes the same change I proposed from the command line at import time and does not modify the .sql file. My suggestion here is to actually open the file, edit its contents, and save it before importing. While similar, I don't feel these are duplicate answers.Hudak
4

Great tip. I did it a little differently, but with the same result.

perl -pi -e 's/INSERT INTO/INSERT IGNORE INTO/g' filename.sql
Purifoy answered 22/2, 2019 at 10:51 Comment(1)
Do you mind explaining your code?Cogon
1

Just a thought: did you delete the MySQL directives at the top of the dump? (I unintentionally did when I restarted a restore after deleting, with a sed command, all the records/tables I'd already inserted.) These directives tell MySQL, among other things, not to run unique checks, foreign key checks, etc.:

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
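
One way to check whether a dump file still carries this header is to count the /*! conditional directives near its top. A minimal sketch (the file path is a throwaway created just for the demo):

```shell
# Write a stand-in two-line dump header, then count its /*! directives
printf '/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;\n/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;\n' > /tmp/demo_dump.sql
head -n 20 /tmp/demo_dump.sql | grep -c '^/\*!'
# prints: 2
```

A count of zero on a real mysqldump file would suggest the header was stripped.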
Carlist answered 6/3, 2013 at 13:33 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.