JDBC batch insert performance

I need to insert a couple hundred million records into a MySQL database. I'm batch inserting them 1 million at a time. Please see my code below. It seems to be slow. Is there any way to optimize it?

try {
        // Disable auto-commit so the whole batch commits as one transaction
        connection.setAutoCommit(false);

        // Create a prepared statement
        String sql = "INSERT INTO mytable (xxx) VALUES (?)";
        PreparedStatement pstmt = connection.prepareStatement(sql);

        Object[] vals = set.toArray();
        for (int i = 0; i < vals.length; i++) {
            pstmt.setString(1, vals[i].toString());
            pstmt.addBatch();
        }

        // Execute the batch, then commit
        int[] updateCounts = pstmt.executeBatch();
        connection.commit();
        System.out.println("inserted " + updateCounts.length);
} catch (SQLException e) {
    e.printStackTrace();
}
Pilcher answered 7/6, 2010 at 21:17 Comment(5)
Your code got a little corrupted (and truncated prematurely)Trefoil
BTW, which driver are you using? A general JDBC, or the JDBC-Mysql connector?Trefoil
I'm using com.mysql.jdbc.DriverPilcher
How long does it take? What's the comparison material on which you concluded that it's slow?Hybridism
I'm only doing a test insert of 1 million rows on my local PC (4 GB RAM); it took about 10 minutes. Just wondering if there's any room for improvement.Pilcher

I had a similar performance issue with MySQL and solved it by setting the useServerPrepStmts and rewriteBatchedStatements properties in the connection URL.

Connection c = DriverManager.getConnection("jdbc:mysql://host:3306/db?useServerPrepStmts=false&rewriteBatchedStatements=true", "username", "password");
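
For context, here is a minimal sketch of how these properties combine with chunked batching (the table name, column, data, and chunk size below are illustrative, not from the original post):

c.setAutoCommit(false);
// With rewriteBatchedStatements=true the driver collapses each batch
// into one multi-value INSERT instead of n single-row INSERTs.
try (PreparedStatement ps = c.prepareStatement("INSERT INTO mytable (xxx) VALUES (?)")) {
    for (int i = 0; i < 1_000_000; i++) {
        ps.setString(1, "value-" + i);      // illustrative data
        ps.addBatch();
        if ((i + 1) % 10_000 == 0) {
            ps.executeBatch();              // flush in chunks to limit memory use
        }
    }
    ps.executeBatch();                      // flush the remainder
    c.commit();
}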
Whiteheaded answered 16/5, 2012 at 11:41 Comment(14)
Nice! I'm seeing a 3x improvement.Snort
@Snort - so why not accept this answer? Thanks, mate! This works like magic!Ruff
OMG! Adding the above params to my connection URL sped up the batch inserts nearly 30x. I am not sure what other implications these variables have. But amazing it is! Thanks.Anarthrous
5x Improvement! But I want more.Thallophyte
10 seconds instead of one hour. Well deserved +1!Semirigid
The MySQL reference documentation for the driver properties has moved.Newsom
I noticed another requirement with insert batching when using the Java driver (at least in the 5.1.10 version of the driver): In the SQL statement, the "VALUES" part must be followed by a space, not directly by the opening parenthesis, or else the driver will fall back to sequential inserts.Catsup
I'm seeing a 40x improvement. Why didn't the OP accept the answer? I completely missed this answer earlier.Pyrethrin
Can you explain why adding useServerPrepStmts=false works? After reading #32287018 I would assume it would be the opposite. I'm sure you're right, I just don't fully understand why/how that setting improves performance. Thank you.Asia
Why the heck wouldn't this be the default?Lys
How can I check whether these connection properties are working? I have been using them, but execution is still not faster. Is there a way to verify that they are taking effect, so I can figure out where the issue is? Thanks.Siding
@Siding - In my case as well, still no improvement after adding useServerPrepStmts=false&rewriteBatchedStatements=true and also spring.jdbc.getParameterType.ignore=true; I am using NamedParameterJdbcTemplate.batchUpdate.Dearborn
In my case, what actually worked was rewriteBatchedStatements=true; setting useServerPrepStmts=false did not help.Debarath
This solution does not work for the Postgres DB.Dearborn

I'd like to expand on Whiteheaded's answer, as I've been experimenting with the connection URL parameters.

rewriteBatchedStatements=true is the important parameter. useServerPrepStmts is already false by default, and even changing it to true doesn't make much difference in terms of batch insert performance.

It's worth explaining how rewriteBatchedStatements=true improves performance so dramatically: the driver rewrites batched prepared INSERT statements into multi-value INSERTs when executeBatch() is called (Source). That means that instead of sending the following n INSERT statements to the MySQL server each time executeBatch() is called:

INSERT INTO X VALUES (A1,B1,C1)
INSERT INTO X VALUES (A2,B2,C2)
...
INSERT INTO X VALUES (An,Bn,Cn)

It would send a single INSERT statement :

INSERT INTO X VALUES (A1,B1,C1),(A2,B2,C2),...,(An,Bn,Cn)

You can observe this by turning on MySQL's general query log (SET GLOBAL general_log = 1), which logs every statement sent to the server to a file.
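
For instance, a quick sketch (assuming an account with privileges to set global variables; this snippet is not from the original answer):

try (Statement st = connection.createStatement()) {
    // Log every statement the server receives to the general log file
    st.execute("SET GLOBAL general_log = 1");
}
// ... call executeBatch() with and without rewriteBatchedStatements=true;
// with rewriting on, the log shows one multi-value INSERT per batch ...
try (Statement st = connection.createStatement()) {
    st.execute("SET GLOBAL general_log = 0");   // turn logging back off
}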

Goldbrick answered 3/6, 2014 at 23:18 Comment(3)
Goldbrick - what about UPDATEs? Does rewriteBatchedStatements=true improve performance on updates the same as on inserts? The UPDATE syntax isn't the same as INSERT, so it can't be combined into a single statement (as far as I know). I'm asking because 10k+ rows take maybe less than a second to insert, while 100 rows take about a second to update.Baluster
Thanks for the explanation. I'm a bit astounded that the JDBC driver is sending a long SQL string instead of using a binary API.Chainplate
@Chainplate well, this answer is 8.5 years old, so I'm not sure if the JDBC driver still behaves this way.Goldbrick

You can insert multiple rows with one INSERT statement, and doing a few thousand at a time can greatly speed things up. That is, instead of doing e.g. 3 inserts of the form INSERT INTO tbl_name (a,b,c) VALUES(1,2,3);, you do INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(1,2,3),(1,2,3);. (It may be that JDBC's .addBatch() does a similar optimization now, though the MySQL addBatch used to be entirely un-optimized and just issued individual queries anyway; I don't know if that's still the case with recent drivers.)
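
For illustration, a sketch of building one such multi-row INSERT per chunk by hand (the Row type, the rows list, and the table/column names are hypothetical):

StringBuilder sql = new StringBuilder("INSERT INTO tbl_name (a,b,c) VALUES ");
for (int i = 0; i < rows.size(); i++) {
    sql.append(i == 0 ? "(?,?,?)" : ",(?,?,?)");   // one value group per row
}
try (PreparedStatement ps = connection.prepareStatement(sql.toString())) {
    int p = 1;
    for (Row r : rows) {                           // bind every row into one statement
        ps.setInt(p++, r.a);
        ps.setInt(p++, r.b);
        ps.setInt(p++, r.c);
    }
    ps.executeUpdate();
}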

If you really need speed, load your data from a comma-separated file with LOAD DATA INFILE; we get around a 7-8x speedup doing that versus doing tens of millions of individual INSERTs.
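
From JDBC, that could look like the following sketch (assumes allowLoadLocalInfile=true in the connection URL, local_infile enabled on the server, and an illustrative file path):

try (Statement st = connection.createStatement()) {
    // Bulk-load the CSV in one server-side operation
    st.execute("LOAD DATA LOCAL INFILE '/tmp/data.csv' "
             + "INTO TABLE mytable "
             + "FIELDS TERMINATED BY ',' "
             + "LINES TERMINATED BY '\\n'");
}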

Conservative answered 7/6, 2010 at 21:49 Comment(3)
LOAD DATA INFILE might be a good alternative, but my input file needs cleanup: I'm only interested in inserting rows where the second token matches a string (space-delimited tokens). Is LOAD DATA INFILE flexible enough to filter rows?Pilcher
I don't think it can filter, but you can clean up the data yourself: write a new file with the cleaned data and load that file.Conservative
My inserts are 10x faster now!Cherisecherish

If:

  1. It's a new table, or the amount to be inserted is greater than the data already inserted
  2. There are indexes on the table
  3. You do not need other access to the table during the insert

Then ALTER TABLE tbl_name DISABLE KEYS can greatly improve the speed of your inserts. When you're done, run ALTER TABLE tbl_name ENABLE KEYS to start building the indexes, which can take a while, but not nearly as long as doing it for every insert.
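
As a minimal sketch (note: DISABLE KEYS affects non-unique indexes and is effective on MyISAM tables; the table name is illustrative):

try (Statement st = connection.createStatement()) {
    st.execute("ALTER TABLE tbl_name DISABLE KEYS");  // stop index maintenance
}
// ... run all the batch inserts ...
try (Statement st = connection.createStatement()) {
    st.execute("ALTER TABLE tbl_name ENABLE KEYS");   // rebuild indexes once
}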

Issy answered 7/6, 2010 at 22:39 Comment(0)

You may try using a DDBulkLoad object (DataDirect's bulk-load API).

// Get a DDBulkLoad object
DDBulkLoad bulkLoad = DDBulkLoadFactory.getInstance(connection);
bulkLoad.setTableName("mytable");
bulkLoad.load("data.csv");
Herbie answered 7/6, 2010 at 23:0 Comment(0)

try {
        // Disable auto-commit so each flushed chunk commits together
        connection.setAutoCommit(false);
        int maxInsertBatch = 10000;
        // Create a prepared statement
        String sql = "INSERT INTO mytable (xxx) VALUES (?)";
        PreparedStatement pstmt = connection.prepareStatement(sql);

        Object[] vals = set.toArray();
        int count = 0;
        for (int i = 0; i < vals.length; i++) {
            pstmt.setString(1, vals[i].toString());
            pstmt.addBatch();
            count++;
            // Flush every maxInsertBatch rows instead of holding
            // the entire batch in memory
            if (count % maxInsertBatch == 0) {
                pstmt.executeBatch();
            }
        }

        // Execute whatever remains in the batch, then commit
        pstmt.executeBatch();
        connection.commit();
        System.out.println("inserted " + count);
} catch (SQLException e) {
    e.printStackTrace();
}
Built answered 26/8, 2015 at 10:33 Comment(2)
Instead of downvoting, it would help to comment on why executing several batches along the way, rather than all at once, can or cannot improve performance...Coroner
Looks like this answer has almost the same code snippet as the one in the question.Slight
