Fastest way to iterate through large table using JDBC

I'm trying to create a Java program to clean up and merge rows in my table. The table is large, about 500k rows, and my current solution is running very slowly. The first thing I want to do is simply get an in-memory array of objects representing all the rows of my table. Here is what I'm doing:

  • pick an increment of, say, 1000 rows at a time
  • use JDBC to fetch a ResultSet for the following SQL query: SELECT * FROM TABLE WHERE ID > 0 AND ID < 1000
  • add the resulting data to an in-memory array
  • continue querying all the way up to 500,000 in increments of 1000, adding the results each time

This is taking way too long. In fact, it's not even getting past the second increment, from 1000 to 2000. The query takes forever to finish (although when I run the same thing directly through a MySQL browser it's decently fast). It's been a while since I've used JDBC directly. Is there a faster alternative?
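Roughly, the loop looks like this (a minimal sketch, not my exact code; TABLE and its column names are placeholders, and con is assumed to be an open java.sql.Connection):

// con is assumed to be an open java.sql.Connection; TABLE and its
// column names are placeholders for the real schema
List<Object[]> rows = new ArrayList<>();
int step = 1000;
for (int low = 0; low < 500000; low += step) {
    try (Statement stmt = con.createStatement();
         // half-open range (>= low, < low + step) so boundary IDs aren't skipped
         ResultSet rs = stmt.executeQuery(
                 "SELECT * FROM TABLE WHERE ID >= " + low + " AND ID < " + (low + step))) {
        while (rs.next()) {
            rows.add(new Object[] { rs.getLong("ID"), rs.getString("NAME") });
        }
    }
}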

Kenishakenison answered 3/7, 2009 at 21:13 Comment(0)

First of all, are you sure you need the whole table in memory? Maybe you should consider (if possible) selecting only the rows that you want to update/merge/etc. If you really have to have the whole table, you could consider using a scrollable ResultSet. You can create it like this:

// make sure autocommit is off (required for cursor-based fetching on PostgreSQL)
con.setAutoCommit(false);

Statement stmt = con.createStatement(
                   ResultSet.TYPE_SCROLL_INSENSITIVE, // or ResultSet.TYPE_FORWARD_ONLY
                   ResultSet.CONCUR_READ_ONLY);       // read-only cursor
ResultSet srs = stmt.executeQuery("select * from ...");

It enables you to move to any row you want using the absolute() and relative() methods.
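For example, once srs has been created as above, you can jump around in it like this (a minimal sketch):

// assuming srs is the scrollable ResultSet created above
srs.absolute(1000);      // jump straight to row 1000
srs.relative(-50);       // move back 50 rows from the current position
while (srs.next()) {     // then iterate forward from there as usual
    // process the current row
}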

Barnett answered 3/7, 2009 at 21:26 Comment(5)
Awesome, that did the trick. To answer your point about selectively getting data: unfortunately I don't know which rows to merge and fix ahead of time -- I have to go through all the rows, inspect them, build appropriate in-memory hashmaps, and then go back and clean the table based on certain qualities.Kenishakenison
This method is rather fragile. If you have millions of rows and some processing to do, you might hit network lag or a timeout, which will make it difficult to resume the operation in certain cases.Vermiculate
Unfortunately this works very slowly on large tables, because the MySQL JDBC driver doesn't support cursors and tries to load all the data into memory.Meeks
I'm using PostgreSQL for the DB, and it didn't help. Still getting OOM.Ferro
Check if the driver you're using supports this functionality. Think about other possible causes (what's the heap size? how much data are you trying to put in memory?).Barnett

One thing that helped me was Statement.setFetchSize(Integer.MIN_VALUE). I got this idea from Jason's blog. This cut execution time down by more than half, and memory consumption went down dramatically (as only one row is read at a time).

This trick doesn't work for PreparedStatement, though.
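For MySQL's Connector/J, the setup that triggers this row-by-row streaming looks roughly like the following (a minimal sketch; my_table is a placeholder and con is assumed to be an open java.sql.Connection):

// con is assumed to be an open java.sql.Connection to MySQL; the query is a placeholder
Statement stmt = con.createStatement(
                   ResultSet.TYPE_FORWARD_ONLY,
                   ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE); // tells Connector/J to stream rows instead of buffering them all

ResultSet rs = stmt.executeQuery("SELECT * FROM my_table");
while (rs.next()) {
    // only the current row is held in memory here
}
rs.close();
stmt.close();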

Nombles answered 23/7, 2010 at 14:45 Comment(1)
A fetch size of 2000 already worked quite well for me; larger values didn't change anything.Enamor

Although it's probably not optimal, your solution seems like it ought to be fine for a one-off database cleanup routine. It shouldn't take that long to run a query like that and get the results (I'm assuming that, since it's a one-off, a couple of seconds would be fine). Possible problems:

  • is your network (or at least your connection to MySQL) very slow? If so, you could try running the process locally on the MySQL box, or somewhere better connected.

  • is there something in the table structure that's causing it? Pulling down 10k of data for every row? 200 fields? Calculating the ID values to fetch based on a non-indexed column? You could try finding a more DB-friendly way of pulling the data (e.g. just the columns you need, have the DB aggregate values, etc.), as in the sketch after this list.
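For instance, pulling only the couple of columns you actually need instead of SELECT * might look like this (a minimal sketch; my_table, id, and status are placeholder names, and con is assumed to be an open java.sql.Connection):

// con is assumed to be an open java.sql.Connection; table/column names are placeholders
try (PreparedStatement ps = con.prepareStatement(
        "SELECT id, status FROM my_table WHERE id >= ? AND id < ?")) {
    ps.setInt(1, 0);
    ps.setInt(2, 1000);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            long id = rs.getLong("id");
            String status = rs.getString("status");
            // only the columns you actually need cross the wire
        }
    }
}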

If you're not getting through the second increment, something is really wrong -- efficient or not, you shouldn't have any problem dumping 2,000 or even 20,000 rows into memory on a running JVM. Maybe you're storing the data redundantly or extremely inefficiently?

Riley answered 3/7, 2009 at 22:18 Comment(1)
Thanks for the suggestions. I believe the main problem was that I wasn't using the JDBC API in an optimal way. I am now able to fetch my data relatively quickly in 10k-20k increments. Good suggestion, though, on only pulling the necessary columns instead of doing a SELECT *.Kenishakenison
