How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

Please bear with me as I explain the problem and how I tried to solve it; my question about how to improve it is at the end.

I have a 100,000-line CSV file from an offline batch job that I needed to insert into the database as its proper models. Ordinarily, a straightforward load like this can be done trivially by munging the CSV file to fit the schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want.

The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous model. For example:

Model C --> Foreign Key --> Model B --> Foreign Key --> Model A
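
For concreteness, a minimal declarative sketch of that chain (the table and column names are made up, and the import assumes a recent SQLAlchemy):

# Three models whose foreign keys form the C -> B -> A dependency chain.
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class ModelA(Base):
    __tablename__ = "model_a"
    id = Column(Integer, primary_key=True)

class ModelB(Base):
    __tablename__ = "model_b"
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey("model_a.id"))   # B depends on A

class ModelC(Base):
    __tablename__ = "model_c"
    id = Column(Integer, primary_key=True)
    b_id = Column(Integer, ForeignKey("model_b.id"))   # C depends on B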

So, the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach (a rough sketch follows the list):

 - instantiate a multiprocessing.Process which contains a threadpool of 50 persister threads that each have a threadlocal connection to the database

 - read a line from the file using the csv DictReader

 - enqueue the dictionary to the process, where each thread creates the appropriate models by querying for the right values and persists the models in the appropriate order
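
Here is a rough, simplified sketch of that layout; it leaves out the multiprocessing.Process wrapper, and build_models is a hypothetical helper standing in for the per-row queries (the DB URL and CSV filename are placeholders too):

import csv
import threading
from queue import Queue

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

DB_URL = "postgresql://user:pass@localhost/mydb"      # placeholder
engine = create_engine(DB_URL)
Session = scoped_session(sessionmaker(bind=engine))   # thread-local sessions

work_queue = Queue(maxsize=1000)
SENTINEL = None

def persister():
    session = Session()                    # each worker thread gets its own session
    while True:
        row = work_queue.get()
        if row is SENTINEL:
            break
        a, b, c = build_models(session, row)   # hypothetical helper: query + build A, B, C
        session.add_all([a, b, c])             # unit of work orders inserts by FK dependency
        session.commit()
    Session.remove()

workers = [threading.Thread(target=persister) for _ in range(50)]
for w in workers:
    w.start()

with open("batch.csv") as f:               # producer: read rows and enqueue them
    for row in csv.DictReader(f):
        work_queue.put(row)

for _ in workers:
    work_queue.put(SENTINEL)               # one sentinel per worker to shut it down
for w in workers:
    w.join()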

This was faster than a non-threaded read/persist, but it is still far slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I wrote the equivalent SQL statements directly, and loading them took 5 minutes.

Writing the SQL statements took me a couple of hours, though. So my question is, could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal.

This leads to my next question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just use a bulk load into the database? I know about str(model_object), but it does not show the interpolated values.

I would appreciate any guidance on how to do this faster.

Thanks!

Oxyacid answered 21/5, 2010 at 8:19 Comment(1)
Have you made sure to turn off autocommit mode? It will generally be faster if you insert multiple rows and then commit rather than committing after each insert.Ingulf
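
For instance, a minimal sketch of batching commits as the comment suggests (Session, ModelA, and rows are assumed to exist and are not from the thread):

# Commit once per batch of rows instead of once per row.
session = Session()
batch = []
for row in rows:
    batch.append(ModelA(**row))        # hypothetical model/columns
    if len(batch) >= 1000:
        session.add_all(batch)
        session.commit()               # one commit per 1000 rows
        batch = []
if batch:                              # flush the final partial batch
    session.add_all(batch)
    session.commit()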

First, unless you actually have a machine with 50 CPU cores, using 50 threads/processes won't help performance -- it will actually make things slower.

Second, I've a feeling that if you used SQLAlchemy's way of inserting multiple values at once, it would be much faster than creating ORM objects and persisting them one-by-one.
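
For example, with recent SQLAlchemy a multi-row insert through the Core layer looks roughly like this (the table name, CSV filename, and connection URL are placeholders):

# Multi-row insert via SQLAlchemy Core; executes as a DBAPI executemany.
import csv
from sqlalchemy import MetaData, Table, create_engine

engine = create_engine("postgresql://user:pass@localhost/mydb")   # placeholder URL
metadata = MetaData()
model_a = Table("model_a", metadata, autoload_with=engine)        # reflect the existing table

with open("batch.csv") as f:
    rows = list(csv.DictReader(f))

with engine.begin() as conn:                     # one transaction for the whole load
    conn.execute(model_a.insert(), rows)         # list of dicts -> multi-row insert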

Smacker answered 10/7, 2010 at 12:16 Comment(1)
Sorry for not accepting this answer earlier, but this is the correct approach.Oxyacid

Ordinarily, no, there's no way to get the query with the values included.

What database are you using, though? Because a lot of databases do have a bulk-load feature for CSV available.

If you're willing to accept that certain values might not be escaped correctly, then you can use this hack I wrote for debugging purposes:

# `query` is the SQL string with placeholders, e.g. str(compiled), and
# `compiler` is the compiled statement, e.g. compiled = statement.compile()

# Replace the parameter placeholders with values.
# Longest keys first, so e.g. :id10 is substituted before :id1.
params = sorted(compiler.params.items(),
                key=lambda item: len(str(item[0])), reverse=True)
for k, v in params:
    # Some types don't need escaping
    if isinstance(v, (int, float, bool)):
        v = str(v)
    else:
        v = "'%s'" % v

    # Replace the placeholders with values.
    # Works both with :1 and %(foo)s type placeholders.
    query = query.replace(':%s' % k, v)
    query = query.replace('%%(%s)s' % k, v)
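
(As an aside, SQLAlchemy versions released after this answer added a compile flag that renders most simple bound values inline, which covers the common cases without this hack; a small sketch with made-up table and columns:)

# Render an INSERT with inline literal values on newer SQLAlchemy.
from sqlalchemy import Column, Integer, MetaData, String, Table, insert
from sqlalchemy.dialects import postgresql

metadata = MetaData()
model_a = Table("model_a", metadata,
                Column("id", Integer, primary_key=True),
                Column("name", String))

stmt = insert(model_a).values(id=1, name="foo")
print(stmt.compile(dialect=postgresql.dialect(),
                   compile_kwargs={"literal_binds": True}))
# prints something like: INSERT INTO model_a (id, name) VALUES (1, 'foo')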
Nupercaine answered 12/6, 2010 at 15:21 Comment(0)

I would venture to say that most of the time in the Python script is spent in the per-record upload portion. To determine this, you could write to CSV or discard the results instead of uploading new records; this will show where the bottleneck is, at least from a lookup-vs-insert standpoint. If, as I suspect, that is indeed where it is, you can take advantage of the bulk-import feature most DBMSs have. There is no reason, and indeed some arguments against, inserting record by record in this kind of circumstance.

Bulk imports tend to do some interesting optimizations, such as running as one transaction without a commit for each record (even just doing this can yield an appreciable drop in run time); whenever feasible, I recommend a bulk insert for large record counts. You could still use the producer/consumer approach, but have the consumer store the values in memory or in a file and then call the bulk-import statement specific to the DB you are using (a sketch follows). This might be the route to go if you need to do processing for each record in the CSV file. If so, I would also consider how much of that processing can be cached and shared between records.
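
A sketch of that last step, assuming PostgreSQL and psycopg2 (other databases have their own equivalents such as LOAD DATA INFILE or bcp); the table name, DSN, and file name are made up:

# Bulk-load a prepared CSV file in a single COPY statement.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")       # placeholder DSN
cur = conn.cursor()
with open("model_a.csv") as f:
    cur.copy_expert("COPY model_a FROM STDIN WITH CSV HEADER", f)
conn.commit()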

It is also possible that the bottleneck is SQLAlchemy itself. Not that I know of any inherent issues, but given what you are doing, it might be requiring a lot more processing than is necessary, as evidenced by the 8x difference in run times.

For fun, since you already know the SQL, try using a direct DBAPI module in Python to do it and compare run times.
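
A bare-DBAPI version for that comparison might look like this (again assuming PostgreSQL/psycopg2, with made-up table and column names):

# Direct DBAPI executemany, bypassing SQLAlchemy entirely.
import csv
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")       # placeholder DSN
cur = conn.cursor()
with open("batch.csv") as f:
    rows = [(r["name"], r["value"]) for r in csv.DictReader(f)]   # made-up columns
cur.executemany("INSERT INTO model_a (name, value) VALUES (%s, %s)", rows)
conn.commit()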

Telecommunication answered 7/7, 2010 at 16:21 Comment(0)
