With this table:
CREATE TABLE test_insert (
    col1 INT,
    col2 VARCHAR(10),
    col3 DATE
)
the following code takes 40 seconds to run:
import pyodbc
from datetime import date

conn = pyodbc.connect('DRIVER={SQL Server Native Client 10.0};'
                      'SERVER=localhost;DATABASE=test;UID=xxx;PWD=yyy')

rows = []
row = [1, 'abc', date.today()]
for i in range(10000):
    rows.append(row)

cursor = conn.cursor()
cursor.executemany('INSERT INTO test_insert VALUES (?, ?, ?)', rows)
conn.commit()
The equivalent code with psycopg2 takes only 3 seconds. I don't believe MS SQL Server is that much slower than PostgreSQL. Any ideas on how to improve the bulk insert speed when using pyodbc?
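For what it's worth, newer pyodbc releases (4.0.19 and later) expose a fast_executemany flag on the cursor that binds all parameter sets at once, much like the ceODBC flow described in the EDIT below. A minimal sketch, assuming one of Microsoft's newer ODBC drivers is installed ('ODBC Driver 17 for SQL Server' is an assumed driver name, not the Native Client 10.0 driver from my setup):

import pyodbc
from datetime import date

# fast_executemany (pyodbc 4.0.19+) packs all parameter sets into one
# buffer and sends them in a single batch, instead of one round trip
# per row. The driver name below is an assumption; this feature works
# best with Microsoft's newer ODBC drivers.
conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=localhost;DATABASE=test;UID=xxx;PWD=yyy')

rows = [(1, 'abc', date.today())] * 10000

cursor = conn.cursor()
cursor.fast_executemany = True
cursor.executemany('INSERT INTO test_insert VALUES (?, ?, ?)', rows)
conn.commit()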
EDIT: Adding some notes following ghoerz's discovery.
In pyodbc, the flow of executemany is:
- prepare statement
- loop for each set of parameters
- bind the set of parameters
- execute
In ceODBC, the flow of executemany is (see the sketch after this list for a way to approximate it in pyodbc):
- prepare statement
- bind all parameters
- execute
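A workaround that approximates the ceODBC flow with stock pyodbc is to pack many rows into one multi-row INSERT, so each execute binds a whole batch of parameters at once. A rough sketch (the batch size of 500 is my assumption, chosen to stay under SQL Server's 2100-parameters-per-statement and 1000-rows-per-VALUES limits):

import pyodbc
from datetime import date

conn = pyodbc.connect('DRIVER={SQL Server Native Client 10.0};'
                      'SERVER=localhost;DATABASE=test;UID=xxx;PWD=yyy')
cursor = conn.cursor()

rows = [(1, 'abc', date.today())] * 10000

# SQL Server caps a statement at 2100 parameters and a table value
# constructor at 1000 rows; 500 rows x 3 columns stays under both.
BATCH = 500
for start in range(0, len(rows), BATCH):
    batch = rows[start:start + BATCH]
    placeholders = ', '.join('(?, ?, ?)' for _ in batch)
    params = [value for row in batch for value in row]
    # one execute binds an entire batch instead of one bind per row
    cursor.execute('INSERT INTO test_insert VALUES ' + placeholders, params)
conn.commit()

This trades one prepare per batch for far fewer per-row binds and round trips, which seems to be where most of the 40 seconds go.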