I'm using this code to sync my db with the client's:
import pyodbc

SYNC_FETCH_ARRAY_SIZE = 25000

# define connection + cursor
connection = pyodbc.connect()
cursor = connection.cursor()

query = 'select some_columns from mytable'
cursor.execute(query)

while True:
    rows = cursor.fetchmany(SYNC_FETCH_ARRAY_SIZE)  # <<< error here
    if not rows:
        break
    insert_to_our_db(rows)

cursor.close()
I'm getting the below error intermittently:
File "....py", line 120, in ...
rows = sg_cur.fetchmany(SYNC_FETCH_ARRAY_SIZE)
pyodbc.OperationalError: ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x68 (104) (SQLGetData)')
How should I handle this error? Is it an error on the connection (so I need to close and recreate the connection), or on the cursor, so that I simply need to add a retry?
I'm planning to add the wrapper below (in place of the line throwing the error). Is this enough to solve the issue? Will retries have any effect at all if I'm experiencing a TCP error?
MAX_RETRIES = 5

def get_rows(retry_count=0):
    """
    Wrapper function to add retry functionality to fetchmany
    """
    try:
        rows = sg_cur.fetchmany(SYNC_FETCH_ARRAY_SIZE)
    except Exception as e:
        if retry_count >= MAX_RETRIES:
            raise ConnectionError(f'fetchmany caused the error: {e}')
        else:
            logger.debug(f'Error in get_rows: {e}, attempt: {retry_count}/{MAX_RETRIES}')
            retry_count += 1
            return get_rows(retry_count=retry_count)
    return rows
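For reference, this is roughly what I mean by retrying at the connection level instead of the cursor level: on an 08S01 error the connection is rebuilt and the query re-run from scratch. It's only a sketch, with connection_string standing in for my real connection details and insert_to_our_db / logger being the same placeholders as above:

import time
import pyodbc

SYNC_FETCH_ARRAY_SIZE = 25000
MAX_RETRIES = 5

def sync_with_reconnect(connection_string):
    """Re-run the whole query on a fresh connection whenever it drops mid-fetch."""
    for attempt in range(1, MAX_RETRIES + 1):
        connection = pyodbc.connect(connection_string)
        cursor = connection.cursor()
        try:
            cursor.execute('select some_columns from mytable')
            while True:
                rows = cursor.fetchmany(SYNC_FETCH_ARRAY_SIZE)
                if not rows:
                    return  # finished without the connection dropping
                insert_to_our_db(rows)
        except pyodbc.OperationalError as e:
            logger.debug(f'connection dropped: {e}, attempt {attempt}/{MAX_RETRIES}')
            time.sleep(2 ** attempt)  # brief backoff before reconnecting
        finally:
            try:
                connection.close()  # also closes the cursor
            except pyodbc.Error:
                pass
    raise ConnectionError(f'sync failed after {MAX_RETRIES} reconnect attempts')

The obvious downside is that every retry re-reads rows that were already passed to insert_to_our_db, so that function would need to upsert rather than blindly insert, which is what leads to the checkpoint idea in the edit below.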
Edit:
There is an issue open on GitHub for this. In the meantime, what would be a viable workaround?
- Store the last row read after insert_to_our_db in our db, then restart from there in case of an error (see the sketch below)
- Just run the entire process again
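A rough sketch of the first option, assuming mytable has a sortable key column (I'm calling it id here) and two hypothetical helpers, load_checkpoint / save_checkpoint, that read and write the last synced id in our db:

import pyodbc

SYNC_FETCH_ARRAY_SIZE = 25000

def sync_from_checkpoint(connection_string):
    """Continue the sync from the last id recorded after a successful insert."""
    last_key = load_checkpoint() or 0  # hypothetical: returns 0 when no checkpoint exists yet
    connection = pyodbc.connect(connection_string)
    cursor = connection.cursor()
    try:
        cursor.execute(
            'select id, some_columns from mytable where id > ? order by id',
            last_key)
        while True:
            rows = cursor.fetchmany(SYNC_FETCH_ARRAY_SIZE)
            if not rows:
                break
            insert_to_our_db(rows)
            save_checkpoint(rows[-1][0])  # hypothetical: persist the highest id just inserted
    finally:
        connection.close()

If this raises the 08S01 error it can simply be called again and it will pick up after the last stored id instead of starting the whole table over.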
Notes:
- The error is intermittent, thus hard to test
- I'm syncing a large table from a client's db on a different host, so I need to update/insert all rows at once to be sure that the data is current
- I can't make changes to the client's db