Handle TCP Provider: Error code 0x68 (104)
I'm using this code to sync my DB with the client's:

    import pyodbc

    SYNC_FETCH_ARRAY_SIZE = 25000

    # define connection + cursor (connection string omitted here)
    connection = pyodbc.connect()
    cursor = connection.cursor()

    query = 'select some_columns from mytable'
    cursor.execute(query)

    while True:
        rows = cursor.fetchmany(SYNC_FETCH_ARRAY_SIZE)  # <<< error here

        if not rows:
            break

        insert_to_our_db(rows)

    cursor.close()

I'm getting the error below intermittently:

File "....py", line 120, in ...
  rows = sg_cur.fetchmany(SYNC_FETCH_ARRAY_SIZE)
pyodbc.OperationalError: ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x68 (104) (SQLGetData)')

How should I handle this error? Is it an error on the connection (meaning I need to close and recreate the connection), or on the cursor, meaning I simply need to add a retry?

I will add the code below to retry (in place of the line throwing the error). Is this enough to solve the issue? Will retries have any effect at all if I'm experiencing a TCP error?

    MAX_RETRIES = 5

    def get_rows(retry_count=0):
        """
        Wrapper function to add retry functionality to fetchmany
        """
        try:
            # sg_cur is the cursor from above (named differently in my real code)
            rows = sg_cur.fetchmany(SYNC_FETCH_ARRAY_SIZE)

        except Exception as e:
            if retry_count >= MAX_RETRIES:
                raise ConnectionError(f'fetchmany caused the error: {e}') from e

            logger.debug(f'Error in get_rows: {e}, attempt: {retry_count}/{MAX_RETRIES}')
            retry_count += 1
            return get_rows(retry_count=retry_count)

        return rows
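
For comparison, here is the connection-level variant I'm considering, where the connection and cursor are recreated and the whole query is re-run from scratch on failure. This is a rough sketch: the connection string is a placeholder, and it reuses `SYNC_FETCH_ARRAY_SIZE`, `insert_to_our_db` and `logger` from above.

    import time

    def sync_with_reconnect(conn_str, query):
        """Re-run the whole query on a fresh connection if the session drops."""
        for attempt in range(MAX_RETRIES):
            connection = pyodbc.connect(conn_str)
            cursor = connection.cursor()
            try:
                cursor.execute(query)
                while True:
                    rows = cursor.fetchmany(SYNC_FETCH_ARRAY_SIZE)
                    if not rows:
                        return
                    insert_to_our_db(rows)
            except pyodbc.OperationalError as e:
                # 08S01 is a communication-link failure: the session itself is
                # gone, so retrying fetchmany on the same cursor cannot succeed.
                logger.debug(f'Connection dropped: {e}, attempt {attempt + 1}/{MAX_RETRIES}')
                time.sleep(2 ** attempt)  # simple backoff before reconnecting
            finally:
                connection.close()
        raise ConnectionError(f'sync failed after {MAX_RETRIES} attempts')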

Edit:

There is an issue open on GitHub for this. In the meantime, what would be a viable workaround?

  • Store the last row read (after insert_to_our_db succeeds) in our DB, then restart from there if it fails (sketched after the notes below)
  • Just run the entire process again

Notes:

  • The error is intermittent, thus hard to test
  • I'm syncing a large table from a client's DB on a different host, so I need to update/insert all rows at once to be sure the data is current
  • I can't make changes to the client's DB
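
Here is a sketch of the checkpoint idea from the first bullet, assuming the source table has a monotonically increasing key column (called `id` here; hypothetical) so the sync can resume just past the last batch that was written:

    CHECKPOINT_QUERY = """
        select id, some_columns
        from mytable
        where id > ?
        order by id
    """

    def sync_from(connection, last_id):
        """Resume the sync just past the last row already written to our DB."""
        cursor = connection.cursor()
        cursor.execute(CHECKPOINT_QUERY, last_id)
        while True:
            rows = cursor.fetchmany(SYNC_FETCH_ARRAY_SIZE)
            if not rows:
                break
            insert_to_our_db(rows)
            last_id = rows[-1][0]
            save_checkpoint(last_id)  # hypothetical helper that persists last_id in our DB
        cursor.close()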
Talmudist answered 16/12, 2020 at 18:36 Comment(5)
what is the SYNC_FETCH_ARRAY_SIZE value? – Biebel
@İsmailDurmaz added constants – Talmudist
Could you try a value around 1000 records? 25000 is a lot. – Biebel
github.com/mkleehammer/pyodbc/issues/680 – Pudendas
@GordThompson Thanks for the link, appreciate the help. Any advice on a workaround? As it's a long-running process, should I store the number of rows read, then restart from there in case it fails? Maybe you have a neater solution? – Talmudist
Setting pool-related parameters helped:

    from sqlalchemy import create_engine

    engine = create_engine(
        sql_connection_string,
        echo=False,
        connect_args={
            "autocommit": False,
            "connect_timeout": 30,
        },
        pool_pre_ping=True,   # check connections for liveness on checkout
        pool_size=25,
        pool_recycle=3600,    # retire connections after an hour
    )
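
A minimal usage sketch, assuming `sql_connection_string` is a valid SQLAlchemy database URL. With `pool_pre_ping=True`, a connection the server has already dropped is detected and replaced at checkout (it cannot rescue a connection that dies mid-query, but `pool_recycle` reduces that risk by retiring old connections):

    from sqlalchemy import text

    with engine.connect() as conn:
        # pre-ping runs a cheap liveness check when the connection is checked
        # out of the pool, transparently reconnecting if it has gone stale
        result = conn.execute(text("select some_columns from mytable"))
        for batch in result.partitions(25000):
            insert_to_our_db(batch)  # placeholder for the question's sync step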
Peyton answered 5/4, 2022 at 14:13 Comment(2)
Helped? But how? – Wisdom
docs.sqlalchemy.org/en/14/core/pooling.html#pool-disconnects – Jurado
Try this:

    sudo ifconfig eth0 mtu 1350
Mothering answered 29/6, 2022 at 19:26 Comment(3)
Could you elaborate a bit please? – Talmudist
I used Ubuntu 20.04 on WSL2 on my system, and I had the "('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x68 (104) (SQLExecDirectW)') unable to rollback" error. You can read about MTU here: en.wikipedia.org/wiki/Maximum_transmission_unit – Mothering
This worked for me: Ubuntu 20.04 on WSL2, with the additional complication of having to connect to DBs via VPN. – Daystar
I realized that the table into which I was inserting wasn't indexed.

To add the rows I was running an upsert, to avoid duplication. Adding an index to the table fixed my issue.

Looking at other people who had this issue, this seems to be the accepted solution. I definitely should have had the table indexed anyway; lesson learned.

Edit: I'll leave this answer unaccepted in the hope that someone can shed more light on the issue. For me, though, adding the index (which sped up the transaction) fixed the problem.
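
For reference, the shape of the fix, with hypothetical table/column names (`id`, `payload`; the real schema differs) and assuming the target is also SQL Server. My guess is that the unindexed upsert made each batch so slow that the source connection sat idle long enough for the TCP session to be reset:

    # Hypothetical names, for illustration only.
    cursor.execute("CREATE INDEX IX_mytable_id ON mytable (id);")

    upsert_sql = """
        MERGE INTO mytable AS target
        USING (VALUES (?, ?)) AS source (id, payload)
            ON target.id = source.id  -- without an index, this match
        WHEN MATCHED THEN             -- scans the whole table per row
            UPDATE SET payload = source.payload
        WHEN NOT MATCHED THEN
            INSERT (id, payload) VALUES (source.id, source.payload);
    """
    cursor.executemany(upsert_sql, [(r[0], r[1]) for r in rows])
    connection.commit()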

Talmudist answered 6/2, 2021 at 17:30 Comment(3)
Hi, could you please explain why an index solves that issue? Or why not having one causes it in the first place? Thank you. – Cineaste
@BogdanCondurache Were you two able to get/understand the reason for this? – Neutron
@Talmudist Were you able to understand the problem? – Neutron
