flask_sqlalchemy `pool_pre_ping` only working sometimes

For testing, I amended the MySQL (RDS) parameters as follows (a quick way to confirm the values the server actually applies is sketched after the list):

wait_timeout = 40 (default was 28800)

max_allowed_packet = 1GB (the maximum, just to be sure the issue is not caused by small packets)

net_read_timeout = 10

interactive_timeout unchanged
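
To make sure these are the values the server actually applies to new connections, I can query them first. This is just a quick sketch; the connection URI is a placeholder, not my real credentials:

from sqlalchemy import create_engine, text

# Placeholder URI; substitute your own user, password and RDS host.
engine = create_engine("mysql+mysqldb://user:password@<RDS address>:3306/database_name")
with engine.connect() as conn:
    rows = conn.execute(text(
        "SHOW VARIABLES WHERE Variable_name IN "
        "('wait_timeout', 'net_read_timeout', 'interactive_timeout', 'max_allowed_packet')"
    ))
    for name, value in rows:
        print(name, value)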

Then I tested the app without the pool_pre_ping option set (it defaults to False), kept the app inactive for 40 seconds, tried to log in, and I got:

Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]: Traceback (most recent call last):
Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]:   File "/var/www/api_server/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]:     context)
Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]:   File "/var/www/api_server/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute
Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]:     cursor.execute(statement, parameters)
Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]:   File "/var/www/api_server/venv/lib/python3.6/site-packages/MySQLdb/cursors.py", line 206, in execute
Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]:     res = self._query(query)
Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]:   File "/var/www/api_server/venv/lib/python3.6/site-packages/MySQLdb/cursors.py", line 312, in _query
Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]:     db.query(q)
Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]:   File "/var/www/api_server/venv/lib/python3.6/site-packages/MySQLdb/connections.py", line 224, in query
Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]:     _mysql.connection.query(self, query)
Nov 14 20:05:20 ip-172-31-33-52 gunicorn[16962]: MySQLdb._exceptions.OperationalError: (2013, 'Lost connection to MySQL server during query')

Then I added pool_pre_ping like this (using flask_sqlalchemy version 2.4.1):

import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy as _BaseSQLAlchemy


class SQLAlchemy(_BaseSQLAlchemy):
    def apply_pool_defaults(self, app, options):
        super(SQLAlchemy, self).apply_pool_defaults(app, options)
        options["pool_pre_ping"] = True
#        options["pool_recycle"] = 30
#        options["pool_timeout"] = 35

db = SQLAlchemy()


class DevConfig():
    SQLALCHEMY_ENGINE_OPTIONS = {'pool_recycle': 280, 'pool_timeout': 100, 'pool_pre_ping': True}  # These configs don't get applied to the engine :/
    DEBUG = True
    # SERVER_NAME = '127.0.0.1:5000'
    SQLALCHEMY_DATABASE_URI = os.getenv('SQLALCHEMY_DATABASE_URI_DEV')
    SQLALCHEMY_TRACK_MODIFICATIONS = False

config = dict(
    dev=DevConfig,
)

app = Flask(__name__, instance_relative_config=True)
app.config.from_object(config['dev'])

# INIT DATABASE
db.init_app(app)
with app.app_context():
    db.create_all()

-----------run.py
app.run(host='127.0.0.1', port=5000)

With this, the web app now manages to get a new connection even after the MySQL server has closed the previous one. It always works fine when I access the database shortly after the server closes the connection (I tried up to 50 seconds after)... but when I keep the connection inactive for a long time (I haven't timed it exactly, roughly more than 10-15 minutes), I see the same error again.
According to the docs (especially the section Dealing with disconnects), the pool_pre_ping option should handle this kind of scenario in the background, right? Or is there another timeout variable that I need to change on the MySQL server?
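
One way I can check whether the option actually reached the engine is to inspect the pool at startup. This is only a rough debugging sketch; _pre_ping and _recycle are SQLAlchemy pool internals, so the attribute names are an assumption and may differ between versions:

with app.app_context():
    pool = db.engine.pool
    print(type(pool).__name__)                        # expected: QueuePool
    print("pre_ping:", getattr(pool, "_pre_ping", None))
    print("recycle:", getattr(pool, "_recycle", None))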

Injustice asked 14/11, 2019 at 21:10 Comment(8)
What's your MySQL connector? – Wimsatt
mysql+mysqldb://root:*****@1<RDS address>:3306/database_name – Injustice
Ignoring the connection and pool issues, how long should your program run? Hours? Days? Does it sit idle for hours, or at least not touching MySQL, yet hoping the connection will stay intact? Do you have any "transactions"? Are you running entirely with autocommit=ON? – Coronach
This is a backend server that needs to run 24/7, and yes, right now we expect it to be idle for hours or days. I don't think there are any transactions left unclosed after each MySQL connection. How do I check if autocommit=ON? – Injustice
Hi @AnumSheraz, did you find a solution for this? I'm running into the same issue. I have tried SQLALCHEMY_POOL_RECYCLE = 45 and SQLALCHEMY_ENGINE_OPTIONS = {'pool_pre_ping': True}, but with no success. I debugged the init file from flask_sqlalchemy (def create_engine) and it seems to pass all the custom config to sqlalchemy.create_engine, but I still don't know why it doesn't work. Any help would be great. Regards. P.S. I'm using Flask-SQLAlchemy==2.4.1 – Crackup
Same here, but on PostgreSQL: after 10-20 minutes of inactivity I get OperationalError and can't reconnect. I tried to write a test that simulates OperationalError when connections are lost, but for some reason I can't reproduce it manually. It would be great to know how to do that using flask-sqlalchemy. – Champac
Did anyone fix this? I am facing the same issue. – Jeaninejeanlouis
I still have this issue, and pool_pre_ping and pool_recycle have no effect. At this point, I'm just considering switching to a different database. – Nicholasnichole

I set the following settings:

SQLALCHEMY_ENGINE_OPTIONS = {
    'pool_size': 10,
    'pool_recycle': 60,
    'pool_pre_ping': True
}

It has stopped failing for the last few months...
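
For reference, this is roughly how the options can be wired up with Flask-SQLAlchemy 2.x. A minimal sketch; the URI below is a placeholder:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Placeholder URI; point this at your own database.
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+mysqldb://user:password@host:3306/database_name'
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
    'pool_size': 10,
    'pool_recycle': 60,     # recycle connections well before the server-side wait_timeout
    'pool_pre_ping': True,  # test each connection with a lightweight ping before handing it out
}

db = SQLAlchemy(app)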

Coherence answered 11/10, 2021 at 13:16 Comment(2)
So is the below correct? SQLALCHEMY_ENGINE_OPTIONS = {'pool_size': 10, 'pool_recycle': 60, 'pool_pre_ping': True} app.config['SQLALCHEMY_ENGINE_OPTIONS'] = SQLALCHEMY_ENGINE_OPTIONS db = SQLAlchemy(app) – Circle
Yes, this is correct. – Coherence

Let's check the manual. From the Flask-SQLAlchemy Configuration docs:

Certain database backends may impose different inactive connection timeouts, which interferes with Flask-SQLAlchemy’s connection pooling.

By default, MariaDB is configured to have a 600 second timeout. This often surfaces hard to debug, production environment only exceptions like

2013: Lost connection to MySQL server during query.

If you are using a backend (or a pre-configured database-as-a-service) with a lower connection timeout, it is recommended that you set SQLALCHEMY_POOL_RECYCLE to a value less than your backend’s timeout.

The script cited in the question shows discrepancies between its MySQL timeouts (wait_timeout, net_read_timeout), its SQLAlchemy pool settings (pool_recycle, pool_timeout), and the Flask-SQLAlchemy settings (SQLALCHEMY_POOL_RECYCLE, SQLALCHEMY_POOL_TIMEOUT).

When you notice repetition of information which is likely to change, consider replacing it with abstractions that are less likely to change. This is the "Don't repeat yourself" (DRY) principle of software development.

We can achieve this abstraction by using the DevConfig helper class to coordinate the database connection settings across the app. With this approach, we assign the configuration to class attributes and refer back to them, so there are no conflicting timeout expectations. Here is an implementation:

import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy as _BaseSQLAlchemy

# Coordinate DevConfig with SQLAlchemy and Flask-SQLAlchemy (don't repeat yourself!)

class DevConfig():
    SQLALCHEMY_POOL_RECYCLE = 35  # value less than backend’s timeout
    SQLALCHEMY_POOL_TIMEOUT = 7  # value less than backend’s timeout
    SQLALCHEMY_PRE_PING = True  # custom constant (not a built-in Flask-SQLAlchemy key), referenced below
    SQLALCHEMY_ENGINE_OPTIONS = {'pool_recycle': SQLALCHEMY_POOL_RECYCLE, 'pool_timeout': SQLALCHEMY_POOL_TIMEOUT, 'pool_pre_ping': SQLALCHEMY_PRE_PING}
    DEBUG = True
    # SERVER_NAME = '127.0.0.1:5000'
    SQLALCHEMY_DATABASE_URI = os.getenv('SQLALCHEMY_DATABASE_URI_DEV')
    SQLALCHEMY_TRACK_MODIFICATIONS = False

class SQLAlchemy(_BaseSQLAlchemy):
    def apply_pool_defaults(self, app, options):
        super(SQLAlchemy, self).apply_pool_defaults(app, options)
        options["pool_pre_ping"] = DevConfig.SQLALCHEMY_PRE_PING
#        options["pool_recycle"] = 30
#        options["pool_timeout"] = 35

db = SQLAlchemy()

config = dict(
    dev=DevConfig,
)

app = Flask(__name__, instance_relative_config=True)
app.config.from_object(config['dev'])

# INIT DATABASE
db.init_app(app)
with app.app_context():
    db.create_all()
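
If you want to be sure at startup that the coordinated values actually reached the engine, a small sanity check along these lines can help. Only a sketch: _recycle and _pre_ping are SQLAlchemy pool internals and may change between versions.

# Optional sanity check; pool attribute names are SQLAlchemy internals.
with app.app_context():
    pool = db.engine.pool
    assert getattr(pool, "_pre_ping", None) is True
    assert getattr(pool, "_recycle", None) == DevConfig.SQLALCHEMY_POOL_RECYCLE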

If you like, you can check the diff for the changes I made: diffchecker.com/Q1e85Hhc

If you want to learn more about software design principles, take a look at the references on the Wikipedia page linked above.

Radiotelephony answered 28/7, 2021 at 22:32 Comment(1)
Thanks for the detailed answer. I haven't tried this, since it was an old project. If anyone can confirm it works, I'm happy to accept it as the answer. – Injustice
