Very slow: ActiveRecord::QueryCache#call

I have an app on heroku, running on Puma:

workers 2
threads_count 3
pool 5
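
For context, the first two settings typically live in config/puma.rb, while "pool" maps to the pool: setting in config/database.yml. Below is a minimal sketch of how they are usually wired together on Heroku; the env-var names and file layout are assumptions, not taken from the original app:

# config/puma.rb -- a minimal sketch, assuming the usual Heroku setup
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))          # "workers 2"
max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 3))  # "threads_count 3"
threads max_threads, max_threads

preload_app!

on_worker_boot do
  # Each forked Puma worker needs its own ActiveRecord connections.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end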

It looks like some requests get stuck in the middleware, which makes the app very slow (VERY!). I have seen other people's threads about this problem, but no solution so far.

Please let me know if you have any hint.

[Screenshot: aactiverecord_querycache_1]

[Screenshot: aactiverecord_querycache_2]

Globetrotter answered 18/2, 2016 at 21:11 Comment(2)
Response to myself: just check the slowest database queries; even if they are rarely requested, their impact on the rest of the queries is huge. – Globetrotter
What cache store are you using? The first thing to debug would be why the store is taking so long to respond, or (if the cache store is file-based) whether the data in the cache store is taking too long to deserialize. – Vansickle

I will answer my own question: I simply had to check all the queries to my DB. One of them was taking a VERY long time, and even though it was not requested often, it would slow down the whole server for quite some time afterwards (even after the process was done, there was a sort of "traffic jam" on the server). Solution: check all the queries to your database and fix the slowest ones (that might simply mean breaking a query down into a few steps, or running it at night when there is no traffic, etc.). Once these queries are fixed, everything should go back to normal.
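
For illustration (not something from the original answer), one way to surface the slowest queries in a Rails app is to subscribe to ActiveRecord's sql.active_record notifications and log anything over a chosen threshold; the initializer name and the 500 ms cutoff below are assumptions:

# config/initializers/slow_query_logger.rb (hypothetical file name)
# Warn about any SQL statement slower than the threshold below.
SLOW_QUERY_THRESHOLD_MS = 500 # assumption; tune for your app

ActiveSupport::Notifications.subscribe("sql.active_record") do |_name, start, finish, _id, payload|
  duration_ms = (finish - start) * 1000.0
  next if payload[:name] == "SCHEMA"

  if duration_ms > SLOW_QUERY_THRESHOLD_MS
    Rails.logger.warn("[SLOW SQL] #{duration_ms.round(1)}ms: #{payload[:sql]}")
  end
end

On the database side, Postgres's pg_stat_statements extension (if it is enabled) reports the same kind of per-query timing.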

Globetrotter answered 7/12, 2017 at 10:57 Comment(0)

I work for Heroku support, and Middleware/Rack/ActiveRecord::QueryCache#call is commonly reported as a problem by New Relic. Unfortunately, it's usually a red herring, as the source of the problem invariably lies elsewhere.

QueryCache is where Rails first tries to check out a connection for use, so any problem with a connection will show up here as a request getting 'stuck' waiting. This doesn't necessarily mean the database server is out of connections (if you have Librato charts for Postgres, they will show this). More likely, something is causing certain database connections to enter a bad state, and new requests for a connection are left waiting. This can occur in older versions of Puma where multiple threads are used and reaping_frequency is set: if some connections get into a bad state and the others are reaped, this causes problems.
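
One way to confirm that requests really are queueing for a connection (rather than the database itself being slow) is to inspect the pool directly; a minimal sketch, assuming Rails 5.1+ where ConnectionPool#stat is available (not part of the original answer):

# From a Rails console attached to the running app.
pool = ActiveRecord::Base.connection_pool
puts pool.stat.inspect
# => e.g. {:size=>5, :connections=>5, :busy=>5, :dead=>0, :idle=>0, :waiting=>4, :checkout_timeout=>5}
# A persistently non-zero :waiting value means threads are blocked in
# ActiveRecord::QueryCache#call waiting for a connection to be checked out.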

Some high-level suggestions are as follows:

  • Upgrade Ruby & Puma
  • If using the rack-timeout gem, upgrade that too

These upgrades often help. If not, there are other options to look into, such as switching from thread-based to worker-based processes or putting a Postgres connection pooler such as PgBouncer in front of the database. We have more suggestions on configuring concurrent web servers for use with Postgres here: https://devcenter.heroku.com/articles/concurrency-and-database-connections
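
For reference, the usual pattern there is to size the ActiveRecord pool to match the number of threads each web process runs; a sketch of a typical production entry in config/database.yml (illustrative, not the poster's actual configuration):

# config/database.yml (production) -- illustrative sketch
production:
  adapter: postgresql
  # On Heroku the connection details come from DATABASE_URL;
  # the pool should be at least as large as Puma's per-worker thread count.
  pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5) %>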

Rangy answered 24/5, 2017 at 12:41 Comment(0)

I recently started seeing a spike in time spent in ActiveRecord::QueryCache#call. After looking at the source, I decided to try clearing said cache using ActiveRecord::Base.connection.clear_query_cache from a Rails console attached to the production environment. The error I got back was PG::ConnectionBad: could not fork new process for connection: Cannot allocate memory, which led me to this other SO question: Heroku Rails could not fork new process for connection: Cannot allocate memory
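
For anyone who wants to reproduce that check, the console sequence was roughly the following (a sketch; heroku run rails console is the usual way to attach a console on Heroku):

# $ heroku run rails console   (attach a console to the production app)
ActiveRecord::Base.connection.clear_query_cache
# In the case above this raised:
#   PG::ConnectionBad: could not fork new process for connection: Cannot allocate memory
# which suggests memory pressure on the dyno rather than a problem with the query cache itself.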

Tuberculin answered 25/5, 2016 at 21:38 Comment(2)
How does that answer the question? – Rogovy
Having run into the same experience as the OP and having done more research, I shared what I found, rather than leaving the question completely unanswered, in the hope that it might assist someone. – Tuberculin
