Heroku "psql: FATAL: remaining connection slots are reserved for non-replication superuser connections"

I'm developing an app on Heroku with a PostgreSQL backend. Periodically, I get this error message when trying to access the database, both from the CLI and when loading a page on the server:

psql: FATAL: remaining connection slots are reserved for non-replication superuser connections

Has anyone seen this before, or can you point me in the right direction?

Saponaceous answered 7/8, 2012 at 13:31 Comment(7)
Having the same problem. I read somewhere that Heroku support had "detected some issues" on some servers, and they recommended the user in question to provision a new Basic database and migrate to it using pgbackups. My problem now is that the app is so new that I didn't yet create a backup, and of course I get the same error when trying to create one :DTamelatameless
For the record, I was able to use heroku pgbackups command to create a backup despite this error.Shoddy
To reproduce this issue you can open a large number of terminals. A .bat script on Windows for this: for /l %%x in (1, 1, 100) do ( start psql ), where 100 is the desired number of backends.Fugleman
I had the same problem. I wasn't able to transfer the data to a new database using pg:backups copy, pg:backups capture, by connecting to it from pgAdmin on my own computer, or any other way I could imagine. Even pg:killall didn't help. An hour later, I tried again and connections were 50-50, so after a few attempts I had a successful pg:backups copy and my app is back in the air. This was... not a fun day. If you googled this, drink a cup of water.Alaster
I'm now having this again. How lucky to have documented my previous experience... Edit: this time restarting was enough.Alaster
I was getting that error while updating GitLab. Restarting GitLab and retrying the installation solved the issue.Transmutation
I replicated the error by creating at least 100 connection requests to Postgres. I will try to increase the max connections limit.Remus

You either need to increase the max_connections configuration setting or (probably better) use connection pooling to route a large number of user requests through a smaller connection pool.

https://wiki.postgresql.org/wiki/Number_Of_Database_Connections
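For illustration, here is a minimal, hedged sketch of application-side pooling using psycopg2's ThreadedConnectionPool; the DSN, pool sizes, and query are placeholders rather than Heroku-specific values:

from psycopg2.pool import ThreadedConnectionPool

# Small application-side pool; keep maxconn well below the server's
# max_connections so other clients and superusers still have slots.
pool = ThreadedConnectionPool(
    minconn=1,
    maxconn=5,
    dsn="postgresql://user:password@host:5432/dbname",  # placeholder DSN
)

def fetch_one(query, params=None):
    conn = pool.getconn()  # borrow a connection from the pool
    try:
        with conn.cursor() as cur:
            cur.execute(query, params)
            return cur.fetchone()
    finally:
        pool.putconn(conn)  # always return it, even if the query fails

print(fetch_one("SELECT 1"))

Routing every request through a pool like this keeps the total number of backends bounded regardless of how many worker threads the app spawns.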

Marinara answered 7/8, 2012 at 14:58 Comment(4)
Also, a common problem is the client-side app crashing and leaving connections open, then opening new ones when it restarts. If this kind of thing happens a lot, you'll run out of connections. Or the app is simply configured improperly and opens too many connections.Hufford
I don't think I can change configuration settings on Heroku. Is there a way to close all open connections?Saponaceous
Hopefully the server is configured for keepalive packets on a fairly aggressive basis. This will cause sessions to terminate in a reasonable time if connections are abruptly broken. If that's not configured, and you can log in as a database superuser, you can identify the pid values for the sessions and use the pg_terminate_backend() function to kill them (see the sketch after these comments). To avoid the problem, be sure to close the connections properly rather than abruptly killing off the client side.Marinara
@Saponaceous Terminate all connections to the database: heroku pg:killallEth
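As a rough illustration of the pg_terminate_backend() approach mentioned in the comment above, here is a hedged Python sketch that terminates long-idle sessions. It assumes psycopg2 and superuser (or pg_signal_backend) privileges; the DSN and the idle threshold are placeholders:

import psycopg2

conn = psycopg2.connect("postgresql://admin:password@host:5432/dbname")  # placeholder DSN
conn.autocommit = True
with conn.cursor() as cur:
    # Find backends that have been idle for more than 5 minutes,
    # excluding the current session.
    cur.execute("""
        SELECT pid FROM pg_stat_activity
        WHERE state = 'idle'
          AND state_change < now() - interval '5 minutes'
          AND pid <> pg_backend_pid()
    """)
    for (pid,) in cur.fetchall():
        cur.execute("SELECT pg_terminate_backend(%s)", (pid,))  # kill the idle session
conn.close()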

This exception happened when I forgot to close the connections.
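For what it's worth, a minimal sketch of the fix in Python with psycopg2 (the DSN and query are placeholders): wrap the connection so it is always closed, even when a query raises.

import psycopg2
from contextlib import closing

# closing() guarantees conn.close() runs even if the query raises; note that
# psycopg2's own "with conn:" only ends the transaction, it does not close
# the connection.
with closing(psycopg2.connect("postgresql://user:password@host:5432/dbname")) as conn:  # placeholder DSN
    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM pg_stat_activity")
        print(cur.fetchone())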

Papule answered 11/4, 2013 at 14:17 Comment(0)

See Heroku “psql: FATAL: remaining connection slots are reserved for non-replication superuser connections”:

Heroku sometimes has a problem with database load balancing.

André Laszlo, markshiz and I all reported dealing with that in comments on the question.

To save you the support call, here's the response I got from Heroku Support for a similar issue:

Hello,

One of the limitations of the hobby tier databases is unannounced maintenance. Many hobby databases run on a single shared server, and we will occasionally need to restart that server for hardware maintenance purposes, or migrate databases to another server for load balancing. When that happens, you'll see an error in your logs or have problems connecting. If the server is restarting, it might take 15 minutes or more for the database to come back online.

Most apps that maintain a connection pool (like ActiveRecord in Rails) can just open a new connection to the database. However, in some cases an app won't be able to reconnect. If that happens, you can heroku restart your app to bring it back online.

This is one of the reasons we recommend against running hobby databases for critical production applications. Standard and Premium databases include notifications for downtime events, and are much more performant and stable in general. You can use pg:copy to migrate to a standard or premium plan.

If this continues, you can try provisioning a new database (on a different server) with heroku addons:add, then use pg:copy to move the data. Keep in mind that hobby tier rules apply to the $9 basic plan as well as the free database.

Thanks, Bradley

Alaster answered 12/8, 2015 at 16:30 Comment(3)
I wonder what the canned response is when you're on a $50/month database plan?Guay
Only the hobby tiers use shared database servers. With the $50/month plan you have your own server, so if you are running into this issue, it's your own app creating it. You have more administrative options with the $50/month plan, so it's easier to diagnose and fix.Dahlgren
Can you explain what administrative options you get with the standard plan? I'm also getting this message with only 200 connections.Lucrece

I actually tried to implement connection pooling on the Django end using:

https://github.com/gmcguire/django-db-pool

but I still received this error, despite lowering the number of connections available to below the standard development DB quota of 20 open connections.
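For comparison, here is a hedged sketch of what the equivalent looks like with Django's built-in persistent connections (CONN_MAX_AGE) rather than django-db-pool; the dj-database-url package and the values shown are assumptions, not part of the original answer:

# settings.py
import dj_database_url  # assumes the dj-database-url package is installed

DATABASES = {
    "default": {
        **dj_database_url.config(),  # reads DATABASE_URL, as Heroku sets it
        "CONN_MAX_AGE": 60,          # keep connections open and reuse them
    }
}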

There is an article here about how to move your PostgreSQL database to the free/cheap tier of Amazon RDS. This would allow you to set max_connections higher, and also to pool connections at the database level using PgBouncer.

https://www.lewagon.com/blog/how-to-migrate-heroku-postgres-database-to-amazon-rds

UPDATE:

Heroku responded to my open ticket and stated that my database was improperly load balanced in their network. They said that improvements to their system should prevent similar problems in the future. Nonetheless, support manually relocated my database and performance is noticeably improved.

Shoddy answered 17/1, 2013 at 5:22 Comment(1)
I think moving the database is the best solution to have full control over it. Thanks for that article.Saponaceous

In my case I had a lot of idle connections, so I had to reuse idle connections instead of creating new ones.
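One possible reading of "reuse before create", sketched in Python with psycopg2; this is an assumption about the approach, not the answerer's actual code, and the DSN is a placeholder:

import psycopg2

_conn = None

def get_conn():
    """Return the shared connection, reconnecting only if it was closed."""
    global _conn
    if _conn is None or _conn.closed:
        _conn = psycopg2.connect("postgresql://user:password@host:5432/dbname")  # placeholder DSN
    return _conn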

Nonresistant answered 29/9, 2021 at 11:43 Comment(1)
how did you do this?Deciare

The error message means that the app has used up all available connections.

While using Postgres on AWS with Knex and TypeScript to run some query and update jobs, the problem popped up after 390 database operations, because a mistake prevented the normal knex.destroy() call from running. The error message is:

(node:66236) UnhandledPromiseRejectionWarning: error: remaining connection slots are reserved for non-replication superuser connections

Once knex.destroy() was moved to the right place, the error was gone.
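The same pattern, sketched in Python rather than Knex (psycopg2 pool, placeholder DSN, illustrative batch size): tear the pool down in a finally block so the cleanup runs even when one operation fails.

from psycopg2.pool import SimpleConnectionPool

pool = SimpleConnectionPool(1, 5, dsn="postgresql://user:password@host:5432/dbname")  # placeholder DSN
try:
    for i in range(390):  # the batch of query/update jobs
        conn = pool.getconn()
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT %s", (i,))
            conn.commit()
        finally:
            pool.putconn(conn)
finally:
    pool.closeall()  # the equivalent of knex.destroy(): always runs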

Digitoxin answered 16/2, 2022 at 4:38 Comment(0)
There seem to be no questions about this issue in the context of the .NET world.

For me, this was caused by an async void method passed to an Action delegate (Action<TMessage> action), with multiple threads trying to call this Persist method at the same time.

private async void Persist(WhateverData data)
{
    await _repository.InsertAsync(data);
}

The solution is to remove async and await and just call it synchronously:

private void Persist(WhateverData data)
{
    _repository.InsertAsync(data).WaitAndUnwrapException();
}
Solidus answered 19/10, 2022 at 14:2 Comment(5)
is async void ever correct? async Task might work with the aboveDeciare
hi @TimRutter check this, https://mcmap.net/q/136608/-should-i-avoid-39-async-void-39-event-handlersSolidus
Is the above an event handler? It doesn't look like one. Don't the calling threads need to await that function? And if they did, then you might not have the problem you describe?Deciare
Not saying I know, just asking the questions :) I'm trying to understand whether the async thing really is the problem (I have issues with too many db connections) or whether it's just a red herring.Deciare
hi @TimRutter when I revisit this issue, I start to suspect my conclusion was too hasty; I will probably try to revert to my previous code and meanwhile add a semaphore to limit the number of connections.Solidus

Set the maximum active pool size for tomcat-jdbc by setting this property in your .properties or .yml file:

spring.datasource.maxActive=5

or

spring.datasource.max-active=5

as below:

spring:
  datasource:
    driverClassName:
    password:
    url:
    username:
    max-active: 5

Dehumidifier answered 5/12, 2022 at 5:38 Comment(0)

To reproduce the same issue on Linux:

for i in {1..300}; do
     PGPASSWORD=MY_PASSWORD gnome-terminal -e  $'/usr/bin/psql -h \'127.0.0.1\' -p 5432 -U MY_USERNAME' 
done

In a .NET client the corresponding error reads:

  System.InvalidOperationException: An exception has been raised that is likely due to a transient failure.
   ---> Npgsql.PostgresException (0x80004005): 53300: sorry, too many clients already
Evanevander answered 13/5, 2021 at 15:52 Comment(2)
and how do you solve it?Deciare
this is not a solutionHaggadist
