How to increase the max connections in postgres?

I am using a Postgres database for my product. While doing batch inserts using Slick 3, I get this error message:

org.postgresql.util.PSQLException: FATAL: sorry, too many clients already.

My batch insert operation involves thousands of records. The max connections for my Postgres server is 100.

How do I increase the max connections?

Shenika asked 11/6, 2015 at 10:17 Comment(5)
Use a connection pool like PgBouncer or pgpool. – Icicle
1. First check whether your middleware is keeping too many connections open, or is leaking them. 2. (Maybe) next, use a connection pooler. 3. (Almost) never increase the number of allowed connections for this type of problem; in most cases this will make things worse (unless you are absolutely sure). – Vikiviking
See also https://mcmap.net/q/120429/-how-to-change-max_connections-for-postgres-through-sql-command for an option using SQL commands instead of configuration. – Dorrie
@FrankHeikens What is the preferred connection pooler for Postgres on Windows Server? – Atomism

Just increasing max_connections is a bad idea. You need to increase shared_buffers and kernel.shmmax as well.


Considerations

max_connections determines the maximum number of concurrent connections to the database server. The default is typically 100 connections.
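
Before touching anything, it can help to see how close you actually are to the limit. A quick check using standard catalog views (no extensions assumed):

SHOW max_connections;
SELECT count(*) AS open_connections FROM pg_stat_activity;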

Before increasing your connection count, you might need to scale up your deployment. But first, consider whether you really need an increased connection limit.

Each PostgreSQL connection consumes RAM to manage the connection and the client using it. The more connections you have, the more RAM is used that could otherwise go to running the database.

A well-written app typically doesn't need a large number of connections. If your app does need a large number of connections, consider a tool such as PgBouncer, which can pool connections for you. Since each connection consumes RAM, you should look to minimize their use.
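
As an illustration only, a minimal pgbouncer.ini sketch might look like the following; the database name, addresses, and pool sizes are placeholders to adapt to your own setup:

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20

Your application then connects to port 6432 instead of 5432, and PgBouncer multiplexes those clients over a small pool of real server connections.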


How to increase max connections

1. Increase max_connections and shared_buffers

in /var/lib/pgsql/{version_number}/data/postgresql.conf

change

max_connections = 100
shared_buffers = 24MB

to

max_connections = 300
shared_buffers = 80MB

The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL to use for caching data.

  • If you have a system with 1 GB or more of RAM, a reasonable starting value for shared_buffers is 1/4 of the memory in your system.
  • It's unlikely that using more than 40% of RAM will work better than a smaller amount (like 25%).
  • Be aware that if your system or PostgreSQL build is 32-bit, it might not be practical to set shared_buffers above 2 to 2.5 GB.
  • Note that on Windows, large values for shared_buffers aren't as effective; you may get better results keeping it relatively low and using the OS cache more instead. On Windows the useful range is 64 MB to 512 MB.
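
Note that both max_connections and shared_buffers only take effect after a full server restart; a plain configuration reload is not enough. After restarting (for example sudo systemctl restart postgresql, with the service name varying by distribution), you can confirm the new values:

SHOW max_connections;
SHOW shared_buffers;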

2. Change kernel.shmmax

You need to increase the kernel's maximum shared memory segment size so that it is slightly larger than shared_buffers.

In the file /etc/sysctl.conf, set the parameter as shown below. It takes effect at the next reboot, or when sysctl settings are reloaded (the following line sets the kernel max segment size to 96 MB):

kernel.shmmax=100663296
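
To apply the new value without a reboot, and to check what is currently in effect (as a commenter notes below, many modern kernels already default to a very large shmmax):

sudo sysctl -p                 # reload settings from /etc/sysctl.conf
cat /proc/sys/kernel/shmmax    # verify the active value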

References

Postgres Max Connections And Shared Buffers

Tuning Your PostgreSQL Server

Trapezohedron answered 15/9, 2015 at 10:54 Comment(10)
Great answer. Couple of questions: how is 100663296 equal to 96 MB? And why do we change shmmax; what does that do? – Erelong
100663296 bytes = 96 MB (in binary). shmmax is the maximum size of a shared memory segment. Because we increased the size of the shared buffers, we need to change shmmax to accommodate the increased memory for caching. – Trapezohedron
@Erelong 96 * 1024 * 1024 = 100663296 – Thanks
You may use PGTune to help you determine these settings for your system. – Freer
I found this online calculator useful for converting units, for example 96 MB to kB. – Sashenka
Check your current kernel.shmmax setting (cat /proc/sys/kernel/shmmax) prior to changing it. On modern systems it's already set ridiculously high and should not be changed. Mine is set to 18446744073692774399 by default on Ubuntu 18.04. – Alluvial
Is it fair to assume that pg_bouncer alone will NOT solve the idle connection problem? (Postgres v13) – Coolidge
I ran ipcs -l and it says max total shared memory (kbytes) = 18014398442373116. Should I worry? – Grandaunt
From the linked article: "Generally, PostgreSQL on good hardware can support a few hundred connections". Your statement "A well-written app typically doesn't need a large number of connections" applies only to low-traffic apps. If each session needs 0.05 s on the database, you can serve at most 2000 clients per second. Even pgbouncer (you should definitely use it!) can't get you around this. If you need to serve 5000 clients/sec, you will need 250 connections, and so on. Otherwise you will see "query_wait_timeout" from pgbouncer. If you have >= 16 GB RAM and enough CPU cores (8+), that is absolutely no problem. – Alage
Is there any rule of thumb for how max_connections and shared_buffers correlate? – Mononucleosis

Adding to Trapezohedron's great answer:

If you are not able to find the postgresql.conf file in your setup, you can always ask Postgres itself:

SHOW config_file;

For me, changing max_connections alone did the trick.

EDIT: From @gies0r: In Ubuntu 18.04 it is at

/etc/postgresql/11/main/postgresql.conf
Talented answered 29/11, 2018 at 9:36 Comment(3)
Thank you. In Ubuntu 18.04 it is /etc/postgresql/11/main/postgresql.conf – Lita
You can also query pg_settings: SELECT name, setting FROM pg_settings; – Anticlockwise
In Windows it is C:\Program Files\PostgreSQL\12\data\postgresql.conf – Jecho

If your Postgres instance is hosted on Amazon RDS, Amazon configures max connections for you based on the amount of memory available.

Their documentation says you get 112 connections per 1 GB of memory (capped at 5000 connections regardless of memory), but we started getting error messages closer to 80 connections on an instance with only 1 GB of memory. Increasing to 2 GB let us use 110 connections without a problem (probably more, but that's the most we've tried so far). We were able to increase the memory of an existing instance from 1 GB to 2 GB in just a few minutes.

Here's the link to the relevant Amazon documentation:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html#RDS_Limits.MaxConnections
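
For reference, the default RDS parameter group for PostgreSQL derives the limit from instance memory, which is where the roughly 112-per-GB figure comes from:

max_connections = LEAST({DBInstanceClassMemory/9531392}, 5000)
-- e.g. for 1 GiB: 1073741824 / 9531392 ≈ 112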

Alb answered 3/12, 2021 at 0:31 Comment(0)
1. Locate the postgresql.conf file with the command below:

locate postgresql.conf

2. Edit the postgresql.conf file with the command below:

sudo nano /etc/postgresql/14/main/postgresql.conf

3. Change

max_connections = 100
shared_buffers = 24MB

to

max_connections = 300
shared_buffers = 80MB
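
These settings only take effect once the server restarts; on Ubuntu with PostgreSQL 14 that would typically be:

sudo systemctl restart postgresql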
Whirlabout answered 27/11, 2022 at 10:52 Comment(0)

Change the max_connections variable in the postgresql.conf file, located in /var/lib/pgsql/data or /usr/local/pgsql/data/.

Meehan answered 11/6, 2015 at 10:46 Comment(1)
Why pg_hba.conf? It doesn't have any parameter for the number of connections, only how to connect; postgresql.conf on the other hand... But hundreds of connections is a bad idea: don't do it, use a connection pool or suffer from performance issues. PostgreSQL 8.3 has been EOL for many years, please use an up-to-date version. – Icicle

For Mac M1 users

/opt/homebrew/var/postgresql@14/postgresql.conf
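
After editing, a Homebrew install is typically restarted with:

brew services restart postgresql@14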

Remittent answered 23/6, 2023 at 16:7 Comment(0)

For Postgres 13 running in Docker/Kubernetes, the path is /var/lib/postgresql/data/postgresql.conf. Update max_connections = 300 and restart the container/pod.
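
With the official postgres image you can also pass settings as server command-line flags instead of editing the file, which survives container recreation. A sketch (image tag and password are placeholders):

docker run -d -e POSTGRES_PASSWORD=secret postgres:13 -c max_connections=300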

Nowlin answered 24/5, 2023 at 11:39 Comment(1)
Already answered. – Playoff
