pg_dump on Database throwing error 'out of shared memory'

I am having a problem taking a backup of a database containing around 50 schemas, each with around 100 tables.

pg_dump throws the error below, suggesting that I increase max_locks_per_transaction:

pg_dump: WARNING:  out of shared memory
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
pg_dump: The command was: SELECT tableoid, oid, prsname, prsnamespace, prsstart::oid, prstoken::oid, prsend::oid, prsheadline::oid, prslextype::oid FROM pg_ts_parser

Updating max_locks_per_transaction to 256 in postgresql.conf did not solve the problem.

What could be causing this problem?

Edited (07 May, 2016):

PostgreSQL version = 9.1

Operating system = Ubuntu 14.04.2 LTS

shared_buffers in postgresql.conf = 2GB

Edited (09 May, 2016):

My postgresql.conf:

maintenance_work_mem = 640MB
wal_buffers = 64MB
shared_buffers = 2GB
max_connections = 100
max_locks_per_transaction=10000
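
For reference, a quick way to confirm what the running server is actually using; note that a change to max_locks_per_transaction only takes effect after a server restart (<db_name> is a placeholder here):

    psql <db_name> -c 'SHOW max_locks_per_transaction;'
    psql <db_name> -c 'SHOW max_connections;'
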
Spoilfive answered 6/5, 2016 at 13:3 Comment(0)

I solved this problem by backing up each schema individually; as the size of a database increases (be it in the number of schemas or the number of tables), it becomes hard to take a backup with a single pg_dump run.

I made the following modifications to the script to take schema-wise backups (a combined sketch follows the steps below):

  1. Before running pg_dump, list all database schemas into a file, so that we can iterate over the schemas and back up each one.

    Here is the command to list all schemas to a file:

    psql <db_name> -o <output_file> < <sql_to_list_schema>

    Here sql_to_list_schema contains:

    SELECT n.nspname FROM pg_catalog.pg_namespace n WHERE n.nspname !~ '^pg_' AND n.nspname <> 'information_schema';

  2. Now read each line of output_file and take a backup of that schema:

    pg_dump <db_name> -f <backup_file> -i -x -O -R -n <schema_name_read_from_file>
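
Putting both steps together, here is a minimal bash sketch of such a wrapper; the names (<db_name>, schemas.txt, the backup_<schema>.sql naming) are placeholders for illustration, so adapt them to your setup:

    # Step 1: list all non-system schemas, one per line
    psql <db_name> --no-align --tuples-only \
         -c "SELECT n.nspname FROM pg_catalog.pg_namespace n WHERE n.nspname !~ '^pg_' AND n.nspname <> 'information_schema';" \
         -o schemas.txt

    # Step 2: dump each schema into its own file
    while read -r schema; do
        pg_dump <db_name> -x -O -n "$schema" -f "backup_${schema}.sql"
    done < schemas.txt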

Spoilfive answered 7/5, 2016 at 13:30 Comment(2)
The link you provided for the script doesn't work anymore...Gantz
You might want to use some extra params with the psql command; this makes it easier to read the names into a variable, e.g. --no-align --tuples-only --record-separator=" ".Postilion

You might need to increase max_locks_per_transaction even further. Check the documentation for details on what this parameter means. If in doubt, check how many tables you have and set max_locks_per_transaction to at least that much, then you should be OK.
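
As a rough sketch of that sizing (<db_name> is a placeholder): count the tables pg_dump will need to lock, then set the parameter at least that high and restart the server. The shared lock table holds about max_locks_per_transaction * (max_connections + max_prepared_transactions) entries, and pg_dump takes a lock on every table it dumps.

    # Count the ordinary tables pg_dump will need to lock
    psql <db_name> -c "SELECT count(*) FROM pg_class WHERE relkind = 'r';"

    # postgresql.conf -- setting it to at least the table count is a safe choice;
    # the server must be restarted for the change to take effect
    max_locks_per_transaction = 10000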

Trichomonad answered 6/5, 2016 at 18:54 Comment(3)
Details? How many tables do you have, what are your configuration settings?Trichomonad
@PeterEisentraut, thanks for the tip. Initially not sure in which file that property has to be modified. Later got help from friends where to modify. ../data/postgresql.confWestphal
@Rao: And? Did it help?Aporia

Try this

shared_buffers = 512GB
max_locks_per_transaction=10000

Please note that every environment is different.

Floppy answered 15/6, 2020 at 9:3 Comment(1)
I get this error: HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 563431931904 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.Seismic
