pg_dump postgres database from remote server when port 5432 is blocked
I'm trying to pg_dump a PostgreSQL database on a remote server in our DMZ. There are two problems.

  1. There is not a lot of space left on the remote server, so the normal command to back up the database locally, pg_dump -C database > sqldatabase.sql.bak, won't work due to space issues.

  2. I also can't run the other version of the pg_dump command, dumping the database from the remote server to the local server, using:

    pg_dump -C -h remotehost -U remoteuser db_name | psql localhost -U localuser db_name

as the server is in our DMZ and port 5432 is blocked. What I'm looking to see is whether it is possible to pg_dump the database and immediately save it (over ssh or some other channel) as a file on another server. What I was trying was: pg_dump -C testdb | ssh [email protected] | > /home/admin/testdb.sql.bak

Does anyone know if what I am trying to achieve is possible?

Butterflies answered 15/4, 2015 at 10:52 Comment(1)
[untested] (1) Set up a netcat listener on your local machine (using a port number that is not in use or blocked), e.g. netcat -l -p 1234 > thefile & . (2) From the remote machine, do: pg_dump ... | netcat your.local.ip 1234 – Surrealism
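Cleaned up, that netcat idea looks like the following two steps (a sketch, untested here; port 1234 and your.local.ip are placeholders, and the traditional netcat flags are assumed; some variants spell the listener as netcat -l 1234):

```shell
# Step 1, on the LOCAL machine: listen on an unused, unblocked port
# and write everything that arrives to a file.
netcat -l -p 1234 > testdb.sql.bak &

# Step 2, on the REMOTE machine: stream the dump straight over the
# network, so nothing is ever written to the remote server's disk.
pg_dump -C testdb | netcat your.local.ip 1234
```

Unlike ssh, this sends the dump unencrypted, so it is only reasonable inside a trusted network.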
You can connect to your remote server with ssh, run the pg_dump call over that connection, and have its output sent back to stdout on your local machine.

ssh user@remote_machine "pg_dump -U dbuser -h localhost -C --column-inserts" \
 > backup_file_on_your_local_machine.sql
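As the comments note, you will usually want to name the database, and for larger databases it helps to compress in transit. A sketch with hypothetical host, user, and database names:

```shell
# Dump one named database over ssh, compressing on the remote side
# before it crosses the wire; nothing is written to the remote disk.
ssh admin@dmz-server "pg_dump -U dbuser -h localhost -C testdb | gzip -c" \
 > testdb.sql.gz

# Inspect the first lines of the dump locally:
gunzip -c testdb.sql.gz | head
```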
Dumdum answered 15/4, 2015 at 11:48 Comment(7)
This is much simpler than my netcat hack.Surrealism
Thank you so much, this sorted out my problem and did the job; the \ was all I was missing.Butterflies
One little note: I'd replace >> with > to avoid appending if the file is not empty.Behlau
don't we need to specify the database name?Pentimento
@AsnadAtta you can do that, important is the idea to execute the command over ssh and get the result over stdout. Basically all arguments of pg_dump can be used.Dumdum
while trying this command from windows i had to remove \ to get it correctTransplant
Would that be the same user and password as for directly contacting the DB at 5432?Pelorus
Let's create a backup from a remote PostgreSQL database using pg_dump:

pg_dump -h [host address] -Fc -O -U [database user] [database name] > [dump file]

Later it can be restored on the same remote server using:

sudo -u postgres pg_restore -C -d postgres mydb_backup.dump

Example:

pg_dump -h 67.8.78.10 -Fc -O -U myuser mydb > mydb_backup.dump

Complete backup (all databases and objects):

pg_dumpall -U myuser -h 67.8.78.10 --clean --file=mydb_backup.dump

restore from pg_dumpall --clean:

psql -f mydb_backup.dump postgres  # it doesn't matter which db you connect to here

Copied from: https://codepad.co/snippet/73eKCuLx

Milksop answered 12/12, 2018 at 11:13 Comment(6)
This doesn't answer the question. The limitation is that you can't connect to the remote server through 5432.Sandisandidge
I get pg_dump: invalid option -- o on macOS.Sallust
The Option is "O" in capital :). This is what documentation says about this option "Do not output commands to set ownership of objects to match the original database. By default, pg_dump issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify -O."Rhizogenic
Can you please explain all of the parameters?Alluvial
This method works for local machines that are on the same local network as the server machine.Aprylapse
-F, --format=c|d|t|p output file format (custom, directory, tar, plain text (default)) and -O, --no-owner skip restoration of object ownership in plain-text formatFlameout
You can try to dump part of a table to a file on your local machine like this (assuming your local machine has psql installed):

psql -h ${db_host} -p 5432 -U ${db_user} -d ${db_name} \
-c "\copy (SELECT * FROM my_table LIMIT 10000) to 'some_local_file.csv' csv;"

And you can import the exported CSV into another database later like this (run it where the file is accessible to that database server; from a client machine, use psql's \copy my_table FROM ... instead, since server-side COPY reads from the server's filesystem):

COPY my_table FROM '/path/to/some_local_file.csv' WITH (FORMAT csv);
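If you want more than one slice, the same \copy call can be repeated in chunks (a sketch using the hypothetical connection variables above; an ORDER BY on a unique key column, here assumed to be id, keeps the chunks from overlapping):

```shell
# Export a table in 10k-row chunks; each chunk appends to one CSV.
for offset in 0 10000 20000; do
  psql -h ${db_host} -p 5432 -U ${db_user} -d ${db_name} -c \
    "\copy (SELECT * FROM my_table ORDER BY id LIMIT 10000 OFFSET ${offset}) to stdout csv;" \
    >> all_rows.csv
done
```

Note that OFFSET pagination rescans skipped rows and gets slow on big tables; keyset pagination (WHERE id > last_seen_id) scales better.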
Hangover answered 23/1, 2018 at 7:31 Comment(0)
One possible solution - pipe through ssh - has been mentioned.

You could also make your DB server listen on its public inet address, add a hostssl entry for your backup machine to pg_hba.conf, possibly configure a client certificate for security, and then simply run the dump on the client/backup machine with pg_dump -h dbserver.example.com ...

This is simpler for unattended backups.

For the configuration of the connection (sslmode) see also the supported environment variables.
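A minimal sketch of that setup (the client address, user name, and host name are hypothetical; the exact pg_hba.conf fields and auth methods are described in the PostgreSQL documentation):

```shell
# On the DB server, allow the backup machine in pg_hba.conf over TLS
# only, authenticated by its client certificate:
#   hostssl  all  backupuser  203.0.113.5/32  cert
# then reload the server configuration.

# On the backup machine, dump directly, requiring a verified TLS link:
PGSSLMODE=verify-full pg_dump -h dbserver.example.com -U backupuser -C db_name \
 > backup.sql
```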

Anemology answered 15/4, 2015 at 12:18 Comment(2)
Would be helpful to show the entire command here, since I can't understand how you can also restore to the target machine this way (of course you can first dump and then restore but that doesn't fit the question)Perchloride
Sorry, I have no simple command example; my setup is quite evolved, with configuration files for ssh (for the ssh-pipe alternative) and for psql, with service names for connecting. Is the command the problem? You just have to add the host name to the command line. I would rather think the problem/task is the setup of TLS on the server. Once you have done that, it is just plain command-line psql and pg_dump with a hostname different from localhost.Anemology
If you would like to periodically back up a PostgreSQL database that runs inside a container on the remote server to your local host using pg_dump over ssh, this may be useful for you:

https://github.com/omidraha/periodic-pgdump-over-ssh

Alo answered 6/12, 2019 at 14:56 Comment(0)