How to check if postgresql backup was successful?

We have a PostgreSQL database that is backed up nightly from a cron job with the following command:

su postgres -c "pg_dump our_database | gzip > /home/smb/shared/database_backup.bak.gz"

Recently we had a disk failure that started with a few bad sectors, and during that time pg_dump exited with the following errors:

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: catalog is missing 17 attribute(s) from relid 20158
pg_dump: The command was: LOCK TABLE public.obvez IN ACCESS SHARE MODE

Now, since it ran from a cron job, nobody noticed the error messages. The backup was interrupted but it wasn't zero-sized, so everything seemed OK and the error went unnoticed until the final disk failure, when we realized we didn't have a usable backup.

We managed to restore the data from an older backup, but now I would like to know: what is the proper way to check whether pg_dump finished its job successfully?

Licit answered 14/6, 2011 at 9:3 Comment(2)
You should ask this on DBA.stackexchange.com instead. StackOverflow is for programming.Indies
@Indies This should stay here. OP is asking for a way to check for errors. It happens that he's trying to automate a database backup, but this could apply to pretty much anything, so I don't see a reason why this should be moved to DBA.Lanham

I write the outcome into a logfile, and at the end of the cronjob, I send the content of the logfile to my e-mail address. That way, I'll know when something went wrong.

su postgres -c "pg_dump our_database 2>> $LOG_FILE | gzip > /home/smb/shared/database_backup.bak.gz"
cat $LOG_FILE | mailx -s "PostgreSQL backup" $MAINTAINERS
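For completeness, a hypothetical crontab entry (the script path and schedule are placeholders, not part of the original answer) that would run those two lines nightly from a wrapper script:

# root's crontab: run the backup script every night at 02:00
0 2 * * * /usr/local/bin/pg_backup.sh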

ADDENDUM: if you want to send the e-mail only if anything went wrong, you can check the return code of pg_dump:

LOG_FILE=/tmp/pgdump.err

# dump straight to a file (no pipe), so the exit status reflects pg_dump itself
if ! pg_dump -U backupuser our_database > /home/smb/shared/database_backup.bak 2> "$LOG_FILE"
then
    cat "$LOG_FILE" | mailx -s "PostgreSQL backup failure!" 'youremailaddress'
fi
Daves answered 14/6, 2011 at 9:9 Comment(2)
The problem is that this database is actually at the customer's site, and I'm not sure they would know how to read a log file. I suppose I could modify this to send an e-mail only if something went wrong.Licit
@Dragan Matic: Ah, you didn't state that in your original message. I've expanded the example to send mail only when pg_dump returns something other than "0" (which is success). Also, you should consider using a dedicated "backup user" that is allowed to back up, and use .pgpass instead of sudo. Well, at least, I don't like to sudo if I can help it ;)Daves
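For reference, a .pgpass file in the backup user's home directory supplies the password non-interactively; the host, port, database, user and password below are placeholders:

# ~/.pgpass, one line per connection: hostname:port:database:username:password
localhost:5432:our_database:backupuser:secretpassword

The file must be readable only by its owner (chmod 600 ~/.pgpass), otherwise libpq ignores it.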

Some programs don't behave nicely when used in a pipe on unix-like systems. For example, I was using pg_dump piped through gzip like so:

Broken script: error condition never occurs

export PGPASSWORD=secret
pg_dump -h host -U user db | gzip - > backup.sql.gz

if [ $? -ne 0 ]; then
    ### Never happens: gzip exits with 0 even when pg_dump fails ###
    echo Backup failed.
    exit 1
fi

This uses the normal way of checking the exit status of the previous command ($?), but it doesn't work here. If pg_dump fails for any reason, gzip still exits cleanly, so $? is set to 0, indicating success.

Fortunately, there's a better way. In bash, PIPESTATUS is an array variable holding the exit codes of all the commands in the most recently executed pipeline. Checking both the overall return status and the status of pg_dump is now done like this:

Correct script: check the result of pg_dump separately

export PGPASSWORD=secret
pg_dump -h host -U user db | gzip - > backup.sql.gz

if [ $? -ne 0 -o ${PIPESTATUS[0]} -ne 0 ]; then
    echo Backup failed.
    exit 1
fi

Now I can be sure my automated database backups aren't going to fail silently.
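Not from the original post, but bash's pipefail option achieves the same thing without inspecting PIPESTATUS: with it set, a pipeline's exit status is non-zero if any command in it fails, so the plain $? check works again. A minimal sketch:

set -o pipefail   # pipeline fails if any command in it fails

export PGPASSWORD=secret
if ! pg_dump -h host -U user db | gzip - > backup.sql.gz; then
    echo Backup failed.
    exit 1
fi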

From https://mattryall.net/blog/piped-exit-status

Ejecta answered 12/6, 2023 at 12:42 Comment(2)
You should add explanation.Bethune
OK, I've added an explanation.Ejecta
