Automated or regular backup of MySQL data
I want to take regular backups of some tables in my mysql database using <insert favorite PHP framework here> / plain php / my second favorite language. I want it to be automated so that the backup can be restored later on in case something goes wrong.

I tried executing a query and saving the results to a file. Ended up with code that looks somewhat like this.

$sql = 'SELECT * FROM my_table ORDER BY id DESC';
$result = mysqli_query( $connect, $sql );
if( mysqli_num_rows( $result ) > 0){

    $output = fopen('/tmp/dumpfile.csv', 'w+');

    /* loop through the result set and write each row to the file as CSV */
    while( $row = mysqli_fetch_array( $result, MYSQLI_ASSOC ) ) {
        fputcsv( $output, $row, ',', '"');
    }

    fclose( $output );
}

I set up a cron job on my local machine to hit the web page with this code. I also tried writing a cron job on the server to run the script from the CLI. But it's causing all sorts of problems. These include:

  1. Sometimes the data is not consistent
  2. The file appears to be truncated
  3. The output cannot be imported into another database
  4. Sometimes the script times out

I have also heard about mysqldump. I tried to run it with exec but it produces an error.

How can I solve this?

Counselor answered 12/8, 2016 at 10:49

CSV and SELECT INTO OUTFILE

http://dev.mysql.com/doc/refman/5.7/en/select-into.html

SELECT ... INTO OUTFILE writes the selected rows to a file. Column and line terminators can be specified to produce a specific output format.

Here is a complete example:

SELECT * FROM my_table
  INTO OUTFILE '/tmp/my_table.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n';

The file is saved on the database server (not the client machine), and the chosen path must be writable by the MySQL server process. Though this query can be executed through PHP and a web request, it is best executed through the mysql console.

The data that's exported in this manner can be imported into another database using LOAD DATA INFILE.
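
For example, re-importing the file produced above into a table of the same structure would look roughly like this (assuming my_table already exists on the receiving server and the file is readable by it):

LOAD DATA INFILE '/tmp/my_table.csv'
  INTO TABLE my_table
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n';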

While this method is superior to iterating through a result set and saving it to a file row by row, it's not as good as using....

mysqldump

mysqldump is superior to SELECT INTO OUTFILE in many ways; producing CSV is just one of the many things this command can do.

The mysqldump client utility performs logical backups, producing a set of SQL statements that can be executed to reproduce the original database object definitions and table data. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.
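
For instance, a plain SQL dump and a delimited-text dump look roughly like this (a sketch only; backup_user, my_database and the paths are placeholders). With --tab the data files are written by the MySQL server itself, so the directory must exist on the server host and be writable by it:

# plain SQL dump of the whole database (prompts for the password)
mysqldump --user=backup_user -p my_database > /tmp/my_database.sql

# delimited-text output instead: one .sql (schema) and one .txt (data) file per table
mysqldump --user=backup_user -p --tab=/tmp/dump_dir \
    --fields-terminated-by=',' --fields-optionally-enclosed-by='"' my_database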

Ideally, mysqldump should be invoked from your shell. It is possible to use exec in PHP to run it, but since producing the dump might take a long time depending on the amount of data, and PHP scripts usually get only about 30 seconds of execution time, you would need to run it as a background process.
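
If you do go through PHP, something along these lines should work (a rough sketch; the credentials, database name and paths are placeholders, and putting the password on the command line is insecure, so an option file is preferable). Redirecting the output and appending & is what lets exec() return immediately instead of waiting for the dump:

// run mysqldump detached in the background so the PHP request isn't blocked
$cmd = 'nohup mysqldump --user=backup_user --password=secret my_database'
     . ' > /tmp/my_database.sql 2> /tmp/mysqldump.err < /dev/null &';
exec( $cmd );   // returns right away; check /tmp/mysqldump.err for problems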

mysqldump isn't without its fair share of problems.

It is not intended as a fast or scalable solution for backing up substantial amounts of data. With large data sizes, even if the backup step takes a reasonable time, restoring the data can be very slow because replaying the SQL statements involves disk I/O for insertion, index creation, and so on.

For a classic example, see this question: Server crash on MySQL backup using python, where one mysqldump run appears to start before the previous one has finished, leaving the website completely unresponsive.

MySQL replication

Replication enables data from one MySQL database server (the master) to be copied to one or more MySQL database servers (the slaves). Replication is asynchronous by default; slaves do not need to be connected permanently to receive updates from the master. Depending on the configuration, you can replicate all databases, selected databases, or even selected tables within a database.

Thus replication operates differently from SELECT INTO OUTFILE or mysqldump. It's ideal for keeping the local copy of the data almost up to date (I would have said perfectly in sync, but there is something called slave lag). On the other hand, if you use a scheduled task to run mysqldump once every 24 hours, imagine what happens if the server crashes after 23 hours.

Each time you run mysqldump you produce a large amount of data; keep doing it regularly and you will find your hard disk filling up or your file storage bills hitting the roof. With replication, only the changes are passed on to the slaves (using the so-called binlog).
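
For a rough idea of what's involved, the master needs the binary log enabled and a unique server id in its my.cnf (a sketch only; the id and log name are examples). Each slave then gets its own server-id and is pointed at the master with CHANGE MASTER TO ... followed by START SLAVE:

[mysqld]
# unique id for this server within the replication topology
server-id = 1
# enable the binary log that the slaves read the changes from
log-bin   = mysql-bin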

XtraBackup

An alternative to replication is to use Percona XtraBackup.

Percona XtraBackup is an open-source hot backup utility for MySQL-based servers that doesn’t lock your database during the backup.

Though made by Percona, it's compatible with MySQL and MariaDB. It can do incremental backups, the lack of which is one of the biggest limitations of mysqldump.
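
For example, a full backup followed by an incremental one looks roughly like this (a sketch assuming a recent XtraBackup version; the target directories are just examples):

# full (base) backup
xtrabackup --backup --target-dir=/data/backups/base

# incremental backup containing only the changes made since the base
xtrabackup --backup --target-dir=/data/backups/inc1 \
    --incremental-basedir=/data/backups/base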

Counselor answered 12/8, 2016 at 10:49

I suggest taking the database backup with the mysqldump command-line utility from a shell script instead of a PHP script.

Make a .my.cnf file to store the connection settings

Create a file named .my.cnf with the default DB username and password in the user's home directory, so the script will take the username, password, and hostname from this file.

[client]
user = <db_user_name>
password = <db_password>
host = <db_host>

Create a shell script called backup.sh

#!/bin/sh
#
# script to take a backup every day

# change to your backup directory
cd /path_of_your_directory

# dump the application database to a temporary file
mysqldump <your_database_name> > tmp_db.sql

# compress it into a dated zip file
zip app_database-$(date +%Y-%m-%d).sql.zip tmp_db.sql

# remove the uncompressed sql file
rm -f tmp_db.sql

Give executable permission to the .sh file

chmod +x backup.sh

Set up the cron job (crontab -e); for example, to run the backup every day at 02:00:

0 2 * * * sh /<script_path>/backup.sh >/dev/null 2>&1

That's all.

Good luck!

Winger answered 12/8, 2016 at 11:6
