Restoring select databases/tables from an --all-databases backup

I use the --all-databases switch to back up my entire database server.
Sometimes there's a need to restore individual databases or tables from
the backup file. What command should I use for this?

Thanks,
Raffi

Jul 23 '05 #1

Taken from: Linux Server Hacks
By Rob Flickenger

Here is a method for restoring a single mysql table from a huge mysqldump.

Like a good admin, you faithfully dump your mysql tables every night,
and save them to the filesystem in compressed form (presumably to be
picked up by a backup script later). You probably have something like
this running in cron on your database server (or one of its replicated
slaves):

for x in `mysql -Bse "show databases"`; do
    mysqldump $x | gzip -9 > /var/spool/mysqldump/$x.`date +%Y%m%d`.gz
done

This will cover you if anything catastrophic happens to your live
database. But if your database grows to an appreciable size, doing
partial restores can be difficult. On a database with several million
rows, your dumps suddenly become massive piles of data that need to be
sifted through. How can you easily restore a single table out of a
several hundred megabyte compressed dump?
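
Restoring one entire database from these per-database dumps is the easy
case; you just reverse the pipe (a minimal sketch, assuming the database
you're restoring into already exists):

# zcat /var/spool/mysqldump/randomdb.20020901.gz | mysql randomdb

Extracting a single table from the dump takes a little more work.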

Here's a simple method using Perl. Create a script called
extract-table, with this in it:

#!/usr/bin/perl -wn
# Usage: extract-table tablename < dumpfile
BEGIN { $table = shift @ARGV }
# Print from this table's CREATE TABLE line up to the next table's;
# the optional backtick copes with dumps that quote identifiers.
print if /^create table `?$table\b/io .. /^create table (?!`?$table\b)/io;
To extract the Users table from the dump of a database called randomdb,
try something like this:

# zcat /var/spool/mysqldump/randomdb.20020901.gz | extract-table Users > ~/Users.dump

Now you can restore your Users table with a simple:

# mysql randomdb -e "drop table Users"
# mysql randomdb < ~/Users.dump
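
The same range-matching idea can also carve one whole database out of
the --all-databases dump the original question describes: mysqldump
writes a "-- Current Database: `name`" comment at the top of each
database's section of the file. A rough sed sketch follows; the dump
filename here is hypothetical, and the exact marker text is worth
verifying against your own dump before relying on it:

# zcat /var/spool/mysqldump/all.20020901.gz | \
    sed -n '/^-- Current Database: `randomdb`/,/^-- Current Database: `/p' > ~/randomdb.dump
# mysql < ~/randomdb.dump

The extracted section should carry its own CREATE DATABASE and USE
statements, so the restore doesn't need a database name on the command
line.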

Jul 23 '05 #2
Bill Turczyn wrote:
[quoted hack from Linux Server Hacks snipped]


That's exactly what's happening here. Often only specific tables or
databases get corrupted, and it's inefficient to restore the complete
database structure. I'll try the script out as soon as I get a chance.
I guess mysql doesn't have a built-in feature that restores individual
tables and databases from a full backup while leaving everything else
alone.
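
One partial exception seems to be the mysql client's --one-database
switch, which replays a full dump but executes only the statements that
apply to the named database (it still has to read the entire file, so
it's convenient rather than fast, and its exact behavior is worth
checking on your version). A sketch, reusing the hypothetical dump
filename from the earlier reply:

# zcat /var/spool/mysqldump/all.20020901.gz | mysql --one-database randomdb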

Raffi

Jul 23 '05 #3

This discussion thread is closed. Replies have been disabled for this discussion.
