>I have a db that is moderately busy (a couple million records added
over a day, 60-300 db hits per second, average about 120 or so).
The database is in constant use and the server can't be stopped under
normal circumstances.
I need to be able to remove data from the db at periodic intervals,
generally at least once a day, depending on the sizes of the tables.
I've been doing this via scripts that first run mysqldump for a
date/time interval, then a DELETE for the same interval (if the
record counts are above a certain threshold). I also run periodic
CHECK TABLE and OPTIMIZE TABLE passes.
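That dump-then-delete cycle can be sketched roughly as follows. The
database ("logs"), table ("events"), and column ("created") names are
hypothetical, not from the post; mysqldump's --where option restricts
the dump to the interval, and the DELETE only runs if the dump
succeeded:

```shell
#!/bin/sh
# Sketch only -- "logs", "events", and the DATETIME column "created"
# are hypothetical names.
DB=logs
TABLE=events
# Everything older than the start of today (GNU date syntax).
CUTOFF=$(date -d '1 day ago' '+%Y-%m-%d 00:00:00')
COND="created < '$CUTOFF'"

# 1. Dump only the rows in the interval (--where limits the dump).
# 2. Delete the same rows, but only if the dump succeeded (&&).
mysqldump --single-transaction --where="$COND" "$DB" "$TABLE" \
    > "${TABLE}_$(date '+%Y%m%d').sql" \
  && mysql "$DB" -e "DELETE FROM $TABLE WHERE $COND"
```

Keeping the DELETE behind && means a failed or interrupted dump never
throws away rows that were not safely written to disk.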
mysqldumping a table may acquire a read lock for the duration of the
dump, which can block other queries against the same table. Whether
and how it locks depends on the options given to mysqldump.
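For example, with InnoDB tables mysqldump can take a consistent
snapshot without holding a table-level read lock, while with MyISAM
the default --lock-tables read-locks the tables for the whole dump
(the database and table names below are hypothetical):

```shell
# InnoDB: consistent snapshot, no table read lock held during the dump.
mysqldump --single-transaction --quick logs events > events_innodb.sql

# MyISAM: --lock-tables (the default) read-locks the tables, so writes
# to "events" block until the dump finishes.
mysqldump --lock-tables logs events > events_myisam.sql
```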
At times this seems to have an adverse effect on the MySQL server's
performance. I then split out the tables to be backed up and deleted
individually, which kind of works.
My main question is: what is the best way to do this (i.e., back up
and then remove data from an operating db) with the least impact on
operation? I haven't seen any option for mysqldump to remove the data
it dumps, though of course that doesn't mean I haven't missed it.
One possibility is to use replication. Do the mysqldump on your
SLAVE server, which won't affect queries on the master. You can
also do a STOP SLAVE on the slave to freeze changes while you take
a whole dump of the database, then start it up again. Unfortunately,
you still need to do your delete on the master (which will propagate
to the slave).
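A rough sketch of that scheme, with hypothetical database, table, and
cutoff values (STOP SLAVE / START SLAVE are standard replication
statements):

```shell
#!/bin/sh
# On the SLAVE: freeze replication, take a consistent full dump, resume.
mysql -e "STOP SLAVE"
mysqldump --all-databases > "full_$(date '+%Y%m%d').sql"
mysql -e "START SLAVE"

# On the MASTER: the delete still has to run here, and it replicates
# to the slave like any other statement. "logs", "events", "created",
# and the cutoff date are placeholders.
mysql logs -e "DELETE FROM events WHERE created < '2004-06-01 00:00:00'"
```

While the slave is stopped it falls behind the master, so the dump
window should be kept short enough for the slave to catch up afterward.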
Gordon L. Burditt