RoB wrote:
Thanks for that Ian. The procedure will basically be the same as the one
described here, only it'll be done on a filesystem (the size of
their db is in the range of 60 GB and, as you correctly pointed out,
that will take quite a while to perform on a filesystem):
And they realize that the database may appear to "hang" during this
window?
Seriously, if your customer is really prepared to have a possible
(perhaps even likely) outage because I/O is suspended while 60 GB of
data is backed up, why not just shut down DB2 to do the backup?
Or, better, explain that an online DB2 backup is far better (and
won't have the side effect of making the database appear as
though it has "locked up" because I/O is suspended). You can dump
the backup out to a file system, or to a media manager like
NetBackup, NetWorker or TSM.
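For reference, an online backup to a filesystem path is a one-liner; the database name and target path below are placeholders, and online backups assume archive logging is already enabled:

```shell
# Online backup to a filesystem path (requires archive logging,
# e.g. LOGARCHMETH1; 'SAMPLE' and /backups are placeholder values):
db2 backup database SAMPLE online to /backups compress

# Or send the image straight to TSM instead of a filesystem:
db2 backup database SAMPLE online use tsm
```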
I just have a really hard time imagining a customer who would be so
stuck on a particular technical implementation that they would be
willing to sacrifice availability to achieve it. It just seems more
likely that they are not understanding something properly.
e.g., thinking that 'set write suspend' is equivalent to putting a
database in read-only mode, effectively disabling insert/update/delete
queries. SET WRITE SUSPEND does *not* do this; it blocks ALL I/O,
including I/O to temporary tables that gets flushed.
http://www.db2mag.com/story/showArti...leID=173500277
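For context, the split-mirror sequence being discussed looks roughly like this; 'SAMPLE' is a placeholder, and the actual split/snapshot step is done by the storage layer, not DB2:

```shell
# Suspend all write I/O on the database. Reads still work, but any
# write -- including writes to temp tables -- will block:
db2 connect to SAMPLE
db2 set write suspend for database

# ... split/snapshot the mirror at the SAN/storage level here ...

# Resume write I/O as soon as the split completes:
db2 set write resume for database
```

The shorter the storage-level step between suspend and resume, the less likely applications are to notice the "hang".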
If using a good SAN solution (like EMC etc.), the creation of the
mirrors can normally be performed by the SAN while the database
server is online, so you only need to issue the blocking mechanism
once the SAN has finished creating the mirrors. Any changes made to
the disks while the mirrors are being created will be tracked by the
SAN and applied to the mirror image before finishing.
I have stressed the importance (and will continue doing so) of a DB2
backup to the customer, but implementing it is their choice to make.
So, is there actually a way to force a db2 instance to flush the dirty
buffers in the buffer pools to disk?
The only way to force this to occur is to deactivate the database (i.e.
shut it down). Otherwise, you're at the mercy of the page cleaners.
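A minimal sketch of that deactivation, assuming a database named SAMPLE and that the remaining connections can simply be forced off:

```shell
# Force off any remaining connections, then deactivate the database.
# Deactivation writes dirty pages in the buffer pools out to disk:
db2 force application all
db2 deactivate database SAMPLE
```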
Theoretically you could de-tune them such that they are constantly
cleaning pages as soon as any become dirty, but that would be a bad
idea.
The flushing of the OS cache is probably handled differently on
different OSes, but does anyone have any idea how to force RedHat to
do this?
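On Linux the usual answer is sync(1), which pushes dirty pages in the OS page cache out to disk; on 2.6.16 and later kernels you can additionally drop clean cached pages via /proc/sys/vm/drop_caches (root required). A sketch:

```shell
# Flush dirty pages in the OS page cache to disk:
sync

# On 2.6.16+ kernels, optionally drop clean page-cache pages as well
# (requires root; this discards cache rather than writing, so sync first):
echo 1 > /proc/sys/vm/drop_caches
```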
Also, while we're at it, does anyone know how to disable the OS
caching of filesystems on RedHat to prevent the double caching of
accessed pages from the containers (by both the OS and the db2
instance)?
Well, you can tell DB2 to open the container files with OS caching
disabled (alter tablespace X no file system caching).
On AIX, you can mount the file system with the 'dio' option
(or the 'cio' option, which is better); I *think* the equivalent
for this is the 'sync' option on Linux. (mount -o sync /db2/data).
However, use care with this. IBM's recommendation is to use the
'alter tablespace ... no file system caching' option instead of
setting file system mount options.
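As a sketch of the tablespace-level approach (database and tablespace names are placeholders):

```shell
# Disable OS file-system caching for one tablespace, so its pages are
# cached only in DB2's buffer pools (avoids double caching):
db2 connect to SAMPLE
db2 "alter tablespace USERSPACE1 no file system caching"
```

Doing it per-tablespace also means the setting travels with the database rather than depending on every admin remembering the right mount options.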