Question about external backups to filesystems

RoB
Hi all,

I'm coming from the Informix world, and I have a customer running DB2
8.2.3 for Linux on Red Hat Enterprise Linux ES.

The customer performs filesystem backups of the containers etc.
every night, but they are not shutting down the database server while
doing this. I can only assume that this would most likely leave an
inconsistent backup image, as there is nothing ensuring that the
modified pages in the buffer pool get written to disk before the
filesystem backup starts. There is plenty of activity on the database
24x7.

Question: apart from shutting down the instance before performing such
an external backup, is there a way in DB2 to block all access and make
sure that all modified pages in memory get written to disk? I've
tried the not-so-user-friendly online DB2 documentation, but without
any luck. With Informix Dynamic Server (IDS) this would be achieved
by issuing the command "onmode -c BLOCK" and then "onmode -c UNBLOCK"
once the external backup has finished.

Any input is appreciated.

RoB

Mar 23 '07 #1
krx
There's no need to shut down the database if it needs to be used 24x7.
Set up log archiving and start taking online backups.
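
For example, a minimal sketch of that setup from the command line (the
database name MYDB and the paths are placeholders, not from this thread):

    # enable log archiving; the database then goes into backup-pending state
    db2 update db cfg for MYDB using LOGARCHMETH1 DISK:/db2/archive_logs
    # one initial offline backup clears the backup-pending state
    db2 backup db MYDB to /db2/backups
    # from then on, nightly backups can run online
    db2 backup db MYDB online to /db2/backups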

Mar 23 '07 #2
RoB
Hi,

This customer was "having problems" setting up a proper DB2 backup
strategy, so they went ahead with filesystem backups. As far as they
are concerned, this works fine and they want to continue doing it. So
I'm basically not looking for other solutions here (I know they
exist), but for a way to tweak their current strategy into something
that will actually restore a consistent backup.

RoB

Mar 23 '07 #3
RoB,

I'd say that if the customer is using filesystem backups, especially if he
actually isn't shutting down the instance, then "all bets are off" on being
able to recover anything at all.

In fact, without using the tools provided for the job, which have worked
exceptionally well for me over many years, I don't think he will get any
sympathy from IBM if he loses all his data.

I wonder if he has ever attempted a restore with what he has.

So there really is no solution but to put in place a backup and recovery
strategy using the DB2 BACKUP command as the foundation for what he does.

If he needs someone to do this for him, then he just has to ask. I'm sure
there are plenty of people on the list (myself included) with the skills to
do such a thing.

Phil Nelson
ScotDB Limited
(te*****@scotdb.com)
Mar 23 '07 #4
RoB
My thoughts precisely. As far as I know they have never attempted a
restore, which probably wouldn't be successful anyway.

So, back to the original question: is there a way (with a command) to
block the database server without shutting it down, and hence without
killing off all the user sessions, and get all the content of the
buffer pools written to disk?

If this is not possible, then I wonder how a, say, 15-terabyte
database gets backed up nightly by DB2 without shutting it down. Such
a backup with any standard server utility will most likely take more
than 24 hours, which stops it from being performed each night. With
other database servers you can have a SAN that internally replicates
the disks, and then just block the database server for a few seconds
(which also writes all dirty pages to disk) while you split off the
newly created disk image within the SAN. You can combine this level-0
external archive with database server logical log backups to bring
the instance back to the point in time of the failure.

RoB

Mar 23 '07 #5
aj
Does your customer use a SAN? If so, split mirrors are a possibility.
Look at:

http://www-128.ibm.com/developerwork...azi/index.html

Basically, you use functionality provided by your storage vendor to make
an instantaneous copy of DB2 containers, utilizing a few DB2 tricks (SET
WRITE SUSPEND FOR DATABASE, SET WRITE RESUME FOR DATABASE, and DB2INIDB)
to make sure there are no in-flight transactions in the database. You
can then bring the copy online.
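
As a rough sketch, the sequence on the production server looks something
like this (MYDB is a placeholder, and the actual split is whatever
mechanism your storage vendor provides):

    db2 connect to MYDB
    # block all writes to containers and logs; connections stay alive
    db2 set write suspend for database
    # trigger the storage-side split/snapshot of the container LUNs here
    db2 set write resume for database

The db2inidb step is then run against the copy, on the server where the
split mirror is mounted.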

We happen to use EMC; their name for it is "SnapView".

HTH

aj

PS - I'm a former Informix guy also. Welcome to DB2! :)

Mar 23 '07 #6
aj
PS - I suggest your customer start doing online backups rather than
filesystem container backups right quick, before they have a Really,
Really Bad Day(tm).

Also, verify the online backups with the db2ckbkp command.
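
For instance (the image name below just illustrates the standard naming
pattern of a backup image, it is not a real path):

    db2ckbkp /db2/backups/MYDB.0.db2inst1.NODE0000.CATN0000.20070323201500.001

db2ckbkp reads through the image and reports whether it is complete and
usable for a restore.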

aj

Mar 23 '07 #7
RoB
Thanks AJ! That was exactly the sequence of commands I was after.

This customer doesn't use SAN splits, but by applying these commands
around their filesystem backup it will be consistent, which is what
I've been trying to achieve.

Thanks again to you all for your input,

RoB

Mar 23 '07 #8
Ian
RoB wrote:
> This customer doesn't use SAN splits, but by applying these commands
> around their filesystem backup it will be consistent, which is what
> I've been trying to achieve.
You need to read the documentation to understand how this is used, and
the ins and outs (and consequences) of it.

Using SET WRITE SUSPEND will block DB2 from doing any writes to disk;
however, it does not flush data from the buffer pools to disk, or do
anything to cause the OS to flush applicable I/O buffers (at least,
that's my understanding). The on-disk database is not consistent.

The intent of this (when used with split mirrors) is to allow you to
mount the mirrors on another server and then use the BACKUP DATABASE
command to actually back up the database. This is a very common way to
handle large data warehouses (like 15 TB), if the entire database must
be backed up.

However, you have to realize that "unfreezing" the database using the
'db2inidb' command can have consequences for recovery.
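
A rough sketch of that step on the secondary server, once the split
containers are visible at the same paths (MYDB is a placeholder):

    # crash-recover the copy so it can be queried or backed up
    db2inidb MYDB as snapshot
    # alternatively: 'as standby' leaves the copy in rollforward-pending
    # state so archived logs from the primary can be applied, and
    # 'as mirror' is for restoring the split image back over the primary

The snapshot option starts a new log chain on the copy, which is one of
the recovery consequences to be aware of.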

Using SET WRITE SUSPEND with the customer's desired backup mechanism
might work, but I wouldn't depend on it. There's a reason people are
suggesting you use the supported backup mechanism, and you really
should direct your customer towards a workable solution.

In addition, your customer would need to keep the database in
write-suspend mode for the duration of the backup (i.e., to ensure that
all DB2 data backed up is consistent). If you're talking about 15 TB
of data, this filesystem-level backup will take a LONG time and will
likely have some effect on database users; i.e., if you can't write to
temporary tablespaces, even SELECT queries can block.

Mar 23 '07 #9
RoB

Thanks for that, Ian. The procedure will basically be the same as the
one described here, except that it'll be done against a filesystem
(their db is in the range of 60 GB, which will, as you correctly
pointed out, take quite a while to copy):

http://www.db2mag.com/story/showArti...leID=173500277

With a good SAN solution (EMC etc.), the creation of the mirrors by
the SAN can normally be performed while the database server is online,
so you only need to issue the blocking mechanism once the SAN has
finished creating the mirrors. Any changes made to the disks while
the mirrors are being created are tracked by the SAN and applied to
the mirror image before it completes.

I have stressed the importance (and will continue to do so) of a
DB2-native backup to the customer, but implementing it is their
choice to make.

So, is there actually a way to force a DB2 instance to flush the dirty
pages in the buffer pools to disk?

The flushing of the OS cache is probably handled differently on
different OSes, but does anyone have any idea how to force Red Hat to
do this? Also, while we're at it, does anyone know how to disable OS
caching of filesystems on Red Hat, to prevent double caching of pages
read from the containers (by both the OS and the DB2 instance)?

Thanks,

RoB

Mar 26 '07 #10
1. As a system can crash at any time, DB2 has the capability to
recover from that. So there is no need to sync; DB2 takes care of it
through crash recovery.

2. On AIX (only) there is a way to disable mmap read/write usage, so I
would conclude there is no double caching on Linux when using SMS.
(Remark: double caching can be beneficial on AIX, depending on the
environment.)

3. If a filesystem backup is preferred, and the filesystems support a
software snapshot capability, a read-only snapshot could be taken:
   1. suspend I/O in DB2
   2. start the filesystem snapshot
   3. resume I/O in DB2
   4. back up the snapped filesystems
   5. drop the read-only snapshot
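
A sketch of that sequence on Linux with LVM snapshots (the volume
names, mount point, and database name MYDB are all placeholders):

    db2 connect to MYDB
    db2 set write suspend for database
    # create a read-only, copy-on-write snapshot of the container volume
    lvcreate --snapshot --size 2G --name db2snap /dev/vg0/db2lv
    db2 set write resume for database
    # back up the frozen image at leisure, then drop the snapshot
    mount -o ro /dev/vg0/db2snap /mnt/db2snap
    tar -czf /backup/db2-containers.tar.gz -C /mnt/db2snap .
    umount /mnt/db2snap
    lvremove -f /dev/vg0/db2snap

Note that this inherits the caveat Ian raised above: SET WRITE SUSPEND
does not flush the buffer pools, so the snapshot is a crash-consistent
image that would likely need db2inidb handling when restored, not a
clean shutdown image.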

Bernard Dhooghe

Mar 26 '07 #11
Ian
RoB wrote:
> The procedure will basically be the same as the one described here,
> except that it'll be done against a filesystem (their db is in the
> range of 60 GB, which will take quite a while to copy)
And they realize that the database may appear to "hang" during this
window?

Seriously, if your customer is really prepared to have a possible
(perhaps even likely) outage because I/O is suspended while 60 GB of
data is getting backed up, why not just shut down DB2 to do the backup?

Or, better, explain that an online DB2 backup is far better (and won't
have the side effect of making the database appear to have "locked up"
because I/O is suspended). You can do the backup and dump it to a file
system, or to a media manager like NetBackup, NetWorker, or TSM.

I just have a really hard time imagining a customer who would be so
stuck on a particular technical implementation that they would be
willing to sacrifice availability to achieve it. It seems more likely
that they are not understanding something properly:

i.e., thinking that SET WRITE SUSPEND is equivalent to putting the
database in read-only mode, effectively disabling insert/update/delete
queries. SET WRITE SUSPEND does *not* do this; it blocks ALL I/O,
including I/O to temporary tables that get flushed to disk.
> So, is there actually a way to force a DB2 instance to flush the
> dirty pages in the buffer pools to disk?
The only way to force this to occur is to deactivate the database
(i.e., shut it down). Otherwise, you're at the mercy of the page
cleaners. Theoretically you could de-tune them so that they constantly
clean pages as soon as any become dirty, but that would be a bad idea.
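
For completeness, a sketch of what deactivation looks like (MYDB is a
placeholder; connections must be forced off first, since DEACTIVATE
fails while applications are connected):

    db2 force application all
    # flushes all dirty pages and closes the database cleanly
    db2 deactivate db MYDB
    # ... run the filesystem backup here ...
    db2 activate db MYDB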

> The flushing of the OS cache is probably handled differently on
> different OSes ... does anyone know how to disable OS caching of
> filesystems on Red Hat, to prevent double caching of pages read from
> the containers?
Well, you can tell DB2 to open the container files with OS caching
disabled (ALTER TABLESPACE X ... NO FILE SYSTEM CACHING).

On AIX, you can mount the file system with the 'dio' option
(or the 'cio' option, which is better); I *think* the equivalent
on Linux is the 'sync' option (mount -o sync /db2/data).
However, use care with this: IBM's recommendation is to use the
'ALTER TABLESPACE ... NO FILE SYSTEM CACHING' option instead of
setting file system mount options.
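
For illustration (the tablespace and database names are placeholders):

    db2 connect to MYDB
    db2 "ALTER TABLESPACE USERSPACE1 NO FILE SYSTEM CACHING"
    # containers are then opened with direct I/O, bypassing the OS page
    # cache; the change takes effect when the containers are next opened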

Mar 28 '07 #12
