Bytes | Software Development & Data Engineering Community

Question about external backups to filesystems

RoB
Hi all,

I'm coming from the Informix world and I have a customer using DB2
8.2.3 for Linux on Red Hat Enterprise ES.

The customer is performing filesystem backups of the containers etc.
every night, but they are not shutting down the database server while
doing this. I can only assume that this would most likely leave an
inconsistent backup image, as there is nothing ensuring that the
modified pages in the buffer pool get written to disk before the
filesystem backup starts. There is plenty of activity on the database
24x7.

Question: Apart from shutting down the instance before performing such
an external backup, is there a way in DB2 to block all access and make
sure that all modified pages in memory get written to disk? I've
tried the not-so-user-friendly online DB2 documentation, but without
any luck. With Informix Dynamic Server (IDS) this would be achieved
by issuing the command "onmode -c BLOCK" and then "onmode -c UNBLOCK"
once the external backup has finished.

Any input is appreciated.

RoB

Mar 23 '07 #1
krx
There's no need to shut down the database if it needs to be available 24x7.
Set up log archiving and start taking online backups.
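For reference, the basic sequence on DB2 V8.2 might look like the sketch below. The database name and paths are made up, and the commands are echoed rather than executed so the flow can be reviewed without a live instance:

```shell
#!/bin/sh
# Sketch: switch a DB2 V8.2 database to archive logging and take online
# backups. DB name and paths are examples only. Commands are echoed,
# not executed, so this can be read without a DB2 instance.
DB=SAMPLE
ARCHDIR=/db2/archlogs
BKPDIR=/db2/backups

setup_online_backups() {
    # Enable log archiving; this puts the database in backup-pending state
    echo db2 update db cfg for "$DB" using LOGARCHMETH1 "DISK:$ARCHDIR/"
    # A full backup is required once after enabling archiving
    echo db2 backup database "$DB" to "$BKPDIR"
    # From then on, nightly backups can run online, without stopping DB2
    echo db2 backup database "$DB" online to "$BKPDIR"
}

setup_online_backups
```

Note that after LOGARCHMETH1 is set, the database is unusable until that first full backup completes, so this changeover needs a maintenance window.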

Mar 23 '07 #2
RoB
Hi,

This customer was "having problems" setting up a proper DB2
backup strategy, so they went ahead with filesystem backups. As far
as they are concerned, this works fine and they want to continue
doing it. So I'm basically not looking for other solutions here,
although I know they exist, but for a way to tweak
their current strategy into something that will actually restore a
consistent backup.

RoB

Mar 23 '07 #3
RoB,

I'd say that if the customer is using filesystem backups, especially if he
actually isn't shutting down the instance, then "all bets are off" on being
able to recover anything at all.

In fact, without using the tools provided for the job, which have worked
exceptionally well for me over many years, I don't think he will get any
sympathy from IBM if he loses all his data.

I wonder if he has ever attempted a restore with what he has.

So there really is no solution but to put in place a backup and recovery
strategy using the DB2 BACKUP command as the foundation for what he does.

If he needs someone to do this for him, then he just has to ask. I'm sure
there are plenty of people on the list (myself included) with the skills to
do such a thing.

Phil Nelson
ScotDB Limited
(te*****@scotdb.com)
Mar 23 '07 #4
RoB
My thoughts precisely. As far as I know they have never tried to
restore... which probably wouldn't be successful anyway.

So back to the original question: is there a way (with a
command) to block the database server without shutting it down (and
hence killing off all the user sessions), and have all content in the
buffer pools written to disk?

If this is not possible, then I wonder how a, say, 15-terabyte
database gets backed up nightly by DB2 without shutting it down. Such
a backup with any standard server utility will most likely take more
than 24 hours, which prevents it from being performed each night. With
other database servers you can have a SAN that internally replicates
the disks and then just block the database server for a few seconds
(which also writes all dirty pages to disk) while you split off the
newly created disk image within the SAN. You can combine this level-0
external archive with logical log backups to bring the instance back
to the point in time of the failure.

RoB

Mar 23 '07 #5
aj
Does your customer use a SAN? If so, split mirrors are a possibility.
Look at:

http://www-128.ibm.com/developerwork...azi/index.html

Basically, you use functionality provided by your storage vendor to make
an instantaneous copy of DB2 containers, utilizing a few DB2 tricks (SET
WRITE SUSPEND FOR DATABASE, SET WRITE RESUME FOR DATABASE, and DB2INIDB)
to make sure there are no in-flight transactions in the database. You
can then bring the copy online.
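As a rough sketch (database name and the storage step are invented, and the commands are echoed so this reads as documentation rather than something to run against a live instance), the flow looks like:

```shell
#!/bin/sh
# Sketch of the split-mirror flow: suspend DB2 writes, take the storage
# copy, resume, then initialize the copy with db2inidb on the other host.
# All names are examples; commands are echoed, not executed.
DB=SAMPLE

split_mirror() {
    echo db2 connect to "$DB"
    echo db2 set write suspend for database    # freeze all DB2 writes
    echo "-- trigger the SAN split / snapshot of the containers here --"
    echo db2 set write resume for database     # unfreeze
    echo db2 connect reset
}

# On the server where the mirror copy is mounted:
init_mirror() {
    # 'as snapshot' turns the copy into a usable point-in-time clone;
    # 'as standby' and 'as mirror' serve other recovery scenarios
    echo db2inidb "$DB" as snapshot
}

split_mirror
init_mirror
```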

We happen to use EMC, and their name for it is "SnapView".

HTH

aj

PS - I'm a former Informix guy also. Welcome to DB2! :)

Mar 23 '07 #6
aj
PS - I suggest your customer start doing online backups rather than
filesystem container backups right quick, before they have a Really,
Really Bad Day(tm).

Also, verify the online backups with the db2ckbkp command.
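For example (hedged sketch: the image file name below just illustrates the usual alias.type.instance.node.catnode.timestamp.seq naming pattern on Linux, and the commands are echoed for review):

```shell
#!/bin/sh
# Sketch: take the nightly online backup, then verify the image.
# Names, paths and the timestamp are examples only; commands are echoed.
backup_and_verify() {
    # Take the nightly online backup ...
    echo db2 backup database SAMPLE online to /db2/backups
    # ... then check the integrity of the resulting image
    echo db2ckbkp /db2/backups/SAMPLE.0.db2inst1.NODE0000.CATN0000.20070323200000.001
}

backup_and_verify
```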

aj

Mar 23 '07 #7
RoB
Thanks AJ! That was exactly the sequence of commands I was after.

This customer doesn't use SAN splits, but by applying these commands
around their filesystem backup it will be consistent, which is what
I've been trying to achieve.

Thanks again you all for your input,

RoB

Mar 23 '07 #8
Ian
You need to read the documentation to understand how this is used, and
the ins and outs (and consequences) of it.

Using SET WRITE SUSPEND will block DB2 from doing any writes to disk;
however, it does not flush data from the buffer pools to disk, or do
anything to cause the OS to flush applicable I/O buffers (at least
that's my understanding). The on-disk image is not consistent.

The intent of this (when used with split-mirrors) is to allow you to
mount the mirrors on another server and then use the BACKUP DATABASE
command to actually back up the database. This is a very common way to
handle large data warehouses (like 15Tb), if the entire database must be
backed up.

However, you have to realize that "unfreezing the database" using the
'db2inidb' command can have consequences for recovery.

Using SET WRITE SUSPEND with the customer's desired backup mechanism
might work, but I wouldn't depend on it. There's a reason people are
suggesting you use the supported backup mechanism. And you really
should direct your customer towards a workable solution.

In addition, your customer would need to keep the database in
write-suspend mode for the duration of the backup (i.e. to ensure that
all DB2 data backed up is consistent). If you're talking about 15Tb
of data, this filesystem-level backup will take a LONG time and
will likely have some effect on database users; i.e., if you can't
write to temporary tablespaces, even SELECT queries can block.

Mar 23 '07 #9
RoB

Thanks for that, Ian. The procedure will basically be the same as the
one described here, except that it'll be done against a filesystem
(their db is in the range of 60 GB, and that will, as you correctly
pointed out, take quite a while to back up at the filesystem level):

http://www.db2mag.com/story/showArti...leID=173500277

With a good SAN solution (EMC etc.), the creation of the mirrors can
normally be performed while the database server is online, so you only
need to issue the blocking mechanism once the SAN has finished
creating the mirrors. Any changes made to the disks while the mirrors
are being created are tracked by the SAN and applied to the mirror
image before it finishes.

I have stressed the importance (and will continue doing so) of a
DB2-native backup to the customer, but implementing it is their choice
to make.

So, is there actually a way to force a DB2 instance to flush the dirty
buffers in the buffer pools to disk?

The flushing of the OS cache is probably handled differently on
different OSes, but does anyone have an idea of how to force Red Hat
to do this?
Also, while we're at it, does anyone know how to disable OS caching of
filesystems on Red Hat, to prevent double caching of pages from the
containers (by both the OS and the DB2 instance)?

Thanks,

RoB

Mar 26 '07 #10
1. Since a system can crash at any time, DB2 has the capability to
recover from that (crash recovery using the logs). So there is no need
to sync; DB2 takes care of consistency on restart.

2. On AIX -only- there is a way to disable mmap read/write usage, so
I would conclude there is no double caching on Linux when using SMS.
(Remark: double caching can be beneficial on AIX depending on the
environment.)

3. If a file system backup is preferred, and the file systems support
a software snapshot capability, a read-only snapshot could be taken:
1. suspend I/O on DB2
2. start the file system snapshot
3. resume I/O on DB2
4. back up the snapped file systems
5. drop the read-only snapped part of the file systems
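The snapshot sequence above could look roughly like this with LVM2 on
Linux. All volume names, mount points and sizes here are assumptions
for illustration, and the commands must run as root:

```shell
# Sketch: vg0/db2lv is assumed to hold the DB2 containers.
db2 connect to SAMPLE
db2 "set write suspend for database"            # 1. suspend I/O on DB2

# 2. take the read-only snapshot (needs free space in the volume group)
lvcreate --snapshot --size 2G --name db2snap /dev/vg0/db2lv

db2 "set write resume for database"             # 3. resume I/O on DB2

# 4. back up the snapped file system
mkdir -p /mnt/db2snap
mount -o ro /dev/vg0/db2snap /mnt/db2snap
tar czf /backup/db2-containers.tar.gz -C /mnt/db2snap .
umount /mnt/db2snap

# 5. drop the snapshot
lvremove -f /dev/vg0/db2snap
```

The advantage over backing up the live filesystem directly is that
I/O only has to stay suspended for the seconds it takes to create the
snapshot, not for the whole duration of the copy.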

Bernard Dhooghe

Mar 26 '07 #11
Ian
RoB wrote:
Thanks for that, Ian. The procedure will basically be the same as the
one described here, only it'll be done to a filesystem (their db is in
the range of 60 GB and, as you correctly pointed out, that will take
quite a while to back up at the filesystem level):
And they realize that the database may appear to "hang" during this
window?

Seriously, if your customer is really prepared to have a possible
(perhaps even likely) outage because I/O is suspended while 60 GB of
data is being backed up, why not just shut down DB2 to do the backup?

Or, better, explain that an online DB2 backup is far better (and
won't have the side effect of making the database appear as though
it has "locked up" because I/O is suspended). You can do the backup
and dump it out to a file system, or to a media manager like
NetBackup, NetWorker or TSM.
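For reference, an online DB2 backup to a file system is a one-liner.
This sketch assumes the database is called SAMPLE, that archive
logging is enabled (a prerequisite for online backups), and that
/backup/db2 already exists:

```shell
# Online backup; INCLUDE LOGS makes the image self-contained (DB2 8.2+)
db2 backup database SAMPLE online to /backup/db2 include logs
```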

I just have a really hard time imagining a customer who would be so
stuck on a particular technical implementation that they would be
willing to sacrifice availability to achieve it. It just seems more
likely that they are not understanding something properly.

i.e., thinking that 'set write suspend' is equivalent to putting a
database in read-only mode, effectively disabling insert/update/delete
queries. SET WRITE SUSPEND does *not* do this; it blocks ALL I/O,
including I/O to temporary tables that gets flushed to disk.
http://www.db2mag.com/story/showArti...leID=173500277

If using a good SAN solution (like EMC etc.), the creation of the
mirrors can normally be performed by the SAN while the database
server is online, so you only need to issue the blocking mechanism at
the point when the SAN has finished creating the mirrors. Any changes
made to the disks while the mirrors are being created will be
tracked by the SAN and applied to the mirror image before finishing.

I have stressed the importance (and will continue doing so) of a db2
type backup to the customer but implementing it is their choice to
make.

So, is there actually a way to force a db2 instance to flush the dirty
buffers in the buffer pools to disk?
The only way to force this to occur is to deactivate the database (i.e.
shut it down). Otherwise, you're at the mercy of the page cleaners.
Theoretically you could de-tune them such that they are constantly
cleaning pages as soon as any become dirty, but that would be a bad
idea.

The flushing of the OS cache is probably handled differently on
different OSes but does anyone have any idea of how to force RedHat to
do this?
Also, while we're at it, does anyone know how to disable the OS
caching of filesystems on RedHat to prevent the double caching of
accessed pages from the containers (by both the OS and the db2
instance)?
Well, you can tell DB2 to open the container files with OS caching
disabled (ALTER TABLESPACE X ... NO FILE SYSTEM CACHING).

On AIX, you can mount the file system with the 'dio' option
(or the 'cio' option, which is better); I *think* the equivalent
for this is the 'sync' option on Linux. (mount -o sync /db2/data).
However, use care with this. IBM's recommendation is to use the
'alter tablespace ... no file system caching' option instead of
setting file system mount options.
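A sketch of that ALTER TABLESPACE approach (the database and
tablespace names are placeholders; NO FILE SYSTEM CACHING is
available from DB2 8.2):

```shell
db2 connect to SAMPLE
# Have DB2 bypass the OS file system cache for this tablespace's containers
db2 "alter tablespace USERSPACE1 no file system caching"
db2 terminate
```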

Mar 28 '07 #12

This thread has been closed and replies have been disabled.
