
Question about external backups to filesystems

RoB
Hi all,

I'm coming from the Informix world and I have a customer using DB2
8.2.3 for Linux on Red Hat Enterprise Linux ES.

The customer performs filesystem backups of the containers etc. every
night, but they do not shut down the database server while doing this.
I can only assume that this would most likely leave an inconsistent
backup image, since there is nothing ensuring that the modified pages
in the buffer pool get written to disk before the filesystem backup
starts. There is plenty of activity on the database 24x7.

Question: Apart from shutting down the instance before performing such
an external backup, is there a way in DB2 to block all access and make
sure that all modified pages in memory get written to disk? I've tried
the not-so-user-friendly online DB2 documentation, but without any
luck. With Informix Dynamic Server (IDS) this would be achieved by
issuing the command "onmode -c BLOCK", and then "onmode -c UNBLOCK"
once the external backup has finished.
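
For illustration, the IDS procedure I'm used to looks roughly like this
(the chunk and backup paths below are just placeholders):

  onmode -c BLOCK                                      # block the server; per the above, this gets modified pages to disk
  tar czf /backup/ids_chunks.tar.gz /informix/chunks   # external filesystem-level copy
  onmode -c UNBLOCK                                    # resume normal processing

What I'm after is the DB2 equivalent of that BLOCK/UNBLOCK pair.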

Any input is appreciated.

RoB

Mar 23 '07 #1
krx
There's no need to shut down the database if it needs to be available 24x7.
Set up log archiving and start taking online backups.
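
For a DB2 8.2 database this could look something like the following sketch
(database name and paths are placeholders; run as the instance owner):

  db2 update db cfg for MYDB using LOGARCHMETH1 DISK:/db2archive/MYDB/   # switch to archive logging
  db2 backup database MYDB to /db2backup/          # one offline backup is required after enabling archiving
  db2 backup database MYDB online to /db2backup/   # from then on, the nightly backups can run online

With the archived logs you can restore a backup image and roll forward to a
point in time.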

Mar 23 '07 #2
RoB
Hi,

This customer was "having problems" setting up a proper DB2 backup
strategy, so they fell back on filesystem backups. As far as they are
concerned this works fine, and they want to continue doing it. So I'm
basically not looking for alternative solutions here, although I know
they exist; I'm looking for a way to tweak their current strategy into
something that will actually restore to a consistent state.

RoB

Mar 23 '07 #3
RoB,

I'd say that if the customer is relying on filesystem backups, especially
without shutting down the instance, then "all bets are off" as far as being
able to recover anything at all.

In fact, without using the tools provided for the job, which have worked
exceptionally well for me over many years, I don't think he will get any
sympathy from IBM if he loses all his data.

I wonder if he has ever attempted a restore with what he has.

So there really is no solution but to put in place a backup and recovery
strategy using the DB2 BACKUP command as the foundation for what he does.

If he needs someone to do this for him, then he just has to ask. I'm sure
there are plenty of people on the list (myself included) with the skills to
do such a thing.

Phil Nelson
ScotDB Limited
(te*****@scotdb.com)
Mar 23 '07 #4
RoB
My thoughts precisely. As far as I know they have never attempted a
restore... which probably wouldn't be successful anyway.

So, back to the original question: is there a way (with a command) to
block the database server without shutting it down, and hence without
killing off all the user sessions, while getting all content in the
buffer pools written to disk?

If this is not possible, then I wonder how a database of, say, 15
terabytes gets backed up nightly by DB2 without shutting it down. Such
a backup with any standard backup utility will most likely take more
than 24 hours, which rules out running it every night. With other
database servers you can have a SAN that internally replicates the
disks and then just block the database server for a few seconds (which
also writes all dirty pages to disk) while you split off the newly
created disk image within the SAN. You can combine this level-0
external archive with logical log backups from the database server to
bring the instance back to the point in time of a failure.

RoB

Mar 23 '07 #5
aj
Does your customer use a SAN? If so, split mirrors are a possibility.
Look at:

http://www-128.ibm.com/developerwork...azi/index.html

Basically, you use functionality provided by your storage vendor to make
an instantaneous copy of DB2 containers, utilizing a few DB2 tricks (SET
WRITE SUSPEND FOR DATABASE, SET WRITE RESUME FOR DATABASE, and DB2INIDB)
to make sure there are no in-flight transactions in the database. You
can then bring the copy online.

We happen to use EMC, and their name for it is "SnapView"
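
The sequence on the production server is roughly this (database name and the
snapshot step are placeholders for whatever your storage vendor provides):

  db2 connect to MYDB
  db2 set write suspend for database    # DB2 stops writing to containers and logs; applications stay connected
  # ... trigger the SAN split / flash copy of all container and log filesystems here ...
  db2 set write resume for database     # normal write activity resumes
  db2 terminate

On the machine where the split copy is mounted, db2inidb initializes it, e.g.
"db2inidb MYDB as snapshot" (or "as standby" / "as mirror" depending on how
you want to use the copy).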

HTH

aj

PS - I'm a former Informix guy also. Welcome to DB2! :)

Mar 23 '07 #6
aj
PS - I suggest your customer start doing online backups rather than
filesystem container backups right quick, before they have a Really,
Really Bad Day(tm).

Also, verify the online backups w/ the db2ckbkp command.
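
For example (the image name below is only a placeholder for a real backup
file produced by the BACKUP command):

  db2ckbkp -h /db2backup/MYDB.0.db2inst1.NODE0000.CATN0000.20070323120000.001   # show the media header
  db2ckbkp    /db2backup/MYDB.0.db2inst1.NODE0000.CATN0000.20070323120000.001   # verify the image is complete and readable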

aj

Mar 23 '07 #7
RoB
Thanks AJ! That was exactly the sequence of commands I was after.

This customer doesn't use SAN splits, but by applying these commands
around their filesystem backup it will be consistent, which is what
I've been trying to achieve.

Thanks again to you all for your input,

RoB

Mar 23 '07 #8
Ian
You need to read the documentation to understand how this is used, and
the ins and outs (and consequences) of it.

Using SET WRITE SUSPEND will block DB2 from doing any writes to disk;
however, it does not flush data from the buffer pools to disk, or do
anything to cause the OS to flush the applicable I/O buffers (at least
that's my understanding). The database is not consistent.

The intent of this (when used with split-mirrors) is to allow you to
mount the mirrors on another server and then use the BACKUP DATABASE
command to actually back up the database. This is a very common way to
handle large data warehouses (like 15 TB) when the entire database must
be backed up.
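
On the server where the split copy is mounted, that would look something like
this (alias and target path are placeholders; the copy has to be mounted at
the same paths as on production):

  db2inidb MYDB as standby                     # put the split copy into rollforward-pending state
  db2 backup database MYDB to /backup_images   # take a real DB2 backup image from the copy, off the production box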

However, you have to realize that "unfreezing the database" using the
'db2inidb' command can have consequences for recovery.

Using SET WRITE SUSPEND with the customer's desired backup mechanism
might work, but I wouldn't depend on it. There's a reason people are
suggesting you use the supported backup mechanism. And you really
should direct your customer towards a workable solution.

In addition, your customer would need to keep the database in
write-suspend mode for the duration of the backup (i.e. to ensure that
all of the DB2 data backed up is consistent). If you're talking about
15 TB of data, a file-system-level backup will take a LONG time, and
will likely have some effect on database users: for example, if DB2
can't write to temporary tablespaces, even SELECT queries can block.

Mar 23 '07 #9
RoB

Thanks for that Ian. The procedure will basically be the same as the
one described here, except that it will be done against a filesystem
(their database is in the range of 60 GB, which will, as you correctly
pointed out, take quite a while to copy at the filesystem level):

http://www.db2mag.com/story/showArti...leID=173500277

With a good SAN solution (EMC etc.) the creation of the mirrors can
normally be performed by the SAN while the database server is online,
so you only need to issue the blocking commands once the SAN has
finished creating the mirrors. Any changes made to the disks while the
mirrors are being created are tracked by the SAN and applied to the
mirror image before it is finalized.

I have stressed (and will continue to stress) the importance of a
proper DB2 backup to the customer, but implementing it is their choice
to make.

So, is there actually a way to force a DB2 instance to flush the dirty
pages in the buffer pools to disk?

Flushing of the OS cache is probably handled differently on different
OSes, but does anyone have any idea how to force Red Hat to do this?
Also, while we're at it, does anyone know how to disable OS caching of
the filesystems on Red Hat, to prevent double caching of pages from
the containers (by both the OS and the DB2 instance)?

Thanks,

RoB

Mar 26 '07 #10
