Here is my situation.
We run a mission-critical application that uses DB2 7.2 as its data
repository. The database is hosted at a data center in Los Angeles, CA.
Our main office is in Portland, Oregon.
For disaster recovery purposes, we have been taking an offline backup of
the database each night and then sending the backup file and the log files
to our Portland office over a T1 connection. We have a DR machine in
Portland that is identical to the production machine in LA. In the event of
a problem, our plan was to restore the backup and log files on the Portland
machine and then redirect all users to that database.
First Issue: Because of the increasing database size, we have reached the
point where the overnight time window is not enough to complete the backup
and file transfer process before the start of business the next day.
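To put numbers on the first issue, here is a back-of-the-envelope calculation; the 10 GB backup size below is an assumption for illustration (substitute your actual size), as is the 80% usable-bandwidth figure:

```python
# Rough estimate of how long the nightly backup transfer takes over a T1.

T1_BPS = 1.544e6          # T1 line rate, bits per second
EFFICIENCY = 0.80         # assumed usable fraction after protocol overhead

def transfer_hours(backup_gb: float) -> float:
    """Hours needed to push a backup of the given size over the T1."""
    bits = backup_gb * 8 * 1024**3
    return bits / (T1_BPS * EFFICIENCY) / 3600

print(f"{transfer_hours(10):.1f} hours")  # → "19.3 hours"
```

Even a 10 GB backup alone needs roughly 19 hours of line time, before you add the backup step itself, so the overnight window disappears fast as the database grows.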
Second Issue: Since this is a mission-critical application, we have been
instructed that in the event of a problem the application can be
unavailable for at most 1 hour (RTO = 1 hr). We also cannot stand to lose
more than 1 hour's worth of data (RPO = 1 hr). Therefore our old plan,
which could cause a data loss of up to 24 hours, is no longer good enough.
I am toying with the notion of replicating the LA database to the DR
machine in Portland over the T1 link. Is this feasible? Would it
accomplish our objectives?
Given this situation, what is the best way to accomplish our DR goals? (RTO
= 1 hr. and RPO = 1 hr.)
Any help with this is greatly appreciated.
Thanks in advance,
John
johnm wrote: The database is hosed at a data center in Los Angeles, Ca.
I hope you mean "hosted" not "hosed". :-)
johnm wrote: I am toying with the notion of replicating the database in LA to the DR machine in Portland over the T1 link. Is this possible? Can it be done? Would it accomplish our objectives?
Using replication for an HA solution is generally a bad idea. Much too
complicated.
Given this situation, what is the best way to accomplish our DR
goals? (RTO = 1 hr. and RPO = 1 hr.)
With DB2 UDB V7.2 (which is out of support, as you probably know), your
options are limited.
Have you looked into log shipping? You only ship the log files you need,
plus an occasional full backup. There is a good article on DB2 Developer
Domain about implementing this, http://tinyurl.com/89vwg , and it
includes links to docs on other HA features of DB2 UDB.
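For what it's worth, the shipping half of such a scheme can be sketched in a few lines; the directory names and the S*.LOG pattern are assumptions here, so adjust them to however your userexit archives logs:

```python
import shutil
from pathlib import Path

def ship_new_logs(archive_dir: str, outbox_dir: str, shipped: set) -> list:
    """Copy any newly archived DB2 log files (S*.LOG) that we have not
    shipped yet into an outbox directory for transfer to the DR site.
    Returns the names of the files copied on this pass."""
    archive, outbox = Path(archive_dir), Path(outbox_dir)
    outbox.mkdir(parents=True, exist_ok=True)
    new = []
    for log in sorted(archive.glob("S*.LOG")):
        if log.name not in shipped:
            shutil.copy2(log, outbox / log.name)
            shipped.add(log.name)
            new.append(log.name)
    return new
```

You would run something like this from cron every few minutes, push the outbox over the T1, and apply the files on the DR side with a rollforward. Purely a sketch, not a hardened tool.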
I would migrate to V8.2 and use Q Replication. Ian is partly right that
you have to maintain a subscription for each table, but there are people
doing this at thousands of transactions per second while maintaining under
1 second of latency. Ask your IBM rep.
Larry Edelstein
johnm wrote: Given this situation, what is the best way to accomplish our DR goals? (RTO = 1 hr. and RPO = 1 hr.)
Hi JohnM,
If it is just the database that you need to be highly available, you can use
Data Propagator to replicate each change as it happens. This also lets you
spread the changes across the day rather than send them in one batch -
near-real-time data replication. One good thing about Data Propagator is
that you do not need to set up an MQ environment, as you would with Q
Replication. Even if you already have MQ, you might still need to upgrade
your MQ environment, especially if you are currently using it only for
messaging. Having said that, Data Propagator's performance starts to
degrade once you hit a change rate of around 5,000 changes per second, in
which case Q Replication may be the better option. More IT investment might
be needed (going to DB2 8.2, as per one of the suggestions in this thread),
but that is still cheaper than not meeting your Service Level Agreements. I
have seen this in situations where volumes hit 5,000 changes per second.
Hope this helps,
RdR
"johnm" <jo***@matrixsg.com> wrote: Given this situation, what is the best way to accomplish our DR goals? (RTO = 1 hr. and RPO = 1 hr.)
If you're going to consider moving to V8.2 anyway, another solution you
should definitely look at is HADR, which is highly suited to exactly
this kind of disaster recovery scenario. HADR essentially automates
the hand-rolled log shipping solution mentioned above, while reducing
the unit of transfer to a log buffer flush (down from a full log
file). This means the standby can run very close behind the primary,
with little (ASYNC mode) or no (SYNC mode) transaction loss possible at
failover. In addition, because the standby keeps up with current work
on the primary, failover can be accomplished very quickly
(typically in a few seconds to a few minutes), as there is little or
no additional log data to play through at the time the operation is
initiated.
HADR is also very simple to set up and manage. HADR operates on an
entire database, replicating all logged operations, as opposed to other
DB2 Replication offerings that require configuration by database
object.
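As a rough sketch, the V8.2 setup boils down to a handful of commands; the database name, hostnames, instance name, and ports below are all placeholders, and NEARSYNC is just one of the possible synchronization modes:

```shell
# On the standby: restore a recent backup of the primary (leaving it in
# rollforward-pending state), then point it at the primary.
db2 restore db MYDB from /backup replace history file
db2 update db cfg for MYDB using HADR_LOCAL_HOST dr-portland \
    HADR_LOCAL_SVC 55001 HADR_REMOTE_HOST prod-la \
    HADR_REMOTE_SVC 55001 HADR_REMOTE_INST db2inst1 \
    HADR_SYNCMODE NEARSYNC
db2 start hadr on db MYDB as standby

# On the primary, set the mirror-image HADR_* parameters, then:
db2 start hadr on db MYDB as primary

# At failover time, on the standby:
db2 takeover hadr on db MYDB by force
```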
Whether your existing comm facility is sufficient depends on two
main factors:
1. Whether it can handle the expected data rate. You can estimate the
data rate by looking at how fast your system generates DB2 log data.
With HADR, the amount of data transferred will be modestly greater than
the existing log generation rate (due to some log pages being sent more
than once as they fill up, and to occasional control messages).
2. Depending on the synchronization mode you choose, the round-trip
latency on your network could also come into play. SYNC and NEARSYNC
modes require acknowledgements of log data receipt from standby to
primary, and can put backpressure on the logging rate of the primary if
those acknowledgements are not delivered fast enough relative to the
load the application is driving.
http://www-306.ibm.com/software/data/db2/udb/hadr.html
http://publib.boulder.ibm.com/infoce...n/c0011267.htm
ftp://ftp.software.ibm.com/ps/produc...S/db2hae81.pdf (Chapter 7)
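To make factor 1 concrete, here is a quick estimate of the log rate a T1 can sustain; the 80% efficiency and 20% headroom figures are assumptions, and you should measure your actual log generation rate on the primary:

```python
T1_BPS = 1.544e6   # T1 line rate, bits per second
EFFICIENCY = 0.80  # assumed usable fraction after TCP/protocol overhead

def max_log_rate_mb_per_hour() -> float:
    """Upper bound on log data a T1 can carry, in MB/hour."""
    return T1_BPS * EFFICIENCY / 8 / 1e6 * 3600

def link_ok(log_mb_per_hour: float) -> bool:
    # Leave ~20% headroom for page resends and control messages.
    return log_mb_per_hour < max_log_rate_mb_per_hour() * 0.8

print(round(max_log_rate_mb_per_hour()))  # → 556
```

So if the primary generates much more than roughly 400-450 MB of log per hour, the T1 itself becomes the bottleneck regardless of which HA mechanism you pick.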
Regards,
- Steve P.
IBM DB2 UDB for LUW Development
Portland, OR
I would go for HADR as suggested previously. I am using HADR between a
database in Sydney (AU) and one in Auckland (NZ), and it works like a
dream. Automatic client reroute helps as well: the clients automatically
switch over to the standby when we bring the primary down.
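For anyone curious, configuring the reroute is a single catalog update on the server; the hostname and port below are placeholders:

```shell
# Tell clients of the primary where the standby lives; DB2 clients cache
# this and retry against the alternate server on connection failure.
db2 update alternate server for database MYDB using hostname dr-portland port 50000
```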
Cheers,
"Steve Pearson (news only)" <st*******@my-deja.com> wrote: If you're going to consider moving to V8.2 anyway, another solution you should definitely look at is HADR, which is highly suited to exactly this kind of disaster recovery scenario. [...]