se*****@yahoo.com wrote:
Serge Rielau wrote:
Typically DPF (the Database Partitioning Feature) is used for
warehousing, but OLTP applications, when designed properly, can also
take advantage of it.
Serge,
I'm interested in this. Can you point us toward additional
information? It seems to me that the shared-nothing architecture is
superior for data warehousing but not optimal for OLTP. It bugs me a
bit that for the OLTP UDB systems I've been involved with we usually
end up with an active-passive setup to allow for failover. We
essentially have an unused piece of hardware. With RAC at least you
get to use all the hardware even though there is an overhead cost.
Lew,
This depends on your frame of mind.
Note that there doesn't have to be a 1-to-1 mapping between a database
and the physical machine.
While the standby HADR database is idle (well, it is rolling forward),
the machine doesn't have to be.
If you have two databases, for example, you can use them to mutually
fail over.
Also, the standby machine can be your test and development environment.
If production fails over, you shut development down if necessary.
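The two-database mutual-failover idea can be sketched with the standard
HADR setup commands. This is only an illustration, assuming hypothetical
database names (SALES, ORDERS), hostnames (hostA, hostB), instance name,
and port; the actual values depend on your environment, and the standby
must first be created from a backup of the primary.

```
# Hypothetical layout: SALES runs as primary on hostA with its standby
# on hostB; ORDERS runs as primary on hostB with its standby on hostA.
# That way both machines do useful work, and each can absorb the other's
# workload on failover.

# On hostA (primary for SALES):
db2 update db cfg for SALES using HADR_LOCAL_HOST hostA \
    HADR_REMOTE_HOST hostB HADR_LOCAL_SVC 55001 \
    HADR_REMOTE_SVC 55001 HADR_REMOTE_INST db2inst1
db2 start hadr on db SALES as primary

# On hostB (standby for SALES, restored from a backup of the primary):
db2 start hadr on db SALES as standby

# Mirror the same steps for ORDERS with the roles of hostA and hostB
# reversed.
```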
Keep in mind that a good test environment should mimic the production
system as closely as possible.
Even if you don't use the other machine at all, you have to consider
the total management cost of active-active clustering.
HADR is certified with the vast majority of apps because it is
non-invasive. You know what you get. There is something to be said for
simplicity.
The gist of DPF in OLTP is that typically the OLTP system does not
undergo as rigorous a design as a warehouse. DPF demands discipline and
rewards you with near-linear scalability.
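To make that discipline concrete: in a DPF design the key decision is
the distribution key, chosen so that a typical OLTP transaction touches
a single partition and joins between related tables stay local
(collocated). A sketch, with hypothetical table and column names (the
DISTRIBUTE BY HASH syntax is from DB2 9; earlier releases used
PARTITIONING KEY):

```
-- Distribute on the column most OLTP transactions use for lookups,
-- so each transaction is routed to one partition.
CREATE TABLE orders (
  customer_id  INTEGER      NOT NULL,
  order_id     INTEGER      NOT NULL,
  order_date   DATE,
  amount       DECIMAL(10,2)
) DISTRIBUTE BY HASH (customer_id);

-- Collocate related tables on the same key so joins between them
-- never cross partitions.
CREATE TABLE order_lines (
  customer_id  INTEGER      NOT NULL,
  order_id     INTEGER      NOT NULL,
  line_no      SMALLINT     NOT NULL,
  qty          INTEGER
) DISTRIBUTE BY HASH (customer_id);
```

Pick the wrong key and most transactions fan out across all partitions,
which is where the "discipline" comes in.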
I just went to www.ibm.com and did two searches: "HADR" and "DB2 BCU".
Both yield quite a bit of information.
(BCU stands for Balanced Configuration Unit, a standard way of
designing DPF systems.)
Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab