Hi Spencer,
When I first encountered degradation of federated data, my first reaction
was to upgrade to the latest and fastest hardware, fine-tune DB2, and use
Materialized Query Tables. My conclusion: with federated queries over 3
million records, none of the above helped much. Good thing we were already
scheduled for an upgrade; otherwise I would have asked our company to spend
a lot on hardware for small improvements. The hardware upgrades and the fine
tuning helped, but not enough to justify the time we had to wait for the
data to be available.

I might be wrong, so I think the right first step is to use monitoring
tools to see where the bottleneck is. If the bottleneck is getting data from
the federated sources, then based on our experience even a CPU that is 20
times faster will not bring the wait time down by half. If the bottleneck is
writing the data, perhaps more DASDs to write on, with fast RPMs and large
caches, would help; but our experience is that if you are writing around
20,000 inserts, updates, and deletes per minute, the fastest DASDs and the
biggest cache will not help either. As for sending data over the network:
if your transactions stay about the same size, the bigger the pipe the
better, but if the I/O grows to its maximum, your pipe will always be full.
You mentioned your data will grow by 30 million records; calculate how much
data you will be sending per minute (or will you be batching every hour or
so?). It is just hard to send purchased hardware back to the store.
Like I mentioned, we found that replicating the data to a smaller local
table will do the job. Data Propagator is free anyway with DB2 LUW (I think
for mainframes and AS/400s you have to buy it).
Just trying to play Devil's advocate, not in any way trying to criticize. I
hope I am wrong and that having fast channels in your network does the job.
Thanks,
RdR
<sp*****@tabbert.net> wrote in message
news:11*********************@g14g2000cwa.googlegroups.com...
It appears that the hardware does support some sort of VLAN between the
LPARs, which I believe will be gigabit, so this should help. Currently we
are not using MQTs to keep a local copy of the data in the remote tables;
we are just running queries across the nicknames. Utilizing replicated MQTs
is probably something we will have to consider more closely with the
hardware changes.
Spencer