Hi hi.
Being fairly new to the whole DB2/database scene, I've been wondering
how other companies do their stuff.
For instance, we provide a SaaS service and have a few 40U racks
stuffed with web servers, application servers, and one database
server, so the workload is mostly OLTP.
We run DB2 8.2 on a Linux box with a fair amount of memory and use a
NetApp filer for storage. At the moment we make nightly online backups
of the database and use SnapMirror replication between our production
and backup locations through the NetApps.
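For reference, the nightly online backup described above boils down to a single DB2 command. This is only a sketch: the database name SAASDB and the backup path are placeholders (not from the post), and the script just prints the command it would run.

```shell
#!/bin/sh
# Sketch of a nightly online backup job for DB2 8.2.
# SAASDB and /backup are hypothetical placeholder names.
DB=SAASDB
BACKUP_DIR=/backup

# ONLINE keeps the database available during the backup;
# INCLUDE LOGS (new in 8.2) bundles the active logs into the
# backup image so it can be restored to a consistent point on its own.
CMD="db2 BACKUP DATABASE $DB ONLINE TO $BACKUP_DIR COMPRESS INCLUDE LOGS"
echo "$CMD"
```

In practice a job like this runs from cron, with the image written to the filer so SnapMirror picks it up along with everything else.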
But since I don't know how other companies do it, I'm wondering
whether this is the best approach, especially because I keep running
into capacity problems of one kind or another (storage, backup
windows, memory, bandwidth, and so on).
Having explained all that, I'm curious how other companies handle
their databases. Do you all have huge mainframes with terabytes of
storage and Tivoli Storage Manager?
If you have a 4 TB database, how do you make backups without crippling
performance and without taking the database down for a few hours?
(Hard to justify in these 24/7 times.) Do you back up at the database
level at all, or rather at the storage level? Or do you perhaps take
an offline backup on the first day of each month and collect logs for
the rest of the time to keep a recent recovery point?
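That monthly-baseline-plus-logs scheme maps onto DB2's archive logging and rollforward recovery. A rough sketch, again with placeholder names (SAASDB and the paths are invented), that prints the commands involved rather than executing them:

```shell
#!/bin/sh
# Hypothetical names; adapt to the real environment.
DB=SAASDB
ARCHIVE_DIR=/archlogs

# 1. Switch from circular to archive logging so every log file is
#    retained (LOGARCHMETH1 is the 8.2 way to set the archive target).
echo "db2 UPDATE DB CFG FOR $DB USING LOGARCHMETH1 DISK:$ARCHIVE_DIR"

# 2. Take one full backup as the monthly baseline.
echo "db2 BACKUP DATABASE $DB TO /backup"

# 3. Recovery = restore the baseline image, then replay archived logs.
echo "db2 RESTORE DATABASE $DB FROM /backup"
echo "db2 ROLLFORWARD DATABASE $DB TO END OF LOGS AND STOP"
```

The trade-off is recovery time: the less frequent the baseline, the more log to roll forward through after a restore.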
Just curious, because sometimes I feel like I'm reinventing the wheel.
-R-