"Tim Schaefer" <we*******@datad.com> wrote in message
news:Sr********************@fe12.atl2.webusenet.com...
Mark,
Thanks much for your help and suggestions. Sorry if my post reads like it
came from a jerk; I am typically either to-the-point or long-winded, and email
is not my strong point in life. At work I will typically get up from my desk and
walk to another person's desk instead of using the phone or email. Email usually
gets me in trouble, and I'm terrible with a phone beyond dialing a number and
chatting. In your case we'll have to settle for the worst of my skillsets; my apologies.
I did peruse the TPC site a week or so ago, but it is my belief that there
are lies, damn lies, and statistics; thus I'd rather hear from real people who have
actually implemented the ICE and can tell me in advance whether we would
be spending money foolishly or wisely. It is always easy to suggest that
somebody else buy something, but more difficult to absorb the loss if the decision
is horribly wrong. DB2 continues to gain ground with me intellectually; however, I need
to hear that others have found it appealing enough to buy, that it is as good as it
looks in the brochure, and that it works as advertised.
Thanks,
Tim
Tim,
Thanks for your post.
I think you need to approach the ICE solution from two different
perspectives: software and hardware.
The DB2 software is ESE with partitioning option running on Linux (in this
case with support for 64 bit AMD processors). The multi-node configuration
of DB2 runs on other Linux boxes also, so the ICE offering is not
necessarily new from that standpoint. DB2 ESE (and especially its
predecessor DB2 V7 EEE) is running on many other platforms, including AIX
and Windows, and there should be many customer references available, even
some on Linux (not using the ICE hardware).
The hardware component of the ICE offering is really just a bunch of PCs
(usually 2 processors each) running AMD 64 bit processors with some shared
disk components. This is not a true "shared nothing" architecture
(despite what IBM says), since the disks are shared, and in most DB2
configurations (like the TPC benchmark configuration) there are two DB2
partitions per node, which is not shared nothing since the partitions share
the OS memory (although you could configure DB2 with one partition per node
if you really wanted to).
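For what it's worth, the partitions-per-node layout is controlled by DB2's
db2nodes.cfg file. A rough sketch of a two-box setup with two logical
partitions per box (hostnames are made up for illustration) might look like:

```
# db2nodes.cfg -- one line per database partition:
#   partition-number  hostname  logical-port
# Two physical boxes, two logical partitions each:
0 node01 0
1 node01 1
2 node02 0
3 node02 1
```

Keeping only one line per host (and renumbering the partitions) would give
the one-partition-per-node layout, which is closer to shared nothing.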
Since the days when Teradata invented shared nothing parallel database
processing, huge enhancements in disk technology and memory management have
lessened the need for an absolutely shared nothing configuration (although,
to repeat, DB2 will support shared nothing if you want to set it up that way
from a hardware standpoint). DB2 does not care whether you have shared disks
between nodes or multiple partitions per node (except from a performance
standpoint). All databases that support a multi-node parallel configuration
support shared nothing hardware if set up that way, but even Teradata
abandoned absolutely shared nothing hardware about 8 years ago. ICE is a
"shared less" architecture in a relatively affordable package (compared to
other similar configurations).
So maybe if you narrowed down your concern to the software part or the
hardware part, you might be able to better figure out how comfortable you
are with the solution. You could even get a couple of AMD 64 bit boxes and
create a 2 node system and try out DB2 ESE with partitioning option in a
proof of concept if you are really concerned about the ICE. From my
perspective, the ICE is not really anything all that new; it is just some
existing software and new hardware that has been packaged as a very
affordable MPP system.
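As a rough illustration of such a proof of concept (DB2 V8-era syntax; the
database, tablespace, table, and column names here are made up), the commands
on a 2 node system might look something like:

```
db2start
db2 "CREATE DATABASE pocdb"
db2 "CONNECT TO pocdb"
# Spread data across all partitions defined in db2nodes.cfg:
db2 "CREATE DATABASE PARTITION GROUP pg_all ON ALL DBPARTITIONNUMS"
db2 "CREATE TABLESPACE ts_poc IN DATABASE PARTITION GROUP pg_all
     MANAGED BY SYSTEM USING ('/db2data')"
# Hash-partition the table across the nodes by customer id:
db2 "CREATE TABLE sales (cust_id INT NOT NULL, amount DECIMAL(10,2))
     IN ts_poc PARTITIONING KEY (cust_id)"
```

That would be enough to load some representative data and see how queries
spread across the two boxes before committing to the full ICE purchase.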