Ian wrote:
> The first connection to the database will incur a "penalty" associated
> with allocating bufferpools, etc. You can avoid this by using
> "activate database".
>
> 10m sounds like a long time to me. Does your system start swapping
> when you connect to DB2?
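(For other readers following along: the "activate database" suggestion above can be scripted at system start so the allocation cost is paid once, before any application connects. A minimal sketch using the DB2 command-line processor; MYDB is a placeholder, not the actual database name here.)

```shell
# Pay the bufferpool-allocation cost once, up front, instead of on the
# first application connection. MYDB is an assumed database name.
db2 activate database MYDB

# At shutdown, release the resources again:
db2 deactivate database MYDB
```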
Thanks for answering the question I did not ask.
Yesterday I increased a buffer pool for an index from 60000 pages to
192000 pages (I wanted to get the entire index into memory) and then
db2 connect to DATABASE
started taking a long time. Not 10 minutes, though. I wondered why that
was, but you told me the answer.
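(For reference, the resize described above would have looked roughly like the following; the bufferpool name IBUFPOOL and database name MYDB are assumptions, not taken from the original post.)

```shell
# Hypothetical reconstruction: grow an index bufferpool from 60000 to
# 192000 pages. IBUFPOOL and MYDB are placeholder names.
db2 connect to MYDB
db2 "ALTER BUFFERPOOL IBUFPOOL IMMEDIATE SIZE 192000"
db2 terminate
```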
Can I ever make the buffer pool so large that it would be quicker to read
the disk than rummage around in the buffer pool? (DMS storage used here) I
know decades ago, UNIX had an optimum memory size and if you put more RAM
on the machine than that, it started to slow down because they used a
sequential search of the buffer pool to find pages. They have long since
fixed that problem, of course. In other words, if I had enough RAM to put
the entire database in memory, would it _necessarily_ run faster than
leaving it mostly on the disks? My motherboard will hold only 16 GBytes of
RAM and I have only 4 GBytes installed, so this will probably not be a
problem for me, but I am curious.
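(To illustrate the old UNIX problem mentioned above: a sequential scan of the buffer pool gets slower as the pool grows, whereas a hash-table lookup stays roughly constant. This is a toy sketch in Python with made-up page IDs, not anything specific to DB2's internals.)

```python
# Why a bigger buffer pool need not mean slower lookups: the old approach
# scanned the pool sequentially (cost grows with pool size); modern caches
# locate a page through a hash table (roughly constant cost).

def find_page_linear(pool, page_id):
    # Old-style sequential search: O(n) in the number of pages.
    for page in pool:
        if page == page_id:
            return page
    return None

def find_page_hashed(pool_index, page_id):
    # Hash-table lookup: O(1) on average, regardless of pool size.
    return pool_index.get(page_id)

pool = list(range(192000))           # 192,000 pages, as in the resized pool
pool_index = {p: p for p in pool}

assert find_page_linear(pool, 191999) == 191999
assert find_page_hashed(pool_index, 191999) == 191999
assert find_page_hashed(pool_index, -1) is None
```

The disk-vs-memory question is separate, of course: even a slow in-memory search is normally orders of magnitude faster than a disk read, so the answer in practice hinges on hit ratios, not lookup cost.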
--
.~. Jean-David Beyer Registered Linux User 85642.
/V\ Registered Machine 241939.
/( )\ Shrewsbury, New Jersey
http://counter.li.org
^^-^^ 09:45:00 up 5 days, 11:56, 5 users, load average: 4.10, 4.82, 7.21