Bytes | Developer Community

"connect to dbname" is very slow...

When I first ran the "connect to dbname" command after "db2start", it took
about 10 minutes. I saw a process named "db2sync" using about 700 MB of memory.

The OS is Linux Advanced Server V2.1 and the database is DB2 V8.1.5. The
machine is a Compaq ProLiant 8000 with 1 GB of RAM. What can I do? Thanks.
Nov 12 '05 #1
Ian
Hunter wrote:
> When I first ran the "connect to dbname" command after "db2start", it took
> about 10 minutes. I saw a process named "db2sync" using about 700 MB of
> memory.
>
> The OS is Linux Advanced Server V2.1 and the database is DB2 V8.1.5. The
> machine is a Compaq ProLiant 8000 with 1 GB of RAM. What can I do? Thanks.


The first connection to the database will incur a "penalty" associated
with allocating bufferpools, etc. You can avoid this by using
"activate database".
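A minimal sketch of what that looks like from the command line (SAMPLE is a placeholder database name; adjust to your own):

```shell
# Start the instance, then explicitly activate the database so the cost
# of allocating bufferpools etc. is paid up front, not on the first CONNECT.
db2start
db2 activate database SAMPLE

# Later connections should then be fast:
db2 connect to SAMPLE
db2 terminate

# When shutting down, deactivate before stopping the instance:
db2 deactivate database SAMPLE
db2stop
```

ACTIVATE DATABASE is typically put in a startup script, so the database is warmed once at boot rather than by the first unlucky application.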

10 minutes sounds like a long time to me. Does your system start swapping
when you connect to DB2?

Nov 12 '05 #2
Ian wrote:
> The first connection to the database will incur a "penalty" associated
> with allocating bufferpools, etc. You can avoid this by using
> "activate database".
>
> 10 minutes sounds like a long time to me. Does your system start swapping
> when you connect to DB2?

Thanks for answering the question I did not ask.

Yesterday I increased a buffer pool for an index from 60000 pages to
192000 pages (I wanted to get the entire index into memory) and then

db2 connect to DATABASE

started taking a long time. Not 10 minutes, though. I wondered why that
was, but you told me the answer.
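For reference, a buffer pool change like the one described above would look roughly like this (BP_INDEX is a placeholder bufferpool name, and a 4 KB page size is assumed):

```shell
# Resize an existing bufferpool from 60000 to 192000 pages.
# At a 4 KB page size, 192000 pages is roughly 750 MB of memory,
# which is allocated when the database is activated or first connected to --
# hence the slow first CONNECT after the change.
db2 connect to DATABASE
db2 "ALTER BUFFERPOOL BP_INDEX SIZE 192000"
db2 terminate
```

Depending on the DB2 version and available memory, the resize may be applied immediately or deferred until the next database activation; either way the first activation after the change carries the allocation cost.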
Can I ever make the buffer pool so large that it would be quicker to read
the disk than to rummage around in the buffer pool? (DMS storage is used
here.) I know that decades ago, UNIX had an optimum memory size: if you put
more RAM in the machine than that, it started to slow down, because a
sequential search was used to find pages in the buffer cache. That problem
has long since been fixed, of course.

In other words, if I had enough RAM to put the entire database in memory,
would it _necessarily_ run faster than leaving it mostly on the disks? My
motherboard will hold only 16 GB of RAM and I have only 4 GB installed, so
this will probably not be a problem, but I am curious.

--
.~. Jean-David Beyer Registered Linux User 85642.
/V\ Registered Machine 241939.
/( )\ Shrewsbury, New Jersey http://counter.li.org
^^-^^ 09:45:00 up 5 days, 11:56, 5 users, load average: 4.10, 4.82, 7.21

Nov 12 '05 #3
