Bytes IT Community

"Time waited for prefetch (ms)" very high

Hi all,

I've got a database of approximately 350 GB in which I'm seeing a very high Time
waited for prefetch. This is directly out of the snapshot for the db
(I assume these figures are for the entire database, as I pulled them out of get
snapshot for all on {dbname}):

Total buffer pool read time (milliseconds) = 45660639
Total buffer pool write time (milliseconds) = 42128058
Total elapsed asynchronous read time = 33856320
Total elapsed asynchronous write time = 21817205
Asynchronous data read requests = 2127045
Asynchronous index read requests = 92769
No victim buffers available = 8785764
LSN Gap cleaner triggers = 691047
Dirty page steal cleaner triggers = 1298
Dirty page threshold cleaner triggers = 6637
Time waited for prefetch (ms) = 2386004
Unread prefetch pages = 945322
Direct reads = 3298851768
Direct writes = 17558698
Direct read requests = 6794130
Direct write requests = 69421
Direct reads elapsed time (ms) = 60671696
Direct write elapsed time (ms) = 3467406
Database files closed = 240580
Data pages copied to extended storage = 0
Index pages copied to extended storage = 0
Data pages copied from extended storage = 0
Index pages copied from extended storage = 0
So with those figures in mind, is the 40 minutes I see in time waited
for prefetch too high? I've read repeatedly that if it's in the seconds
you should be concerned... but I've not read much about fixing it other
than: increase I/O servers (which I did, to 20), increase prefetch size
(ours is 20 times the extent size), turn on I/O parallelism (which is
DB2_PARALLEL_IO=*), and make sure you're using a RAID array (we're
using an IBM SAN that gives our LPAR at least 16-20 disks).
So what am I missing?
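For reference, a snapshot like the one above can be pulled with something along these lines (a sketch, not the poster's exact commands; SAMPLE stands in for the real database name, and the buffer pool monitor switch must be on for these counters to populate):

```shell
# Turn on the buffer pool monitor switch for this session, then take a
# database snapshot and filter for the prefetch/async counters.
# SAMPLE is a placeholder database name.
db2 update monitor switches using bufferpool on
db2 get snapshot for all on SAMPLE | grep -iE "prefetch|asynchronous"
```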

Jul 31 '06 #1
3 Replies


Maybe it is a problem with dirty pages.
Try giving a lower value to the changed pages threshold database
configuration parameter (CHNGPGS_THRESH).

(Maybe give it a relatively low value, say 60%, and try.)
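Applying that suggestion would look something like this (a sketch; SAMPLE is a placeholder database name, and the percentage should be chosen for your workload):

```shell
# Lower the changed pages threshold so the page cleaners start writing
# dirty pages out earlier. SAMPLE is a placeholder database name.
db2 update db cfg for SAMPLE using CHNGPGS_THRESH 60

# Confirm the new value.
db2 get db cfg for SAMPLE | grep -i chngpgs
```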

Aug 1 '06 #2

Are you concerned about the "prefetch wait time" because the value is too
high, or because the response time / performance of the queries /
system is not good?

Prefetches are most often (always?) asynchronous, so let's look at your
async numbers.
Your stats indicate about 2,000,000 reads (Asynchronous data
read requests = 2127045).
All those async reads took about 33,000,000 units (ms?) (Total elapsed
asynchronous read time = 33856320).
Of that, the prefetch wait time was about 2,000,000 ms (Time waited for
prefetch (ms) = 2386004).

Thus your wait ratio was about 1/15, or about 7% of the total read time.
That is not excessive, relatively speaking.

I would use the prefetch wait time in conjunction with (or in support
of) other data elements, not as a standalone metric.
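The arithmetic behind that ratio, using the counters quoted from the snapshot:

```python
# Wait-ratio arithmetic from the snapshot counters quoted above.
prefetch_wait_ms = 2_386_004   # Time waited for prefetch (ms)
async_read_ms = 33_856_320     # Total elapsed asynchronous read time

ratio = prefetch_wait_ms / async_read_ms
print(f"prefetch wait is {ratio:.1%} of async read time")  # about 7.0%
```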
Aug 1 '06 #3

I was mostly concerned with the Time waited for prefetch value because
of this statement in a tuning guide I found on IBM's site:
http://www-128.ibm.com/developerwork...-0404mcarthur/

"NUM_IOSERVERS (DB)
I/O servers are used to perform prefetching and this parameter
specifies the maximum number of I/O servers for a database.
Non-prefetch I/Os are scheduled directly from the database agents and
as a result are not constrained by this parameter. Start by setting
equal to the number of physical disks that the database resides on
(even if many disks in a disk array or many disks in a logical volume)
+ 1 or 2, but not more than 4-6 times the # of CPUs. Having too large a
value will not hurt performance, unless it exceeds the 4-6 times the #
of CPUs.

If you see "Time waited for prefetch (ms)" in the seconds, you may want
to add an IO Server to see if performance improves.
"

And as for the dirty page steal cleaners, I only started seeing those
a lot after I changed the extent:prefetch ratio on all the table
spaces from 1:4 to 1:20 and changed the IOSERVERS from 10 to 20. I also
started seeing the great number of unread prefetch pages, which I
really did not see before doing that.

The other reason I was doing this is that we're trying to boost
performance on this db, although I really don't know the benchmarks for
where it should be. So I'm looking for bottlenecks, and this was a
candidate. Anyway, our CHNGPGS_THRESH is 45, which from what I read
is 15 less than the default value, but I'm going to try lowering it some
more. Before I do, though, are there any negative repercussions to
having it too low? The only one I can think of is having all your I/O
cleaners kick off more times than necessary, which would eat up your
CPU. Am I on the right track?

Aug 1 '06 #4
