Statistics IO logical reads sometimes 15 million and other times down to 90 thousand?

I am running a query on SQL Server 2000 SP4, Windows 2000 Server, that is not being shared with any other users or any other SQL connections. The db involves a lot of tables, JOINs, LEFT JOINs, UNIONs, etc. OK, it's not pretty code, and my job is to make it better.

But for now, one thing I would like to understand with your help is why the same SP, on the same server, with everything the same and without me changing anything at all in terms of SQL Server (configuration, code, ...), runs in Query Analyzer in 1:05 minutes, and I see one table get a hit of 15 million logical reads:
Table 'TABLE1'. Scan count 2070, logical reads 15516368,
physical reads 147, read-ahead reads 0.

This 'TABLE1' has about 400,000 records.

The second time I ran it, right after, in Query Analyzer again:
Table 'TABLE1'. Scan count 2070, logical reads 15516368,
physical reads 0, read-ahead reads 0.
I can see the physical reads are now 0, which is understandable since SQL Server is now fetching the data from memory.

But now, the third time I ran it:
Table 'TABLE1'. Scan count 28, logical reads 87784,
physical reads 0, read-ahead reads 0.
The scan count went down from 2070 to 28. I don't actually know what the scan count is. Did it scan the table 28 times?
The logical reads went down from 15 million to 87,784, with a 2-second execution time!
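
(In case it matters, this is how I am capturing these numbers in Query Analyzer; dbo.MyProc is just a stand-in for my real SP:)

SET STATISTICS IO ON
SET STATISTICS TIME ON

EXEC dbo.MyProc  -- placeholder for the actual stored procedure

SET STATISTICS IO OFF
SET STATISTICS TIME OFF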

Does anybody have any idea why these numbers change?

The problem is that I tried various repeats of my test: I rebooted the SQL Server, dropped the database, restored it, and ran the same exact query, and it took 3-5 seconds with 87,784 reads instead of 15 million.
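
(The drop and restore were along these lines; the database name and backup path here are made up:)

DROP DATABASE MyDb
RESTORE DATABASE MyDb FROM DISK = 'C:\backup\MyDb.bak'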

Why don't I see 15 million now?

Well, I kept working during the day, and I happened to run into another stretch of 15 million reads. A few runs kept going at the pace of 15 million reads in over 1 minute, and eventually the numbers went back down to 87,784 reads and 2 seconds.

Is it my way of using the computer? Maybe I was opening too many applications and SQL Server was fighting for memory? Would that explain the 15 million reads?
I went and changed my SQL Server to use a fixed memory of 100 MB, restarted it, and tested the same query again, but it continued to show 87,784 reads with a 2-second execution time.
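
(For reference, this is how I understand the fixed-memory setting maps to sp_configure: min and max server memory set to the same value, in MB:)

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'min server memory', 100
EXEC sp_configure 'max server memory', 100
RECONFIGURE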

I opened all kinds of applications, redid the same test, and I was never able to see 15 million reads again.

Can someone help me with suggestions on what this problem could be, and on whether there is a way I could get to see 15 million reads again?

By the way, with the limited info you have here about the database I am using: is 87,784 a terrible number of reads, average, or normal, when the largest of the many tables involved in this SP has 400,000 records?

I am guessing it is a terrible number, am I correct?
I would appreciate your help.

Thank you

Mar 4 '06 #1
I can add that I have 1.5 GB of RAM. Everything is on the local server, and I think 13 rows are being returned.
Mar 4 '06 #2
serge (se****@nospam.ehmail.com) writes:
I can see the physical reads are now 0, which is understandable since SQL Server is now fetching the data from memory.

But now, the third time I ran it:
Table 'TABLE1'. Scan count 28, logical reads 87784,
physical reads 0, read-ahead reads 0.

The scan count went down from 2070 to 28. I don't actually know what the scan count is. Did it scan the table 28 times?
Or at least accessed. Say that you run a query like:

SELECT SUM(OD.UnitPrice * OD.Quantity)
FROM Orders O
JOIN [Order Details] OD ON O.OrderID = OD.OrderID
WHERE O.CustomerID = N'VINET'

This gives the following statistics IO:

Table 'Order Details'. Scan count 5, logical reads 10, physical reads 0,
read-ahead reads 0.
Table 'Orders'. Scan count 1, logical reads 2, physical reads 0,
read-ahead reads 0.

There are five orders for VINET in Northwind..Orders. SQL Server seeks the index on Orders.CustomerID, and for every match it looks up the order in Order Details. Not by a scan, but by seeking the index. But the output from STATISTICS IO does not make that distinction.

Note that this is what happens with nested-loop joins. Had the join been implemented as a merge or hash join, the scan count would have been 1 for both tables, but then it would also truly have been a scan.
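
For instance, if you force a hash join on the query above (purely to illustrate the point, not as a tuning recommendation), the scan count becomes 1 for each table:

SELECT SUM(OD.UnitPrice * OD.Quantity)
FROM Orders O
JOIN [Order Details] OD ON O.OrderID = OD.OrderID
WHERE O.CustomerID = N'VINET'
OPTION (HASH JOIN)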
The logical reads went down from 15 million to 87,784, with a 2-second execution time!

Does anybody have any idea why these numbers change?
My guess is auto-statistics. When you run queries that scan a lot of data, SQL Server takes the opportunity to sample statistics about the data. Once those statistics are available, the optimizer may find a better way to implement the query.
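
If you want to verify that auto-stats is enabled for your database, sp_dboption will tell you (put your own database name in):

EXEC sp_dboption 'YourDatabase', 'auto create statistics'
EXEC sp_dboption 'YourDatabase', 'auto update statistics'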

If you were to create a new table and move the data over to that table, my prediction is that you will see the same result: first 15 million reads a few times, and then a reduction to 87,000 reads. If you also study the query plan, you will see that it changes.
Is it my way of using the computer? Maybe I was opening too many applications and SQL Server was fighting for memory?


No, that has nothing to do with it.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Mar 4 '06 #3
Thanks Erland, I did learn a few more things today after trying the things you said. Plus, I also remembered one of my other posts where you suggested comparing the execution plans in text format. This is something I'll be doing while I continue building my understanding of the execution plan of my query.
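
A sketch of what I plan to do, if I understood the suggestion correctly (dbo.MyProc stands in for my real SP; SET SHOWPLAN_TEXT has to be the only statement in its batch):

SET SHOWPLAN_TEXT ON
GO
EXEC dbo.MyProc  -- not actually executed; only the estimated plan comes back as text
GO
SET SHOWPLAN_TEXT OFF
GO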
If you were to create a new table and move the data over to that table, my prediction is that you will see the same result: first 15 million reads a few times, and then a reduction to 87,000 reads. If you also study the query plan, you will see that it changes.
I did test this, but I started getting reads of 100,000 instead of 15 million. I can't say I did a clean job, though. I did a

SELECT *
INTO TABLE2
FROM TABLE1

Then I dropped the foreign keys of TABLE1, dropped TABLE1, and renamed TABLE2 to TABLE1.
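
(The rename step, for the record, was just:)

EXEC sp_rename 'TABLE2', 'TABLE1'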

Then I re-ran my query and got 100,000 reads.

Anyhow, what I found afterwards is, as you said, statistics-related.
My guess is auto-statistics. When you run queries that scan a lot of data, SQL Server takes the opportunity to sample statistics about the data. Once those statistics are available, the optimizer may find a better way to implement the query.


This time I restored the same db on the second SQL Server instance on my machine. I ran the query dozens of times and kept getting 15 million reads in over a minute.
I left the machine idle for a few hours, came back, re-ran the query, and got the same 15 million reads...
Immediately I ran

UPDATE STATISTICS dbo.TABLE1 WITH FULLSCAN

and re-ran the query, and it took 2 seconds and 87 thousand reads!
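
Something I found handy while checking this: in SQL 2000 you can see through sysindexes when the statistics on a table were last updated:

SELECT name, STATS_DATE(id, indid) AS stats_last_updated
FROM sysindexes
WHERE id = OBJECT_ID('dbo.TABLE1')
  AND indid BETWEEN 1 AND 254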

OK, I can tell you that yesterday, on my first SQL Server instance, I ran the Database Maintenance Plan wizard and chose to update the stats using a 50% sample of the data. That explains why my queries were running in 2 seconds. But I still can't understand why my queries kept running in 2 seconds if I was dropping the database and restoring it brand new! I thought the stats info would be stored inside the database. It's almost as if either the stats info or maybe the execution plan is being stored in the master database of my SQL Server??
I just did another test on my second SQL Server instance. I dropped the db, restored it again, and ran my query; it took 10 seconds to execute. I ran it again and it took 2 seconds and 87 thousand reads. So it looks to me like something is being stored in the master database of my SQL Server instance; otherwise, why is it not taking 15 million reads anymore?

Thank you


Mar 5 '06 #4
Even when I restored the same db using a different db name and different physical file names, and then ran my query, the first time it took 8 seconds and then 2, all with no 15 million reads!
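
(I.e., something like this, with made-up names; note that the MOVE clauses reference the logical file names, which is why those stayed the same:)

RESTORE DATABASE MyDb2 FROM DISK = 'C:\backup\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'C:\data\MyDb2.mdf',
     MOVE 'MyDb_Log' TO 'C:\data\MyDb2.ldf'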

The only common thing would be the logical file name, which I did not change each time I restored this same db.

It's almost as if every time we restore a database, we should immediately re-index it and update all the statistics right after?
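
Something along these lines right after the RESTORE, I mean (DBCC DBREINDEX per table, or sp_updatestats for a sampled update of every table in the database):

DBCC DBREINDEX ('dbo.TABLE1')  -- rebuilds all indexes on the table, and their statistics with them
EXEC sp_updatestats            -- sampled statistics update across the whole database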

I am sure that if I now restore this same db on another SQL Server instance, I will keep getting 15 million reads until I update the statistics of one single table. After that, that SQL Server instance will never take 15 million reads again, even if I drop the db and restore it. I don't know why.
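
One test I suppose could separate the theories (this flushes the plan cache for the whole server, so only on a test box): if the 15 million reads come back after clearing the cache, it was a cached plan; if not, it must be something kept with the data itself, like statistics:

DBCC FREEPROCCACHE  -- flush all cached execution plans on the instance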

Mar 5 '06 #5
