Bytes | Software Development & Data Engineering Community
Statistics IO logical reads sometimes 15 million and other times down to 90 thousand?

I am running a query on SQL Server 2000 SP4, Windows 2000 Server,
that is not being shared with any other users or SQL connections.
The database involves a lot of tables, JOINs, LEFT JOINs,
UNIONs, etc. OK, it's not pretty code, and my job is to make
it better.

But for now, one thing I would like to understand with your
help is why the same SP, on the same server, with me changing
nothing at all in terms of SQL Server (no configuration or
code changes), runs in Query Analyzer in 1:05 minutes, and I
see one table get hit with 15 million logical reads:
Table 'TABLE1'. Scan count 2070, logical reads 15516368,
physical reads 147, read-ahead reads 0.

This 'TABLE1' has about 400,000 records
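For reference, output like the above comes from running the procedure with STATISTICS IO (and optionally STATISTICS TIME) enabled for the session; a sketch, where dbo.MySP is a placeholder for the actual stored procedure name:

```sql
-- Enable per-table I/O statistics and timing output for this session
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- dbo.MySP is a placeholder for the actual stored procedure
EXEC dbo.MySP;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```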

The second time I ran it, right after, in Query Analyzer:
Table 'TABLE1'. Scan count 2070, logical reads 15516368,
physical reads 0, read-ahead reads 0.
I can see the physical reads are now 0, which is
understandable since SQL Server is now fetching the data
from memory.

But the third time I ran it:
Table 'TABLE1'. Scan count 28, logical reads 87784,
physical reads 0, read-ahead reads 0.
The Scan count went down from 2070 to 28. I don't actually
know what the Scan count is. Did it scan the table 28 times?
The logical reads went down from 15 million to 87,784,
with 2 seconds execution time!

Does anybody have any idea why these numbers change?

The problem is that I tried various repeats of my test: I
rebooted the SQL Server, dropped the database, restored it,
and ran the same exact query, and it took 3-5 seconds with
87,784 reads versus 15 million.

Why don't I see 15 million now?

Well, I kept working during the day and happened to run into
another round of 15 million reads. A few runs kept going at
the pace of 15 million reads in over 1 minute, and
eventually the numbers went back down to 87,784 and 2
seconds.

Is it my way of using the computer? Maybe I was opening
too many applications and SQL Server was fighting for memory?
Would that explain the 15 million reads?
I changed my SQL Server to use a fixed memory of
100 MB, restarted it, and tested the same query again, but
it continued to show 87,784 reads with 2 seconds execution
time.

I opened all kinds of applications, redid the same test,
and was never able to see 15 million reads again.

Can someone suggest what this problem could be, and
whether there is a way I could make the 15 million reads
appear again?

By the way, with the limited info you have here about the
database I am using: is 87,784 reads a terrible number of
reads, average, or normal, when the largest of the many
tables involved in this SP has 400,000 records?

I am guessing it is a terrible number. Am I correct?
I would appreciate your help.

Thank you

Mar 4 '06 #1
I can add that I have 1.5 GB of RAM. Everything is on the
local server, and I think 13 rows are being returned.
Mar 4 '06 #2
serge (se****@nospam.ehmail.com) writes:
I can see now the physical reads being 0 as it is
understandable that SQL is now fetching the data from
memory.

But now the third time I ran:
Table 'TABLE1'. Scan count 28, logical reads 87784,
physical reads 0, read-ahead reads 0.

The Scan count went down from 2070 to 28. I don't
know what the Scan count is actually. It scanned the
table 28 times?
Or at least accessed. Say that you run a query like:

SELECT SUM(OD.UnitPrice * OD.Quantity)
FROM Orders O
JOIN [Order Details] OD ON O.OrderID = OD.OrderID
WHERE O.CustomerID = N'VINET'

This gives the following statistics IO:

Table 'Order Details'. Scan count 5, logical reads 10, physical reads 0,
read-ahead reads 0.
Table 'Orders'. Scan count 1, logical reads 2, physical reads 0,
read-ahead reads 0.

There are five orders for VINET in Northwind..Orders. SQL Server seeks
the index on Orders.CustomerID, and for every match it looks up the
order in Order Details. Not by a scan, but by seeking the index. But
the output from STATISTICS IO does not make that distinction.

Note that this is what happens with nested-loop joins. Had the join been
implemented as a merge or a hash join, the Scan Count would be 1
for both tables, but then it would also truly have been a scan.
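One way to see whether the optimizer chose seeks or scans, and which join type it used (STATISTICS IO alone won't tell you), is to look at the textual plan; a sketch using the same Northwind query:

```sql
-- Show the estimated plan as text instead of executing the query
SET SHOWPLAN_TEXT ON;
GO
SELECT SUM(OD.UnitPrice * OD.Quantity)
FROM Orders O
JOIN [Order Details] OD ON O.OrderID = OD.OrderID
WHERE O.CustomerID = N'VINET';
GO
SET SHOWPLAN_TEXT OFF;
GO
-- In the output, look for "Index Seek" vs "Index Scan"/"Clustered Index
-- Scan", and for "Nested Loops" vs "Merge Join"/"Hash Match" operators.
```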
The logical reads went down to 87,784 reads from 15
million and 2 seconds execution time!

Anybody has any ideas why this number change?
My guess is auto-statistics. When you run queries that scan a lot of
data, SQL Server takes the opportunity to sample statistics about the
data. Once those statistics are available, the optimizer may find a better
way to implement the query.

If you were to create a new table and move the data over to that
table, my prediction is that you will see the same result: first 15
million reads a few times, and then a reduction to 87,000 reads. If you
also study the query plan, you will see that it changes.
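To check whether auto-statistics could be at work, you can inspect the database settings and see when a table's statistics were last refreshed; a sketch for SQL Server 2000, with dbo.TABLE1 standing in for the real table:

```sql
-- Check the database-level auto-statistics settings
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsAutoCreateStatistics') AS AutoCreate,
       DATABASEPROPERTYEX(DB_NAME(), 'IsAutoUpdateStatistics') AS AutoUpdate;

-- List the statistics objects on the table and their auto-update setting
EXEC sp_autostats 'dbo.TABLE1';

-- Show when the statistics on each index were last updated
SELECT name, STATS_DATE(OBJECT_ID('dbo.TABLE1'), indid) AS LastUpdated
FROM sysindexes
WHERE id = OBJECT_ID('dbo.TABLE1') AND indid > 0;
```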
Is it my way of using the computer? Maybe i was opening
too many applications, SQL was fighting for memory?


No, that has nothing to do with it.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Mar 4 '06 #3
Thanks Erland, I did learn a few more things today after trying the
things you said. Plus, I also remembered one of my other posts where
you suggested comparing the execution plans in text format.
This is something I'll be doing while I continue working on my
understanding of the execution plan in my query.
If you were to create a new table, and move the data over to that
table, my prediction that you will see the same result. First 15
million reads a few times, and then a reduction to 87000 reads. If you
also study the query plan, you will see that it changes.
I did test this, but I started getting reads of 100,000 instead of 15
million. I can't say I did a clean job, though. I did a

SELECT *
INTO TABLE2
FROM TABLE1

then dropped the foreign keys of TABLE1, dropped TABLE1,
and renamed TABLE2 to TABLE1.

Then I re-ran my query and got 100,000 reads.

Anyhow, what I found afterwards is, like you said, statistics related.
My guess is auto-statistcs. When you run queries than scan a lot of
data, SQL Server takes the oppurtunity to sample statistics abou the
data. Once that statistics is available, the optimizer may find a better
way to implement the query.


This time I restored the same db on my second SQL instance on my
machine. I ran the query dozens of times and kept getting 15 million reads
in 1+ minute.
I left the machine idle for a few hours, returned, re-ran the query,
and got the same 15 million reads...
Immediately, I ran
UPDATE STATISTICS dbo.TABLE1 WITH FULLSCAN
and re-ran the query, and it took 2 seconds and 87 thousand reads!

OK, I can tell you that yesterday, on my first SQL instance, I ran the
Database Maintenance Plan wizard and chose to update the stats using
50% sample data. That explains why my queries were running in 2
seconds there. But I still can't understand why my queries kept running
in 2 seconds if I was dropping the database and restoring it brand new!
I thought the database stats info would be stored inside the database.
It's almost like either the stats info or the execution plan is being
stored in the master database of my SQL Server??
I just did another test on my second SQL Server instance. I dropped the db,
restored it again, and ran my query; it took 10 seconds to execute. I ran it
again, and it took 2 seconds and 87 thousand reads. So it looks to me like
something is being stored in the master database of my SQL Server instance;
otherwise, why is it not taking 15 million reads anymore?
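One thing worth noting is that the plan cache and the buffer pool are instance-wide rather than per-database, so a restored database can still benefit from plans and pages cached earlier. To rule that out between restore tests, the caches can be cleared; a sketch (these affect the whole instance, so only do this on a test server):

```sql
-- Clear all cached execution plans for the instance (test servers only!)
DBCC FREEPROCCACHE;

-- Flush clean data pages from the buffer pool so physical reads recur
DBCC DROPCLEANBUFFERS;
```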

Thank you


Mar 5 '06 #4
Even when I restored the same db using a different db name and
different physical file names and ran my query, the first time it took
8 seconds, then 2, all with no 15 million reads!

The only common thing would be the logical file name, which
I did not change each time I restored this same db.

It's almost as if every time we restore a database, we should
immediately re-index it and update all the statistics?
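A post-restore maintenance pass along those lines might look like this sketch for SQL Server 2000 (the table name is a placeholder; note that rebuilding an index also recomputes that index's statistics with a full scan):

```sql
-- Rebuild all indexes on the table; '' means all indexes, 90 = fillfactor
DBCC DBREINDEX ('dbo.TABLE1', '', 90);

-- Refresh statistics across the whole database (uses sampling)
EXEC sp_updatestats;

-- Or target one table with a full scan for the most accurate statistics
UPDATE STATISTICS dbo.TABLE1 WITH FULLSCAN;
```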

I am now sure that if I restore this same db on another SQL Server
instance, I will keep getting 15 million reads until I update
the statistics of a single table. Then that SQL Server instance
will never take 15 million reads again, even if I drop the db and
restore it. I don't know why.

Mar 5 '06 #5
