Bytes | Software Development & Data Engineering Community

slow queries for a particular database

Bear with me here folks, I don't know much about MS SQL performance.
Wondering if someone can point me in the right direction. I have one
particular database where queries are slow. To test this theory, I am
running a SELECT COUNT(*) on some of the tables that have many rows
(2-4 million rows). Example: a table with approx. 3.5 million rows
takes over 10 mins. to count initially. If I just do a count on the
indexed field, SELECT COUNT(ID), it takes about 2 minutes. I have
other similar databases, same tables, with just as much data and the
same indices/constraints, etc., and they are much quicker (1 to 2
secs. to run). I've moved the db with slow queries to another server,
and it is still slow there (single-processor server). Other db's on
that server are just fine.

Any help/hints are appreciated.

Thanks,
Jun 30 '08 #1
Outdated statistics can cause such performance issues. Update the statistics
and see if that helps:
http://msdn.microsoft.com/en-us/library/ms187348.aspx
http://msdn.microsoft.com/en-us/library/ms173804.aspx
http://www.sql-server-performance.co...istics_p1.aspx
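A minimal sketch of what that looks like (the table name here is a
placeholder):

```sql
-- Rebuild statistics for one table with a full scan of the data
-- (slower than sampling, but thorough); dbo.MyTable is a placeholder.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

-- Or refresh statistics for every table in the current database:
EXEC sp_updatestats;
```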

HTH,

Plamen Ratchev
http://www.SQLStudio.com

Jun 30 '08 #2
On Jun 30, 4:37 pm, "Plamen Ratchev" <Pla...@SQLStudio.com> wrote:
> Outdated statistics can cause such performance issues. Update the
> statistics and see if that helps.
There was already a maintenance plan to update the statistics for this
db. I deleted it, set it up again, and executed it manually. It
completed successfully in approx. 7 mins., but a count is still taking
an extremely long time.
Jun 30 '08 #3
Can you try SELECT COUNT(*) on a table that has a narrow non-clustered index
and see how long it takes to complete? If there is no non-clustered index on
the table all data pages have to be scanned to get the count.
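For example (index and table names here are placeholders):

```sql
-- COUNT(*) lets the optimizer pick the cheapest access path, which is
-- typically the narrowest non-clustered index when one exists:
SELECT COUNT(*) FROM dbo.MyTable;

-- Without any non-clustered index, the same count has to scan every
-- data page of the table, which is far more I/O when the rows are wide.
```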

HTH,

Plamen Ratchev
http://www.SQLStudio.com

Jun 30 '08 #4
Marc Baker (mb****@selectsa.com) writes:
> I have one particular database where queries are slow. ... A table
> with approx 3.5 million rows, a count takes over 10 mins. to run
> initially. If I just do a count on the indexed field, SELECT
> COUNT(ID), it takes about 2 minutes. I have other similar databases
> ... and it is much quicker (1 to 2 secs. to run).
SELECT COUNT(*) on a table will use a non-clustered index if there is
one. But the first query is likely to drag data into cache, so it runs
faster the next time.

10 minutes for four million rows is a lot, but then again, that depends
on how wide the rows are. My gut reaction is that you have a bad case of
fragmentation. Can you run DBCC SHOWCONTIG on this table?
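On SQL Server 2000 that would be something like (the table name is a
placeholder):

```sql
-- Report fragmentation statistics (scan density, logical/extent
-- fragmentation, page density) for the table's base index:
DBCC SHOWCONTIG ('dbo.MyTable');
```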

Which version of SQL Server are you using?
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Jun 30 '08 #5
On Jun 30, 6:05 pm, Erland Sommarskog <esq...@sommarskog.se> wrote:
> SELECT COUNT(*) on a table will use a non-clustered index if there is
> one. But the first query is likely to drag data into cache, so it runs
> faster the next time.
>
> 10 minutes for four million rows is a lot, but then again, that depends
> on how wide the rows are. My gut reaction is that you have a bad case
> of fragmentation. Can you run DBCC SHOWCONTIG on this table?
>
> Which version of SQL Server are you using?
The tables in question each have both a clustered and a non-clustered
index (on the unique identifier only), and these are the DBCC
SHOWCONTIG statistics for one of those tables.

TABLE level scan performed.
- Pages Scanned................................: 234847
- Extents Scanned..............................: 29518
- Extent Switches..............................: 29517
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.45% [29356:29518]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 2.87%
- Avg. Bytes Free per Page.....................: 493.5
- Avg. Page Density (full).....................: 93.90%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.

From what I am reading, this does not seem to be bad.

This is on a SQL 2000 server SP4.
Jul 1 '08 #6
Marc Baker (mb****@selectsa.com) writes:
> The tables in question each have both a clustered and a non-clustered
> index (on the unique identifier only), and these are the DBCC
> SHOWCONTIG statistics for one of those tables.
> [DBCC SHOWCONTIG output snipped]
> From what I am reading, this does not seem to be bad.
Yes, there is no problem with fragmentation here. But I notice that the
table is quite big: 1.9 GB (234847 * 8192 bytes), so if there are only
3.5 million rows, the rows are fairly wide, around 550 bytes on average.

Still, several minutes to do a SELECT COUNT(*) seems a bit excessive.
How much memory do you have in the machine?

Do the other databases that you don't think are slow have similar
size and schema?
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Jul 1 '08 #7
On Jul 1, 5:48 pm, Erland Sommarskog <esq...@sommarskog.se> wrote:
> Still, several minutes to do a SELECT COUNT(*) seems a bit excessive.
> How much memory do you have in the machine?
>
> Do the other databases that you don't think are slow have similar
> size and schema?
There is 3GB of memory, dynamically assigned by SQL. The other db's
do have similar size, and the exact same schema.
Jul 2 '08 #8
Erland Sommarskog wrote:
> Still, several minutes to do a SELECT COUNT(*) seems a bit excessive.
> How much memory do you have in the machine?

Yes, several minutes seems excessive, but only if the accessed disk is
used exclusively by SQL Server and there is no other significant
database activity going on. If there is other database activity, the
OP could be seeing blocking, and the available I/O might have to be
shared. If (because of all that) the storage engine decides not to
read ahead, and you have all these interruptions, then I can imagine
reaching 10 minutes for (just) 2 GB.

--
Gert-Jan
SQL Server MVP
Jul 2 '08 #9
Marc Baker (mb****@selectsa.com) writes:
There is 3GB of memory. Dynamically assigned by SQL. The other db's
do have similar size, and the exact same schema.
I'm starting to run out of ideas. One last straw: check with
sp_helpdb whether autoclose is on for the database.
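For example (the database name is a placeholder; DATABASEPROPERTYEX
gives a direct yes/no answer):

```sql
-- Show the database options; look for 'autoclose' in the status column:
EXEC sp_helpdb 'MyDatabase';

-- Or query the property directly; returns 1 if AUTO_CLOSE is on:
SELECT DATABASEPROPERTYEX('MyDatabase', 'IsAutoClose');
```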

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Jul 2 '08 #10
On Jul 2, 5:30 pm, Erland Sommarskog <esq...@sommarskog.se> wrote:
> I'm starting to run out of ideas. One last straw: check with
> sp_helpdb whether autoclose is on for the database.
This is on a non-production server that has hardly any activity. The
sp_helpdb output did not show anything that stated autoclose. Thanks to
all for the assistance and suggestions.

name:                FISH_MNT_FC
db_size:             20415.19 MB
owner:               sa
dbid:                40
created:             Jun 26 2008
status:              Status=ONLINE, Updateability=READ_WRITE,
                     UserAccess=MULTI_USER, Recovery=SIMPLE, Version=539,
                     Collation=Latin1_General_BIN, SQLSortOrder=0,
                     IsTornPageDetectionEnabled, IsAutoCreateStatistics,
                     IsAutoUpdateStatistics
compatibility_level: 80

name          fileid  filename                         filegroup       size         maxsize    growth  usage
MNT_SYS_DATA  1       F:\FishTest\FISH_MNT_FC.mdf      PRIMARY         7552 KB      Unlimited  10%     data only
MNT_LOG       2       F:\FishTest\FISH_MNT_FC_log.ldf  NULL            6965376 KB   Unlimited  5%      log only
MNT_DATA1     3       F:\FishTest\FISH_MNT_FC_1.mdf    MNT_DATA_GROUP  13932224 KB  Unlimited  10%     data only

Jul 2 '08 #11
Marc Baker wrote:
> There is 3GB of memory, dynamically assigned by SQL. The other db's
> do have similar size, and the exact same schema.
If there are 29k+ extents (of 64 KB each) to scan, then at 100 random
I/Os per second that's about 5 minutes. If the disks are cheaper/slower,
then 50 I/Os per second is not unreasonable, which gives you your 10
mins.

As previously posted, a COUNT(ID) on a non-clustered index should just
do an index scan, whose cost depends on the size of the index and
whether it is cached or not.

Whilst you are running your query, have perfmon running to check CPU
and I/O.
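As a back-of-the-envelope check (the extent count comes from the
SHOWCONTIG output earlier in the thread; the I/O rates are assumptions
about the hardware):

```sql
-- 29,518 extents at ~100 random I/Os per second:
SELECT 29518 / 100.0 / 60.0 AS minutes_at_100_ios;  -- ~4.9 minutes

-- At ~50 I/Os per second on slower disks:
SELECT 29518 / 50.0 / 60.0 AS minutes_at_50_ios;    -- ~9.8 minutes
```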
Jul 3 '08 #12
Marc Baker (mb****@selectsa.com) writes:
This is on a non-production server that has hardly any activity. The
sp_helpdb did not show anything that stated autoclose. Thanks to all
for assistance and suggestions.
Still clutching at straws...

That F: disk is not a USB disk, is it?

Could you first run SET STATISTICS PROFILE ON, then a SELECT COUNT(*)
in the slow database and a SELECT COUNT(*) on the same table in another
database where the number of rows is the same, and then post the
output?
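A sketch of that sequence (database and table names are placeholders):

```sql
-- Return the actual execution plan alongside each statement's results:
SET STATISTICS PROFILE ON;

USE SlowDb;
SELECT COUNT(*) FROM dbo.MyTable;   -- the slow database

USE FastDb;
SELECT COUNT(*) FROM dbo.MyTable;   -- a comparable database, same row count

SET STATISTICS PROFILE OFF;
```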

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Jul 3 '08 #13
