slow queries for a particular database

Bear with me here folks, I don't know much about MS SQL performance.
Wondering if someone can point me in the right direction. I have one
particular database where queries are slow. To test this theory, I am
running a SELECT COUNT(*) on some of the tables that have many rows
(2-4 million rows). Example: a table with approx 3.5 million rows takes
over 10 mins. to count initially. If I just do a count on the indexed
field, SELECT COUNT(ID), it takes about 2 minutes. I have other similar
databases, with the same tables, just as much data, and the same
indices/constraints, etc., and they are much quicker (1 to 2 secs. to
run). I've moved the db with the slow queries to another server, and it
is still slow there (single processor server). Other dbs on that server
are just fine.
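
For what it's worth, the timing test looks roughly like this (the table
name here is made up):

-- Takes 10+ minutes on the slow database, 1-2 seconds on the others
SELECT COUNT(*) FROM dbo.BigTable;

-- Counting only the indexed column takes about 2 minutes
SELECT COUNT(ID) FROM dbo.BigTable;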

Any help/hints are appreciated.

Thanks,
Jun 30 '08 #1
Outdated statistics can cause such performance issues. Update the statistics
and see if that helps:
http://msdn.microsoft.com/en-us/library/ms187348.aspx
http://msdn.microsoft.com/en-us/library/ms173804.aspx
http://www.sql-server-performance.co...istics_p1.aspx
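
For example, something along these lines (the table name below is just a
placeholder):

-- Rebuild statistics for one table with a full scan
UPDATE STATISTICS dbo.YourBigTable WITH FULLSCAN;

-- Or update statistics for every table in the database
EXEC sp_updatestats;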

HTH,

Plamen Ratchev
http://www.SQLStudio.com

Jun 30 '08 #2
On Jun 30, 4:37 pm, "Plamen Ratchev" <Pla...@SQLStudio.com> wrote:
Outdated statistics can cause such performance issues. Update the statistics
and see if that helps: http://msdn.microsoft.com/en-us/libr...istics_p1.aspx

HTH,

Plamen Ratchev, http://www.SQLStudio.com
There was already a maintenance plan to update the statistics for this
db. I deleted it, set it up again, and executed it manually. It completed
successfully in approx. 7 mins., but a count is still taking an
extremely long time.
Jun 30 '08 #3
Can you try SELECT COUNT(*) on a table that has a narrow non-clustered index
and see how long it takes to complete? If there is no non-clustered index on
the table, all data pages have to be scanned to get the count.
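
For example (the table name is just a placeholder):

-- See what indexes exist on the table
EXEC sp_helpindex 'dbo.YourBigTable';

-- Time the count with timing statistics on
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.YourBigTable;
SET STATISTICS TIME OFF;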

HTH,

Plamen Ratchev
http://www.SQLStudio.com

Jun 30 '08 #4
Marc Baker (mb****@selectsa.com) writes:
Bear with me here folks, I don't know much about MS SQL performance.
Wondering if someone can point me in the right direction. I have one
particular database where queries are slow. To test this theory, I am
running a SELECT COUNT(*) on some of the tables that have many rows
(2-4 million rows). Example: a table with approx 3.5 million rows takes
over 10 mins. to count initially. If I just do a count on the indexed
field, SELECT COUNT(ID), it takes about 2 minutes. I have other similar
databases, with the same tables, just as much data, and the same
indices/constraints, etc., and they are much quicker (1 to 2 secs. to
run). I've moved the db with the slow queries to another server, and it
is still slow there (single processor server). Other dbs on that server
are just fine.
SELECT COUNT(*) on a table will use a non-clustered index if there is
one. But the first query is likely to drag data into cache, so it runs
faster the next time.

10 minutes for four million rows is a lot, but then again, that depends
on how wide the rows are. My gut reaction is that you have a bad case of
fragmentation. Can you run DBCC SHOWCONTIG on this table?

Which version of SQL Server are you using?
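
Something like this would answer both questions (the table name is just a
placeholder):

-- Fragmentation info for the table and all its indexes
DBCC SHOWCONTIG ('dbo.YourBigTable') WITH ALL_INDEXES;

-- Exact SQL Server version and service pack
SELECT @@VERSION;
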
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Jun 30 '08 #5
On Jun 30, 6:05 pm, Erland Sommarskog <esq...@sommarskog.se> wrote:
Marc Baker (mba...@selectsa.com) writes:
Bear with me here folks, I don't know much about MS SQL performance.
Wondering if someone can point me in the right direction. I have one
particular database where queries are slow. To test this theory, I am
running a SELECT COUNT(*) on some of the tables that have many rows
(2-4 million rows). Example: a table with approx 3.5 million rows takes
over 10 mins. to count initially. If I just do a count on the indexed
field, SELECT COUNT(ID), it takes about 2 minutes. I have other similar
databases, with the same tables, just as much data, and the same
indices/constraints, etc., and they are much quicker (1 to 2 secs. to
run). I've moved the db with the slow queries to another server, and it
is still slow there (single processor server). Other dbs on that server
are just fine.

SELECT COUNT(*) on a table will use a non-clustered index if there is
one. But the first query is likely to drag data into cache, so it runs
faster the next time.

10 minutes for four million rows is a lot, but then again, that depends
on how wide the rows are. My gut reaction is that you have a bad case of
fragmentation. Can you run DBCC SHOWCONTIG on this table?

Which version of SQL Server are you using?

--
Erland Sommarskog, SQL Server MVP, esq...@sommarskog.se

Books Online for SQL Server 2005 at http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books...
Books Online for SQL Server 2000 at http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
The tables in question both have a clustered and non-clustered index
(unique identifier only), and these are the DBCC SHOWCONTIG statistics
for one of those tables.

TABLE level scan performed.
- Pages Scanned................................: 234847
- Extents Scanned..............................: 29518
- Extent Switches..............................: 29517
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.45% [29356:29518]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 2.87%
- Avg. Bytes Free per Page.....................: 493.5
- Avg. Page Density (full).....................: 93.90%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.

From what I am reading, this does not seem to be bad.

This is on a SQL Server 2000 SP4 server.
Jul 1 '08 #6
Marc Baker (mb****@selectsa.com) writes:
The tables in question both have a clustered and non-clustered index
(unique identifier only), and these are the DBCC SHOWCONTIG statistics
for one of those tables.

TABLE level scan performed.
- Pages Scanned................................: 234847
- Extents Scanned..............................: 29518
- Extent Switches..............................: 29517
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.45% [29356:29518]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 2.87%
- Avg. Bytes Free per Page.....................: 493.5
- Avg. Page Density (full).....................: 93.90%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.

From what I am reading, this does not seem to be bad.
Yes, there is no problem with fragmentation here. But I notice that the
table is quite big: 1.9 GB (234847 pages * 8192 bytes), so if there are only
3.5 million rows, the rows are fairly wide, around 550 bytes on average.

Still, several minutes to do a SELECT COUNT(*) seems a bit excessive.
How much memory do you have in the machine?

Do the other databases that you don't think are slow have a similar
size and schema?
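
If you want to compare, running sp_spaceused on the big table in both the
slow database and one of the fast ones would tell; for example (the table
name is a placeholder):

EXEC sp_spaceused 'dbo.YourBigTable';
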
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Jul 1 '08 #7
On Jul 1, 5:48 pm, Erland Sommarskog <esq...@sommarskog.se> wrote:
Marc Baker (mba...@selectsa.com) writes:
The tables in question both have a clustered and non-clustered index
(unique identifier only), and these are the DBCC SHOWCONTIG statistics
for one of those tables.
TABLE level scan performed.
- Pages Scanned................................: 234847
- Extents Scanned..............................: 29518
- Extent Switches..............................: 29517
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.45% [29356:29518]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 2.87%
- Avg. Bytes Free per Page.....................: 493.5
- Avg. Page Density (full).....................: 93.90%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
From what I am reading, this does not seem to be bad.

Yes, there is no problem with fragmentation here. But I notice that the
table is quite big: 1.9 GB (234847 pages * 8192 bytes), so if there are only
3.5 million rows, the rows are fairly wide, around 550 bytes on average.

Still, several minutes to do a SELECT COUNT(*) seems a bit excessive.
How much memory do you have in the machine?

Do the other databases that you don't think are slow have a similar
size and schema?

--
Erland Sommarskog, SQL Server MVP, esq...@sommarskog.se

Books Online for SQL Server 2005 at http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books...
Books Online for SQL Server 2000 at http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
There is 3 GB of memory, dynamically assigned by SQL Server. The other
dbs do have a similar size, and the exact same schema.
Jul 2 '08 #8
Erland Sommarskog wrote:
Marc Baker (mb****@selectsa.com) writes:
The tables in question both have a clustered and non-clustered index
(unique identifier only), and these are the DBCC SHOWCONTIG statistics
for one of those tables.

TABLE level scan performed.
- Pages Scanned................................: 234847
- Extents Scanned..............................: 29518
- Extent Switches..............................: 29517
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.45% [29356:29518]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 2.87%
- Avg. Bytes Free per Page.....................: 493.5
- Avg. Page Density (full).....................: 93.90%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.

From what I am reading, this does not seem to be bad.

Yes, there is no problem with fragmentation here. But I notice that the
table is quite big: 1.9 GB (234847 pages * 8192 bytes), so if there are only
3.5 million rows, the rows are fairly wide, around 550 bytes on average.

Still, several minutes to do a SELECT COUNT(*) seems a bit excessive.
How much memory do you have in the machine?

Do the other databases that you don't think are slow have a similar
size and schema?

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx

Yes, several minutes seems excessive, but only if the accessed disk is
used exclusively by SQL Server and there is no other significant
database activity going on. If there is other activity, the OP could be
experiencing blocking, and the available I/O might have to be shared. If
(because of all that) the storage engine decides not to read ahead, and
you have all these interruptions, then I can imagine reaching 10 minutes
for (just) 2 GB.
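
One way to check for that while the slow count is running is to look at
sysprocesses from another connection; a rough sketch (the database name
is a placeholder):

-- SQL Server 2000: look for blocking and heavy I/O while the COUNT(*) runs
SELECT spid, blocked, lastwaittype, waittime, physical_io
FROM master.dbo.sysprocesses
WHERE dbid = DB_ID('YourDatabase');

-- A non-zero value in the blocked column points at the spid holding the lock.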

--
Gert-Jan
SQL Server MVP
Jul 2 '08 #9
Marc Baker (mb****@selectsa.com) writes:
There is 3GB of memory. Dynamically assigned by SQL. The other db's
do have similar size, and the exact same schema.
I'm starting to run out of ideas. One last straw: check with
sp_helpdb whether autoclose is on for the database.
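
For example (the database name is a placeholder):

EXEC sp_helpdb 'YourDatabase';

-- If the status column reports autoclose, turn it off:
ALTER DATABASE YourDatabase SET AUTO_CLOSE OFF;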

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Jul 2 '08 #10
