
Very small table incredibly slow

Hello.

I have been administering a SQL Server (Enterprise Edition on Windows 2003)
for some months and can't understand what has been going on in the last
week (when the db grew a lot).

The DB is around 250G and has one table with 1 billion rows. It performs
decently overall, but I can't understand why one particular table has such
severe performance problems.

I have a stored procedure that reads data from table A and inserts it,
after processing, into table B, and then moves it into other tables
(similar to a star schema) for reporting.

Table B, because of how the SP is written, never holds more than 3000
lines. Table B is very simple, has 3 rows, and no index.

What is very strange is that performance on table B is really slow. If
I do a select count(*) from table_b, it takes between 30s and 2 minutes
to return that it has 0 lines. When the stored procedure inserts 1000
lines, it takes 20-30 seconds, and it takes another 20-30 seconds to
delete them.

To me it doesn't look like a locking problem, because it is slow even when
the only procedures that access that table are stopped. I ran an update
statistics with fullscan on this table with no improvement.

The DB is on a Storage Area Network that should perform decently. The
LUN I use is carved out of 32 disks that are also shared with other
applications. I don't have performance data for the SAN. The
machine is an HP DL580 with 4 CPUs (hyperthreading disabled), 8G of RAM,
AWE and PAE enabled, and 5G reserved for SQL Server.

I don't know what to do to solve this situation. Could it be a
"corruption problem" that slows this table down so much? Is it possible
that the db growing a lot in the last week created problems for this
small and simple table as well?

Do you have any ideas or hints on how to handle this situation, or
pointers to documentation that can help in analyzing it?

For those who made it this far, thank you for your time and patience
reading my bad English...

Best Regards,
Mamo

PS
I can't rewrite the stored procedure, because it is part of a closed
source product.

Jul 23 '05 #1
First of all, please don't use "lines"; the purists among us will have a fit.
> Table B, because of how the SP is written, never holds more than 3000
> lines. Table B is very simple, has 3 rows, and no index.
Table B has no more than 3000 rows at a time and has 3 columns with no
index?
> If I do a select count(*) from table_b, it takes between 30s and 2 minutes
> to return that it has 0 lines. When the stored procedure inserts 1000
> lines, it takes 20-30 seconds, and it takes another 20-30 seconds to
> delete them.
a) I would check the execution plans when you run these queries/stored
procedures.
b) Check database configuration.
Does the slowdown only occur when a query involves table B? Or is the
entire database slow?
Are tables A, B, & C all in the same database or in separate databases?
Does it have single or multiple data and log files?
What are the database options? Do you have something funky like "auto
close" or "auto shrink" enabled?
Is the database set to autogrow? Any max size restriction?
c) Try running sp_who2 while the query is in progress
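
For (a) through (c), something along these lines should do (the database
name below is just a placeholder for your own):

-- (a) I/O and timing for the slow statement
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT COUNT(*) FROM table_b
SET STATISTICS IO OFF
SET STATISTICS TIME OFF

-- (b) a few database options worth checking (SQL Server 2000)
SELECT DATABASEPROPERTYEX('your_db', 'IsAutoClose')  AS auto_close,
       DATABASEPROPERTYEX('your_db', 'IsAutoShrink') AS auto_shrink,
       DATABASEPROPERTYEX('your_db', 'Recovery')     AS recovery_model

-- (c) blocking, plus CPU and disk IO per SPID, while the procedure runs
EXEC sp_who2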
> Could it be a "corruption problem" that slows this table down so much? Is
> it possible that the db growing a lot in the last week created problems
> for this small and simple table as well?


Corruption is unlikely. Running out of space -- definitely a possibility.
When the disk arrays are nearing capacity, you'll get mysterious problems:
jobs will fail, queries take a long time, error messages that don't make
any sense. Check the size of the arrays -- for the database, the log, and
tempdb.
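
A few quick checks from T-SQL (xp_fixeddrives is undocumented but present
in SQL Server 2000, and it only reports local drive letters):

-- size and unallocated space of the current database
EXEC sp_spaceused

-- log size (MB) and percent used for every database on the server
DBCC SQLPERF(LOGSPACE)

-- free megabytes per drive letter (undocumented procedure)
EXEC master..xp_fixeddrives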

Jul 23 '05 #2
louis wrote:
> First of all, please don't use "lines"; the purists among us will have a fit.
Sorry... You are right.

> Table B has no more than 3000 rows at a time and has 3 columns with no index?
Yes!

> a) I would check the execution plans when you run these queries/stored procedures.
I checked them. The slowdown is on the insert/delete into table B.

> b) Check database configuration.
What would you check in particular?

> Does the slowdown only occur when a query involves table B? Or is the entire database slow?
The DB is not fast. The application sometimes times out. But it is also
big, with a lot of data. I can't say how much better it could perform with
fine tuning.

> Are tables A, B, & C all in the same database or in separate databases?
Tables A, B, and C are all in the same DB.

> Does it have single or multiple data and log files?
They are in the same filesystem. The filesystem is on the SAN. I could use
some local disks on the machine (SCSI 15K), but I am not sure that would
help (the "SAN expert" tells me the SAN should be a lot faster than local
disks). What do you think about it?
> What are the database options? Do you have something funky like "auto
> close" or "auto shrink" enabled?
The DB doesn't have options like auto close or auto shrink. I have auto
create/update statistics, torn page detection, and allow cross database
ownership chaining (it is required by some stored procedures in the vendor
code). The recovery model is Simple.

> Is the database set to autogrow? Any max size restriction?
The DB was created at 150GB. It has autogrow in steps of 1G. Now it is
250G and it can grow to 350G, so the DB currently has 50G of free space.
The filesystem is 0.5 TB and is now 60% used. Is 40% free enough for
decent performance? What is the maximum that can be used while keeping
decent performance?

We had the transaction log grow to 80G a month ago, but now we back up the
system every day and the db agent truncates the transaction log, so it is
now around 0.5G (we have 75G of transaction log free, because we didn't
shrink it).
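
(For reference, log usage can be checked, and the file shrunk if we ever
decide to reclaim the space, with something like the following; the
logical log file name is just a placeholder.)

-- log size (MB) and percent in use, for every database
DBCC SQLPERF(LOGSPACE)

-- reclaim unused log space (placeholder logical file name, target size in MB)
DBCC SHRINKFILE ('MyDb_Log', 1000)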

> c) Try running sp_who2 while the query is in progress
I did it but could not find what is wrong. What should I look for?

From the execution plan it looks like it takes 20s to insert 1000 rows
into a simple table and 20s to delete them from that simple table. I would
like to solve this because it looks really strange to me. I am a DBA
newbie, but it seems very odd to me that writing 5-10 KB of data to fast
storage managed by a DBMS takes so much time. Isn't that "strange"?

Best Regards,
Massimo

Jul 23 '05 #3
ma****@gmail.com (ma****@gmail.com) writes:
> I have a stored procedure that reads data from table A and inserts it,
> after processing, into table B, and then moves it into other tables
> (similar to a star schema) for reporting.
>
> Table B, because of how the SP is written, never holds more than 3000
> lines. Table B is very simple, has 3 rows, and no index.
>
> What is very strange is that performance on table B is really slow. If
> I do a select count(*) from table_b, it takes between 30s and 2 minutes
> to return that it has 0 lines. When the stored procedure inserts 1000
> lines, it takes 20-30 seconds, and it takes another 20-30 seconds to
> delete them.


OK, so here goes my ESP:

Table B is frequently inserted into and deleted from. It does not
have a clustered index. This results in huge fragmentation, particularly
if there are leftover rows, which causes entire extents to be left behind.

Run DBCC SHOWCONTIG on the table to see some horrifying numbers.

Then create a clustered index on the table, and keep it. An occasional
DBCC DBREINDEX would be a good thing too.
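
A sketch of what that might look like (the index and column names are
placeholders; pick a column the loading SP actually filters or orders on):

-- see how fragmented the heap currently is
DBCC SHOWCONTIG ('table_b')

-- create a clustered index and keep it; building it also compacts the table
CREATE CLUSTERED INDEX cix_table_b ON table_b (some_column)

-- rebuild it now and then (the fill factor of 90 is just an example)
DBCC DBREINDEX ('table_b', 'cix_table_b', 90)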
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #4
> They are in the same filesystem. The filesystem is on the SAN. I could
> use some local disks on the machine (SCSI 15K), but I am not sure that
> would help (the "SAN expert" tells me the SAN should be a lot faster
> than local disks). What do you think about it?

I don't have any experience with SANs, so I don't know. I think
SANs/FibreChannel are "supposed" to have the same speed as local disk.

> Is 40% free enough for decent performance? What is the maximum that can
> be used while keeping decent performance?

I would worry when the DB is full and has to constantly autogrow (if
possible). 40% free is plenty of space.

sp_who2 will identify any blocked processes. It also tells you which SPIDs
are monopolizing CPU and disk IO.

Based on everything described, I think Erland is right and that a
clustered index on table B will fix the problem. You can also execute
sp_spaceused before and after creating the clustered index. You
should see the table size shrink dramatically.
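
For example (@updateusage forces the space figures to be refreshed before
they are reported):

-- before
EXEC sp_spaceused 'table_b', @updateusage = 'TRUE'

-- ... create the clustered index here ...

-- after
EXEC sp_spaceused 'table_b', @updateusage = 'TRUE'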

Jul 23 '05 #5
The dbcc returns:

/*-----------------------------
DBCC SHOWCONTIG (table_b)
-----------------------------*/
DBCC SHOWCONTIG scanning 'table_b' table...
Table: 'table_b' (1282103608); index ID: 0, database ID: 5
TABLE level scan performed.
- Pages Scanned................................: 30229
- Extents Scanned..............................: 3841
- Extent Switches..............................: 3840
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 98.39% [3779:3841]
- Extent Scan Fragmentation ...................: 99.51%
- Avg. Bytes Free per Page.....................: 7481.5
- Avg. Page Density (full).....................: 7.57%

Do you think I would benefit from creating a clustered index, given these
statistics?

Best Regards,
Mamo

Jul 23 '05 #6
To Erland Sommarskog. Thank you very much!!!!

You are right!!! I dropped and recreated the table and now the process
is INCREDIBLY faster. I will create the clustered index as you say.
Thank you.
Mamo

Jul 23 '05 #7
ma****@gmail.com (ma****@gmail.com) writes:
> The dbcc returns:
>
> /*-----------------------------
> DBCC SHOWCONTIG (table_b)
> -----------------------------*/
> DBCC SHOWCONTIG scanning 'table_b' table...
> Table: 'table_b' (1282103608); index ID: 0, database ID: 5
> TABLE level scan performed.
> - Pages Scanned................................: 30229
> - Extents Scanned..............................: 3841
> - Extent Switches..............................: 3840
> - Avg. Pages per Extent........................: 7.9
> - Scan Density [Best Count:Actual Count].......: 98.39% [3779:3841]
> - Extent Scan Fragmentation ...................: 99.51%
> - Avg. Bytes Free per Page.....................: 7481.5
> - Avg. Page Density (full).....................: 7.57%


30000 pages for three rows is quite a lot!

I'm glad to hear that you got things working!
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #9
Erland Sommarskog wrote:
> 30000 pages for three rows is quite a lot!


How is it possible that SQL server can get into this situation?
It seems totally bizarre! Since pages belong to one table exclusively,
why don't empty pages get released?

I tend to avoid using clustered indexes whenever secondary indices are
required, except in very limited situations where the clustered index has
a very small number of bytes in the key value. Is this unnecessarily
conservative?

Is there any way to get SQL server to select pages for new rows based
on locality on a particular index (other than using a clustered index?),
for when frequent index scans will need to fetch a subset of the rows
on that index? I just ask because when I was starting out with SQL
Server some years ago, that's what I thought clustering did, and that
assumption burnt me a bit.
Jul 23 '05 #10
