Hi,
I am using SQL 2000 and have a table that contains more than 2 million
rows of data (and growing). Right now, I have encountered 2 problems:
1) Sometimes, when I query against this table, I get a SQL command
timeout. So I did more testing with Query Analyzer and found that the
same queries do not always take the same amount of time to execute.
Could anyone please tell me what affects the speed of a query, and
which factor matters most? (I can think of the number of open
connections and the server's CPU/memory...)
2) I am not sure whether 2 million rows is considered a lot, but it is
starting to take 5~10 seconds to finish some simple queries. I am
wondering what the best practices are for handling this amount of data
while keeping decent performance?
Thank you,
Charlie Chang
[Ch*********@hotmail.com]
Have you researched indexes?
Generally, if you create an index on the fields most commonly used in
your WHERE clauses, you can increase performance considerably.
Keep in mind that creating too many indexes can hurt performance for
insert and delete queries, since every index on the table has to be
updated after each of those operations.
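For example, something along these lines (the table and column names
here are just placeholders, since we haven't seen your schema):
-- Hypothetical: index the column(s) most often used in WHERE clauses
CREATE NONCLUSTERED INDEX IX_YourTable_DateColumn
ON dbo.YourTable (DateColumn)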
Any other suggestions would require us to see how you built the table.
Hope that helps.
Philip
(ch*********@hotmail.com) writes: I am using SQL 2000 and have a table that contains more than 2 million rows of data (and growing). Right now, I have encountered 2 problems:
I'll take the second question first, as it is more general.
2) I am not sure whether 2 million rows is considered a lot, but it is starting to take 5~10 seconds to finish some simple queries. I am wondering what the best practices are for handling this amount of data while keeping decent performance?
Two million rows for a table is a respectable number, although the world
has seen many larger tables than this. What matters more, though, is the
total size. A two-million-row table with a single integer column and a
two-million-row table with a single char(8000) column are very different.
But say that you have some 30 columns with an average row size of 300
bytes: 2,000,000 rows x 300 bytes is about 600 MB, which is certainly not
a small table.
For a table of that size, it's essential that you have good indexes
for the common queries. It is also important to rebuild the indexes on
a regular basis with DBCC DBREINDEX; how often depends on how quickly
they get fragmented.
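For example, a full rebuild of all indexes on one table could look like
this (the table name and fill factor are just placeholders):
-- Rebuild all indexes on the table, leaving pages 90% full
DBCC DBREINDEX ('dbo.YourTable', '', 90)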
When you say that queries are taking a long time, it could be because
you need to add some more indexes. One tool to find suitable indexes is
to run the Index Tuning Wizard on a workload.
If you believe that you have the right indexes, a possible cause could
be fragmentation. The command DBCC SHOWCONTIG can give you information
about this.
1) Sometimes, when I query against this table, I get a SQL command timeout. So I did more testing with Query Analyzer and found that the same queries do not always take the same amount of time to execute. Could anyone please tell me what affects the speed of a query, and which factor matters most? (I can think of the number of open connections and the server's CPU/memory...)
There are a bit too many unknowns here to give an exact answer. Does the
same query take a different amount of time from execution to execution?
There are at least two possible causes for this: blocking and caching.
If another process performs some update operation, your query may be
blocked for a while. You can examine blocking by using the sp_who command.
If you see a non-zero value in the Blk column, the spid on that row is
blocked by the spid in Blk. In the status bar in QA you can see the spid
of the current window.
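For example, from a Query Analyzer window:
-- List all sessions; a non-zero value in the blk column means that spid is blocked
EXEC sp_who
-- The spid of the current window (also shown in the QA status bar)
SELECT @@SPID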
SQL Server tries to keep as much data as it can in cache. If data is
in cache, the response time for a query can be significantly better
than if data has to be read from disk. But the cache cannot be bigger
than a certain amount of the available memory in the machine. (I don't
know the exact number, but say 60-70%). If there are a lot of scans in
many tables, data will go in and out of the cache, and response time
will vary accordingly.
When testing different queries or indexes, one way to factor out the
effect of the cache is to use the command DBCC DROPCLEANBUFFERS. This
flushes the cache entirely. Obviously, it is not a good idea to do
this on a production box.
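For instance, when timing a query on a test box you can clear the cache
first, so every run starts from disk:
-- Write any dirty pages to disk, then flush the clean buffer cache
CHECKPOINT
DBCC DROPCLEANBUFFERS
-- ...then run the query under test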
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinf...2000/books.asp
Thanks for the reply, I will read about the indexes tonight.
As for my table structure, it consists of 12 columns in the following
order:
Sale_Date_DT (datetime, first column)
Employee_ID (int)
Machine_ID (int)
Receipt_Number_NV (nvarchar)
UPC_NV (nvarchar)
Quantity_Sold_IN (int)
Sale_Price_MN (money)
Tax_MN (money)
Payment_Type_IN (int)
Payment_Amount_MN (money)
Rebate_Category_ID (int)
Sales_ID (int, key, identity)
I get somewhere between 1.5 and 2 million rows of data every year. I
have been thinking about archiving and reindexing every 6 months.
I guess I will read about indexing and full-text indexing (maybe on the
receipt number); something like the sketch below is what I have in
mind. Any other suggestions would be appreciated :)
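(Just a rough sketch; I am not sure yet whether these are the right
columns, and I am simply calling the table Sales here.)
-- A possible index for looking up sales by receipt number
CREATE NONCLUSTERED INDEX IX_Sales_Receipt
ON dbo.Sales (Receipt_Number_NV)
-- And one for date-range queries on the sale date
CREATE NONCLUSTERED INDEX IX_Sales_SaleDate
ON dbo.Sales (Sale_Date_DT)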
Thank you,
Charlie Chang
[ch*********@hotmail.com]
Adding indexes works great. Thank you.
I do have a few more questions:
when I do dbcc showcontig (table_name)
I get the following information:
TABLE level scan performed.
- Pages Scanned................................: 38882
- Extents Scanned..............................: 4879
- Extent Switches..............................: 4878
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.63% [4861:4879]
- Logical Scan Fragmentation ..................: 0.08%
- Extent Scan Fragmentation ...................: 1.46%
- Avg. Bytes Free per Page.....................: 27.4
- Avg. Page Density (full).....................: 99.66%
I guess the number to look at is the Scan Density (the table I had
problems with was down to 34%). Now, what I really want to know is: in
general, when should I reindex the table?
Another question: while performing all the database maintenance (some
of it failed because the fragmentation caused operation timeouts), my
transaction log got so big that my hard disk ran out of space. I
detached the database, truncated the log, and reattached the database
to fix this. I am wondering if there is a way to make the transaction
log discard old log records once the log file reaches a certain size?
Thank you again for your reply, it really helped.
Charlie Chang
(ch*********@hotmail.com) writes: I do have a few more questions:
I guess the number to look at is the Scan Density (the table I had problems with was down to 34%). Now, what I really want to know is: in general, when should I reindex the table?
It depends a little, and there are actually a number of strategies you
can use, depending on how the table is used. But as a simple rule of
thumb, don't defragment if the scan density is better than 70%. If
nothing else, that avoids unnecessary bloat of the transaction log.
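When the scan density for this table has dropped below that threshold,
a rebuild could look like the DBCC DBREINDEX example above. Another
option in SQL 2000 is DBCC INDEXDEFRAG, which defragments one index at
a time while the table stays available (the database, table and index
names here are just placeholders):
-- Defragment a single index without taking the table offline
DBCC INDEXDEFRAG (YourDatabase, Sales, IX_Sales_SaleDate)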
Another question: while performing all the database maintenance (some of it failed because the fragmentation caused operation timeouts), my transaction log got so big that my hard disk ran out of space. I detached the database, truncated the log, and reattached the database to fix this. I am wondering if there is a way to make the transaction log discard old log records once the log file reaches a certain size?
Well, it depends on what you want the transaction log for. If you are
perfectly content with restoring the latest full backup (a backup every
night is good) in case of a crash, just switch to simple recovery mode.
You can still get transaction-log explosion during reindexing, since the
log can never be truncated past any currently running transaction, but
at least once the operation is done, the log will be truncated
automatically.
If you need point-in-time recovery, you must run with full or
bulk-logged recovery, but in that case you don't want the transaction
log to be erased; you need to back it up every now and then.
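For example (the database name and backup path are placeholders):
-- Simple recovery: the log is truncated automatically at checkpoints
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE
-- Or, staying with full recovery, back up the log regularly so space can be reused
BACKUP LOG YourDatabase TO DISK = 'D:\Backups\YourDatabase_log.bak'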
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinf...2000/books.asp