Bytes | Software Development & Data Engineering Community

speeding up Dynamic SP

We have a dynamic SP that, depending on the user name, runs a SELECT
tailored to them.

One of the criteria is the number of rows retrieved, which we include using
'top @varNoOfRows' in the dynamically created SELECT.

As long as the number of rows is less than 130 the SP runs in less than a
second, but if you increase that value to over 150 the SP times out.

It is being run from ASP in this way: DBCon.execute(SQLQuery)

The main table that we are querying contains about 1.5 million records and
is un-indexed (eek - didn't realise that until I just checked), on SQL
Server 2000.

Does anyone have any pointers towards streamlining the SP - I can post it if
you wish. Or can anyone explain how to use the execution plan to our
advantage?

I've already used it to change a

SET @statement2 = (select T1_QueryPart from Table1 where T1_ID like (SELECT
LoginTable_T1ID from LoginTable where @username = LT_UserName)
)

to

SET @T1ID = (SELECT LT_T1ID from LoginTable where @username = LT_UserName)
SET @statement2 = (select T1_QueryPart from Table1 where T1_ID like @T1ID)

But would, say, a join be more time-efficient?
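i.e. something along these lines - completely untested, just using the same
table and column names as above:

```sql
-- Sketch of the join version (untested). For two single-row lookups
-- like this, any difference from the two-step version is likely small;
-- what matters more is having indexes on LT_UserName and T1_ID.
SET @statement2 =
    (SELECT T1_QueryPart
     FROM   Table1
     JOIN   LoginTable ON T1_ID LIKE LT_T1ID
     WHERE  LT_UserName = @username)
```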

Any help would be appreciated

John
Jul 20 '05 #1
John D (dobjsonneNOSPAMhotmail.com) writes:
We have a dynamic SP that, depending on the user name, runs a SELECT
tailored to them.

One of the criteria is the number of rows retrieved, which we include
using 'top @varNoOfRows' in the dynamically created select .

As long as the number of rows is less than 130 the SP runs in less than a
second, but if you increase that value to over 150 the SP times out.

It is being run from ASP in this way: DBCon.execute(SQLQuery)

The main table that we are querying contains about 1.5 million records and
is un-indexed. (eek - didn't realise that until I just checked) on SQL
server 2000.
Really? I would guess that the big change from 130 to 150 was due to
changing from using a non-clustered index to using a table scan.
Does anyone have any pointers towards streamlining the SP - I can post
it if you wish. Or can anyone explain how to use the execute plan to
our advantage?
I don't think very many here are going to guess your code. But recall
that the code alone may not be sufficient: we need to know the table
and its indexes as well.

And if the table does not have any indexes - it's time to create a few!
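For instance, something like this - the table and column names here are
only placeholders, since you haven't posted your schema:

```sql
-- Placeholder names: replace MainTable, FilterCol and OrderCol with
-- whatever your dynamic SELECT actually filters and orders on.
CREATE INDEX ix_main_filter ON MainTable (FilterCol, OrderCol)
```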
I've already used it to change a

SET @statement2 = (select T1_QueryPart from Table1 where T1_ID like
(SELECT LoginTable_T1ID from LoginTable where @username = LT_UserName)
)

to

SET @T1ID = (SELECT LT_T1ID from LoginTable where @username = LT_UserName)
SET @statement2 = (select T1_QueryPart from Table1 where T1_ID like @T1ID)


I don't see any TOP here? Is this a different query?

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #2
As long as the number of rows is less than 130 the SP runs in less than a
second, but if you increase that value to over 150 the SP times out.

The main table that we are querying contains about 1.5 million records and
is un-indexed (eek - didn't realise that until I just checked), on SQL
Server 2000.

Really? I would guess that the big change from 130 to 150 was due to
changing from using a non-clustered index to using a table scan.


Yes! Why is that?
That is what is happening, and is causing the biggest problem.

And if the table does not have any indexes - it's time to create a few!


Is it best to create the indexes out of office hours? I'm worried that
creating them will take up considerable space on the server, and/or slow
the server down during inserts/updates/deletes?

Will creating some indexes on this table stop "top 150" from using a table
scan?

John

Jul 20 '05 #3
John D (dobjsonneNOSPAMhotmail.com) writes:
Really? I would guess that the big change from 130 to 150 was due to
changing from using a non-clustered index to using a table scan.

Yes! Why is that?
That is what is happening, and is causing the biggest problem.


SQL Server uses a cost-based optimizer, which means that it tries
to find the cheapest way to evaluate a query. To compute the costs of
different query plans, it relies on statistics about the data; these
are usually sampled automatically as you run queries against the table.

In this particular case, it appears that SQL Server has a choice between
using a non-clustered index and a table scan. Say that your query is

SELECT TOP 150 * FROM tbl ORDER BY col

And there is a non-clustered index on col. There are two possible query
plans here: one is to follow the index on col and stop when you have found
150 rows; the other is to scan the table and then find the 150 rows with
the lowest values of col.

Why does SQL Server not always use the index? Because when using a non-
clustered index, SQL Server must access the data pages to get the data.
This means that if you retrieve many rows, you will access the same
page more than once, and in the end you get worse performance. So at
some point, switching to a table scan is better. It appears that in this
case, SQL Server is making an incorrect guess about where the balance
point is. This could be because the statistics are not accurate enough,
and it's possible that an UPDATE STATISTICS WITH FULLSCAN on the table
could have some effect, although it may only push the limit in some
direction without resolving the problem. My experience is that SQL Server
is conservative about using non-clustered indexes.
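For reference, that statement is simply (substitute your own table name):

```sql
-- FULLSCAN reads every row rather than sampling, so the optimizer's
-- cost estimate for the index-vs-scan decision becomes more accurate.
UPDATE STATISTICS MainTable WITH FULLSCAN
```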

If you instead made the index clustered, your problems should disappear.
In a clustered index, the leaf level of the index is the data itself, so
there are no extra accesses to data pages. If the query actually is

SELECT * FROM tbl WHERE othercol = @value ORDER BY col

then the clustered index should have othercol as its first column.

(I should add that there is one exception about non-clustered indexes:
if the index includes all columns in the query, there is no need to
access the data pages, and such a covered query can be quite effective.)
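To illustrate both options with the example query above (tbl, othercol,
col and selectedcol are of course placeholders for your actual names):

```sql
-- Clustered index: the filter column first, then the sort column.
CREATE CLUSTERED INDEX ix_tbl_other_col ON tbl (othercol, col)

-- Or a covering non-clustered index. On SQL Server 2000 this means
-- putting every column the query touches into the index key;
-- selectedcol stands in for whatever your SELECT list returns.
CREATE INDEX ix_tbl_covering ON tbl (othercol, col, selectedcol)
```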
Is it best to create the indexes out of office hours? I'm worried that
creating them will take up considerable space on the server, and/or slow
the server down whilst insert/updates/deletes ??


Indexes do incur a toll on INSERT, DELETE and UPDATE, that is true.
Usually this toll is acceptable and in fact not even noticeable. But
you should avoid putting the clustered index on columns that often
change value, since if the clustered key changes, not only must the
data move, but all non-clustered indexes are affected, because the
pointer to the data pages from the non-clustered indexes is in fact the
clustered key.

As for when to create indexes, it is usually a good idea to do it outside
peak times, particularly if you change or add a clustered index, because
the table is basically offline for the duration of the CREATE INDEX
statement. Non-clustered indexes usually take less time to create.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #4

