Bytes | Developer Community
speeding up Dynamic SP

We have a dynamic SP that, depending on the user name, will run a SELECT
tailored to that user.

One of the criteria is the number of rows retrieved, which we include using
'TOP @varNoOfRows' in the dynamically created SELECT.

As long as the number of rows is less than 130 the SP runs in less than a
second, but if you increase that value to over 150 the SP times out.

It is being run from ASP in this way: DBCon.execute(SQLQuery)

The main table that we are querying contains about 1.5 million records and
is un-indexed. (eek - didn't realise that until I just checked) on SQL
server 2000.

Does anyone have any pointers towards streamlining the SP - I can post it if
you wish. Or can anyone explain how to use the execution plan to our
advantage?

I've already used it to change a

SET @statement2 = (select T1_QueryPart from Table1 where T1_ID like (SELECT
LoginTable_T1ID from LoginTable where @username = LT_UserName)
)

to

SET @T1ID = (SELECT LT_T1ID from LoginTable where @username = LT_UserName)
SET @statement2 = (select T1_QueryPart from Table1 where T1_ID like @T1ID)

But would, say, a join be more time efficient?
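A join version might look something like this (a sketch only - same table and
column names as above, untested):

```sql
-- Hypothetical rewrite of the two-step lookup as a single join
SET @statement2 =
    (SELECT t1.T1_QueryPart
     FROM Table1 t1
     JOIN LoginTable lt ON t1.T1_ID = lt.LT_T1ID
     WHERE lt.LT_UserName = @username)
```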

Any help would be appreciated

John
Jul 20 '05 #1
3 Replies
John D (dobjsonneNOSPAMhotmail.com) writes:
We have a dynamic SP that, depending on the user name, will run a SELECT
tailored to that user.

One of the criteria is the number of rows retrieved, which we include
using 'top @varNoOfRows' in the dynamically created select .

As long as the number of rows is less than 130 the SP runs in less than a
second, but if you increase that value to over 150 the SP times out.

It is being run from ASP in this way: DBCon.execute(SQLQuery)

The main table that we are querying contains about 1.5 million records and
is un-indexed. (eek - didn't realise that until I just checked) on SQL
server 2000.
Really? I would guess that the big change from 130 to 150 was due to
changing from using a non-clustered index to using a table scan.
Does anyone have any pointers towards streamlining the SP - I can post
it if you wish. Or can anyone explain how to use the execute plan to
our advantage?
I don't think many here are going to guess your code. But recall
that the code alone may not be sufficient - we need to know the table
and its indexes as well.

And if the table does not have any indexes - it's time to create a few!
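For instance, a minimal sketch (table and column names here are placeholders
for whatever your WHERE and ORDER BY clauses actually use):

```sql
-- Hypothetical starting point: a clustered index on the main filter/sort
-- column, plus a non-clustered index on a secondary search column
CREATE CLUSTERED INDEX ix_tbl_col ON tbl (col)
CREATE NONCLUSTERED INDEX ix_tbl_othercol ON tbl (othercol)
```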
I've already used it to change a

SET @statement2 = (select T1_QueryPart from Table1 where T1_ID like
(SELECT LoginTable_T1ID from LoginTable where @username = LT_UserName)
)

to

SET @T1ID = (SELECT LT_T1ID from LoginTable where @username = LT_UserName)
SET @statement2 = (select T1_QueryPart from Table1 where T1_ID like @T1ID)


I don't see any TOP here? Is this a different query?

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #2
> > As long as the number of rows is less than 130 the SP runs in less than a
> > second, but if you increase that value to over 150 the SP times out.
> >
> > The main table that we are querying contains about 1.5 million records and
> > is un-indexed. (eek - didn't realise that until I just checked) on SQL
> > server 2000.
>
> Really? I would guess that the big change from 130 to 150 was due to
> changing from using a non-clustered index to using a table scan.


Yes! Why is that?
That is what is happening, and is causing the biggest problem.

And if the table does not have any indexes - it's time to create a few!


Is it best to create the indexes out of office hours? I'm worried that
creating them will take up considerable space on the server, and/or slow
the server down during inserts/updates/deletes?

Will creating some indexes on this table stop "top 150" from using a table
scan?

John

Jul 20 '05 #3
John D (dobjsonneNOSPAMhotmail.com) writes:
Really? I would guess that the big change from 130 to 150 was due to
changing from using a non-clustered index to using a table scan.

Yes ! Why is that?
That is what is happening, and is causing the biggest problem.


SQL Server uses a cost-based optimizer, which means that it tries
to find the cheapest way to evaluate a query. To compute the costs of
different query plans, it relies on statistics about the data. Statistics
are usually sampled automatically as queries run, since SQL Server has
to scan the data anyway.

In this particular case, it appears that SQL Server has a choice between
using a non-clustered index and a table scan. Say that your query is

SELECT TOP 150 * FROM tbl ORDER BY col

And there is a non-clustered index on col. There are two possible query
plans here: one is to follow the index on col and stop when you have found
150 rows, and another is to scan the table and then find the 150 rows with
the lowest values of col.

Why does SQL Server not always use the index? Because when using a non-
clustered index, SQL Server must access the data pages to get the data.
This means that if you retrieve many rows, you will access the same
page more than once, and in the end you get worse performance. So at
some point, switching to a table scan is better. It appears that in this
case, SQL Server is making an incorrect guess on where the balance
point is. This could be because the statistics are not accurate enough,
and it's possible that an UPDATE STATISTICS WITH FULLSCAN on the table
could have some effect, although it may only push the limit in some
direction without resolving the problem. My experience is that SQL Server
is conservative about using non-clustered indexes.
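For example (tbl being a placeholder for the table name):

```sql
-- Rebuild statistics from every row instead of a sample
UPDATE STATISTICS tbl WITH FULLSCAN
```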

If you instead made the index clustered, your problems should disappear.
In a clustered index, the leaf level of the index is the data, so
there is no extra access to data pages. If the query actually is

SELECT * FROM tbl WHERE othercol = @value ORDER BY col

then the clustered index should have othercol as its first column.
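A sketch of such an index, using the placeholder names above:

```sql
-- Clustered index matching the WHERE (othercol) and ORDER BY (col) above
CREATE CLUSTERED INDEX ix_tbl_othercol_col ON tbl (othercol, col)
```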

(I should add that there is one exception about non-clustered indexes:
if the index includes all columns in the query, there is no need to
access the data pages, and such a covered query can be quite effective.)
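A sketch of a covering index for a query that only touches othercol, col and
selectedcol (names hypothetical; note that SQL Server 2000 has no INCLUDE
clause, so all columns go into the index key):

```sql
-- Every column the query references is in the index, so the data pages
-- are never visited
CREATE NONCLUSTERED INDEX ix_tbl_cover ON tbl (othercol, col, selectedcol)
```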
Is it best to create the indexes out of office hours? I'm worried that
creating them will take up considerable space on the server, and/or slow
the server down whilst insert/updates/deletes ??


Indexes do incur a toll on INSERTs, DELETEs and UPDATEs, that is true.
Usually this toll is acceptable and in fact not even noticeable. But
you should avoid putting the clustered index on columns that often
change value, since if the clustered key changes, not only must the
data move, but all non-clustered indexes are affected, because the
pointer to the data pages from the non-clustered indexes is in fact the
clustered key.

As for when to create indexes, it is usually a good idea to do it outside
peak times, particularly if you change or add a clustered index, because
the table is basically offline for the duration of the CREATE INDEX
statement. Non-clustered indexes usually take less time to create.
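If you later need to rebuild an existing index off-hours, SQL Server 2000
lets you do it in one step rather than DROP followed by CREATE (index,
table and column names below are placeholders):

```sql
-- Rebuild an existing index without a separate DROP INDEX
CREATE CLUSTERED INDEX ix_tbl_col ON tbl (col) WITH DROP_EXISTING
```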

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #4

This discussion thread is closed

Replies have been disabled for this discussion.
