Bytes IT Community

How do I delete many rows without monopolizing server?

Hello,

I am working on a project using SQL Server 2000 with a database containing
about 10 related tables with a lot of columns containing text. The total
current size of the database is about 2 Gig. When I delete data from the
database, it takes a lot of system resources and monopolizes the database so
that all other query requests are slow as mud!

Ideally, I would like to be able to issue delete commands to the database on
a primary table and get a fast response back. Then, it doesn't matter to me
how long the actual deletion operation takes as long as its priority is low
compared to the other query requests coming in. Typically, removing a
single row from the primary table results in a deletion of up to 300 rows
from related tables.

Questions:
1. Can I create a trigger on the primary table that will delete the rows
from that table, issue a delayed/low priority delete for all of the other
tables, and return to the application quickly?
2. Can a trigger be run in an asynchronous mode? (that is, issue the command,
return immediately, and then go about its business on its own time).
3. Can the priority of an SQL statement be specified?
4. Is there a Transact-SQL "sleep" command that would allow you to do some
work -- sleep for a little bit -- do some more work -- etc?

Any help in this area would be greatly appreciated.

....Thanks in advance...

--
Bob Ganger
General Dynamics
rg*******@hotmail.com
Jul 20 '05 #1
2 Replies



"Bob Ganger" <rg*******@hotmail.com> wrote in message
news:Y4***************@news.uswest.net...
> Hello,
>
> I am working on a project using SQL Server 2000 with a database
> containing about 10 related tables with a lot of columns containing
> text. The total current size of the database is about 2 Gig. When I
> delete data from the database, it takes a lot of system resources and
> monopolizes the database so that all other query requests are slow as
> mud!
>
> Ideally, I would like to be able to issue delete commands to the
> database on a primary table and get a fast response back. Then, it
> doesn't matter to me how long the actual deletion operation takes as
> long as its priority is low compared to the other query requests
> coming in. Typically, removing a single row from the primary table
> results in a deletion of up to 300 rows from related tables.
If you want to delete ALL of them, a simple TRUNCATE TABLE FOO will work.

A non-portable method is to use SET ROWCOUNT, which caps how many rows
a DELETE touches (note there is no equals sign in the syntax):

SET ROWCOUNT 100
DELETE FROM FOO

Repeat as needed until the DELETE affects no rows, then issue
SET ROWCOUNT 0 to restore the default.
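The "repeat as needed" step can be scripted. A minimal sketch for SQL
Server 2000, assuming a table FOO with a hypothetical `expired` flag
column marking the rows to remove:

```sql
-- Delete in batches of 100 so each implicit transaction stays short
-- and other queries can acquire locks between batches.
SET ROWCOUNT 100

WHILE 1 = 1
BEGIN
    DELETE FROM FOO WHERE expired = 1  -- hypothetical filter
    IF @@ROWCOUNT = 0 BREAK            -- last batch deleted nothing: done
END

SET ROWCOUNT 0  -- important: reset, or it silently caps later statements
```

Each iteration commits on its own, so long blocking chains are avoided
at the cost of the delete taking longer overall.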


> Questions:
> 1. Can I create a trigger on the primary table that will delete the
> rows from that table, issue a delayed/low priority delete for all of
> the other tables, and return to the application quickly?
> 2. Can a trigger be run in an asynchronous mode? (that is, issue the
> command, return immediately, and then go about its business on its
> own time).
> 3. Can the priority of an SQL statement be specified?
> 4. Is there a Transact-SQL "sleep" command that would allow you to do
> some work -- sleep for a little bit -- do some more work -- etc?
>
> Any help in this area would be greatly appreciated.
>
> ...Thanks in advance...
>
> --
> Bob Ganger
> General Dynamics
> rg*******@hotmail.com

Jul 20 '05 #2

[posted and mailed, please reply in news]

Bob Ganger (rg*******@hotmail.com) writes:
> Ideally, I would like to be able to issue delete commands to the
> database on a primary table and get a fast response back. Then, it
> doesn't matter to me how long the actual deletion operation takes as
> long as its priority is low compared to the other query requests coming
> in. Typically, removing a single row from the primary table results in
> a deletion of up to 300 rows from related tables.
And deletion of 300 rows is not much. If deleting 300 rows is bringing
the server to its knees, you should probably review the DELETE
statements and make sure that there are appropriate indexes. You could
run the query from Query Analyzer with SET STATISTICS IO ON to see
which tables are hit the most. You can also use Show Execution Plan to
analyse whether you have the best plan.
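For example, in Query Analyzer (table name and WHERE clause are made up
for illustration):

```sql
SET STATISTICS IO ON

DELETE FROM primarytable WHERE id = 12345  -- hypothetical delete

SET STATISTICS IO OFF
```

The Messages pane then shows logical reads per table touched; a related
table with a disproportionately high read count is usually the one
being scanned for lack of an index.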

One thing to consider with DELETE statements is that there may be
foreign-key relations that are not indexed. Say that you have:

CREATE TABLE smalltable (smallid int NOT NULL PRIMARY KEY,
                         ...)
go
CREATE TABLE bigtable (bigid   int NOT NULL PRIMARY KEY,
                       ...
                       smallid int NULL REFERENCES smalltable(smallid),
                       ...)

If bigtable.smallid is not indexed and you delete a row from
smalltable, then SQL Server has to scan all of bigtable to check for
references.
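The fix for the example above is a plain nonclustered index on the
foreign-key column (the index name is just a suggestion):

```sql
-- Lets SQL Server seek into bigtable when checking references,
-- instead of scanning the whole table on every delete from smalltable.
CREATE INDEX ix_bigtable_smallid ON bigtable (smallid)
```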

I answer your questions below for completeness. However, I hope the
answers will be moot once you have found why the delete operations take
so many resources.

> 1. Can I create a trigger on the primary table that will delete the
> rows from that table, issue a delayed/low priority delete for all of
> the other tables, and return to the application quickly?

This is only possible if there are no foreign-key constraints from the
sub-tables to the primary table - and there probably should be. And
"possible" here does not mean that it is simple - it takes some real
trickery.
> 2. Can a trigger be run in an asynchronous mode? (that is, issue the
> command, return immediately, and then go about its business on its
> own time).

No, it is fundamental in database technology that a trigger is part of
the transaction that includes the statement that fired the trigger.
This is because triggers are intended for enforcing rules in the
database, either by checking for violations of the rules or by
cascading updates to implement them. In both cases it is essential that
the trigger and its checks/cascades execute as a whole, or else
everything must be rolled back.

What you can do, though, is set a bit in a table, and then have a job
run from SQL Agent which checks that table for work to do. In this
case, maybe you do not need a trigger at all. You could just have a
job that is run periodically from SQL Agent and that deletes orphans.
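A minimal sketch of that approach (all table and column names here are
hypothetical): the application only flags the primary row, and a
scheduled SQL Agent job does the physical deletes on its own time.

```sql
-- Application side: cheap and fast, returns immediately.
UPDATE primarytable SET deleted = 1 WHERE id = @id

-- Agent job, run every few minutes: children first, so any
-- foreign-key constraints are satisfied, then the parents.
DELETE FROM childtable
WHERE  parentid IN (SELECT id FROM primarytable WHERE deleted = 1)

DELETE FROM primarytable WHERE deleted = 1
```

Queries against primarytable then need a `WHERE deleted = 0` filter (or
a view providing it) so flagged-but-not-yet-purged rows stay invisible.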
> 3. Can the priority of an SQL statement be specified?

No.
> 4. Is there a Transact-SQL "sleep" command that would allow you to do
> some work -- sleep for a little bit -- do some more work -- etc?

Yes: WAITFOR DELAY pauses execution for a given interval, for instance
WAITFOR DELAY '00:00:05' sleeps for five seconds. And if you have very
many rows to delete - 300 is a far cry from that - you can use SET
ROWCOUNT to limit the number of rows that each DELETE statement
operates on.
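The two combine naturally into a paced delete loop - batch, pause,
repeat - so concurrent queries get breathing room between batches. A
sketch with hypothetical names:

```sql
-- Cap each DELETE at 500 rows to keep transactions short.
SET ROWCOUNT 500

WHILE 1 = 1
BEGIN
    DELETE FROM childtable WHERE parentid = @id  -- hypothetical tables
    IF @@ROWCOUNT = 0 BREAK          -- nothing left: stop looping
    WAITFOR DELAY '00:00:02'         -- yield to other queries for 2s
END

SET ROWCOUNT 0  -- reset so later statements are not capped
```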
--
Erland Sommarskog, SQL Server MVP, so****@algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #3
