Bytes IT Community

Why is execution of a stored procedure in Query Analyzer faster than executing this SP as a scheduled job?

Hi NG!

I wrote a stored procedure which first deletes 100,000 rows and then
inserts 100,000 new rows.

There is a huge difference between executing this SP in
Query Analyzer (runtime approx. 2 minutes) and scheduling this SP as a
job (runtime > 30 minutes!!!).

Why? What's the issue?
And how can I improve the job's runtime?

The INSERT statement gets its data from a joined and grouped SELECT;
here is the code (exemplary):
INSERT INTO [Table1]
(Field1, Field2,... )
SELECT DISTINCT Field1, Count(field2), (Count(field2) * (-1))
FROM Table2 T2
INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
INNER JOIN Table4 T4 ON T2.[DID] = T4.[ID]
INNER JOIN Table5 T5 ON T2.[SID] = T5.[ID]
GROUP BY field1, field2...
I have noticed something strange: after executing the SP, Query
Analyzer outputs "1 Row(s) affected." line by line - 100,000
times... Obviously every row is inserted one by one.

Any idea?

Would it help to create a temporary table with SELECT INTO, and then
just INSERT this data (w/o aggregation) into the target table?
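
Something like this two-step version is what I have in mind (purely
illustrative; the names are carried over from the exemplary code above):

SELECT Field1, COUNT(Field2) AS Cnt, (COUNT(Field2) * (-1)) AS NegCnt
INTO #Staging   -- stage the aggregated rows in a temporary table first
FROM Table2 T2
INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
INNER JOIN Table4 T4 ON T2.[DID] = T4.[ID]
INNER JOIN Table5 T5 ON T2.[SID] = T5.[ID]
GROUP BY Field1

-- ...then copy them into the target table without re-aggregating
INSERT INTO [Table1] (Field1, Field2, Field3)
SELECT Field1, Cnt, NegCnt
FROM #Staging

DROP TABLE #Staging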
Any help is greatly appreciated. TIA and best regards,

---
Heiko
Jul 23 '05 #1


Strange. If you get several "1 row(s) affected" messages, you are executing several statements. You get one
such message per DML (INSERT, UPDATE, DELETE or SELECT) statement you execute. So it sounds to me
like you either have a loop of some kind (a cursor perhaps) or in some other way a whole bunch of DML
statements. I would look into this as the first step.
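
To check, you could list the triggers on the target table and inspect their bodies for cursors or
WHILE loops. A minimal sketch (the table name is from your exemplary code; fill in the trigger name
returned by the first call):

-- List all triggers defined on the target table
EXEC sp_helptrigger 'Table1'

-- Show the source of a trigger so you can look for row-by-row logic
EXEC sp_helptext 'NameOfTriggerFromAbove'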

Anyhow, it is indeed the "rows affected" messages that cause the difference between QA and Agent.
Agent has a delay for each such "DONE IN PROC" signal. You can lessen the effect by adding SET
NOCOUNT ON at the beginning of the proc code. If there still is a significant difference, try using
a CmdExec job step instead, from which you execute the proc using OSQL.EXE.
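
A minimal sketch of both suggestions (procedure, server and database names below are just
placeholders):

-- 1) Suppress the per-statement messages inside the procedure:
ALTER PROCEDURE dbo.RebuildTable1
AS
SET NOCOUNT ON  -- no "n row(s) affected" / DONE_IN_PROC message per statement
-- ... existing DELETE and INSERT logic ...

-- 2) Or run the proc from a CmdExec job step via OSQL.EXE:
-- osql -E -S MyServer -d MyDatabase -Q "EXEC dbo.RebuildTable1"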

--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Heiko Pliefke" <sl***********@freenet.de> wrote in message news:1104416731.799992@nbgm66x...

Jul 23 '05 #2

> I have noticed something strange: after executing the SP, Query Analyzer
> outputs "1 Row(s) affected." line by line - 100,000 times... Obviously
> every row is inserted one by one.
It appears you have an INSERT trigger on the table that iterates over the
inserted table.

To improve performance, add SET NOCOUNT ON at the beginning of the trigger
and/or proc to suppress DONE_IN_PROC messages. Also, consider revising the
trigger code to use a set-based technique.
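
For example, a set-based trigger skeleton might look like this (trigger,
summary table and column names are made up):

-- A set-based INSERT trigger: one statement handles all inserted rows,
-- instead of a cursor looping over them one at a time.
CREATE TRIGGER trg_Table1_Insert ON [Table1]
FOR INSERT
AS
SET NOCOUNT ON  -- suppress DONE_IN_PROC messages from the trigger itself

-- A single joined UPDATE against the "inserted" pseudo-table replaces
-- the row-by-row cursor loop:
UPDATE s
SET s.RunningTotal = s.RunningTotal + i.Field2
FROM SummaryTable s
INNER JOIN inserted i ON s.Field1 = i.Field1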

--
Happy Holidays

Dan Guzman
SQL Server MVP

"Heiko Pliefke" <sl***********@freenet.de> wrote in message
news:1104416731.799992@nbgm66x... Hi NG!

I wrote a stored procedure which at first deletes 100.000 rows and then
inserts 100.000 new rows.

There is a huge difference between executing this SP in the query-analyzer
(runtime approx. 2 minutes), and scheduling this SP as an Job (runtime >
30 minutes!!!).

Why? Whats the issue?
And how can i improve the "job"-runtime?

The INSERT-Statement gets the data from a joined and grouped SELECT - here
is the code (exemplary):
INSERT INTO [Table1]
(Field1, Field2,... )
SELECT DISTINCT Field1, Count(field2), (Count(field2) * (-1))
FROM Table2 T2
INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
INNER JOIN Table4 T4 ON T2.[DID] = T4.[ID]
INNER JOIN Table5 T5 ON T2.[SID] = T5.[ID]
GROUP BY field1, field2...
I have noticed something strange: After executing the SP the queryanalyzer
outputs "1 Row(s) affected." line by line - 100.000 times... Obviously
every line is inserted one by one.

Any idea?

Would it help, to create a temporary-table with SELECT INTO, and then just
INSERT this data (w/o aggregation) into the target table?
Any help is greatly appreciated, tia and best regards

---
Heiko

Jul 23 '05 #3

> It appears you have an INSERT trigger on the table that iterates over the inserted table.

Good catch, Dan!

--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Dan Guzman" <gu******@nospam-online.sbcglobal.net> wrote in message
news:qp****************@newssvr12.news.prodigy.com ...
I have noticed something strange: After executing the SP the queryanalyzer outputs "1 Row(s)
affected." line by line - 100.000 times... Obviously every line is inserted one by one.


It appears you have an INSERT trigger on the table that iterates over the inserted table.

To improve performance, add SET NOCOUNT ON at the beginning of the trigger and/or proc to suppress
DONE_IN_PROC messages. Also, consider revising the trigger code to use a set-based technique.

--
Happy Holidays

Dan Guzman
SQL Server MVP

"Heiko Pliefke" <sl***********@freenet.de> wrote in message news:1104416731.799992@nbgm66x...
Hi NG!

I wrote a stored procedure which at first deletes 100.000 rows and then inserts 100.000 new rows.

There is a huge difference between executing this SP in the query-analyzer (runtime approx. 2
minutes), and scheduling this SP as an Job (runtime > 30 minutes!!!).

Why? Whats the issue?
And how can i improve the "job"-runtime?

The INSERT-Statement gets the data from a joined and grouped SELECT - here is the code
(exemplary):
INSERT INTO [Table1]
(Field1, Field2,... )
SELECT DISTINCT Field1, Count(field2), (Count(field2) * (-1))
FROM Table2 T2
INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
INNER JOIN Table4 T4 ON T2.[DID] = T4.[ID]
INNER JOIN Table5 T5 ON T2.[SID] = T5.[ID]
GROUP BY field1, field2...
I have noticed something strange: After executing the SP the queryanalyzer outputs "1 Row(s)
affected." line by line - 100.000 times... Obviously every line is inserted one by one.

Any idea?

Would it help, to create a temporary-table with SELECT INTO, and then just INSERT this data (w/o
aggregation) into the target table?
Any help is greatly appreciated, tia and best regards

---
Heiko


Jul 23 '05 #4

Hi!

Thank you all for your answers!
I think I know what to do now - I will try it and report back how it works ;)

But there's still one question: even if there is a trigger, why is the
execution in QA 10-15 times faster than the job? Doesn't the trigger have
the same performance impact in both cases?

Dan Guzman wrote:
> It appears you have an INSERT trigger on the table that iterates over the
> inserted table.
>
> To improve performance, add SET NOCOUNT ON at the beginning of the trigger
> and/or proc to suppress DONE_IN_PROC messages. Also, consider revising the
> trigger code to use a set-based technique.

Best regards,

Heiko
Jul 23 '05 #5

Hi!

You are great! Thank you very much, Dan and Tibor!
With NOCOUNT OFF the runtime is reduced to < 30 seconds! Super!

Best regards and Happy new year!!

--
Heiko
Tibor Karaszi wrote:
> > It appears you have an INSERT trigger on the table that iterates over the inserted table.
>
> Good catch, Dan!

Jul 23 '05 #6


I assume you mean ON and not OFF?

--
Andrew J. Kelly SQL MVP
"Heiko Pliefke" <sl***********@freenet.de> wrote in message
news:41**************@freenet.de...
Hi!

You are great! Thank you very much, Dan and Tibor!
With NOCOUNT OFF the runtime is reduced to < 30 seconds! Super!

Best regards and Happy new year!!

--
Heiko
Tibor Karaszi schrieb:
It appears you have an INSERT trigger on the table that iterates over the
inserted table.

Good catch, Dan!

Jul 23 '05 #8

> why is the execution in QA 10-15 times faster than the job? Doesn't the trigger have the same
> performance impact in both cases?
See my first reply regarding the wait Agent introduces for "rows affected" messages.

--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Heiko Pliefke" <sl***********@freenet.de> wrote in message news:1104421665.669657@nbgm66x... Hi!

Thank you all for your answers!
I think I know what to do now - i will try and feedback how it works ;)

But there's still one question: Maybe there is a trigger, why is the execution in QA 10-15times
faster than the Job? Doesn't the trigger have the same "performance-impact" on this two ways?

Dan Guzman schrieb:
It appears you have an INSERT trigger on the table that iterates over the inserted table.

To improve performance, add SET NOCOUNT ON at the beginning of the trigger and/or proc to
suppress DONE_IN_PROC messages. Also, consider revising the trigger code to use a set-based
technique.

Best regards,

Heiko

Jul 23 '05 #9

Heiko Pliefke (sl***********@freenet.de) writes:
> But there's still one question: even if there is a trigger, why is the
> execution in QA 10-15 times faster than the job? Doesn't the trigger have
> the same performance impact in both cases?


I think Tibor addressed this:

> Anyhow, it is indeed the "rows affected" messages that cause the
> difference between QA and Agent. Agent has a delay for each such "DONE
> IN PROC" signal.

There are also differences between QA and Agent with regard to the
defaults for some SET options: with Agent, ARITHABORT and QUOTED_IDENTIFIER
are off. This can have a severe effect if there is an indexed view or an
indexed computed column somewhere. The optimizer does not consider
these indexes when these two options are off. (There are four more
options that must be on.)
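
For instance, a T-SQL job step could set the options explicitly before
calling the proc (note that QUOTED_IDENTIFIER is saved with a stored
procedure when it is created, so it must also be ON at CREATE PROCEDURE
time; the procedure name here is just a placeholder):

-- Explicitly enable the options that Agent leaves off by default
SET ARITHABORT ON
SET QUOTED_IDENTIFIER ON
EXEC dbo.RebuildTable1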
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #10

> But there's still one question: even if there is a trigger, why is the
> execution in QA 10-15 times faster than the job? Doesn't the trigger have
> the same performance impact in both cases?
SQL Agent jobs are more susceptible to DONE_IN_PROC messages as Tibor
stated. See http://support.microsoft.com/default...b;en-us;249730

--
Happy Holidays

Dan Guzman
SQL Server MVP

"Heiko Pliefke" <sl***********@freenet.de> wrote in message
news:1104421665.669657@nbgm66x... Hi!

Thank you all for your answers!
I think I know what to do now - i will try and feedback how it works ;)

But there's still one question: Maybe there is a trigger, why is the
execution in QA 10-15times faster than the Job? Doesn't the trigger have
the same "performance-impact" on this two ways?

Dan Guzman schrieb:
It appears you have an INSERT trigger on the table that iterates over the
inserted table.

To improve performance, add SET NOCOUNT ON at the beginning of the
trigger and/or proc to suppress DONE_IN_PROC messages. Also, consider
revising the trigger code to use a set-based technique.

Best regards,

Heiko

Jul 23 '05 #11
