Why is execution of a stored procedure in Query Analyzer faster than executing this SP as a scheduled job?

Hi NG!

I wrote a stored procedure which first deletes 100,000 rows and then
inserts 100,000 new rows.

There is a huge difference between executing this SP in Query Analyzer
(runtime approx. 2 minutes) and scheduling it as a job (runtime > 30
minutes!!!).

Why? What's the issue?
And how can I improve the job runtime?

The INSERT statement gets its data from a joined and grouped SELECT -
here is the code (exemplary):

INSERT INTO [Table1]
(Field1, Field2,... )
SELECT DISTINCT Field1, Count(field2), (Count(field2) * (-1))
FROM Table2 T2
INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
INNER JOIN Table4 T4 ON T2.[DID] = T4.[ID]
INNER JOIN Table5 T5 ON T2.[SID] = T5.[ID]
GROUP BY field1, field2...

I have noticed something strange: after executing the SP, Query
Analyzer outputs "1 Row(s) affected." line by line - 100,000 times...
Obviously every row is inserted one by one.

Any idea?

Would it help to create a temporary table with SELECT INTO, and then
just INSERT this data (without re-aggregating) into the target table?
Something like the sketch below.
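(Just a sketch of the two-step idea - the column and temp table names
are simplified, not real names from my schema:)

SELECT Field1, COUNT(Field2) AS Cnt, COUNT(Field2) * (-1) AS NegCnt
INTO #Staging
FROM Table2 T2
INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
GROUP BY Field1;

-- Plain insert, no second aggregation:
INSERT INTO [Table1] (Field1, Field2, Field3)
SELECT Field1, Cnt, NegCnt
FROM #Staging;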
Any help is greatly appreciated. TIA and best regards,

---
Heiko
Jul 23 '05 #1
Strange. If you see several "1 row(s) affected" messages, you are
executing several statements. You get one such message per DML
statement (INSERT, UPDATE, DELETE or SELECT) you execute. So it sounds
to me like you either have a loop of some kind (a cursor, perhaps) or
in some other way a whole bunch of DML statements. I would look into
this as the first step.

Anyhow, it is indeed the "rows affected" messages that cause the
difference between QA and Agent. Agent introduces a delay for each such
"DONE IN PROC" signal. You can lessen the effect by adding SET NOCOUNT
ON at the beginning of the proc code. If there is still a significant
difference, try using a CmdExec job step instead, from which you
execute the proc using OSQL.EXE.
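For illustration, a minimal sketch of both suggestions - the procedure,
server and database names here are invented, not from your post:

-- In the proc: suppress the per-statement "rows affected"
-- (DONE_IN_PROC) messages that Agent waits on.
ALTER PROCEDURE dbo.RebuildTable1
AS
BEGIN
    SET NOCOUNT ON;

    DELETE FROM [Table1];

    INSERT INTO [Table1] (Field1, Field2, Field3)
    SELECT Field1, COUNT(Field2), COUNT(Field2) * (-1)
    FROM Table2 T2
    INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
    GROUP BY Field1;
END

And the alternative: a CmdExec job step ("Operating system command")
that runs the proc through OSQL.EXE instead of a T-SQL step:

osql -S MyServer -d MyDatabase -E -Q "EXEC dbo.RebuildTable1"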

--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Heiko Pliefke" <sl***********@freenet.de> wrote in message news:1104416731.799992@nbgm66x...
Hi NG!

I wrote a stored procedure which at first deletes 100.000 rows and then inserts 100.000 new rows.

There is a huge difference between executing this SP in the query-analyzer (runtime approx. 2
minutes), and scheduling this SP as an Job (runtime > 30 minutes!!!).

Why? Whats the issue?
And how can i improve the "job"-runtime?

The INSERT-Statement gets the data from a joined and grouped SELECT - here is the code
(exemplary):
INSERT INTO [Table1]
(Field1, Field2,... )
SELECT DISTINCT Field1, Count(field2), (Count(field2) * (-1))
FROM Table2 T2
INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
INNER JOIN Table4 T4 ON T2.[DID] = T4.[ID]
INNER JOIN Table5 T5 ON T2.[SID] = T5.[ID]
GROUP BY field1, field2...
I have noticed something strange: After executing the SP the queryanalyzer outputs "1 Row(s)
affected." line by line - 100.000 times... Obviously every line is inserted one by one.

Any idea?

Would it help, to create a temporary-table with SELECT INTO, and then just INSERT this data (w/o
aggregation) into the target table?
Any help is greatly appreciated, tia and best regards

---
Heiko

Jul 23 '05 #2
> I have noticed something strange: After executing the SP, Query
> Analyzer outputs "1 Row(s) affected." line by line - 100,000 times...
> Obviously every row is inserted one by one.

It appears you have an INSERT trigger on the table that iterates over
the inserted table.

To improve performance, add SET NOCOUNT ON at the beginning of the
trigger and/or proc to suppress DONE_IN_PROC messages. Also, consider
revising the trigger code to use a set-based technique.
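To illustrate the set-based idea - this is hypothetical, since the
actual trigger code was never posted: instead of a cursor firing one
INSERT per row of the inserted table, a single statement handles all
rows at once.

-- Hypothetical set-based rewrite; trigger and audit table names
-- are invented for illustration.
CREATE TRIGGER trg_Table1_Insert ON [Table1]
AFTER INSERT
AS
BEGIN
    -- Also suppresses DONE_IN_PROC messages from inside the trigger.
    SET NOCOUNT ON;

    -- One statement over the whole inserted pseudo-table,
    -- instead of looping over it row by row with a cursor.
    INSERT INTO [AuditTable] (Field1, Field2)
    SELECT Field1, Field2
    FROM inserted;
END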

--
Happy Holidays

Dan Guzman
SQL Server MVP

"Heiko Pliefke" <sl***********@freenet.de> wrote in message
news:1104416731.799992@nbgm66x... Hi NG!

I wrote a stored procedure which at first deletes 100.000 rows and then
inserts 100.000 new rows.

There is a huge difference between executing this SP in the query-analyzer
(runtime approx. 2 minutes), and scheduling this SP as an Job (runtime >
30 minutes!!!).

Why? Whats the issue?
And how can i improve the "job"-runtime?

The INSERT-Statement gets the data from a joined and grouped SELECT - here
is the code (exemplary):
INSERT INTO [Table1]
(Field1, Field2,... )
SELECT DISTINCT Field1, Count(field2), (Count(field2) * (-1))
FROM Table2 T2
INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
INNER JOIN Table4 T4 ON T2.[DID] = T4.[ID]
INNER JOIN Table5 T5 ON T2.[SID] = T5.[ID]
GROUP BY field1, field2...
I have noticed something strange: After executing the SP the queryanalyzer
outputs "1 Row(s) affected." line by line - 100.000 times... Obviously
every line is inserted one by one.

Any idea?

Would it help, to create a temporary-table with SELECT INTO, and then just
INSERT this data (w/o aggregation) into the target table?
Any help is greatly appreciated, tia and best regards

---
Heiko

Jul 23 '05 #3
> It appears you have an INSERT trigger on the table that iterates over the inserted table.

Good catch, Dan!

--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Dan Guzman" <gu******@nospam-online.sbcglobal.net> wrote in message
news:qp****************@newssvr12.news.prodigy.com ...
I have noticed something strange: After executing the SP the queryanalyzer outputs "1 Row(s)
affected." line by line - 100.000 times... Obviously every line is inserted one by one.


It appears you have an INSERT trigger on the table that iterates over the inserted table.

To improve performance, add SET NOCOUNT ON at the beginning of the trigger and/or proc to suppress
DONE_IN_PROC messages. Also, consider revising the trigger code to use a set-based technique.

--
Happy Holidays

Dan Guzman
SQL Server MVP

"Heiko Pliefke" <sl***********@freenet.de> wrote in message news:1104416731.799992@nbgm66x...
Hi NG!

I wrote a stored procedure which at first deletes 100.000 rows and then inserts 100.000 new rows.

There is a huge difference between executing this SP in the query-analyzer (runtime approx. 2
minutes), and scheduling this SP as an Job (runtime > 30 minutes!!!).

Why? Whats the issue?
And how can i improve the "job"-runtime?

The INSERT-Statement gets the data from a joined and grouped SELECT - here is the code
(exemplary):
INSERT INTO [Table1]
(Field1, Field2,... )
SELECT DISTINCT Field1, Count(field2), (Count(field2) * (-1))
FROM Table2 T2
INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
INNER JOIN Table4 T4 ON T2.[DID] = T4.[ID]
INNER JOIN Table5 T5 ON T2.[SID] = T5.[ID]
GROUP BY field1, field2...
I have noticed something strange: After executing the SP the queryanalyzer outputs "1 Row(s)
affected." line by line - 100.000 times... Obviously every line is inserted one by one.

Any idea?

Would it help, to create a temporary-table with SELECT INTO, and then just INSERT this data (w/o
aggregation) into the target table?
Any help is greatly appreciated, tia and best regards

---
Heiko


Jul 23 '05 #4
Hi!

Thank you all for your answers!
I think I know what to do now - I will try it and report back how it
works ;)

But there's still one question: even if there is a trigger, why is the
execution in QA 10-15 times faster than the job? Doesn't the trigger
have the same performance impact in both cases?

Dan Guzman wrote:
> It appears you have an INSERT trigger on the table that iterates over
> the inserted table.
>
> To improve performance, add SET NOCOUNT ON at the beginning of the
> trigger and/or proc to suppress DONE_IN_PROC messages. Also, consider
> revising the trigger code to use a set-based technique.

Best regards,

Heiko
Jul 23 '05 #5
Hi!

You are great! Thank you very much, Dan and Tibor!
With NOCOUNT OFF the runtime is reduced to < 30 seconds! Super!

Best regards and Happy new year!!

--
Heiko
Tibor Karaszi wrote:
>> It appears you have an INSERT trigger on the table that iterates
>> over the inserted table.
>
> Good catch, Dan!

Jul 23 '05 #6
I assume you mean ON and not OFF?

--
Andrew J. Kelly SQL MVP
"Heiko Pliefke" <sl***********@freenet.de> wrote in message
news:41**************@freenet.de...
Hi!

You are great! Thank you very much, Dan and Tibor!
With NOCOUNT OFF the runtime is reduced to < 30 seconds! Super!

Best regards and Happy new year!!

--
Heiko
Tibor Karaszi schrieb:
It appears you have an INSERT trigger on the table that iterates over the
inserted table.

Good catch, Dan!

Jul 23 '05 #8
> Why is the execution in QA 10-15 times faster than the job? Doesn't
> the trigger have the same performance impact in both cases?

See my first reply regarding the wait Agent introduces for "rows
affected" messages.

--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Heiko Pliefke" <sl***********@freenet.de> wrote in message news:1104421665.669657@nbgm66x... Hi!

Thank you all for your answers!
I think I know what to do now - i will try and feedback how it works ;)

But there's still one question: Maybe there is a trigger, why is the execution in QA 10-15times
faster than the Job? Doesn't the trigger have the same "performance-impact" on this two ways?

Dan Guzman schrieb:
It appears you have an INSERT trigger on the table that iterates over the inserted table.

To improve performance, add SET NOCOUNT ON at the beginning of the trigger and/or proc to
suppress DONE_IN_PROC messages. Also, consider revising the trigger code to use a set-based
technique.

Best regards,

Heiko

Jul 23 '05 #9
Heiko Pliefke (sl***********@freenet.de) writes:
> But there's still one question: even if there is a trigger, why is
> the execution in QA 10-15 times faster than the job? Doesn't the
> trigger have the same performance impact in both cases?

I think Tibor addressed this:

> Anyhow, it is indeed the "rows affected" messages that cause the
> difference between QA and Agent. Agent introduces a delay for each
> such "DONE IN PROC" signal.

There are also differences between QA and Agent with regard to the
defaults for some SET options: with Agent, ARITHABORT and
QUOTED_IDENTIFIER are off. This can have a severe effect if there is an
indexed view or an indexed computed column somewhere, because the
optimizer does not consider these indexes when these two options are
off. (There are four more options that must be on.)
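
As a minimal sketch (reusing the invented procedure name from the
earlier example), a T-SQL job step could set both options explicitly
before calling the proc:

-- Agent leaves these off by default; turning them on lets the
-- optimizer consider indexed views and indexes on computed columns.
SET ARITHABORT ON;
SET QUOTED_IDENTIFIER ON;
EXEC dbo.RebuildTable1;   -- hypothetical procedure name
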
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #10
> But there's still one question: even if there is a trigger, why is
> the execution in QA 10-15 times faster than the job? Doesn't the
> trigger have the same performance impact in both cases?

SQL Agent jobs are more susceptible to DONE_IN_PROC messages, as Tibor
stated. See http://support.microsoft.com/default...b;en-us;249730

--
Happy Holidays

Dan Guzman
SQL Server MVP

"Heiko Pliefke" <sl***********@freenet.de> wrote in message
news:1104421665.669657@nbgm66x... Hi!

Thank you all for your answers!
I think I know what to do now - i will try and feedback how it works ;)

But there's still one question: Maybe there is a trigger, why is the
execution in QA 10-15times faster than the Job? Doesn't the trigger have
the same "performance-impact" on this two ways?

Dan Guzman schrieb:
It appears you have an INSERT trigger on the table that iterates over the
inserted table.

To improve performance, add SET NOCOUNT ON at the beginning of the
trigger and/or proc to suppress DONE_IN_PROC messages. Also, consider
revising the trigger code to use a set-based technique.

Best regards,

Heiko

Jul 23 '05 #11
