
Stored procedure and SQL Job Agent Task

I have a stored procedure that's transferring/processing data from one
table to two different tables. The destination tables have a unique
value constraint because the source table at times has duplicate records,
and the constraint prevents the duplicates from being reported. When the
stored procedure (which includes a cursor) is executed through Query
Analyzer, it runs fine and reports an error every time it sees a
duplicate value (as expected). It moves all the unique values from the
source to the destination tables.

However, if the same stored procedure is run as a task/job in SQL
Server Agent, the behaviour is different. The job fails when it sees
the error and ends up skipping records or terminating the procedure
altogether. E.g., if there are 100 records in the source table with 10
duplicates, the stored procedure when run through Query Analyzer will
copy the 90 unique records to the destination tables, but when run from
SQL Server Agent it copies just 10-15 records.

Any idea why this happens?

Feb 5 '07 #1
Never mind...

Added "SET NOCOUNT ON" to the SP and the task now works just fine.

Feb 5 '07 #2
I see that you have resolved the issue, but generally I think it's
better to use NOT EXISTS to keep the duplicate error from happening in
the first place. You mention a cursor, but if the sole purpose is to
copy rows from one table to another, running a cursor is not very efficient.
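For illustration, a minimal sketch of that kind of set-based copy, with hypothetical table and column names (dbo.RawLog, dbo.Report, key column KeyValue); it replaces the cursor with a single INSERT ... SELECT guarded by NOT EXISTS:

INSERT INTO dbo.Report (KeyValue, Payload)
SELECT DISTINCT r.KeyValue, r.Payload   -- DISTINCT also drops duplicates
FROM dbo.RawLog AS r                    -- that occur inside the raw table itself
WHERE NOT EXISTS (SELECT *
                  FROM dbo.Report AS rep
                  WHERE rep.KeyValue = r.KeyValue);

Note that NOT EXISTS only filters out rows already present in the destination, so the DISTINCT (or a GROUP BY) is still needed to handle duplicates within the batch being copied.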
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Feb 5 '07 #3
Would NOT EXISTS add any overhead to the query? The application
logging data to the SQL Server is using OLE DB and, as a test, is
currently logging nearly 10,000 rows in the table every minute. I was
wondering if NOT EXISTS would actually slow down the INSERT.

Feb 6 '07 #4
I am a little confused here. In your original post you talked about an
Agent job and you even mentioned a cursor. Now you are talking about
an application that logs data.

But let's say that we are talking about application logging now. Then I
am curious: how come the application is (attempting to) log duplicates?

As for the question as such, NOT EXISTS may add an overhead, but so will
an error that has to be rolled back.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Feb 6 '07 #5
The application logs "raw" data. The SQL job (stored procedure) is
adding attributes to that data and moving it to "report" tables. The
cursor was used to select the right attributes based on the data
logged.

The application is logging data from a hardware device (a PLC) that's
generating data faster than the application can accept (at times),
hence the duplicates, which we accept to avoid any data loss during
the timing issue. There is unfortunately no way to control the duplicate
problem at the application level. But since it's generating data so
fast, we need to just dump the raw data in a table and then copy it
for reporting purposes.

Feb 6 '07 #6
I can understand that you log everything in the raw tables. That
certainly seems like the best strategy. And my suggestion was not
that you have the WHERE NOT EXISTS in this place, but rather in
the Agent job.

Would the WHERE NOT EXISTS be too costly in the Agent job? I don't
think so. After all, if you can afford having a cursor, then
you don't appear to be in a hurry...
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Feb 6 '07 #7
I'll give it a shot. Is it generally more efficient to check for
duplicates through T-SQL, like with WHERE NOT EXISTS, or to use
constraints on the table? I realize the latter is a little painful as
it clogs the error logs for the job history.

Thanks, Vikram


Feb 6 '07 #8
If you want to prevent duplicates at all, you should use constraints.
And then you may use WHERE NOT EXISTS to avoid the errors in the first
place. Note that if you only use WHERE NOT EXISTS but do not have any
index at all, performance will be awful.

As for that log table, I would not put any constraints on it that
prevent duplicates, nor any WHERE NOT EXISTS. It seems to me that
it may be better to permit the application to log what it logs and
weed out duplicates later.
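A minimal sketch of that combination, again with hypothetical names: the unique constraint lives on the report table (and the unique index it creates also makes the NOT EXISTS probe cheap), while the raw log table is left without constraints so the application never fails while logging.

-- Safety net on the destination; SQL Server backs a UNIQUE constraint
-- with a unique index, which the NOT EXISTS lookup can use.
ALTER TABLE dbo.Report
    ADD CONSTRAINT UQ_Report_KeyValue UNIQUE (KeyValue);

-- The Agent job then runs the INSERT ... WHERE NOT EXISTS shown earlier
-- in the thread, so the constraint is only a last line of defence and
-- the job history stays free of duplicate-key errors.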

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Feb 7 '07 #9
Thanks Erland - I noticed the performance is much better with WHERE
NOT EXISTS... and the SQL job completes much more cleanly.

Feb 7 '07 #10


