Greetings All, I was hoping that someone out there has run into this
issue before and can shed some light on it for me. I have a stored
procedure that essentially does a mini ETL from a source OLTP DB to
the target Operational Data Store. This is still in development, so
both DBs reside on the same machine for convenience. The stored proc
runs successfully from within Query Analyzer, and this holds true on the
following platforms: XP Pro, 2000 Pro, and 2000 Server. However, if I
try to call the proc from the OSQL prompt on the 2000 machines, the proc
dies halfway through with errors that don't make sense and simply
shouldn't be there (remember, the proc runs from within Query Analyzer
successfully). Is it possible that 2000 Pro and 2000 Server behave
differently from XP Pro when the OSQL utility is run?
Further information:
The proc uses dynamic SQL, and sp_executesql is called within the
proc. The dynamic SQL string created is very close to the 4000
character limit imposed by the nvarchar datatype; I have the variable
@v_SQLString defined as varchar(4000).
It seems that the proc is dying somewhere within the portion of the
proc where this 4000-character dynamic SQL string is executed. There
are many dynamic SQL strings in this proc, and several execute just fine
before getting to this big one.
It is my opinion that for some reason the 2000 platforms see the
dynamic SQL string as longer than 4000 characters, and this causes the
query to fail. The XP machine does not do this, and the query executes
just fine from both Query Analyzer and the OSQL utility.
I apologize if this is nebulous; I am willing to provide any
further information.
Let me know if you can help.
TFD
"LineVoltageHalogen" <tr****************@yahoo.com> wrote in message
news:11*********************@g14g2000cwa.googlegroups.com... [original post quoted in full; snipped]
Since the procedure executes entirely on the server, it's not obvious (to
me) why the client OS would matter. Having said that, you mentioned there
are errors, but you didn't say what they are - can you post them, perhaps
with the corresponding lines of code from the proc (simplified if
necessary)? It would also be useful to see the osql.exe command line,
especially if you're passing parameters to it.
One possibility is that the errors are caused by SET options being different
between QA and osql.exe, but that's just a guess - you can use DBCC
USEROPTIONS to investigate. It seems unlikely that a different OS would
result in different connection settings, though, unless you're doing
something like using ODBC DSNs with different properties to connect.
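Comparing the two environments is just a matter of running the same command in each session and diffing the output:

```sql
-- Run this once from Query Analyzer and once from osql, then compare;
-- any row that differs (e.g. quoted_identifier, ansi_nulls) is a suspect.
DBCC USEROPTIONS
```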
Simon
I will re-run tonight when I get home and post the error. In the
meantime, here is the osql command:
- osql -S bobo2 -U sa -P sa -i go_bobo2.sql -o go_bobo2.out
Here is the contents of go_bobo2.sql:
-
use mydb
go
truncate table table1
truncate table table2
go
select getdate()
go
exec uspDbLoad '8CD1F53B-A463-4E4A-B7A3-4B515B061729', N'bobo2',
N'OltpData', N'dbo'
select getdate()
go
select 'Table=> ' + cast(o.name as varchar) + ' Num Rows=> ' +
cast(rowcnt as varchar)
from sysobjects o inner join sysindexes i
on o.id = i.id
where i.rowcnt <> 0
and o.xtype = 'U'
go
As for the code, the procedure is probably 20 pages long. However, if
you feel it would help, I could send the code where OSQL thinks it
is crashing. Also, the osql command listed above is called from within
a *.bat file.
TFD
Thanks for taking the time to work through this with me.
LineVoltageHalogen (tr****************@yahoo.com) writes: [original post quoted in full; snipped]
The error messages make even less sense if you don't post them. It's
essential for this sort of problem to post the error messages.
The one thing that is different in OSQL compared to Query Analyzer is that
OSQL by default runs with SET QUOTED_IDENTIFIER OFF. This can have an
effect if you are working with indexed views or indexed computed columns.
Or, of course, if you are using "" to quote object names.
Use the -I option to enable QUOTED_IDENTIFIER from the command line.
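An alternative to the -I switch, if editing the command line is awkward, is to force the setting at the top of the input script itself:

```sql
-- Put this at the top of the .sql file fed to osql -i, so the setting
-- holds regardless of how the utility was invoked.
SET QUOTED_IDENTIFIER ON
GO
```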
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinf...2000/books.asp
Simon Hayes wrote: [previous posts quoted in full; snipped]
Simon, I went ahead and ran "dbcc useroptions" on all three
machines (XP Pro, 2000 Pro, and 2000 Server), and they all returned the
same values. In case it was not clear, all three machines are running SQL
Server, and the command was run against the database on each machine. I
managed to get more information regarding this problem. I manually
logged into osql with all the same flags as in the bat file, then
manually ran the command from the *.sql file:
exec uspDbLoad '8CD1F53B-A463-4E4A-B7A3-4B515B061729', N'bobo2',
N'OltpData', N'dbo'
and it ran successfully! I then tried to run the batch file again,
and I get the same error:
*****************************************************************************
-----------------------
2005-02-08 21:15:30.587
(1 row affected)
DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages
------ ------ ----------- ----------- ----------- --------------
7 2 63 63 56 56
(1 row affected)
(1 row affected)
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
Msg 515, Level 16, State 2, Server BOBO2, Procedure
uspPsOltpLoadProductRevenues, Line 802
Cannot insert the value NULL into column 'PPRIdPK', table
'PersistentStore.dbo.PsProductRevenue'; column does not allow nulls.
INSERT fails.
Msg 241, Level 16, State 1, Server BOBO2, Procedure
uspPsOltpLoadProductRevenues, Line 826
Syntax error converting datetime from character string.
Msg 137, Level 15, State 2, Server BOBO2, Line 99
Must declare the variable '@v_Exi'.
The statement has been terminated.
-----------------------
2005-02-08 21:15:34.890
(1 row affected)
*****************************************************************************
Please keep in mind that this error does not occur when I run the
stored proc from Query Analyzer, and as I just discovered, it does not
occur when I manually log into osql and call the procedure just as it
is called in the batch file. Below is the output from running the
command manually at the osql prompt:
*****************************************************************************
1> exec uspDbLoad '8CD1F53B-A463-4E4A-B7A3-4B515B061729', N'',
N'OltpData, N'dbo'
2> go
DbId   FileId CurrentSize MinimumSize UsedPages   EstimatedPages
------ ------ ----------- ----------- ----------- --------------
7      2      63          63          56          56
(1 row affected)
(1 row affected)
DBCC execution completed. If DBCC printed error messages, contact your
system
administrator.
1>
*****************************************************************************
Please let me know if this helps.
TFD
"LineVoltageHalogen" <tr****************@yahoo.com> wrote in message
news:11*********************@g14g2000cwa.googlegroups.com... [original post quoted in full; snipped]
This may or may not help (free advice is worth the price).
I would set the -a parameter (packet size) as large as your biggest batch
(i.e. the character count between GOs),
and see if that does anything.
I will try that and get back to you. Thank you for your input, David.
TFD
LineVoltageHalogen (tr****************@yahoo.com) writes: Please keep in mind that this error does not occur when I run the stored proc from query analyzer... [snipped]
What exactly does that batch file look like?
The possibilities I can think of are:
1) There is more than one procedure with the same name, but with
different owners, of which one works and the other does not.
2) Same thing, but they are in different databases.
3) The batch file has some newline or formatting issue which causes
one or more of the parameters to be lost.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinf...2000/books.asp
David Rawheiser wrote: [previous posts quoted in full; snipped]
I tried changing the packet size as recommended and it did not work. I
did however get some useful information in the output file for the
change:
====================================================================
[ODBC Driver Manager] Driver's SQLSetConnectAttr failed
Packet size change not honored by server, server size used
1> 2> 1> 2> 3> 4> 1> 2> 3>
-----------------------
2005-02-09 22:08:19.380
(1 row affected)
1> 2> 3> DbId   FileId CurrentSize MinimumSize UsedPages   EstimatedPages
------ ------ ----------- ----------- ----------- --------------
7      2      63          63          56          56
(1 row affected)
(1 row affected)
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
Msg 515, Level 16, State 2, Server BOBO2, Procedure
uspPsOltpLoadProductRevenues, Line 802
Cannot insert the value NULL into column 'PPRIdPK', table
'PersistentStore.dbo.PsProductRevenue'; column does not allow nulls.
INSERT fails.
Msg 241, Level 16, State 1, Server BOBO2, Procedure
uspPsOltpLoadProductRevenues, Line 826
Syntax error converting datetime from character string.
Msg 137, Level 15, State 2, Server BOBO2, Line 99
Must declare the variable '@v_Exi'.
The statement has been terminated.
1> 2> 3>
====================================================================
I am out of ideas on this one. It is so strange that it works on my XP
machine but not on the 2000 machines.
L
Here are the exact contents of the batch file:
SET SERVER=BOBO2
SET SA_PW=SA
SET DB_NM=PersistentStore
SET OSQL_DIR=C:\Program Files\Microsoft SQL Server\80\Tools\Binn
ECHO Creating database tables. Time %TIME%
"%OSQL_DIR%\osql" -a 65535 -U sa -P "%SA_PW%" -S "%SERVER%" -d"%DB_NM%"
-i go_bobo2.sql -o go_bobo2.out
The file go_bobo2.sql looks like:
====================================================================
use persistentstore
go
truncate table PsOltpLoadMessageLog
truncate table PsOltpLoadStatsLog
go
select getdate()
go
exec uspPersistentStoreLoad '8CD1F53B-A463-4E4A-B7A3-4B515B061729',
N'bobo2', N'support846_db', N'dbo'
go
select getdate()
go
select 'Table=> ' + cast(o.name as varchar) + ' Num Rows=> ' +
cast(rowcnt as varchar)
from sysobjects o inner join sysindexes i
on o.id = i.id
where i.rowcnt <> 0
and o.xtype = 'U'
go
====================================================================
Erland, I really appreciate your taking the time to consider this
issue. In response to your three points:
1) There are more than one procedure with the same name, but with
different owners, of which one works and the other not.
- This is not the case. There is only one database called
PersistentStore and I completely wipe it out each time I try to run the
load scripts. The wiping out actually drops the database and rebuilds
all the objects.
2) Same thing, but they are in different databases.
-- There is no other database referenced in the scripts.
3) The batch file has some newline or formatting issue which causes
one or more of the parameters to be lost.
-- I checked and the file seems to be clean. I even recreated the
file and typed in the command again.
This is so strange...
TFD
Hi,
If you are using varchar, the maximum string size is 8000 characters,
but you are using sp_executesql, which takes only nvarchar; instead,
you can just use execute(@sqlstatement).
Secondly, if you are getting funny errors from the dynamic SQL, the
best thing is to add a print statement; it will make your debugging
a lot easier.
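SSK's print suggestion sketched out, assuming the proc's @v_SQLString variable from earlier in the thread: emit the statement instead of executing it, then paste the output into Query Analyzer to see exactly what the server receives.

```sql
-- Debugging aid: print the dynamic statement rather than running it.
PRINT @v_SQLString
-- EXEC sp_executesql @v_SQLString   -- re-enable once the text looks right
```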
Thanks
ssk
SSK wrote: [previous post quoted in full; snipped]
SSK, thanks for the response. Please read the entire post and you will
see that that is not the problem at all. The stored procedure which is
dying can be successfully called from within Query Analyzer, and it can
successfully be run from OSQL if I manually type in the call, e.g.
exec uspProc, etc. Further, I can successfully run the proc on an XP
machine even when I call it from within a DOS batch file, which is where I
am having problems on Windows 2000 machines. The stored proc is just
fine; the problem is when I try calling the proc from within a *.bat
file on Windows 2000 machines.
Thanks, TFD.
LineVoltageHalogen (tr****************@yahoo.com) writes: Here are the exact contents of the batch file:
Thanks! Nothing strange there that I can see. Except, then, for the
actual result.
I think you need to insert some debug SELECT or PRINT in your procedure in
order to try to get some more clues. For instance, for the message:
Msg 515, Level 16, State 2, Server BOBO2, Procedure
uspPsOltpLoadProductRevenues, Line 802
Cannot insert the value NULL into column 'PPRIdPK', table
'PersistentStore.dbo.PsProductRevenue'; column does not allow nulls.
INSERT fails.
add the same SELECT that you have in the INSERT statement, but without
the INSERT. If it can return many rows, add a WHERE ... IS NULL
condition on the column that produces PPRIdPK.
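As a sketch with made-up names (the real table and key column are in the proc), the debug query would look something like:

```sql
-- Run the INSERT's SELECT on its own, keeping only the rows that would
-- violate the NOT NULL constraint on PPRIdPK. Names here are hypothetical.
SELECT src.*
FROM   dbo.SomeSourceTable AS src     -- whatever the INSERT selects from
WHERE  src.SourceKeyColumn IS NULL    -- the column that feeds PPRIdPK
```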
And how about this one:
Msg 241, Level 16, State 1, Server BOBO2, Procedure
uspPsOltpLoadProductRevenues, Line 826
Syntax error converting datetime from character string.
What does the SQL here look like? If you have data in a column which
should hold dates but contains spurious values, you have to tread carefully.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinf...2000/books.asp
The errors are quite different, so it's hard to know what's going on.
Have you tried using Profiler to see what's going on? It can show
dynamic SQL statements from both EXEC() and sp_executesql, and you
mentioned that the errors seem to occur in that part of your proc.
It may not give you a real answer, but Profiler will at least show what
SQL is really being executed, and hopefully that will give you a clue
as to where the problem is.
Simon
Erland Sommarskog wrote: [previous post quoted in full; snipped]
Erland, the stored proc that is bombing out when run from the *.bat
file is rather large, especially some of the dynamic SQL contained
within it. The errors seen above are similar to ones I have seen in
the past when the variable which holds the dynamic SQL string isn't
large enough. The dynamic SQL that is bombing out is just on the cusp
of the 4000-character nvarchar limit, and I was wondering if there could
be something funky within the O.S. environment that could be causing each
code point to take more than two bytes? It is a long shot, but I keep
thinking that I should at least be looking at this option. One more
piece of information which leads me to think that there may be a code
set/locale issue: when I run SQL that returns record sets in
Query Analyzer with the output set to "text", I get
garbled data, but if I scroll the screen in Query Analyzer from left to
right it clears up some of it. If you want, I can email you a Word doc
that has an image of what I am talking about; for that matter, I can
email the Word doc to anyone who wants to see it.
I will try your suggestions above and get back to you.
TFD
Simon, I will try your suggestion and get back to you with the results.
Please read my post to Erland and let me know if that sheds any light
on things, also let me know if you would like me to email the Word
document to you that has the image of the garbled data.
TFD
LineVoltageHalogen (tr****************@yahoo.com) writes: [previous post quoted in full; snipped]
You had three error messages:
Msg 515, Level 16, State 2, Server BOBO2, Procedure
uspPsOltpLoadProductRevenues, Line 802
Cannot insert the value NULL into column 'PPRIdPK', table
'PersistentStore.dbo.PsProductRevenue'; column does not allow nulls.
INSERT fails.
Msg 241, Level 16, State 1, Server BOBO2, Procedure
uspPsOltpLoadProductRevenues, Line 826
Syntax error converting datetime from character string.
Msg 137, Level 15, State 2, Server BOBO2, Line 99
Must declare the variable '@v_Exi'.
The statement has been terminated.
The last message is from a batch of dynamic SQL which fails to compile.
This could be a truncation issue, the real name of the variable being
longer than @v_Exi.
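The truncation is silent on assignment, which is what makes it confusing. A minimal repro, using a deliberately tiny buffer and a hypothetical variable name:

```sql
-- The SET does not raise an error; the variable just keeps the first
-- 12 characters, chopping the referenced name down to '@v_Exi'.
DECLARE @sql nvarchar(12)
SET @sql = N'PRINT @v_ExistingRowCount'
PRINT @sql                -- prints: PRINT @v_Exi
EXEC sp_executesql @sql   -- fails: Must declare the variable '@v_Exi'.
```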
However, the first two messages come from a stored procedure, and not
the one you call in the batch script. I would guess that you call this
procedure from your uspPersistentStoreLoad. Or do you call this
procedure through dynamic SQL? In that case, that sounds a little
funny; I've rarely written dynamic SQL that calls stored procedures.
(Note that if you need to determine the name of the procedure
dynamically, you can say "EXEC @sp_name @par1, @par2...")
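To spell out that parenthetical note (the procedure name and parameters here are placeholders):

```sql
-- The procedure name itself can be a variable; no dynamic SQL string is
-- needed just to decide which procedure to call.
DECLARE @sp_name sysname, @par1 int, @par2 int
SELECT @sp_name = N'dbo.uspSomeProc', @par1 = 1, @par2 = 2
EXEC @sp_name @par1, @par2
```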
In particular, the datetime error makes me suspect that you could have
a query that is not robust, and which will succeed or fail depending
on the query plan.
and I was wondering if there could be something funky within the O.S. environment that could be causing each code point to be larger than two bytes?
Nah, but if your dynamic SQL includes a call to host_name() and the
machine where this fails has a longer name than the one where
it succeeds, that could be an explanation.
Anyway, if you believe that you are on the brink of overflowing, then
maybe you should split the string up into two variables. Look at http://www.sommarskog.se/dynamic_sql.html#use-which to see how to combine
EXEC() with sp_executesql to achieve this.
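The split that page describes can be sketched like this; on SQL Server 2000, EXEC() accepts a concatenation, so the combined text can exceed a single variable's 4000-character limit:

```sql
-- Build the statement in two halves; EXEC() concatenates them at run time.
DECLARE @sql1 nvarchar(4000), @sql2 nvarchar(4000)
SET @sql1 = N'SELECT name FROM sysobjects '   -- first half, built as before
SET @sql2 = N'WHERE xtype = ''U'''            -- overflow continues here
EXEC (@sql1 + @sql2)
```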
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinf...2000/books.asp
Erland Sommarskog wrote: The last message is from a batch of dynamic SQL which fails to compile. This could be a truncation issue, the real name of the variable being longer than @v_Exi. [earlier quoted text snipped]
Erland, that is what I am thinking, and certainly that is what is
happening to the SQL; it is being truncated, as you imply. But why does it
only get truncated when it is called from within a batch file, but not
when I manually call it from within osql or isql, or for that matter
from within QA? I know I keep saying this, but it just blows my mind.
However, the first two messages come from a stored procedure, and not the one you call in the batch script. I would guess that you call this procedure from your uspPersistentStoreLoad. Or do you call this procedure through dynamic SQL? In that case, that sounds a little funny; I've rarely written dynamic SQL that calls stored procedures. (Note that if you need to determine the name of the procedure dynamically, you can say "EXEC @sp_name @par1, @par2...")
uspPersistentStoreLoad calls another procedure, uspStxOltpEtl, which
then calls the actual procedures that do the ETL process. There
are 35 procedures called from uspStxOltpEtl, one of them being
uspPsOltpLoadProductRevenues, which is the procedure that is
crapping out; this procedure is rather large, about 2000 lines of code.
It should be noted that about 19 other procedures get called before
this one and they all execute successfully; however,
they are all small in comparison. No procedures are called with
dynamic SQL.
In particular, the datetime error makes me suspect that you could have a query that is not robust, and which will succeed or fail depending on the query plan.
This is an interesting suggestion. Can you provide an example of how a query could fail depending on the query plan? To be honest, Erland, these procs really are not doing anything fancy; they are mostly doing selects and inserts, with maybe some updates.
and I was wondering if there could be something funky within the
O.S. environment that could be causing each code point to be larger than
two bytes?
Nah, but if your dynamic SQL includes a call to host_name() and the machine where this fails has a longer name than the one where it succeeds, that could be an explanation.
Anyway, if you believe that you are on the brink of overflowing, then maybe you should split the string up into two variables. Look at http://www.sommarskog.se/dynamic_sql.html#use-which to see how to
combine EXEC() with sp_executesql to achieve this.
I appreciate your taking the time to interact on this issue, Erland; you have been a great help. I think that this problem was probably a good thing to happen to me and my company at this time.
The work I am doing is for a new software product; we currently have
two other software products, and they too rely on SQL Server for a back-end
DB. I am new to the company, and the DBA before me used batch files to
do the buildout of the database piece of the software install. I
think that I am going to leverage C# and SQL-DMO to do this task now;
this will get me out of using batch files.
Peace and good day to you, Erland.
TFD
LineVoltageHalogen (tr****************@yahoo.com) writes:
>> uspPersistentStoreLoad calls another procedure, uspStxOltpEtl, which
>> then calls the actual procedures that do the ETL process. There are 35
>> procedures called from uspStxOltpEtl, one of them being
>> uspPsOltpLoadProductRevenues, which is the procedure that is crapping
>> out. This procedure is rather large, about 2000 lines of code. It
>> should be noted that about 19 other procedures get called before this
>> one, and they all execute successfully; however, they are all small in
>> comparison. No procedures are called with dynamic SQL.
It may be large, but you could do yourself a service and dig out the
lines where it's failing. You have the line numbers in the error messages.
>> This is an interesting suggestion. Can you provide an example of
>> how a query could fail depending on the query plan? To be honest,
>> Erland, these procs really are not doing anything fancy; they are
>> mostly doing selects and inserts, with maybe some updates.
Here is a very quick example (I'm at work now):
SELECT * FROM tbl WHERE isdate(col) = 1 AND col > getdate()
Assume that col is a char(8) column and can contain values that are not
legal dates. This particular example may not fail, but my idea is that
the datetime error is due to something like this.
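A hedged sketch of one way to make such a query robust regardless of the chosen plan, assuming the same tbl and char(8) col as above: nesting the conversion inside a CASE expression guarantees the isdate() test is evaluated before any conversion is attempted.

```sql
-- The outer CASE acts as a gate: rows where col is not a legal date
-- never reach the convert(), so the query cannot fail on bad data.
SELECT *
FROM   tbl
WHERE  CASE WHEN isdate(col) = 1
            THEN CASE WHEN convert(datetime, col) > getdate()
                      THEN 1 ELSE 0 END
            ELSE 0
       END = 1
```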
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinf...2000/books.asp
On 10 Feb 2005 19:02:36 -0800, LineVoltageHalogen wrote:
(snip)
>> Erland, that is what I am thinking, and certainly that is what is
>> within SQL; it is being truncated as you imply. But why does it only
>> get truncated when it is called from within a batch file but not when
>> I manually call it from within osql or isql, or for that matter from
>> within QA? I know I keep saying this but it just blows my mind.
Hi TFD,
Wild guess to follow:
Various settings (like ANSI settings and such) have a default at server,
database, or user level, but can be overridden. Many clients (like QA)
will automatically override these settings with their own defaults.
Could it be possible that an option that controls the formatting of some
pieces in your dynamic SQL is different? If one formatting option uses
more space than another one, this might explain why the string is
truncated in one case but not in another case.
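One way to test this guess is to capture the session settings from each environment and diff them; for instance:

```sql
-- Run once from Query Analyzer and once from the osql session that the
-- batch file starts, then compare the two outputs line by line.
DBCC USEROPTIONS                    -- SET options in effect for this session
SELECT @@OPTIONS AS OptionBitmask   -- the same settings as a single bitmask
```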
You are using dynamic SQL, right? Or have I gotten some threads mixed up?
>> In particular, the datetime error makes me suspect that you could have
>> a query that is not robust, and which will succeed or fail depending
>> on the query plan.
>> This is an interesting suggestion. Can you provide an example of
>> how a query could fail depending on the query plan? To be honest,
>> Erland, these procs really are not doing anything fancy; they are
>> mostly doing selects and inserts, with maybe some updates.
With the above theory, another explanation for the datetime error is
possible. If you build datetime constants in your dynamic SQL and rely on
implicit formatting, they could be used in a format that SQL Server won't
recognise. The date constant '03/04/2005' can be either March 4th or April
3rd, depending on regional settings. If you build a date as dd/mm/yyyy but
SQL Server thinks it's mm/dd/yyyy, then you'll get an error when dd > 12.
Check how you build your dynamic SQL. If you use a date constant anywhere,
make sure that it's in either of these three formats, as these are the
only formats that are guaranteed unambiguous:
* yyyymmdd (date only)
* yyyy-mm-ddThh:mm:ss (date plus time)
* yyyy-mm-ddThh:mm:ss.ttt (ditto, including milliseconds)
If you build a date constant from a datetime variable, use CONVERT with a
style parameter of 112 (date only) or 126 (date plus time).
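For instance, a constant built from a datetime variable might be assembled like this sketch (the Orders table and OrderDate column are made up for illustration):

```sql
DECLARE @d datetime, @sql nvarchar(4000)
SELECT @d = getdate()
-- Style 112 produces yyyymmdd, which SQL Server parses the same way
-- under any language or DATEFORMAT setting.
SELECT @sql = N'SELECT * FROM Orders WHERE OrderDate >= ''' +
              convert(char(8), @d, 112) + N''''
```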
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)
Erland, I have decided to break up my SQL into two smaller units and
then combine them as described on your website. I think that this is a
good idea because the errors I keep getting strongly resemble those
that have popped up due to the SQL string being beyond 4000 bytes. It
is my hope that by breaking it up the strange behavior will go away on
the Windows 2000 platform. With that said, I am having a bit of a
problem breaking up the SQL. Here is the gist of what I need to do; it
is a curtailed version:
*******************************
DECLARE @sql1 nvarchar(4000)
DECLARE @sql2 nvarchar(4000)
DECLARE @v_OLTPQualifiedPath nvarchar(1000)
DECLARE @v_TotalRowsLoaded integer
DECLARE @v_ExitStatus integer
SELECT @v_OLTPQualifiedPath = 'mydata.dbo.'
SELECT @sql1 = N'SELECT COUNT(*)'
SELECT @sql2 = N' from ' + @v_OLTPQualifiedPath + 'customer
SELECT @v_TotalRowsLoaded = @@ROWCOUNT,
@v_ExitStatus = @@ERROR '
EXEC('
EXEC sp_executesql N''' + @sql1 + @sql2 + ''',
N''@v_TotalRowsLoaded integer OUTPUT'',
N''@v_ExitStatus integer OUTPUT'', N''@v_OLTPQualifiedPath
nvarchar(1000)''
@v_TotalRowsLoaded OUTPUT, @v_ExitStatus
OUTPUT, @v_OLTPQualifiedPath = ''' + @v_OLTPQualifiedPath + '''
')
***************************************************
Obviously sql1 and sql2 will be much larger. However, I am having a
problem getting this little snippet to run. I must be missing a quote
mark somewhere.
As you can see I am grabbing the exit status and error code so that I
can trap and act accordingly if an error took place. Do you see any
gross error in the code I have posted above?
Any help would be greatly appreciated.
TFD
LineVoltageHalogen (tr****************@yahoo.com) writes:
>> Erland, I have decided to break up my SQL into two smaller units and
>> then combine them as described on your website. I think that this is a
>> good idea because the errors I keep getting strongly resemble those
>> that have popped up due to the SQL string being beyond 4000 bytes.
It seemed to me quite clear that the errors you got in that stored
procedure did not come from dynamic SQL.
Sure, rewriting is probably a good thing anyway, but I have the feeling
that you will have to dig into what is going on on those lines anyway.
Finding errors in SQL code is not about voodoo; it's about getting
your hands dirty.
>> Obviously sql1 and sql2 will be much larger. However, I am having a
>> problem getting this little snippet to run. I must be missing a quote
>> mark somewhere.
Not a quote. I reformatted the code, and suddenly the error was
apparent:
DECLARE @sql1 nvarchar(4000)
DECLARE @sql2 nvarchar(4000)
DECLARE @v_OLTPQualifiedPath nvarchar(1000)
DECLARE @v_TotalRowsLoaded integer
DECLARE @v_ExitStatus integer
SELECT @v_OLTPQualifiedPath = 'mydata.dbo.'
SELECT @sql1 = N'SELECT COUNT(*)'
SELECT @sql2 = N' from ' + @v_OLTPQualifiedPath + 'customer
SELECT @v_TotalRowsLoaded = @@ROWCOUNT, @v_ExitStatus = @@ERROR '
EXEC('
EXEC sp_executesql N''' + @sql1 + @sql2 + ''',
N''@v_TotalRowsLoaded integer OUTPUT'',
N''@v_ExitStatus integer OUTPUT'',
N''@v_OLTPQualifiedPath nvarchar(1000)''
@v_TotalRowsLoaded OUTPUT,
@v_ExitStatus OUTPUT,
@v_OLTPQualifiedPath = ''' + @v_OLTPQualifiedPath + '''
')
Do you spot it?
There is another problem here, though. The OUTPUT parameters will not
work well here, so you will have to do:
INSERT #tmp (...)
EXEC ('DECLARE ....
EXEC sp_executesql ...
SELECT @out1, @out2')
This won't fly if you are using INSERT EXEC somewhere in the procedures.
And in any case you will be running the procedures in transaction
context. Hm, this you can avoid:
EXEC ('DECLARE ...
EXEC sp_executesql
INSERT #tmp (...) VALUES(@out1, @out2)')
Yes, it's only getting uglier. In SQL 2005 there is nvarchar(MAX), and
you can use sp_executesql without restrictions.
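On SQL 2005 the whole workaround collapses; a sketch, reusing the count query from the snippets above:

```sql
-- nvarchar(MAX) removes the 4000-character ceiling, and sp_executesql
-- takes OUTPUT parameters directly, so no EXEC() wrapper is needed.
DECLARE @sql nvarchar(MAX), @rows int
SELECT @sql = N'SELECT @rows = COUNT(*) FROM mydata.dbo.customer'
EXEC sp_executesql @sql, N'@rows int OUTPUT', @rows OUTPUT
SELECT @rows AS TotalRowsLoaded
```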
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinf...2000/books.asp
Erland, I got it to work. I had to create a global temp table because
otherwise when I went to select from it outside of the EXEC() I was out
of scope. Here is what the code looks like now:
********************************************************
DECLARE @sql1 nvarchar(4000)
DECLARE @sql2 nvarchar(4000)
DECLARE @v_OLTPQualifiedPath nvarchar(1000)
SELECT @v_OLTPQualifiedPath = 'mydata.dbo.'
SELECT @sql1 = N'SELECT COUNT(*)'
SELECT @sql2 = N' from ' + @v_OLTPQualifiedPath + 'customer'
EXEC('DECLARE @v_TotalRowsLoaded integer
DECLARE @v_ExitStatus integer
create table ##tbOperationStatus (rowsloaded integer,
exitstatus integer)
EXEC sp_executesql N''' + @sql1 + @sql2 + ''',
N''@v_OLTPQualifiedPath nvarchar(1000)'',
@v_OLTPQualifiedPath = ''' +
@v_OLTPQualifiedPath + '''
INSERT INTO ##tbOperationStatus values (@@ROWCOUNT, @@ERROR)
'
)
select * from ##tbOperationStatus
********************************************************
Does there exist the concept of a global table variable that I could
implement instead of the temp table?
I guess there would be no benefit though, this table is only going to
hold one row of data that has only two attributes.
TFD
Was I missing a comma after N''@v_OLTPQualifiedPath nvarchar(1000)''?
I think that was it, because I got a new error after I put the comma in.
So I need to declare the variables @v_TotalRowsLoaded and @v_ExitStatus
within the EXEC() to make this code snippet work? I am trying it now.
Thanks for your help
TFD
LineVoltageHalogen (tr****************@yahoo.com) writes:
>> Erland, I got it to work. I had to create a global temp table because
>> otherwise when I went to select from it outside of the EXEC() I was
>> out of scope. Here is what the code looks like now:
A global temp table is overkill, and can even cause a mess, as it may
not go away when you intend. Just create a regular temp table
outside the EXEC statement and you are fine.
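That rewrite could look like this sketch, reusing the table layout from the snippet above:

```sql
-- A temp table created in the outer scope is visible to code run
-- through EXEC(), and it disappears by itself when the session ends.
CREATE TABLE #tbOperationStatus (rowsloaded int, exitstatus int)
EXEC('INSERT INTO #tbOperationStatus VALUES (@@ROWCOUNT, @@ERROR)')
SELECT rowsloaded, exitstatus FROM #tbOperationStatus
```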
>> Does there exist the concept of a global table variable that I could
>> implement instead of the temp table?
You can have a permanent table that is keyed by @@spid. Sometimes this
is a good technique, but for this case it would be overkill.
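For completeness, the @@spid-keyed variant would be along these lines (the table name here is made up):

```sql
-- A permanent table shared by all sessions; each session only touches
-- rows tagged with its own server process id.
CREATE TABLE tbOperationStatus
    (spid int NOT NULL, rowsloaded int, exitstatus int)

DELETE tbOperationStatus WHERE spid = @@spid      -- clear any leftovers
INSERT tbOperationStatus VALUES (@@spid, 0, 0)
SELECT rowsloaded, exitstatus
FROM   tbOperationStatus WHERE spid = @@spid
```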
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinf...2000/books.asp