
Some comments on DTS package


On quick observation, DTS seems to have a strange way of interpreting the
file system. Environment: Windows NT, SQL Server 2000.
When a data import package (source pointing to c:\XYZdir) is executed
from Enterprise Manager, SQL Server seems to treat "c:\XYZdir" as the
current user's "c:\XYZdir"; however, if the same package is scheduled
and executed as a job, SQL Server seems to treat "c:\XYZdir" as THE SQL
SERVER INSTALLATION MACHINE's c:\XYZdir. It seems quite inconsistent.
Please let me know what you find about this.

Also, I use ActiveX scripts to perform exception handling (error
checking) for certain transactions. For instance, before a package
imports a file, an ActiveX script checks the source file's existence and
format; if they are not as expected, it halts there (does not execute the
package). Now, the two cases of (a) the file not existing and (b) an
incorrect file format can't be distinguished simply by not running the
package. Probably the ActiveX script should capture each case by creating
a record/row during the file check. What's your thought?
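To make the idea concrete, here is a minimal sketch of that kind of ActiveX (VBScript) pre-check step. The file path is hypothetical, and the "format" test is a deliberately crude placeholder; the key DTS pieces are `Scripting.FileSystemObject` for the checks and returning `DTSTaskExecResult_Failure` to halt the package at this step:

```vbscript
' ActiveX Script task: validate the source file before the import runs.
' Returning DTSTaskExecResult_Failure stops the package at this step.
Option Explicit

Function Main()
    Dim fso, srcFile
    srcFile = "c:\XYZdir\data.txt"   ' hypothetical source file

    Set fso = CreateObject("Scripting.FileSystemObject")

    If Not fso.FileExists(srcFile) Then
        ' Case (a): file does not exist.
        Main = DTSTaskExecResult_Failure
        Exit Function
    End If

    If fso.GetFile(srcFile).Size = 0 Then
        ' Case (b): bad format -- here crudely approximated as "empty file".
        ' Replace with a real header/layout test for your feed.
        Main = DTSTaskExecResult_Failure
        Exit Function
    End If

    Main = DTSTaskExecResult_Success
End Function
```

To tell cases (a) and (b) apart afterwards, each failure branch could also write a row to an audit table (e.g. via an `ADODB.Connection` INSERT with a reason code) before returning failure, which is the record-per-case idea described above.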
TIA.

Jul 23 '05 #1
4 Replies


It wouldn't surprise me. I've had similar findings with other aspects of
DTS. For example, we had differing versions of MDAC on the server and my
development PC, and the difference was enough to stop our package working
(connection string differences). From what I understand, DTS uses the
local user's settings when you run a package manually, because it knows
who ran it; but when you run it as a scheduled job, the server is the one
that calls the job (not a user), so it uses the settings, files and
permissions of the server, as it can't determine which user's settings it
'should' be using. It makes sense for it to run as the server, and it is
consistent. UNC paths should work fine if you can use them; otherwise,
take another look at the specifics of what you want to achieve. There may
be some helpful stuff on www.sqldts.com
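As a sketch of the UNC suggestion: the package's source connection can point at a share instead of a drive letter, so the same file resolves whether the package runs from EM or under the Agent. The connection name and share below are hypothetical; `DTSGlobalVariables.Parent` is the running package object:

```vbscript
' ActiveX Script step that repoints the source connection at a UNC path.
' "Text File (Source)" and the share name are placeholders -- use the
' connection name and share from your own package.
Function Main()
    Dim pkg
    Set pkg = DTSGlobalVariables.Parent   ' the executing DTS package
    pkg.Connections("Text File (Source)").DataSource = _
        "\\fileserver\imports\XYZdir\data.txt"
    Main = DTSTaskExecResult_Success
End Function
```

The SQL Agent's service account still needs read permission on the share for the scheduled case.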

HTH

Ryan

Jul 23 '05 #2

On 20 Dec 2004 13:42:58 -0800, NickName wrote:
When a data import package (source pointing to c:\XYZdir) is executed
from Enterprise Manager, SQL Server seems to treat "c:\XYZdir" as the
current user's "c:\XYZdir"; however, if the same package is scheduled
and executed as a job, SQL Server seems to treat "c:\XYZdir" as THE SQL
SERVER INSTALLATION MACHINE's c:\XYZdir. It seems quite
inconsistent.


It's totally consistent. You give it an absolute path with a C: in it, so
it uses an absolute path with a C: in it. When you execute immediately
from EM, the job runs using your credentials on your machine. When you
schedule it, it runs using the SQL Agent's credentials on the SQL Agent's
machine.

By way of analogy, if a .BAT file contained

FORMAT C:

would you expect it to always format the C drive of the person who wrote
it, no matter where the batch file was being run? Of course not. It would
format the local C: drive.
Jul 23 '05 #3

Good point on UNC.

Jul 23 '05 #4

And actually, I was already using UNC paths after my negative experience
with conventional drive letters.

Jul 23 '05 #5
