Bytes | Software Development & Data Engineering Community
Copying Data from Access to SQL Server

TC
I need an automated procedure to copy data from an Access table to a
SQL Server table. Speed is important. What is the recommended
technique?

I can export the data from Access, copy it via FTP, then import it into
SQL Server. I've tried that, and the speed is acceptable. It is an ugly
solution, however, and I expect to find a better one -- preferably a
solution better integrated with the Access RDBMS.

I've tried using an ODBC connection and an INSERT query. That's a good,
clean technique, but is extremely slow (about 10x slower than the file
copy technique).

I've also tried using an OLEDB connection and ADO recordsets to copy
the data procedurally. That was even slower (about 30x slower than the
file copy).

None of this makes sense to me. Why can't Access transfer data to SQL
Server through ODBC or OLEDB virtually as fast as the network can carry
the data? There are no indexes or triggers involved, and no transaction
processing -- just a simple column-for-column transfer of data from one
table to another. I feel as though I must be missing something obvious.
Can anyone offer any insight?
-TC
By the way, I'm using Access 2003 and SQL Server 2000. The network is a
WAN and I'm using TCP/IP as the default protocol.

Jul 3 '06 #1
There are many ways to achieve what you want; the following list is not
exhaustive, these just happen to be the ways I have done similar work.

You can either push the data from Access or pull the data from SQL Server.

Pushing the data
1) Recordset to Recordset
Copy the data row by row and field by field from one recordset to
the other
2) Recordset to Insert query
Create an Insert statement for each row in the recordset
3) Linked table insert
Link to the SQL Server table and use an Insert statement to insert
the data

Pulling the data
1) DTS package
     Create a DTS package in SQL Server to pull the data. You can execute
the DTS package either from the Access database or by calling a stored
procedure on the SQL Server which executes the DTS package.
2) Linked Server
     You can create a linked server in SQL Server which points at your
Access DB; you can then call a stored procedure to insert the data from the
linked server into your SQL table.
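A minimal VBA sketch of option 3 (linked table insert), assuming an
ODBC-linked table named tblTargetLink and a local table tblLocal with
matching columns (all names hypothetical):

```vba
' Push rows through an ODBC-linked table with one bulk INSERT...SELECT
' rather than a row-by-row loop (requires a DAO reference; names are
' hypothetical).
Sub PushViaLinkedTable()
    Dim db As DAO.Database
    Set db = CurrentDb
    db.Execute "INSERT INTO tblTargetLink (Field1, Field2) " & _
               "SELECT Field1, Field2 FROM tblLocal", dbFailOnError
    Set db = Nothing
End Sub
```

The single INSERT...SELECT lets Jet hand the whole set to the ODBC driver
at once instead of issuing one statement per record from your own loop.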

--

Terry Kreft
"TC" <go*********@yahoo.comwrote in message
news:11**********************@m73g2000cwd.googlegr oups.com...
I need an automated procedure to copy data from an Access table to a
SQL Server table. Speed is important. What is the recommended
technique?

I can export the data from Access, copy it via FTP, then import it into
SQL Server. I've tried that, and the speed is acceptable. It is an ugly
solution, however, and I expect to find a better one -- preferably a
solution better integrated with the Access RDBMS.

I've tried using an ODBC connection and an INSERT query. That's a good,
clean technique, but is extremely slow (about 10x slower than the file
copy technique).

I've also tried using an OLEDB connection and ADO recordsets to copy
the data procedurally. That was even slower (about 30x slower than the
file copy).

None of this makes sense to me. Why can't Access transfer data to SQL
Server through ODBC or OLEDB virtually as fast as the network can carry
the data? There are no indexes or triggers involved, and no transaction
processing -- just a simple column-for-column transfer of data from one
table to another. I feel as though I must be missing something obvious.
Can anyone offer any insight?
-TC
By the way, I'm using Access 2003 and SQL Server 2000. The network is a
WAN and I'm using TCP/IP as the default protocol.

Jul 3 '06 #2
TC wrote in message
<11**********************@m73g2000cwd.googlegroups.com>:
<SNIP>
Have you tried OPENROWSET (watch for line breaks)?
http://msdn.microsoft.com/library/de...oa-oz_78z8.asp

Say a one-field/one-table example: create an SP

CREATE PROCEDURE dbo.uspCopyTable
AS
INSERT INTO dbo.mytable (myfield)
SELECT myfield
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
    'c:\mypath\mydb.mdb';'admin';'mypwd', mytable)
GO

fire it off by something like this

Dim cn As ADODB.Connection
Set cn = New ADODB.Connection
cn.Open "Provider=sqloledb;" & _
    "Data Source=myServerName;" & _
    "Initial Catalog=myDatabaseName;" & _
    "Integrated Security=SSPI"
cn.Execute "dbo.uspCopyTable"

--
Roy-Vidar
Jul 3 '06 #3
If I understand what you require correctly:

I create an ADP file connected to the SQL Server / SQL Database.

To transfer Table "Temp" I run this code from the MDB file:

****
Const ADPFile As String = "C:\Documents and Settings\Lyle Fairfield\" & _
    "My Documents\Access\FFDBABooks.adp"

Sub temp()
DoCmd.TransferDatabase acExport, "Microsoft Access", _
ADPFile, acTable, "Temp", "Temp", False
End Sub
****

I cannot comment about speed as I have used this only for small tables,
but it seems instantaneous for them.

In my full code I loop through Table Names and transfer them all. (My
actual use for this is in reverse; I use it to keep a working MDB
backup on my local machine of my remote SQL Server data.)

Column Definitions are munged, of course, to comply with what SQL Server
thinks it is getting.

Indexes are not included in the Transfer.

After the Transfer the ADP is fully "automatable", of course. If I need an
Append, I supplement my code to run SQL in the ADP to do the Append and
to delete the Temporary Table, or I can run SQL to modify Column Types
and to Add Indexes.

Jul 3 '06 #4
TC
Thank you for all the suggestions. Here are the transfer rates I
achieve with several of the recommended techniques:

1. Export, FTP, and Import -- 280 rows/s
2. TransferDatabase to ADP file -- 22 rows/s
3. INSERT into ODBC-linked table -- 18 rows/s
4. Procedural Loop through ADO Recordsets -- 6 rows/s

My question is really a conceptual one. Why is it so much faster to do
the transfer outside the RDBMS (1) than inside the RDBMS (2,3,4)?

Before doing this, I would have expected the opposite -- I would have
thought that techniques specifically designed to transfer data between
databases would be faster than generic techniques. In fact, the results
are so surprising, I feel I need a reality check. Is there really no
efficient way to transfer data directly between Access and SQL Server?

-TC

Jul 3 '06 #5
TC wrote:
Thank you for all the suggestions. Here are the transfer rates I
achieve with various of the recommended techniques:

1. Export, FTP, and Import -- 280 rows/s
2. TransferDatabase to ADP file -- 22 rows/s
3. INSERT into ODBC-linked table -- 18 rows/s
4. Procedural Loop through ADO Recordsets -- 6 rows/s
Yikes ... some variance!

Do you export as a text file?

Do you FTP to the server where SQL-Server runs, or to a connected
Server?

Do you import with bcp?

Jul 3 '06 #6
TC wrote:
Thank you for all the suggestions. Here are the transfer rates I
achieve with various of the recommended techniques:

1. Export, FTP, and Import -- 280 rows/s
2. TransferDatabase to ADP file -- 22 rows/s
3. INSERT into ODBC-linked table -- 18 rows/s
4. Procedural Loop through ADO Recordsets -- 6 rows/s

My question is really a conceptual one. Why is it so much faster to do
the transfer outside the RDBMS (1) than inside the RDBMS (2,3,4)?
I would imagine that a large part of the difference is because bulk operations
like the import are not logged. This is also why DTS transfers are very fast.

--
Rick Brandt, Microsoft Access MVP
Email (as appropriate) to...
RBrandt at Hunter dot com
Jul 3 '06 #7
TC
Lyle,

I tried exporting a text file, an Access .mdb, and an Excel file. I get
the fast speeds with Access and Excel -- the text file is just too big.

I FTP to the same server where SQL Server is running.

I'm using DTS, not BCP.
-TC

Lyle Fairfield wrote:
<SNIP>
Jul 3 '06 #8
TC
Rick,

I saw the logging issue raised in another thread, so I turned on the
bulk copy option. I used a stored procedure with the following line:

EXEC sp_dboption @DatabaseName, 'select into/bulkcopy', 'True'

It didn't seem to make any difference, though. Did I do something
wrong?
-TC
Rick Brandt wrote:
<SNIP>
Jul 3 '06 #9
That 280 rows/s -- was that for the whole operation, or just the export, or
just the import?

--

Terry Kreft
"TC" <go*********@yahoo.comwrote in message
news:11*********************@m73g2000cwd.googlegro ups.com...
Thank you for all the suggestions. Here are the transfer rates I
achieve with various of the recommended techniques:

1. Export, FTP, and Import -- 280 rows/s
2. TransferDatabase to ADP file -- 22 rows/s
3. INSERT into ODBC-linked table -- 18 rows/s
4. Procedural Loop through ADO Recordsets -- 6 rows/s

My question is really a conceptual one. Why is it so much faster to do
the transfer outside the RDBMS (1) than inside the RDBMS (2,3,4)?

Before doing this, I would have expected the opposite -- I would have
thought that techniques specifically designed to transfer data between
databases would be faster than generic techniques. In fact, the results
are so surprising, I feel I need a reality check. Is there really no
efficient way to transfer data directly between Access and SQL Server?

-TC


Jul 4 '06 #10
No indexes, no logging: my next guess would be
that Access is sending the data one row at a time.
Turn on ODBC or server tracing and see how the
data is coming across.

(Or, of course, turn off ODBC logging if that
is turned on: that really slows things down :~)
(david)
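ODBC tracing can be switched on from the Tracing tab of the ODBC Data
Source Administrator; the same settings live in the registry (a sketch --
this is the per-user key the ODBC Administrator itself writes, and the
trace file path is hypothetical):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\ODBC\ODBC.INI\ODBC]
"Trace"="1"
"TraceFile"="C:\\odbctrace.log"
```

Remember to set "Trace" back to "0" afterwards; leaving it on slows every
ODBC operation on the machine.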

"TC" <go*********@yahoo.comwrote in message
news:11*********************@b68g2000cwa.googlegro ups.com...
Rick,

I saw the logging issue raised in another thread, so I turned on the
bulk copy option. I used a stored procedure with the following line:

EXEC sp_dboption @DatabaseName, 'select into/bulkcopy', 'True'

It didn't seem to make any difference, though. Did I do something
wrong?
-TC
Rick Brandt wrote:
>>
I would imagine that a large part of the difference is because bulk
operations
like the import are not logged. This is also why DTS transfers are very
fast.

--
Rick Brandt, Microsoft Access MVP
Email (as appropriate) to...
RBrandt at Hunter dot com

Jul 4 '06 #11
>
1. Export, FTP, and Import -- 280 rows/s
2. TransferDatabase to ADP file -- 22 rows/s
3. INSERT into ODBC-linked table -- 18 rows/s
4. Procedural Loop through ADO Recordsets -- 6 rows/s

My question is really a conceptual one. Why is it so much faster to do
the transfer outside the RDBMS (1) than inside the RDBMS (2,3,4)?
I am not surprised one bit by the above.

#1 - With FTP, and using the SQL (Enterprise Manager) tools, the server can
read the raw data.

#2, #3

How do you think a remote connection to SQL works?

Ok, build a SQL string. Now send the SQL string. The query processor then
loads. The query processor then parses the string. The query processor
then checks the SQL for syntax. The query processor then builds a query
plan. The query processor then optimizes this query plan. The query then
executes the SQL AND INSERTS ONE ROW of data!

Now, send a whole new SQL string for the next record. And you are surprised
this is slower than option one?

I don't think you should be surprised that this is going to be slower at
all...

Remember, when you use an ODBC (or OLEDB) connection, EACH data insert is
going to be a full line of SQL sent to the server. Often, just the SQL
syntax overhead is GOING TO BE LARGER THAN THE ACTUAL DATA transferred for
the ONE record. Remember, EACH record is going to require a FULL LINE OF
SQL string to insert the data.

With the FTP approach, and using the SQL tools, the use of a SQL statement
for EACH LINE of data is bypassed...
What you need to do is find the syntax in SQL for MULTI-RECORD inserts of
lines of data.

You then send about 10 or 20 lines of data IN ONE SQL pass-through
statement. Doing this would likely speed up the process by a factor of 10
or 20 times (in fact, the speed-up would be the factor of how many lines of
data you include in each SQL statement).

I am reasonably sure that SQL Server supports an insert statement with
MULTIPLE lines of data...so, do some testing.

Your code that loops would thus have to format the SQL statement in code
and send it as a pass-through to SQL Server...

Once you realize that an ODBC or remote connection can ONLY communicate
with SQL Server by sending FULLY formatted SQL statements for EACH record,
then there should be little, if any, surprise at the performance numbers
you posted...
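That multi-record syntax does exist on SQL Server 2000, though not as
multiple VALUES lists in one INSERT; rows can be batched with UNION ALL.
A sketch with hypothetical table and column names:

```sql
-- One round trip to the server inserts several rows
-- (table and column names are hypothetical).
INSERT INTO dbo.mytable (id, myfield)
SELECT 1, 'first row'
UNION ALL
SELECT 2, 'second row'
UNION ALL
SELECT 3, 'third row'
```

Your loop would build a batch like this in code and send it as one
pass-through statement, amortising the per-statement overhead over every
row in the batch.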

--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
http://www.members.shaw.ca/AlbertKallal

Jul 4 '06 #12
TC
Terry,

280 rows/s is for the whole operation.

I've also done some experiments with a local SQL Server. In that
scenario, there is no FTP step -- just export and import. It looks like
I'm getting a speed improvement of about 20x over the ODBC/append
technique.

Based on my observations so far, I must conclude that Access offers no
efficient way to append rows into SQL Server. Despite the extra steps
involved, it is far more efficient to export to a file, move the file
onto the server, then import into SQL Server. This is true even for a
local SQL Server.

I will put that statement up for debate. I'm curious to know whether
the experiences of other Access developers support or refute my
conclusion.
-TC
Terry Kreft wrote:
That 280 rows/s was that for the whole operation or just the export or just
the import?

<SNIP>
Jul 4 '06 #13
TC
Albert,

You have added something new to this discussion. Before this, I did not
know ODBC sends a different SQL statement for each row in an INSERT
statement. Can you provide more details? Is this a limitation of ODBC
only, or is it true also of OLEDB? I assume it is a problem only for
UPDATE, INSERT, and DELETE queries operating on SQL Server tables -- is
that right, or am I failing to see broader ramifications?

If the one-statement-per-row phenomenon is true (and it does have the
ring of truth to it), that suggests ODBC cannot be, and was probably
never intended to be, an efficient way of transferring bulk data
between databases. That brings me back to my original question: What is
an efficient way of transferring bulk data between databases?

I understand your suggestion about a multi-line pass-through query. I
doubt this will work in my case, however, since I want a solution
capable of sending memo/BLOB columns.
-TC
Albert D.Kallal wrote:
<SNIP>
Jul 4 '06 #14
"TC" <go*********@yahoo.comwrote in
news:11*********************@75g2000cwc.googlegrou ps.com:
Based on my observations so far, I must conclude that Access
offers no efficient way to append rows into SQL Server. Despite
the extra steps involved, it is far more efficient to export to a
file, move the file onto the server, then import into SQL Server.
This is true even for a local SQL Server.
Er, why would this kind of performance difference ever matter? That
is, how often is this going to be done in any real-world scenario?
I'd say: use the easiest method in terms of implementation and don't
worry at all about performance.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jul 4 '06 #15
TC
David,

The scenario is very real to me, and performance definitely matters.

Perhaps you meant to imply that an application which needs to do
frequent bulk uploads from client to server should not be built with
Access / SQL Server? If so, I agree with you, but I have no choice in
the matter -- I'm working with the challenge I've been given. And
frankly, with the export/transfer/import technique, I think I'm headed
toward a pretty good solution.
-TC
David W. Fenton wrote:
<SNIP>
Jul 5 '06 #16
ODBC often sends one row at a time; it does not always send
one row at a time.

The most well-known place you see this is reading data: you
can see when the ODBC connection is using the primary key
and loading one row at a time (slow), or loading all rows (fast).

You can sometimes reproduce this behaviour on a subform when
changing the subform recordsource: change the subform recordsource
(the subform loads slow), reset the subform recordsource !twice! and the
subform loads fast. An ODBC trace shows that the slow transfer is
single records using the primary key, and the fast transfer is a bulk
transfer.

I am not aware of any explicit way to control this behaviour.
Since you say that you have no indexes, presumably you do not
have a primary key, so it's not that simple.

(david)

"TC" <go*********@yahoo.comwrote in message
news:11*********************@v61g2000cwv.googlegro ups.com...
Albert,

You have added something new to this discussion. Before this, I did not
know ODBC sends a different SQL statement for each row in an INSERT
statement. Can you provide more details? Is this a limitation of ODBC
only, or is it true also of OLEDB? I assume it is a problem only for
UPDATE, INSERT, and DELETE queries operating on SQL Server tables -- is
that right, or am I failing to see broader ramifications?

If the one-statement-per-row phenomenon is true (and it does have the
ring of truth to it), that suggests ODBC cannot be, and was probably
never intended to be, an efficient way of transferring bulk data
between databases. That brings me back to my original question: What is
an efficient way of transferring bulk data between databases?

I understand your suggestion about a multi-line pass-through query. I
doubt this will work in my case, however, since I want a solution
capable of sending memo/BLOB columns.
-TC
Albert D.Kallal wrote:
>
1. Export, FTP, and Import -- 280 rows/s
2. TransferDatabase to ADP file -- 22 rows/s
3. INSERT into ODBC-linked table -- 18 rows/s
4. Procedural Loop through ADO Recordsets -- 6 rows/s

My question is really a conceptual one. Why is it so much faster to do
the transfer outside the RDBMS (1) than inside the RDBMS (2,3,4)?

I am not surprised one bit by the above.

#1 - FTP, and using the sql (enterprise manager) tools, it can read the
raw
data.

#2,#3

How do you think a remote connection to sql works?

Ok, build a sql string. Now send the sql string. The query processor then
loads. The query process then parses out the string. The query processor
then checks the sql for syntax. The query processor then builds a query
plan. The query process then optimizes this query plan. The query then
executes the sql AND INSERTS ONE ROW of data!.

Now, send a whole new sql string for ht next record. And, you are
surprised
this is slower then option one?

I don't think you should be surprised that this is going to be slower at
all...

Remember, when you use a odbc (or oleDB) connection, EACH data insert is
going to a full line of sql sent to the server. Often, just the sql
syntax
overhead is GOING TO BE LARGER THEN HE ACTUAL DATA transferred for the
ONE
record. Remember, EACH record is going to requite a FULL LINE OF sql
string
to insert the data.

With the ftp, and using the sql tools, the use of a sql statement for
EACH
LINE of data is by-passed....
what you need to do is find the syntax in sql for MULTI-RECORD inserts of
lines of data.

You then sent about 10, or 20 lines of data IN ONE sql pass-through
statement. Doing his would likely speed up the process by a factor of 10,
or
20 times (in fact, the speed up would be the factor of how many lines of
data you include in each sql).

I reasonably sure that sql server supports a insert statement with
MULTIPLE
lines of data...so, do some testing.

You code that loops would thus have to format he sql statement in
code..and
send it as a pass-though to sql server...

Once you realize that a odbc, or remote connection can ONLY communicate
with
sql server by sending FULLY formatted sql statements for EACH record,
then
there should be little, if any surprise at the performance numbers you
posted...

--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
http://www.members.shaw.ca/AlbertKallal

Jul 5 '06 #17
"TC" <go*********@yahoo.comwrote in message
news:11**********************@75g2000cwc.googlegro ups.com...
David,

The scenario is very real to me, and peformance definitely matters.

Perhaps you meant to imply that an application which needs to do
frequent bulk uploads from client to server should not be built with
Access / SQL Server? If so, I agree with you, but I have no choice in
the matter -- I'm working with the challenge I've been given. And
frankly, with the export/transfer/import technique, I think I'm headed
toward a pretty good solution.
If this is a regular operation, have you looked into having a DTS package
pull the data directly from the Access file instead of pushing it with
ODBC? You could use a passthrough query to execute the package so that the
Access app would still be initiating the process.

--
Rick Brandt, Microsoft Access MVP
Email (as appropriate) to...
RBrandt at Hunter dot com
Jul 5 '06 #18
TC
Rick,

Thanks for the suggestion. I have been avoiding any kind of "pull"
approach because this is essentially a "push" operation. The client
will initiate the data transfer, and the client could be anywhere. I'm
not sure how to make DTS find the data for a client which is not at a
fixed location.

-TC
Rick Brandt wrote:
<SNIP>
Jul 5 '06 #19
You have added something new to this discussion. Before this, I did not
know ODBC sends a different SQL statement for each row in an INSERT
statement. Can you provide more details? Is this a limitation of ODBC
only, or is it true also of OLEDB? I assume it is a problem only for
UPDATE, INSERT, and DELETE queries operating on SQL Server tables -- is
that right, or am I failing to see broader ramifications?
Yes, as far as I know, the above is how it works (and, yes, for OLEDB
also). It is not really a limitation; it is just how you talk to SQL
Server, and that means you use SQL. You never write raw data; you
ALWAYS use SQL text.

When writing records to a JET based mdb file on your local computer, you
will find performance MUCH faster since you are not using SQL strings to
write out the data. When you throw in the use of the seek command, you can
achieve MUCH higher data write and search speeds than SQL Server, since you
have this little JET engine that just SCRAPES data right off of the disk
drive.

As a side note, often JET based applications are 2 times faster than SQL
based applications (same hardware, same machine). We are not talking 5% or
10%...but 2 times!!

I would even suggest that you create a new temp mdb file/table, write the
data to that file, and then use the SQL Server import tools to import that
mdb file. That would likely be the fastest import, and the least amount of
development cost on your part. As for automating this process from start
to finish, that is another matter. (I have not done this with the DTS
tools...but I suspect you can.)

What this means is that you want to use an import system/option of SQL
Server, and not an ODBC connection...
If the one-statement-per-row phenomenon is true (and it does have the
ring of truth to it), that suggests ODBC cannot be, and was probably
never intended to be, an efficient way of transferring bulk data
between databases. That brings me back to my original question: What is
an efficient way of transferring bulk data between databases?
For SQL Server to another SQL Server?

You simply export to a file backup (use the bulk copy options). You then
move that file to the other SQL Server, and then do a bulk import. This
type of transfer does NOT involve SQL.

It seems to me that the DTS tools also allow you to import the mdb file
you just created (or the one you will create based on what we conclude!!).

So, for transfers between machines that are both running SQL Server, you
use the bulk copy options.

For ODBC, you are mostly out of luck. And there does seem to be the
possibility of downloading some of the SQL Server library code and rolling
your own custom imports...but that sounds like a lot of work (they can
be used...as that is how the DTS and bulk copy operations were written).

Barring the above, you might try using a transaction: run 1000 inserts and
then commit. That might help, but you are venturing into waters that I
have not tested or tried.
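That batching idea could be sketched in DAO like this -- hedged as untested
per the caveat above, and with hypothetical linked-table and field names:

```vba
' Sketch: push rows through an ODBC-linked table, committing in batches
' of 1000 inside a workspace transaction (names are hypothetical).
Sub BatchedInserts()
    Dim ws As DAO.Workspace, db As DAO.Database
    Dim rs As DAO.Recordset, n As Long
    Set ws = DBEngine.Workspaces(0)
    Set db = ws.Databases(0)
    Set rs = db.OpenRecordset("tblLocal", dbOpenSnapshot)
    ws.BeginTrans
    Do While Not rs.EOF
        db.Execute "INSERT INTO tblTargetLink (Field1) VALUES ('" & _
            Replace(rs!Field1, "'", "''") & "')", dbFailOnError
        n = n + 1
        ' Commit a batch, then start the next transaction:
        If n Mod 1000 = 0 Then ws.CommitTrans: ws.BeginTrans
        rs.MoveNext
    Loop
    ws.CommitTrans
    rs.Close
End Sub
```

Whether Jet actually defers the ODBC round trips until the commit is
exactly the untested part, so time it against the unbatched loop.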

--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
http://www.members.shaw.ca/AlbertKallal
Jul 5 '06 #20

You don't, you push the location to the DTS package.

If you are using SQL Server 2000 or above you can use global variables on
the command line; if you are using a lower version then you populate a
table and have the DTS package read the table.

The way I have set this up before is:-
    DTS package which has a global var
    Stored procedure to be executed by the application (this calls the DTS
package)

In Access you then
    Create a passthrough query to execute the stored procedure
At run time you
    Amend the passthrough query SQL to include the path to the data you
want to import.
    Execute the passthrough query
    The stored procedure is then called, which executes the DTS package,
passing the path as a global variable.
    The DTS package then runs; the first step in the DTS package is a
script object which sets the import path according to the global variable.
    The rest of the DTS package then runs, importing the data.
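The stored-procedure step might be sketched like this on SQL Server 2000,
shelling out to DTSRun with its /A (global variable) switch; the package,
variable, and server names are hypothetical, and xp_cmdshell rights are
required:

```sql
-- Sketch: execute a DTS package, passing the import path as a
-- string (type id 8) global variable via DTSRun's /A switch.
CREATE PROCEDURE dbo.uspRunImport
    @ImportPath varchar(255)
AS
DECLARE @cmd varchar(500)
SET @cmd = 'DTSRun /S myServerName /E /N MyImportPackage'
         + ' /A "ImportPath":"8"="' + @ImportPath + '"'
EXEC master..xp_cmdshell @cmd
GO
```

The passthrough query from Access then just becomes
EXEC dbo.uspRunImport '\\client\share\upload.mdb' with the path amended at
run time.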
--

Terry Kreft
"TC" <go*********@yahoo.comwrote in message
news:11**********************@p79g2000cwp.googlegr oups.com...
Rick,
<SNIP>I'm
not sure how to make DTS find the data for a client which is not at a
fixed location.

-TC
Rick Brandt wrote:

If this is a regular operation have you looked into having a DTS package
pull
the data directly from the Access file instead of pushing it with ODBC?
You
could use a passthrough query to execute the package so that the Access
app
would still be initiating the process.

--
Rick Brandt, Microsoft Access MVP
Email (as appropriate) to...
RBrandt at Hunter dot com

Jul 6 '06 #21
