Bytes IT Community

Forcing ODBC-Autocommit

Hi,

I've run into trouble exporting database records from an Access table
(version 10) into an Oracle table (version 9.2) using the Oracle ODBC
driver.
First I linked the corresponding tables in Access (via the ODBC driver),
so I can use the Oracle table as if it were a native Access table.

tOracle - Oracle table (dummy name)
tAccess - Access table (dummy name)

Then I run the following SQL command: "INSERT INTO tOracle SELECT *
FROM tAccess;"

I watched what happens using SQL Trace:

- switch autocommit off: SQLSetConnectOption(SQL_AUTOCOMMIT, 0)
- for each record:
  - bind a parameter for each field: SQLBindParameter
  - execute the SQL command: SQLExecDirect("INSERT INTO \"TORACLE\" (fields...) VALUES (?,?,...)")
- after the last record has been inserted:
  - commit: SQLTransact(SQL_COMMIT)
  - switch autocommit back on: SQLSetConnectOption(SQL_AUTOCOMMIT, 1)
So here is my question:

If I insert more than a million records into the Oracle table without a
commit, the undo tablespace keeps growing and the destination table
stays empty until the commit is issued.
That's much too slow.
Is there any way to:
- leave autocommit enabled while the SQL command executes, or
- force a commit after every 100 records or so (without using any
VBA scripts)?

Where can I enable one of these options?


thanks a lot,

greets,

Marcel Czerny

Nov 13 '05 #1


On 15 Jul 2005 02:54:50 -0700, "Marcel Czerny"
<ma***********@list.smwa.sachsen.de> wrote:
<snip - original question quoted in full>


You could order the data by a field with strictly increasing values
and run a series of TOP 100 queries (or maybe TOP 1000 or 10,000,
since you have a million rows), keeping track of how far you have got.

Is this in effect a new table in Oracle each time, or are you adding to
an existing, larger table, or (even worse) deleting a million rows and
then adding a million new ones? I.e., do you have speed problems in
addition to the transaction/undo size issue?
David

Nov 13 '05 #2
