Jim Geissman (ji**********@countrywide.com) writes:
A colleague wants to insert many millions of records where the
values are computed in a C++ program. He connects to the
database with ODBC, and does an INSERT for each row.
This is slow, apparently because each INSERT is a separate
transaction. Is there a way to delay committing the data
until several thousand records have been written? Inside
SQL Server this is simple, but I don't see an equivalent
when using ODBC. Or is there something better than ODBC?
Or might it be faster to write values to a file and then
use bulk insert? I would appreciate any thoughts on this
general problem!
You can send a BEGIN TRANSACTION statement from ODBC as well.
Furthermore, he can gain even more speed by putting the INSERT statement
into a stored procedure, so that SQL Server does not have to parse and
optimize it each time. He should then be careful to use an RPC
mechanism to call the procedure, and not send an EXEC statement.
Unfortunately, I cannot give an example, as I have little experience
of ODBC programming myself.
An even speedier possibility is to use the bulk-copy interface.
Again, I can't give examples, because I have only used the bulk-copy
interface in DB-Library. But some study of the manual can be rewarding.
And possibly the fastest method: build an XML document of the
whole lot, pass it to a stored procedure which uses
sp_xml_preparedocument and OPENXML to produce a result set that
you insert into the table in one single INSERT statement. (Or build
several documents of 100000 rows each if you like.)
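As a rough illustration of the client side (the element and attribute names are made up, and real code would have to escape &, < and " in the values):

```cpp
#include <sstream>
#include <string>

// Build an attribute-centric XML document that a stored procedure can
// shred with sp_xml_preparedocument + OPENXML. Names are made up;
// the values are assumed not to need XML escaping here.
std::string rows_to_xml(const long *ids, const char **names, int n)
{
    std::ostringstream xml;
    xml << "<rows>";
    for (int i = 0; i < n; i++)
        xml << "<r id=\"" << ids[i] << "\" name=\"" << names[i] << "\"/>";
    xml << "</rows>";
    return xml.str();   // pass as one (n)text parameter to the procedure
}
```

The procedure would then feed the document to sp_xml_preparedocument, SELECT from OPENXML(@idoc, '/rows/r') with a column list, and insert the whole result set in one INSERT statement.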
--
Erland Sommarskog, SQL Server MVP,
so****@algonet.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp