Inserting Records into SQL Server - is there a faster interface than ADO

I have a program that reads records from a binary file and loads them
into an MS-SQL Server database. It is using a stored proc, passing the
parameters.

I am using pywin32 to create a connection object. Once the connection
is open I simply pass the SQL-formatted commands using
cnx.Execute(sqlstring).
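
For reference, the pattern looks roughly like this (a minimal sketch
assuming win32com.client; the connection string, stored procedure name,
and record fields are illustrative):

    import win32com.client

    # Open an ADO connection via COM (connection string is illustrative)
    cnx = win32com.client.Dispatch("ADODB.Connection")
    cnx.Open("Provider=SQLOLEDB;Data Source=MYSERVER;"
             "Initial Catalog=mydb;Integrated Security=SSPI;")

    # One network round trip per record: format the call and send it
    for rec in records:  # 'records' stands in for the parsed binary file
        sqlstring = "EXEC usp_LoadRecord '%s', %d" % (rec.name, rec.value)
        cnx.Execute(sqlstring)

    cnx.Close()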

My test examples:

20,000 records using the ADO connection: 0:04:45:45

If I set up the program to not send the records to the database - so all
other variables and processes are constant, it simply skips the
cnx.Execute(sqlstring) step - then it takes only 0:00:25:78 to process
through the same number of transactions.

Obviously the times in my test are small, but I have a client that would
like to use this and has several million transactions to contend with.

So my questions are:
Is there a "faster" method I can use to connect to the SQL server?
Or does anyone have any "optimization" tips they can offer?

Nov 11 '05 #1
ge********@hotmail.com wrote:
Is there a "faster" method I can use to connect to the SQL server?
Or does anyone have any "optimization" tips they can offer?


This has nothing to do with Python, but the fastest way to load a large
amount of data into an MS SQL Server database is a DTS import from a flat file.

To speed things up a bit, do not commit the transaction after each
inserted row -- commit the whole batch.
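
With the ADO connection from the original post, batching the commits
might look like this (a sketch; the batch size and the make_sql
formatter are illustrative):

    BATCH_SIZE = 1000  # illustrative; tune for your workload

    cnx.BeginTrans()                # start an explicit ADO transaction
    for i, rec in enumerate(records):
        cnx.Execute(make_sql(rec))  # make_sql is a hypothetical formatter
        if (i + 1) % BATCH_SIZE == 0:
            cnx.CommitTrans()       # flush the whole batch in one commit
            cnx.BeginTrans()
    cnx.CommitTrans()               # commit any remaining rows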

--
Jarek Zgoda
http://jpa.berlios.de/
Nov 11 '05 #2
[ge********@hotmail.com]
I have a program that reads records from a binary file and loads them
into an MS-SQL Server database. It is using a stored proc, passing the
parameters.
[snip]
So my questions are:
Is there a "faster" method I can use to connect to the SQL server?
Or does anyone have any "optimization" tips they can offer?


Is there a reason why you need to use a stored procedure?

Do you need to process the data in some way in order to maintain
referential integrity of the database?

If the answer to both these questions is "no", then you can use the
"bcp" (Bulk CoPy) utility to transfer data into SQLServer *very* quickly.

http://msdn.microsoft.com/library/en...p_bcp_61et.asp
http://www.sql-server-performance.com/bcp.asp
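
For reference, a typical invocation might look like this (a sketch
driven from Python; the server, database, table, and file names are
placeholders):

    import subprocess

    # Bulk-load a flat file into a staging table; all names are placeholders.
    # -c: character-mode data, -t: field terminator, -T: trusted connection,
    # -b: rows committed per batch
    subprocess.call([
        "bcp", "mydb.dbo.pending_data", "in", "records.dat",
        "-c", "-t", ",", "-S", "MYSERVER", "-T", "-b", "10000",
    ])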

thought-it-was-worth-mentioning-ly y'rs,

--
alan kennedy
------------------------------------------------------
email alan: http://xhaus.com/contact/alan
Nov 11 '05 #3
Alan Kennedy wrote:
[snip]
Is there a reason why you need to use a stored procedure?

Do you need to process the data in some way in order to maintain
referential integrity of the database?

If the answer to both these questions is "no", then you can use the
"bcp" (Bulk CoPy) utility to transfer data into SQLServer *very* quickly.
[snip]

If the answer to some of the earlier questions is "yes," I have
found "bcp" can still be a great tool: use it to fill a staging table
with the incoming data, and then let SQL move the data to where it
should really go with nice transaction-protected SQL, proper
index-building, and so on. After distributing the data, you can drop
the table of pending data.
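
The "move it where it belongs" step might look something like this over
the same ADO connection (a sketch assuming a staging table named
pending_data and a target table named target_data):

    # Move the bcp-loaded rows into the real table inside one transaction,
    # then drop the staging table (table and column names are placeholders)
    cnx.BeginTrans()
    cnx.Execute("INSERT INTO target_data (id, name, value) "
                "SELECT id, name, value FROM pending_data")
    cnx.CommitTrans()
    cnx.Execute("DROP TABLE pending_data")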

I agree this is off-topic, but it is too close to my experience.

--Scott David Daniels
sc***********@acm.org
Nov 11 '05 #4
The utility is designed to run in the background and maintain/update a
parallel copy of a production system database. We are using the
stored procedure to do an "if exists, update, else insert" step for
each record.
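
For readers following along, such a procedure typically has this shape
(a sketch; the table, column, and parameter names are placeholders):

    # Shape of an "if exists, update, else insert" procedure; run once
    # at setup time. All names below are illustrative.
    cnx.Execute("""
    CREATE PROCEDURE usp_UpsertRecord @id INT, @name VARCHAR(50), @value INT AS
    IF EXISTS (SELECT 1 FROM target_data WHERE id = @id)
        UPDATE target_data SET name = @name, value = @value WHERE id = @id
    ELSE
        INSERT INTO target_data (id, name, value) VALUES (@id, @name, @value)
    """)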

The originating database is a series of keyed ISAM files. So we need
to read each record, perform some simple data conversions and then
update the SQL database. We are using Python to read the originating
database and perform the record conversion and then posting the results
back to SQL Server.

We designed our utility to run at night so that the SQL server is up to
date the next day and ready for reporting.

Thanks for your tips on BCP. I will investigate further as it looks
like it might be useful for the initial loading of the data and perhaps
some changes to our utility program to minimize the amount of data
that needs to be read/processed.

Geoff.

Nov 22 '05 #5
> We are using the stored procedure to do an "if exists, update, else insert" step for
each record.


Consider loading the data in batches into a temporary table and then
use a single insert statement to insert new records and a single update
statement to update existing ones. This way, you are not forcing the
database to do it one row at a time, and you give it a chance to
aggressively optimize your queries and update the indexes in bulk.
You'd be surprised at the difference this can make!
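
A sketch of that approach over the same ADO connection (table, column,
and record names are placeholders):

    # Stage one batch in a temp table, then upsert with two set-based
    # statements. 'batch' is an illustrative slice of parsed records.
    cnx.Execute("CREATE TABLE #batch (id INT, name VARCHAR(50), value INT)")
    cnx.BeginTrans()
    for rec in batch:
        cnx.Execute("INSERT INTO #batch VALUES (%d, '%s', %d)"
                    % (rec.id, rec.name, rec.value))
    cnx.CommitTrans()

    # One UPDATE for rows that already exist...
    cnx.Execute("UPDATE t SET t.name = b.name, t.value = b.value "
                "FROM target_data t JOIN #batch b ON t.id = b.id")
    # ...and one INSERT for rows that do not
    cnx.Execute("INSERT INTO target_data (id, name, value) "
                "SELECT b.id, b.name, b.value FROM #batch b "
                "WHERE NOT EXISTS "
                "(SELECT 1 FROM target_data t WHERE t.id = b.id)")
    cnx.Execute("DROP TABLE #batch")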

Nov 22 '05 #6
