
disabling autocommit

I'm looking to get a little more performance out of my database, and saw in
the docs a section about disabling autocommit by using the BEGIN and COMMIT
keywords.

My problem is this: I enforce unique rows for all data, and occasionally
there is an error when I try to insert a duplicate entry. I expect to see
these duplicate entries and depend on the DB to enforce row uniqueness.
When I just run the insert statements without the BEGIN and COMMIT keywords,
only the offending insert fails, but if I disable autocommit then all the
inserts fail because of that one error.
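
To illustrate with a simplified table (the names and values here are made
up for the example, not from my real schema):

CREATE TABLE events (
    id   integer PRIMARY KEY,   -- uniqueness enforced by the database
    note text
);

-- Autocommit on: each statement is its own transaction.
INSERT INTO events VALUES (1, 'first');  -- committed immediately
INSERT INTO events VALUES (1, 'dupe');   -- unique violation; only this fails

-- Explicit transaction: one failure aborts the whole batch.
BEGIN;
INSERT INTO events VALUES (2, 'second');
INSERT INTO events VALUES (2, 'dupe');   -- unique violation aborts the transaction
COMMIT;                                  -- PostgreSQL rolls back instead; row 2 is lost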

As a test I ran about 1000 identical inserts with autocommit on and again
with it off. I get roughly a 33% speed increase with autocommit off, so it's
definitely worthwhile. The problem is that if I parse the insert statements
myself to weed out duplicates, I suspect I'd lose the advantage that
disabling autocommit gives me and simply spend the CPU cycles somewhere else.

Is there a way for me to say 'only commit the successful commands and ignore
the unsuccessful ones'? I know all-or-nothing behavior is the point of this
type of transaction/rollback statement, but I was curious whether there is a
way around it.

Matt


Nov 23 '05 #1
1 Reply


>>>>> "MVM" == Matt Van Mater <nu*****@hotmail.com> writes:

MVM> My problem is this: I enforce unique rows for all data, and
MVM> occasionally there is an error where I try to insert a duplicate
MVM> entry. I expect to see these duplicate entries and depend on the DB
MVM> to enforce the row uniqueness. When I just run the insert statements
MVM> without the begin and commit keywords the insert only fails for that
MVM> single insert, but If I disable autocommit then all the inserts fail
MVM> because of one error.

I have a situation like this, but with foreign key dependencies.
Occasionally someone will perform some action that is tracked by my
system, but using a very old, stale link that no longer has an associated
record in the database, so I should skip logging that action.

What I do is make it opportunistic. First I try to insert my batch of
log records within a transaction. If the transaction fails on an FK
violation, I run that same batch again, but with a select before each
insert to ensure the FK violation won't occur.

Only about 1 in 200 batches needs the retry with the explicit
integrity checks.

However, if your expected norm is to encounter duplicates, then just do
the select prior to every insert.
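
As a sketch of that check-before-insert (table and column names here are
made up), both cases can be written as a conditional INSERT ... SELECT:

-- FK case: insert the log row only if its parent record still exists.
INSERT INTO action_log (link_id, action)
SELECT 42, 'visit'
WHERE EXISTS (SELECT 1 FROM links WHERE id = 42);

-- Duplicate-key case: insert only if the row isn't already there.
INSERT INTO events (id, note)
SELECT 7, 'payload'
WHERE NOT EXISTS (SELECT 1 FROM events WHERE id = 7);

Note that check-then-insert is racy under concurrent writers; the
constraint itself remains the real guarantee, the select just avoids
tripping it in the common case.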

In PG 8.0, I expect to be able to deal with this using nested
transactions (savepoints).
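
The per-row pattern would look something like this (again with made-up
names):

BEGIN;

SAVEPOINT sp;
INSERT INTO events VALUES (3, 'third');   -- succeeds
RELEASE SAVEPOINT sp;

SAVEPOINT sp;
INSERT INTO events VALUES (3, 'dupe');    -- unique violation
ROLLBACK TO SAVEPOINT sp;                 -- discard only the failed insert

COMMIT;  -- row 3 commits despite the duplicate

Issuing ROLLBACK TO only when an insert actually errors keeps the
overhead low compared to losing and replaying the whole batch.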
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: kh***@kciLink.com Rockville, MD +1-301-869-4449 x806
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/


Nov 23 '05 #2
