I'm looking to get a little more performance out of my database, and saw in
the docs a section about disabling autocommit by using the BEGIN and COMMIT
keywords.
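For concreteness, this is the kind of batching I mean (the table and column names here are just placeholders):

    BEGIN;
    INSERT INTO items (id, name) VALUES (1, 'foo');
    INSERT INTO items (id, name) VALUES (2, 'bar');
    -- ... many more inserts ...
    COMMIT;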
My problem is this: I enforce unique rows for all data, and occasionally
there is an error where I try to insert a duplicate entry. I expect to see
these duplicate entries and depend on the DB to enforce the row uniqueness.
When I just run the insert statements without the BEGIN and COMMIT keywords,
a duplicate entry fails only that single insert, but if I disable autocommit,
one error causes every insert in the transaction to fail.
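For example, with a unique constraint on id, a batch like this commits nothing (placeholder names again):

    BEGIN;
    INSERT INTO items (id, name) VALUES (1, 'foo');  -- succeeds
    INSERT INTO items (id, name) VALUES (1, 'dup');  -- fails: duplicate key violates the unique constraint
    INSERT INTO items (id, name) VALUES (2, 'bar');  -- fails: current transaction is aborted,
                                                     --        commands ignored until end of transaction block
    COMMIT;  -- the whole transaction is rolled back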
As a test I ran about 1000 identical inserts with autocommit on and again
with it off. I got roughly a 33% speed increase with autocommit off, so it's
definitely a good thing. The problem is, if I parse the insert statements
myself to weed out duplicates, I feel like I lose the advantage that
disabling autocommit gives me and simply spend the CPU cycles somewhere
else.
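The best workaround I can think of is to guard each insert in the database instead, something like this sketch (placeholder names, and I assume it still races if another session is inserting at the same time):

    INSERT INTO items (id, name)
    SELECT 1, 'foo'
    WHERE NOT EXISTS (SELECT 1 FROM items WHERE id = 1);

but that just seems to move the duplicate check into a SELECT on every insert.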
Is there a way for me to say 'only commit the successful commands and ignore
the unsuccessful ones'? I know all-or-nothing is the whole point of this
kind of transaction/rollback statement, but I was curious whether there is a
way around it.
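For instance, behavior like the following is what I'm after; this is just a sketch with placeholder names, and I have no idea whether wrapping every insert in a savepoint from the application would eat the speed gain:

    BEGIN;
    SAVEPOINT one_insert;
    INSERT INTO items (id, name) VALUES (1, 'foo');
    -- if the insert errors, the application issues:
    --   ROLLBACK TO SAVEPOINT one_insert;
    -- otherwise:
    RELEASE SAVEPOINT one_insert;
    -- ... repeat for each insert ...
    COMMIT;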
Matt