Bytes IT Community

problems with transaction blocks

I want to do a series of inserts within a single transaction block, but with
PostgreSQL, if one insert fails, the whole block is aborted. Is there any
way to get around this behavior so that PostgreSQL won't abort the entire
transaction when a single insert returns an error?

Chris
---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to ma*******@postgresql.org

Nov 12 '05 #1
6 Replies


Transactions are atomic. What you are asking to do violates the whole
concept of a transaction.

You can, however, do these inserts outside of the transaction block.

Best Wishes,
Chris Travers
----- Original Message -----
From: "Chris Ochs" <ch***@paymentonline.com>
To: <pg***********@postgresql.org>
Sent: Wednesday, January 07, 2004 7:52 AM
Subject: [GENERAL] problems with transaction blocks


Nov 12 '05 #2

On Tue, Jan 06, 2004 at 16:52:12 -0800,
Chris Ochs <ch***@paymentonline.com> wrote:

Currently there is no provision for recovery from error by the application
inside a transaction. What you can do is have the application check for
problems before trying the insert. Depending on the problems you expect,
you might be able to avoid trying an insert which will fail in almost
all cases.
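
Bruno's pre-check approach can be sketched as follows. This is a hypothetical illustration using Python's stdlib sqlite3 (so it runs as-is) rather than a PostgreSQL driver, and the table and column names are made up: each row's foreign key is verified with a SELECT before the INSERT is attempted, so a bad row is skipped instead of aborting the whole transaction.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
cur.execute("INSERT INTO customers (id) VALUES (1)")

rows = [(1, 9.99), (42, 5.00), (1, 3.50)]  # customer 42 does not exist

skipped = []
for customer_id, amount in rows:
    # Check the foreign key by hand before inserting, so a bad row
    # never raises an error inside the transaction block.
    cur.execute("SELECT 1 FROM customers WHERE id = ?", (customer_id,))
    if cur.fetchone() is None:
        skipped.append((customer_id, amount))
        continue
    cur.execute("INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
                (customer_id, amount))
conn.commit()  # one transaction; only the pre-checked rows are in it
```

As Bruno notes, this only helps against failures you can predict (missing FKs, duplicates); a concurrent writer can still invalidate the check between the SELECT and the INSERT.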


Nov 12 '05 #3

Another good way to handle this is to put a trigger on the table that
diverts inserts that would fail to a holding table. While this will slow
down the inserts, it will allow you to insert large lists of dubious
quality and worry about the bad rows later.

My preference is to fix the data feed, or pre-process it with PHP/Perl to
split it into two files ahead of time, but I'm more of a coder than a DBA.
I get a lot of data to import from other sources at work, and it's often
easier to make the sources fix their data feeds than it is to try to
massage them every time.
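
The trigger idea above can be sketched like this; names are hypothetical throughout, and it's written against Python's stdlib sqlite3 so it can be run as-is. In PostgreSQL you'd write a plpgsql BEFORE trigger that returns NULL to skip the row; sqlite's RAISE(IGNORE) plays that role here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY);
CREATE TABLE orders (customer_id INTEGER, amount REAL);
CREATE TABLE orders_holding (customer_id INTEGER, amount REAL);
INSERT INTO customers (id) VALUES (1);

-- Divert rows whose customer_id has no match into a holding table,
-- then silently skip the original insert (RAISE(IGNORE) abandons
-- the triggering statement without rolling back the diversion).
CREATE TRIGGER divert_bad_orders BEFORE INSERT ON orders
WHEN NOT EXISTS (SELECT 1 FROM customers WHERE id = NEW.customer_id)
BEGIN
    INSERT INTO orders_holding (customer_id, amount)
        VALUES (NEW.customer_id, NEW.amount);
    SELECT RAISE(IGNORE);
END;
""")

conn.executemany("INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
                 [(1, 9.99), (42, 5.00), (1, 3.50)])
conn.commit()  # good rows committed; the bad row sits in orders_holding
```

The bulk load never errors out, and the questionable rows can be inspected in the holding table afterwards, at the cost of per-row trigger overhead.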


Nov 12 '05 #4

Another way is to break up the transaction: instead of one transaction
containing many inserts, make each insert a transaction by itself.
Wrap each INSERT statement in its own BEGIN and COMMIT (or ROLLBACK).
HTH
Rodrigo Malara
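
The per-insert pattern Rodrigo describes can be sketched like this, using Python's stdlib sqlite3 as a stand-in for PostgreSQL (table and data are made up): each insert commits or rolls back on its own, so a failure loses only that one row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(1, "a"), (1, "dup"), (2, "b")]  # second row violates the PK

failed = []
for row in rows:
    try:
        # The driver issues BEGIN implicitly; COMMIT closes the
        # single-statement transaction.
        conn.execute("INSERT INTO items (id, name) VALUES (?, ?)", row)
        conn.commit()
    except sqlite3.IntegrityError:
        conn.rollback()  # only this one insert is undone
        failed.append(row)
```

The trade-off, raised in the next reply, is throughput: one commit per row is far slower than one commit per batch.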

Nov 12 '05 #5

>>>>> "RM" == Rodrigo Malara <ro***********@yahoo.com.br> writes:

RM> Another way is break the transaction. Instead of consisting of many
RM> inserts, each insert is a transaction itself.
RM> Do a Begin and a Commit (or rollback) circling the insert statement.

That will destroy your performance if you do it a lot. And there's no
point in an explicit BEGIN/COMMIT, since PG does that implicitly for
each statement that isn't already inside a transaction.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: kh***@kciLink.com Rockville, MD +1-301-869-4449 x806
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/


Nov 12 '05 #6

>>>>> "BW" == Bruno Wolff writes:

BW> Currently there is no provision for recovery from error by the application
BW> inside a transaction. What you can do is have the application check for
BW> problems before trying the insert. Depending on the problems you expect,

I have an application that does this. First it attempts to do a mass
number of inserts, based on some external events, all inside a
transaction. If we abort due to a referential integrity constraint
violation, we restart the whole process, but before each insert a check
is done to see if the required FKs are satisfied.

This gives us the best of both: 99% of the time, when the FKs are OK, we
zip along pretty darned fast, and for the 1% of the time when some
stale data is re-injected into the stream, we just restart that batch
and pay the penalty for it.

The benefit of being able to do all the inserts within a single
begin/end cannot be overstated.
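
That optimistic strategy can be sketched as follows, again with stdlib sqlite3 standing in for PostgreSQL and a hypothetical schema: try the whole batch in one transaction, and only on an integrity failure roll back and retry with per-row FK pre-checks.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only on opt-in
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY);
CREATE TABLE orders (customer_id INTEGER REFERENCES customers(id),
                     amount REAL);
INSERT INTO customers (id) VALUES (1);
""")

rows = [(1, 9.99), (42, 5.00), (1, 3.50)]  # 42 violates the FK

def insert_batch(conn, rows):
    """Fast path: one transaction, no per-row checks."""
    for row in rows:
        conn.execute("INSERT INTO orders VALUES (?, ?)", row)
    conn.commit()

try:
    insert_batch(conn, rows)           # the common case zips along
except sqlite3.IntegrityError:
    conn.rollback()                    # the whole batch aborts...
    good = []
    for customer_id, amount in rows:   # ...so retry with FK pre-checks
        cur = conn.execute("SELECT 1 FROM customers WHERE id = ?",
                           (customer_id,))
        if cur.fetchone() is not None:
            good.append((customer_id, amount))
    insert_batch(conn, good)           # still a single begin/end
```

The slow, checked pass only runs when the cheap optimistic pass has already failed, so the common case pays no pre-check cost.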


Nov 12 '05 #7

This discussion thread is closed.