
Row and Statement Based Trigger Question

Hi All,

This question is related to iSeries V5R4 and db2.

I want to implement an AFTER DELETE trigger to save the deleted rows
to an archive table. I initially defined it as a FOR EACH STATEMENT
trigger that would insert all the deleted rows in one operation, like
this:

CREATE TRIGGER MyTable_TD
AFTER DELETE ON MyTable
REFERENCING OLD TABLE AS Deleted
FOR EACH STATEMENT
BEGIN
INSERT INTO MyTableA SELECT * FROM Deleted;
END

This worked pretty well, but if I was bulk deleting millions of
records, the delete would run for a long time without writing any
archive records. Only once all the deletes completed would any archive
records get written. If the operation was interrupted, the delete
would end incomplete and no archive records would be written at all -
the records would be lost forever. The archive table is not journaled.

I can see a performance benefit in this approach, as the inserts are
done as a single operation (although there may be a penalty from the
individual inserts into the temporary transition table).

I am now attempting to rewrite this as a FOR EACH ROW trigger, on the
expectation that one row insert will occur immediately after each row
is deleted, so if the operation is interrupted, at most one row is
lost.

This is what I am trying to do:
CREATE TRIGGER MyTable_TD
AFTER DELETE ON MyTable
REFERENCING OLD ROW AS Deleted
FOR EACH ROW
BEGIN
INSERT INTO MyTableA SELECT Deleted.* FROM SYSIBM.SYSDUMMY1;
END

But of course the trigger body is invalid, as Deleted is a row
reference, not a table reference.

So my question is this: in a row-based trigger, how can I reference
the old deleted row in my INSERT statement? I cannot use a list of
individually named columns because a) I want this to be generic, b) I
want it to be low maintenance, and c) the underlying table has hundreds
of columns and referencing them individually would be a royal PITA!
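
If it does come down to naming columns, one low-maintenance option is to generate the list from the catalog rather than typing it by hand. A minimal sketch, assuming the DB2 for i catalog view QSYS2.SYSCOLUMNS; MYLIB and MYTABLE are placeholder names:

-- List the columns of MYLIB/MYTABLE in definition order,
-- ready to paste into the trigger's column list
SELECT COLUMN_NAME
FROM QSYS2.SYSCOLUMNS
WHERE TABLE_SCHEMA = 'MYLIB'
AND TABLE_NAME = 'MYTABLE'
ORDER BY ORDINAL_POSITION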

Thanks in advance for any advice,
JohnO
Jun 27 '08 #1
10 Replies


On May 9, 11:45 am, JohnO <johno1...@gmail.com> wrote:
[quoted original post snipped]
I found this thread:
http://groups.google.co.nz/group/com...5649e6e626354d

In which Serge replied in the negative. Serge, as I said, the
statement-based trigger works, but it has a problem when I am deleting
millions of rows.
Jun 27 '08 #2

JohnO wrote:
[quoted text snipped]
I found this thread:
http://groups.google.co.nz/group/com...5649e6e626354d

In which Serge replied in the negative. Serge, as I said, the
statement-based trigger works, but it has a problem when I am deleting
millions of rows.
As always, convenience has its price...
In some future version of DB2 you may be able to do:
INSERT INTO T VALUES ROW newrow
(or something like that). But that'll be a while...
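
In the meantime, a row trigger has to name every column explicitly. A minimal sketch of that fallback, where COL1 and COL2 stand in for the real column list:

CREATE TRIGGER MyTable_TD
AFTER DELETE ON MyTable
REFERENCING OLD ROW AS Deleted
FOR EACH ROW
BEGIN
-- Deleted.* is not valid here; every column must be listed
INSERT INTO MyTableA (COL1, COL2)
VALUES (Deleted.COL1, Deleted.COL2);
END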

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Jun 27 '08 #3

On May 9, 1:04 pm, Serge Rielau <srie...@ca.ibm.com> wrote:
[quoted text snipped]
As always convenience has its price....
In some future version of DB2 you may be able to do:
INSERT INTO T VALUES ROW newrow
(or something like that). But that'll be a while...

Cheers
Serge

Thanks Serge, I knew you would confirm my suspicions.

Can you think of a way to improve the statement-based trigger? It
works, but I am not comfortable using it when deleting millions of
rows, as the archive rows are lost if the delete is interrupted yet
the deleted records stay deleted. If I enable journaling on the
archive table, would that make it safer? Would there be a performance
and disk space penalty?

What I would really like would be for the delete to run for a while,
and then be able to interrupt the delete, with the corresponding
inserts completing. Then I could restart the delete another time.

Thanks,
JohnO
Jun 27 '08 #4

If I enable journaling on the
archive table would that make it safer?
Yes.
Would there be a performance [penalty]
I don't believe so. If I remember correctly, the OS will cache the
inserts until a COMMIT is issued.
and disk space penalty?
Yes. Journaling does require some disk space.
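
For reference, starting journaling on the archive file takes only a few CL commands. A sketch, with MYLIB and the journal object names as placeholders:

CRTJRNRCV JRNRCV(MYLIB/ARCHRCV1)                 /* create a journal receiver */
CRTJRN JRN(MYLIB/ARCHJRN) JRNRCV(MYLIB/ARCHRCV1) /* create the journal */
STRJRNPF FILE(MYLIB/MYTABLEA) JRN(MYLIB/ARCHJRN) /* journal the archive file */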
What I would really like would be for the delete to run for a while,
and then be able to interrupt the delete, with the corresponding
inserts completing. Then I could restart the delete another time.
You could do this by making the delete a programmed housekeeping /
archive process, rather than relying on a trigger.
Are you likely to be doing bulk (multi-million row) deletes? That
suggests that your delete / housekeeping / archive process is perhaps
not very well designed. (Of course, it might not be avoidable in your
case, but to me this is a bad sign.)
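
As an illustration, here is a minimal sketch of such a housekeeping process, chunked by date window so it can be interrupted and restarted. It assumes both tables are journaled and the job runs under commitment control; the table names, the TXDATE column, and the window bounds are placeholders:

-- archive one month's rows
INSERT INTO MYTABLEA
SELECT * FROM MYTABLE
WHERE TXDATE >= '2004-01-01' AND TXDATE < '2004-02-01';

-- delete exactly the rows just archived
DELETE FROM MYTABLE
WHERE TXDATE >= '2004-01-01' AND TXDATE < '2004-02-01';

-- make the insert/delete pair atomic
COMMIT;

Advance the window and repeat. If the job is interrupted, the uncommitted window rolls back as a pair, and the run can restart at the first window whose rows are still in MYTABLE.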
Jun 27 '08 #5

On May 10, 12:13 am, "walker.l2" <walker...@ukonline.co.uk> wrote:
[quoted text snipped]
Are you likely to be doing bulk (multi-million row) deletes? That
suggests that your delete / housekeeping / archive process is perhaps
not very well designed. (Of course, it might not be avoidable in your
case, but to me this is a bad sign.)
Hi, and thanks for the comments.

The deletes are only going to be purge/archive operations, rather than
business transactions. The first one is going to delete millions of
rows, as it will cover several years' worth. Subsequent ones will be
annual, so they will still be pretty big, but not quite so bad.
Jun 27 '08 #6

JohnO wrote:
The deletes are only going to be purge/archive operations, rather than
business transactions. The first one is going to delete the millions
as it will be several years worth. Subsequent ones will be annual so
will still be pretty big, but not quite so bad.
If that's the case, why don't you use a batch process? Triggers are
meant to be active database objects. I wouldn't use them for
maintenance tasks.

Cheers
Serge

--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Jun 27 '08 #7

On May 11, 2:36 pm, Serge Rielau <srie...@ca.ibm.com> wrote:
[quoted text snipped]

If that's the case, why don't you use a batch process? Triggers are
meant to be active database objects. I wouldn't use them for
maintenance tasks.
Hi Serge,

Speed, mostly. This is a JDEdwards OneWorld site, so any batch work
will be written in the OneWorld toolset, and will likely end up being
millions of individual inserts and deletes. The process would take
many days for some of the files.

It's so much faster to do a bulk delete, and I was hoping the trigger
would optimise out well enough.

The statement-based trigger seems to be the most efficient, but with
the problems I pointed out at the start.

Cheers,
JohnO

Jun 27 '08 #8

Speed, mostly. This is a JDEdwards OneWorld site, so any batch work
will be written in the OneWorld toolset, and will likely end up being
millions of individual inserts and deletes. The process would take
many days for some of the files.
When all you have is a hammer, every problem looks like a nail. :-)

This is an ideal job for a couple of simple SQL statements (that can
be run from a simple CL program if required).
Something like:

CREATE TABLE archive AS
(SELECT * FROM mainfile WHERE date < sometime)
WITH DATA

and

DELETE FROM mainfile WHERE date < sometime

should do the trick (N.B. I haven't checked the syntax of these, but
you get the idea), with an index over the 'date' field for performance
reasons.
In future, a simple CPYF *ADD command with a date selection could move
records from mainfile to archive, and the SQL DELETE could perform the
tidy-up.
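
Concretely, the CPYF variant might look like the sketch below. Library, file, and field names are placeholders, and the comparison value must match the date field's actual definition:

CPYF FROMFILE(MYLIB/MAINFILE) TOFILE(MYLIB/ARCHIVE) MBROPT(*ADD) INCREL((*IF DATEFLD *LT 1050101))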
Jun 27 '08 #9

On May 12, 9:20 pm, "walker.l2" <walker...@ukonline.co.uk> wrote:
[quoted text snipped]
In future, a simple CPYF *ADD command with a date selection could move
records from mainfile to archive, and the SQL DELETE could perform the
tidy-up.
Interesting about the CPYF *ADD - will that perform faster than an
INSERT INTO ... SELECT ... FROM type SQL operation?
Jun 27 '08 #10

I've no idea which method would be faster, but CPYF might be less
'scary' if your shop is not used to SQL.
Jun 27 '08 #11
