Bytes | Software Development & Data Engineering Community

Row and Statement based Trigger Question

Hi All,

This question is related to iSeries V5R4 and db2.

I want to implement an AFTER DELETE trigger to save the deleted rows
to an archive table. I initially defined it as a FOR EACH STATEMENT
trigger that would insert all the deleted rows in one operation, like
this:

CREATE TRIGGER MyTable_TD
AFTER DELETE ON MyTable
REFERENCING OLD TABLE AS Deleted
FOR EACH STATEMENT
BEGIN
  INSERT INTO MyTableA SELECT * FROM Deleted;
END

This worked pretty well, but when I was bulk deleting millions of
records, the delete would run for a long time without writing any
archive records. Only once all the deletes completed would any archive
records get written. If the operation was interrupted, the delete
would end incomplete and no archive records were written at all - the
records were lost forever. The archive table is not journaled.
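To illustrate the all-or-nothing behaviour I'm seeing, here is a rough sketch in Python against SQLite. This is only an analogue, not DB2 for i (SQLite has no statement triggers, so the archive insert is written out explicitly), but it shows how an interruption rolls back the archive inserts along with the delete:

```python
import sqlite3

# Toy illustration (SQLite, not DB2 for i): when the archive insert runs in
# the same unit of work as the bulk delete, an interruption undoes BOTH,
# so no archive rows survive a cancelled delete.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (id INTEGER, payload TEXT)")
con.execute("CREATE TABLE MyTableA (id INTEGER, payload TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?, ?)",
                [(i, f"row{i}") for i in range(1000)])
con.commit()

con.execute("INSERT INTO MyTableA SELECT * FROM MyTable")  # the 'archive' step
con.execute("DELETE FROM MyTable")                         # the bulk delete
con.rollback()  # simulate the job being cancelled mid-operation

# Both the delete and the archive inserts are undone together.
print(con.execute("SELECT COUNT(*) FROM MyTable").fetchone()[0])   # 1000
print(con.execute("SELECT COUNT(*) FROM MyTableA").fetchone()[0])  # 0
```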

I can see a performance benefit in this approach, as the inserts are
done as a single operation (although there may be a penalty from the
individual inserts into the temporary transition table).

I am now attempting to rewrite this as a FOR EACH ROW trigger, on the
expectation that one row insert will occur immediately after each row
is deleted, so that if the operation is interrupted, at most one row
is lost.
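As a sanity check of the row-by-row idea, here is a rough SQLite sketch from Python (again just an analogue, not DB2 for i syntax; note that SQLite forces you to name each column in the trigger body, which is exactly the maintenance problem I describe below):

```python
import sqlite3

# SQLite analogue of a FOR EACH ROW archive trigger. SQLite triggers are
# row-level, and the old row must be referenced column by column as OLD.col.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (id INTEGER, payload TEXT)")
con.execute("CREATE TABLE MyTableA (id INTEGER, payload TEXT)")
con.execute("""
    CREATE TRIGGER MyTable_TD AFTER DELETE ON MyTable
    FOR EACH ROW
    BEGIN
        INSERT INTO MyTableA VALUES (OLD.id, OLD.payload);
    END
""")
con.executemany("INSERT INTO MyTable VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])
con.execute("DELETE FROM MyTable WHERE id < 3")  # fires the trigger per row

print(con.execute("SELECT COUNT(*) FROM MyTableA").fetchone()[0])  # 2
```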

This is what I am trying to do:
CREATE TRIGGER MyTable_TD
AFTER DELETE ON MyTable
REFERENCING OLD ROW AS Deleted
FOR EACH ROW
BEGIN
  INSERT INTO MyTableA SELECT Deleted.* FROM SYSIBM.SYSDUMMY1;
END

But of course the trigger payload is invalid, as Deleted is a row
reference, not a table reference.

So my question is this: in a row-based trigger, how can I reference
the old deleted row in my INSERT statement? I cannot use a list of
individually named columns because a) I want this to be generic, b) I
want it to be low maintenance, and c) the underlying table has hundreds
of columns and referencing them individually would be a royal PITA!

Thanks in advance for any advice,
JohnO
Jun 27 '08 #1
On May 9, 11:45 am, JohnO <johno1...@gmail.com> wrote:
[snip]
I found this thread:
http://groups.google.co.nz/group/com...5649e6e626354d

In which Serge replied in the negative. Serge, as I said, the
statement based trigger works, but it has a problem when I am deleting
millions of rows.
Jun 27 '08 #2
JohnO wrote:
[snip]
> I found this thread:
> http://groups.google.co.nz/group/com...5649e6e626354d
>
> In which Serge replied in the negative. Serge, as I said, the
> statement based trigger works, but it has a problem when I am deleting
> millions of rows.
As always, convenience has its price...
In some future version of DB2 you may be able to do:
INSERT INTO T VALUES ROW newrow
(or something like that). But that'll be a while...

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Jun 27 '08 #3
On May 9, 1:04 pm, Serge Rielau <srie...@ca.ibm.com> wrote:
[snip]
> As always convenience has its price....
> In some future version of DB2 you may be able to do:
> INSERT INTO T VALUES ROW newrow
> (or something like that). But that'll be a while...

Thanks Serge, I knew you would confirm my suspicions.

Can you think of a way to improve the statement-based trigger? It
works, but I am not comfortable using it when deleting millions
of rows, as the archive rows are lost if the delete is interrupted,
yet the deleted records stay deleted. If I enable journaling on the
archive table, would that make it safer? Would there be a performance
and disk space penalty?

What I would really like would be for the delete to run for a while,
and then to be able to interrupt it, with the corresponding inserts
completing. Then I could restart the delete another time.
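Something like a batched purge is what I have in mind. Sketched here in Python against SQLite just to show the shape (table and column names are invented, and the DB2 for i syntax for a limited delete would differ):

```python
import sqlite3

# Hedged sketch of an interruptible purge: archive and delete in batches,
# committing after each batch so a cancelled run loses at most one
# uncommitted batch and can simply be restarted later.
BATCH = 500

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mainfile (id INTEGER PRIMARY KEY, yr INTEGER)")
con.execute("CREATE TABLE archive  (id INTEGER, yr INTEGER)")
con.executemany("INSERT INTO mainfile VALUES (?, ?)",
                [(i, 2000 + i % 8) for i in range(2000)])
con.commit()

while True:
    # pick the next batch of rows that qualify for archiving
    ids = [r[0] for r in con.execute(
        "SELECT id FROM mainfile WHERE yr < 2005 LIMIT ?", (BATCH,))]
    if not ids:
        break
    marks = ",".join("?" * len(ids))
    con.execute(f"INSERT INTO archive SELECT * FROM mainfile "
                f"WHERE id IN ({marks})", ids)
    con.execute(f"DELETE FROM mainfile WHERE id IN ({marks})", ids)
    con.commit()  # each batch is durable; a restart continues where it left off

print(con.execute("SELECT COUNT(*) FROM archive").fetchone()[0])   # 1250
print(con.execute("SELECT COUNT(*) FROM mainfile").fetchone()[0])  # 750
```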
Thanks
JohnO
Jun 27 '08 #4
> If I enable journaling on the
> archive table would that make it safer?
Yes.
> Would there be a performance [penalty]
I don't believe so. If I remember correctly, the OS will cache the
inserts until a COMMIT is issued.
> and disk space penalty?
Yes. Journalling does require some disk space.
> What I would really like would be for the delete to run for a while,
> and then be able to interrupt the delete, with the corresponding
> inserts completing. Then I could restart the delete another time.
You could do this by making the delete a programmed housekeeping /
archive process, rather than relying on a trigger.
Are you likely to be doing bulk (multi-million row) deletes? That
suggests that your delete / housekeeping / archive process is perhaps
not very well designed. (Of course, it might not be avoidable in your
case, but to me this is a bad sign.)
Jun 27 '08 #5
On May 10, 12:13 am, "walker.l2" <walker...@ukonline.co.uk> wrote:
[snip]
Hi, and thanks for the comments.

The deletes are only going to be purge/archive operations, rather than
business transactions. The first one is going to delete millions of
rows, as it covers several years' worth. Subsequent ones will be
annual, so still pretty big, but not quite so bad.
Jun 27 '08 #6
JohnO wrote:
The deletes are only going to be purge/archive operations, rather than
business transactions. The first one is going to delete the millions
as it will be several years worth. Subsequent ones will be annual so
will still be pretty big, but not quite so bad.
If that's the case, why don't you use a batch process? Triggers are meant
to be active database objects. I wouldn't use them for maintenance tasks.

Cheers
Serge

Jun 27 '08 #7
On May 11, 2:36 pm, Serge Rielau <srie...@ca.ibm.com> wrote:
[snip]

> If that's the case, why don't you use a batch process? Triggers are meant
> to be active database objects. I wouldn't use them for maintenance tasks.
Hi Serge,

Speed, mostly. This is a JDEdwards OneWorld site, so any batch work
will be written in the OneWorld toolset, and will likely end up being
millions of individual inserts and deletes. The process would take
many days for some of the files.

It's so much faster to do a bulk delete, and I was hoping the trigger
would optimise out well enough.

The statement-based trigger seems to be the most efficient, but with
the problems I pointed out at the start.

Cheers,
JohnO

Jun 27 '08 #8
> Speed, mostly. This is a JDEdwards OneWorld site, so any batch work
> will be written in the OneWorld toolset, and will likely end up being
> millions of individual inserts and deletes. The process would take
> many days for some of the files.
When all you have is a hammer, every problem looks like a nail. :-)

This is an ideal job for a couple of simple SQL statements (which can
be run from a simple CL program if required).
Something like:

CREATE TABLE archive AS
  (SELECT * FROM mainfile WHERE date < sometime)
  WITH DATA

and

DELETE FROM mainfile WHERE date < sometime

should do the trick (N.B. I haven't checked the syntax of these, but
you get the idea), with an index over the 'date' field for performance
reasons.
In future, a simple CPYF *ADD command with a date selection could move
records from mainfile to archive, and the SQL DELETE could perform the
tidy-up.
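To show the shape of it, here is the same two-step archive sketched against SQLite from Python (table names and the cutoff value are invented, and SQLite's CREATE TABLE ... AS SELECT differs slightly from the DB2 for i form, which wants the subselect parenthesised plus WITH DATA):

```python
import sqlite3

# Two-step archive sketch: copy the old rows out, then delete them.
# Illustrative names only; not DB2 for i syntax.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mainfile (id INTEGER, dt TEXT)")
con.executemany("INSERT INTO mainfile VALUES (?, ?)",
                [(1, "2001-01-01"), (2, "2003-06-01"), (3, "2008-01-01")])

# step 1: materialise the qualifying rows into the archive
con.execute("CREATE TABLE archive AS "
            "SELECT * FROM mainfile WHERE dt < '2005-01-01'")
# step 2: remove the same rows from the main table
con.execute("DELETE FROM mainfile WHERE dt < '2005-01-01'")
con.commit()

print(con.execute("SELECT COUNT(*) FROM archive").fetchone()[0])   # 2
print(con.execute("SELECT COUNT(*) FROM mainfile").fetchone()[0])  # 1
```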
Jun 27 '08 #9
On May 12, 9:20 pm, "walker.l2" <walker...@ukonline.co.uk> wrote:
[snip]
Interesting about the CPYF *ADD - will that perform faster than an
INSERT INTO ... SELECT ... FROM type SQL operation?
Jun 27 '08 #10
I've no idea which method would be faster, but CPYF might be less
'scary' if your shop is not used to SQL.
Jun 27 '08 #11
