We have a table with jobs and a table with job_history information.
Users can define jobs and have them run every X minutes/hours, like a
cron job.
The jobs table has the following trigger:
CREATE TRIGGER JOBS_AFTER_DELETE
  AFTER DELETE ON JOBS
  REFERENCING OLD AS o
  FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
  DELETE FROM JOB_HISTORY WHERE JOB = o.JOB;
END
When the job_history table contains very many rows for a given job
(say a job that has run every 30 minutes for the past 2 years), the
transaction log fills up (error DIA8309C). So we end up with an error
and inconsistent data: job_history entries for a job that doesn't
exist anymore.
Is there an easy way to prevent this error? I was thinking of deleting
the rows in blocks of 100,000 and looping as long as rows exist for
the given job. Is this possible with a trigger, or is there some
better solution?
regards,
Jan831