
Doing updates/deletes in small batches...

I know this subject has come up many times, but I am not finding what I
am looking for after a couple of days of searching. Some months ago I
saw an indirect solution for this, but I can't seem to find it now that
I need it.

This is DB2 UDB V8.2 on AIX 5.3

I need to create a script (ksh) to update a large number of records.
This could be a million records, so I need to do it in smaller chunks
to avoid filling the transaction log. I can't seem to find a way in DB2
to limit the number of records I am updating or deleting at a time. The
tables do not have a recnum-type column that I can select by, and I
cannot add a new column just for this purpose, since the tool is only
to be used for some testing cleanup.
What I want to do is pretty basic:

db2 "UPDATE table1 SET column2 = -1" //for all records

or to reduce further:

db2 "UPDATE table1 SET column2 = -1 WHERE column2 <-1" //to only do
the ones that are not already set to that value

But I would like to do it in increments of 1000 records and commit each
batch. I don't see a way to LIMIT 1000 or FIND FIRST 1000 in DB2.

I could SELECT all records by primary key (this is a char value), put
them in a file, and then do individual updates on each record,
committing every 1000 records, but this seems like a very tedious,
inefficient way to do it.

So...anyone have a better way?

Aug 3 '06 #1
I think Serge posted a stored procedure some time back to do this:
CREATE PROCEDURE DELETE_LOADS_OF_ROWS (TABSCHEMA VARCHAR(64),
                                       TABNAME VARCHAR(64),
                                       PREDICATE VARCHAR(256),
                                       COMMITCOUNT INTEGER)
BEGIN
  DECLARE SQLCODE INTEGER;
  DECLARE txt VARCHAR(512);
  DECLARE stmt STATEMENT;
  SET txt = 'DELETE FROM (SELECT 1 FROM "' || TABSCHEMA || '"."' ||
            TABNAME || '" WHERE ' || PREDICATE || ' FETCH FIRST ' ||
            RTRIM(CHAR(COMMITCOUNT)) || ' ROWS ONLY) AS D';
  PREPARE stmt FROM txt;
  l: LOOP
    EXECUTE stmt;
    IF SQLCODE = 100 THEN LEAVE l; END IF;
    COMMIT;
  END LOOP;
  COMMIT;
END
%
cheers...
Shashi Mannepalli
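
For reference, the procedure above might then be invoked from the CLP like this (a sketch; the schema, table, and predicate here are hypothetical placeholders, and a commit count of 1000 matches what the original post asked for):

```sql
-- Delete every row matching the predicate, committing after each
-- batch of 1000 deletes. 'MYSCHEMA', 'TABLE1', and the predicate
-- are example values, not names from the thread's actual database.
CALL DELETE_LOADS_OF_ROWS('MYSCHEMA', 'TABLE1', 'column2 = -1', 1000)
```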

shorti wrote:
> [snip]
Aug 3 '06 #2
If you want to use a SHELL script...

you can fetch the first 1000 rows like this:

db2 "select * from <table> fetch first 1000 rows only"

You can also use the ROW_NUMBER() function.

cheers...
Shashi Mannepalli
shorti wrote:
> [snip]
Aug 3 '06 #3
Thanks. The "fetch first 1000 rows only" in the first reply rang a few
bells. I am just going to do:

db2 "update (select column2 from table1 where column2 <> -1 fetch first
1000 rows only) set column2 = -1"

db2 commit

Then loop it until I get an SQL0100W.

Thanks! That was exactly what I was looking for!
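
As a sketch, that loop might look like the script below. A real version would call the actual DB2 CLP; here a stub db2() function simulates it (each UPDATE touches up to 1000 of 2500 pretend rows and returns 1, standing in for SQL0100W, once none are left), so the control flow can be exercised anywhere. The row count, database name, and table/column names are hypothetical:

```shell
#!/bin/sh
# Batched-update loop as described in the thread: run a capped UPDATE,
# COMMIT, and stop when DB2 reports "no rows found" (SQL0100W).
# The db2() function below is a STUB that simulates the CLP; delete it
# to run against a real database.
ROWS=2500                                 # pretend matching-row count
db2() {
    case "$1" in
        update*|UPDATE*)
            [ "$ROWS" -le 0 ] && return 1 # nothing left: SQL0100W
            if [ "$ROWS" -gt 1000 ]; then
                ROWS=$((ROWS - 1000))     # one batch of 1000 "updated"
            else
                ROWS=0
            fi
            return 0 ;;
        *) return 0 ;;                    # connect / commit succeed
    esac
}

db2 "connect to mydb"
passes=0
while db2 "update (select column2 from table1 where column2 <> -1
           fetch first 1000 rows only) set column2 = -1"; do
    db2 "commit"                          # commit after each batch
    passes=$((passes + 1))
done
echo "passes=$passes rows_left=$ROWS"
```

With 2500 matching rows and batches of 1000, the loop commits three batches and exits on the fourth pass when the stub signals SQL0100W.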

Aug 3 '06 #4
