Bytes | Software Development & Data Engineering Community

deleting large numbers of records

We have a batch process that inserts large numbers (100,000 - 1,000,000) of
records into a database each day. (DL/I database.) We're considering
converting it to a DB2 table. Currently we have logic in place that, prior
to inserting any data, reads the first input record and checks to see if it
already exists in the table. If the record already exists there are two
options:
1) Don't continue, because you already ran this job today!
2) This is a rerun - continue.

If number 2 is selected the first thing that happens is that it deletes
every record that was inserted today prior to doing the regular insert
process. (You may ask, why not just skip over the ones that are already
there. It's because we may be rerunning with an updated input file, where
the input records may be different than during the first run.)

Anyway, I figured with DB2 this would be a snap. All I'd need to do is:
EXEC SQL
DELETE FROM FILM.FILM_TRANSACTIONS
WHERE UPDATE_DATE = FB_FUNC.TO_DATE (:CURR-DATE-JUL-PACKED)
END-EXEC

The only problem is that my log file would end up running out of room. So
now I've come up with the following:

DELETE-TODAY SECTION.
DISPLAY 'DELETE PROCESS BEGINS' UPON CONSOLE
PERFORM WITH TEST AFTER
UNTIL SQLCODE = 100
DISPLAY 'COMMITTING...' UPON CONSOLE
PERFORM COMMIT-UOW
DISPLAY 'DELETING 10000' UPON CONSOLE
PERFORM DB2-DELETE-TODAY
END-PERFORM
PERFORM COMMIT-UOW
DISPLAY 'DELETE PROCESS ENDS' UPON CONSOLE
Jan 24 '07
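A minimal sketch of the chunked-delete pattern above, in Python with SQLite standing in for DB2 (the table and column names, film_transactions and update_date, are illustrative, not the poster's actual schema): delete one day's rows a fixed number at a time, committing between chunks so no single unit of work holds the whole delete in the log.

```python
import sqlite3

# Toy stand-in for the COBOL loop above: delete today's rows in
# 10,000-row chunks, committing between chunks so the transaction
# log for any one unit of work stays small.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE film_transactions (id INTEGER PRIMARY KEY, update_date TEXT)"
)
conn.executemany(
    "INSERT INTO film_transactions (update_date) VALUES (?)",
    [("2007-01-24",)] * 25000 + [("2007-01-23",)] * 100,
)
conn.commit()

CHUNK = 10000
while True:
    cur = conn.execute(
        """DELETE FROM film_transactions
           WHERE rowid IN (SELECT rowid FROM film_transactions
                           WHERE update_date = ? LIMIT ?)""",
        ("2007-01-24", CHUNK),
    )
    conn.commit()          # end the unit of work, freeing log space
    if cur.rowcount == 0:  # nothing left for that date: done
        break

remaining = conn.execute("SELECT COUNT(*) FROM film_transactions").fetchone()[0]
print(remaining)  # 100 -- rows from the other date survive
```

The `rowid IN (SELECT ... LIMIT n)` subquery plays the role of DB2's `FETCH FIRST n ROWS ONLY` fullselect, and also sidesteps the host-variable limitation the poster hit, since the limit is an ordinary bind parameter here.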
Mark A wrote:
The major expense of a commit is a synchronous write of the log buffer to
disk.
Not necessarily if group commit is used.

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Jan 25 '07 #11
Frank Swarbrick wrote:
Mark A <no****@nowhere.com> 01/24/07 4:36 PM >>>
>>Yes there is a better way that will avoid filling up your DB2 z/OS logs.

Actually, DB2/LUW, but I'm guessing your advice still applies.
In that case, have you considered the MERGE statement? Then you may not
have to DELETE the rows at all - just UPDATE them.

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Jan 25 '07 #12
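Knut's MERGE suggestion can be sketched with SQLite's upsert (`INSERT ... ON CONFLICT DO UPDATE`, available since SQLite 3.24), which expresses the same rerun-safe idea as DB2's MERGE: a rerun with a corrected input file overwrites the earlier rows instead of deleting and re-inserting them. The table and columns (trans, trans_id, amount) are made up for the demo.

```python
import sqlite3

# Sketch of the rerun-safe idea behind MERGE: upsert each input
# record, so a rerun simply overwrites the first run's values.
# SQLite's ON CONFLICT clause stands in for DB2's MERGE here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trans (trans_id INTEGER PRIMARY KEY, amount REAL)")

def load(records):
    conn.executemany(
        """INSERT INTO trans (trans_id, amount) VALUES (?, ?)
           ON CONFLICT(trans_id) DO UPDATE SET amount = excluded.amount""",
        records,
    )
    conn.commit()

load([(1, 10.0), (2, 20.0)])             # first run
load([(1, 10.0), (2, 99.0), (3, 5.0)])   # rerun with a corrected file

rows = conn.execute(
    "SELECT trans_id, amount FROM trans ORDER BY trans_id"
).fetchall()
print(rows)  # [(1, 10.0), (2, 99.0), (3, 5.0)]
```

Note this only avoids the mass delete if every rerun row keys to a first-run row; rows present in the first run but absent from the corrected file would still need cleanup.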
aj
Frank:
Here's an OLAPy trick that I sometimes use:

Let's say you want to delete rows from a very large table based on a
sysdate column. You *don't* want to overfill the transaction logs

The answer: Figure out how many rows you can safely delete w/ your
logs, use row_number() to slap a number on each one, and delete based
not only on your sysdate, but also that number.
Let's say you can safely delete up to 200000 rows, and you only want to
delete rows where sysdate = 5/1/2005:

lock table mytable in exclusive mode ;
DELETE FROM (SELECT sysdate, row_number() OVER (ORDER BY sysdate) AS rn
             FROM mytable) AS tr
WHERE rn BETWEEN 1 AND 200000 AND sysdate = '5/1/2005' ;
COMMIT ;

Stick this in a loop and stop when no rows get deleted anymore.

HTH

aj

Frank Swarbrick wrote:
We have a batch process that inserts large numbers (100,000 - 1,000,000) of
records into a database each day. (DL/I database.) We're considering
converting it to a DB2 table. Currently we have logic in place that, prior
to inserting any data, reads the first input record and checks to see if it
already exists in the table. If the record already exists there are two
options:
1) Don't continue, because you already ran this job today!
2) This is a rerun - continue.

If number 2 is selected the first thing that happens is that it deletes
every record that was inserted today prior to doing the regular insert
process. (You may ask, why not just skip over the ones that are already
there. It's because we may be rerunning with an updated input file, where
the input records may be different than during the first run.)

Anyway, I figured with DB2 this would be a snap. All I'd need to do is:
EXEC SQL
DELETE FROM FILM.FILM_TRANSACTIONS
WHERE UPDATE_DATE = FB_FUNC.TO_DATE (:CURR-DATE-JUL-PACKED)
END-EXEC

The only problem is that my log file would end up running out of room. So
now I've come up with the following:

DELETE-TODAY SECTION.
DISPLAY 'DELETE PROCESS BEGINS' UPON CONSOLE
PERFORM WITH TEST AFTER
UNTIL SQLCODE = 100
DISPLAY 'COMMITTING...' UPON CONSOLE
PERFORM COMMIT-UOW
DISPLAY 'DELETING 10000' UPON CONSOLE
PERFORM DB2-DELETE-TODAY
END-PERFORM
PERFORM COMMIT-UOW
DISPLAY 'DELETE PROCESS ENDS' UPON CONSOLE
.

DB2-DELETE-TODAY SECTION.
EXEC SQL
DELETE FROM (
SELECT UPDATE_DATE
FROM FILM.FILM_TRANSACTIONS
WHERE UPDATE_DATE = FB_FUNC.TO_DATE (:CURR-DATE-JUL-PACKED)
FETCH FIRST 10000 ROWS ONLY
)
WHERE 1 = 1
END-EXEC
CALL CHECKERR USING SQLCA ERRLOC
.

My question is, is this the way to go or is there some better way?

I tried making the "10000" a host variable, but that didn't work. Any way
around this?

You may be wondering why I put the "WHERE 1 = 1" clause on the DELETE
statement. This is because DB2 gives a warning if you pre-compile a DELETE
or UPDATE statement without a WHERE clause. Still works, but I like to
avoid warnings.

Thanks!
Frank
---
Frank Swarbrick
Senior Developer/Analyst - Mainframe Applications
FirstBank Data Corporation - Lakewood, CO USA
Jan 25 '07 #13
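aj's row_number() trick can be sketched in Python against SQLite (window functions need SQLite 3.25 or later; the table and column names mirror aj's example and are illustrative): number the matching rows, delete only those numbered up to the batch cap, and loop until nothing gets deleted.

```python
import sqlite3

# aj's trick: cap each delete pass at a row count the logs can
# absorb, using row_number() to pick off only that many rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, sysdate TEXT)")
conn.executemany(
    "INSERT INTO mytable (sysdate) VALUES (?)",
    [("2005-05-01",)] * 450 + [("2005-05-02",)] * 50,
)
conn.commit()

BATCH = 200  # what the logs can safely absorb per unit of work
deleted_total = 0
while True:
    cur = conn.execute(
        """DELETE FROM mytable WHERE id IN (
               SELECT id FROM (
                   SELECT id, row_number() OVER (ORDER BY id) AS rn
                   FROM mytable WHERE sysdate = ?)
               WHERE rn <= ?)""",
        ("2005-05-01", BATCH),
    )
    conn.commit()
    if cur.rowcount == 0:  # stop when no rows get deleted anymore
        break
    deleted_total += cur.rowcount

print(deleted_total)  # 450
```

Three passes delete 200, 200, and 50 rows; the fourth deletes nothing and ends the loop, leaving the other date's rows untouched.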
Another option might be to create the table as an MDC (Multidimensional
Clustering) table, clustered on UPDATE_DATE, and then turn on the option
MDC ROLLOUT (not sure of exact syntax). This should then allow you to
just delete all the rows for that date. According to the literature,
it should just mark each block (set of pages) for that cluster as
deleted, log each block as being deleted, and commit. If you think
about this, it should delete the 1,000,000 or so rows very quickly and
NOT fill up your log files.

Disclaimer - I haven't (yet) been able to use this myself, so no actual
experience here. But if this is a new DB2 table/DB, it might be a
great time to check this feature out.

-Chris

Jan 25 '07 #14
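The block-level rollout Chris describes can be modeled with a toy Python structure (this illustrates the concept only, not DB2's actual implementation; the block size and names are made up): rows cluster into blocks by their date dimension, so deleting a date drops whole blocks, with one log record per block instead of one per row.

```python
# Toy model of MDC rollout: rows grouped into fixed-size blocks per
# date, so deleting a date means dropping its blocks wholesale.
from collections import defaultdict

BLOCK_SIZE = 4  # rows per block, tiny for the demo

blocks = defaultdict(list)  # date -> list of blocks (each a list of rows)

def insert(date, row):
    day = blocks[date]
    if not day or len(day[-1]) >= BLOCK_SIZE:
        day.append([])  # start a new block when the last one is full
    day[-1].append(row)

for i in range(10):
    insert("2007-01-24", f"row{i}")
insert("2007-01-23", "other")

# The "rollout": one log record per block, not per row, then drop them.
log_records = len(blocks["2007-01-24"])
del blocks["2007-01-24"]

rows_left = sum(len(b) for day in blocks.values() for b in day)
print(log_records, rows_left)  # 3 1
```

Ten rows in blocks of four make three blocks, so the rollout logs 3 block deletions rather than 10 row deletions; with a million real rows the logging savings are proportionally far larger.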
Knut Stolze <st****@de.ibm.com> 01/25/07 12:41 AM >>>
>Frank Swarbrick wrote:
>>My probably very naive thought is that it would be nice to have some sort
>>of DELETE statement that didn't even do logging. While often (usually?)
>>you would want to be able to ROLLBACK a DELETE, in the case of what I'm
>>doing there's no reason I would ever want to rollback. So why log? Just
>>wondering...
>
>The logs are also used for crash recovery. Let's assume you run the
>unlogged DELETE. Now your application or the DB2 server crashes before you
>issued a COMMIT. Upon restart, DB2 has to make sure the transaction is
>properly rolled back and the database is in a consistent state. If you
>don't log the DELETE, you are out of luck there.
Are you saying that DB2 occasionally crashes?
:-) (Just kidding.)
As you can tell, I'm hopelessly naive about these things. I'm just a simple
application programmer.
>What would be nice to have in this respect is an option for the DELETE
>statement to explicitly turn off logging - which would have a certain
>amount of problems as I just mentioned. Truncating a whole table is
>supported that way already: you can use ALTER TABLE ... ACTIVATE NOT
>LOGGED INITIALLY WITH EMPTY TABLE for that. Maybe this, combined with
>range partitioning is an option for you?
I don't know about range partitioning. Can you give me a pointer to some
information on this?

Thanks!

Frank
---
Frank Swarbrick
Senior Developer/Analyst - Mainframe Applications
FirstBank Data Corporation - Lakewood, CO USA
Jan 25 '07 #15
Mark A <no****@nowhere.com> 01/24/07 7:40 PM >>>
>"Frank Swarbrick" <Fr*************@efirstbank.com> wrote in message
>news:51*************@mid.individual.net...
>>Interesting. I just figured that this would be much less efficient than
>>doing just the delete with the fullselect, because in the latter case no
>>data need be returned to the AR. Anyway, I will give it a shot.
>
>It depends on where the program runs. If the program runs on the server and
>the static SQL is bound into a package that runs on the server, then there
>is not that much difference in performance (unless performance is
>ultra-critical). If the program is running remotely, then there would be a
>big difference in performance.
>
>I have written SQL stored procedures to do mass deletes with a cursor and
>it performs well.
>
>I haven't seen too many COBOL programs running on DB2 LUW. What compiler
>are you using? I used MicroFocus COBOL against OS/2 Database Manager, but
>that was in 1991.
We're doing it in kind of an odd way. And for now, we're only testing. We
are using "DB2 Server for VSE" as the client, with the IBM COBOL for VSE/ESA
compiler. But all of our databases are remote databases on DB2/LUW.

So we definitely fall into the category of a remote client, not a client
running on the server.

But to answer your question anyway, I have been successful using both Micro
Focus Net Express (COBOL) 5.0 as well as OpenCobol 0.33 to access DB2/LUW
databases.

Frank
---
Frank Swarbrick
Senior Developer/Analyst - Mainframe Applications
FirstBank Data Corporation - Lakewood, CO USA
Jan 25 '07 #16
Knut Stolze <st****@de.ibm.com> 01/25/07 1:38 AM >>>
>Frank Swarbrick wrote:
>>Mark A <no****@nowhere.com> 01/24/07 4:36 PM >>>
>>>Yes there is a better way that will avoid filling up your DB2 z/OS logs.

Actually, DB2/LUW, but I'm guessing your advice still applies.

In that case, have you considered the MERGE statement? Then you may not
have to DELETE the rows at all - just UPDATE them.
Yet another thing I am not familiar with. I will look into it. Thanks.

Frank

---
Frank Swarbrick
Senior Developer/Analyst - Mainframe Applications
FirstBank Data Corporation - Lakewood, CO USA
Jan 25 '07 #17
aj <ro****@mcdonalds.com> 01/25/07 7:10 AM >>>
>Frank:
Here's an OLAPy trick that I sometimes use:

Let's say you want to delete rows from a very large table based on a
sysdate column. You *don't* want to overfill the transaction logs

The answer: Figure out how many rows you can safely delete w/ your
logs, use row_number() to slap a number on each one, and delete based
not only on your sysdate, but also that number.
Let's say you can safely delete up to 200000 rows, and you only want to
delete rows where sysdate = 5/1/2005:

lock table mytable in exclusive mode ;
DELETE FROM (SELECT sysdate, row_number() OVER (ORDER BY sysdate) AS rn
             FROM mytable) AS tr
WHERE rn BETWEEN 1 AND 200000 AND sysdate = '5/1/2005' ;
COMMIT ;

Stick this in a loop and stop when no rows get deleted anymore.
Sounds interesting. And brings up another question. Is there any way to
dynamically determine how many rows I can delete w/o filling up the logs?

Lots of good responses to this. Thanks all!

Frank
---
Frank Swarbrick
Senior Developer/Analyst - Mainframe Applications
FirstBank Data Corporation - Lakewood, CO USA
Jan 25 '07 #18
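On Frank's question of dynamically sizing the batch: a rough back-of-envelope is to divide the usable log space by the per-row log cost (roughly the row image plus some per-row overhead). A sketch, where every figure is a stand-in assumption rather than a DB2-measured value; on LUW the real inputs would come from the LOGFILSIZ/LOGPRIMARY configuration and the table's average row size.

```python
# Rough estimate of how many rows one unit of work can delete
# before the logs fill. All constants are illustrative assumptions.
def safe_delete_chunk(log_bytes_free, avg_row_bytes,
                      per_row_overhead=50, safety=0.5):
    """Rows deletable per commit, keeping a safety margin for
    concurrent work and log-record overhead."""
    bytes_per_row = avg_row_bytes + per_row_overhead
    return int(log_bytes_free * safety // bytes_per_row)

# e.g. 100 MB of free log space and 200-byte rows:
chunk = safe_delete_chunk(100 * 1024 * 1024, 200)
print(chunk)  # about 200,000 rows per commit
```

The honest answer, though, is that this is an estimate to feed the batch-size host variable, not a guarantee; logs are shared with other transactions, so the safety factor does the real work.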
Frank Swarbrick wrote:
>>The logs are also used for crash recovery. Let's assume you run the
>>unlogged DELETE. Now your application or the DB2 server crashes before
>>you issued a COMMIT. Upon restart, DB2 has to make sure the transaction
>>is properly rolled back and the database is in a consistent state. If
>>you don't log the DELETE, you are out of luck there.

Are you saying that DB2 occasionally crashes?
:-) (Just kidding.)
I can't really comment on that. DB2 crashes quite often in my environment -
sometimes on purpose, sometimes not. If not, then it is usually due to my
(wrong) code changes, of course. ;-)

Anyway, just think of someone tripping over the power cable or using Windows
as OS...
>>What would be nice to have in this respect is an option for the DELETE
>>statement to explicitly turn off logging - which would have a certain
>>amount of problems as I just mentioned. Truncating a whole table is
>>supported that way already: you can use ALTER TABLE ... ACTIVATE NOT
>>LOGGED INITIALLY WITH EMPTY TABLE for that. Maybe this, combined with
>>range partitioning is an option for you?

I don't know about range partitioning. Can you give me a pointer to some
information on this?
I guess Serge is the most knowledgeable about this. In a nutshell: you have
one logical table that is internally stored as multiple physical tables.
DB2 will distribute your data across those physical tables. For that, it
needs some criteria/algorithm for the distribution. With range
partitioning, you define ranges and a value in a row that fits into one
range goes into the physical table for that range. During query time, the
DB2 optimizer will analyze the query and if it finds that the query
searches on ranges, it can eliminate scanning some/most of the physical
tables, for instance.

Another side effect is that you have now (V9) ALTER TABLE ... ATTACH
PARTITION and ALTER TABLE ... DETACH PARTITION SQL statements.
(http://publib.boulder.ibm.com/infoce...c/r0000888.htm)
Essentially, those statements switch a regular base table to such a
mentioned physical table and group it to the logical table - or vice versa.
Thus, you can roll-in and roll-out ranges of a table with a single SQL
statement.

If you can partition your table according to your deletion criteria, you can
detach the internal, physical table holding the data you want to remove.
It becomes a regular table, which you can drop.

p.s: I hope I didn't screw up too much on the terminology.

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Jan 25 '07 #19
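The roll-out idea behind Knut's range-partitioning description can be sketched by keeping each day's rows in their own physical table, so removing a day is a table drop with no per-row delete logging. SQLite tables stand in for DB2 range partitions here, and the partition naming scheme is made up for the demo; in real DB2 V9 this would be ALTER TABLE ... DETACH PARTITION followed by dropping the detached table.

```python
import sqlite3

# One physical table per date range; "roll-out" = drop that table.
conn = sqlite3.connect(":memory:")

def part_name(date):
    return "trans_" + date.replace("-", "")  # e.g. trans_20070124

def insert(date, rows):
    name = part_name(date)
    conn.execute(f"CREATE TABLE IF NOT EXISTS {name} (payload TEXT)")
    conn.executemany(f"INSERT INTO {name} (payload) VALUES (?)", rows)
    conn.commit()

def rollout(date):
    # Stands in for DETACH PARTITION + DROP: no per-row logging.
    conn.execute(f"DROP TABLE IF EXISTS {part_name(date)}")
    conn.commit()

insert("2007-01-24", [("a",), ("b",)])
insert("2007-01-25", [("c",)])
rollout("2007-01-24")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['trans_20070125']
```

The catch, as in real range partitioning, is that the partitioning key must match the deletion criterion (here, the date) for the whole-partition drop to apply.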
"Frank Swarbrick" <Fr************ *@efirstbank.co mwrote in message
news:51******** *****@mid.indiv idual.net...
We're doing it in kind of an odd way. And for now, we're only testing.
We
are using "DB2 Server for VSE" as the client, with the IBM COBOL for
VSE/ESA
compiler. But all of our databases are remote databases on DB2/LUW.

So we definitely fall in to the category of a remote client, not a client
running on the server.

But to answer your question anyway, I have been successful using both
Micro
Focus Net Express (COBOL) 5.0 as well as OpenCobol 0.33 to access DB2/LUW
databases.

Frank
Given the above, I would create an SQL stored procedure to do the deletes.
It will run on the LUW server (you can call it from a remote client with any
parms you want) and it should perform quite well.
Jan 26 '07 #20



By using Bytes.com and it's services, you agree to our Privacy Policy and Terms of Use.

To disable or enable advertisements and analytics tracking please visit the manage ads & tracking page.