ruminating on 390 performance vs. VSAM

consider the following hypothetical (it isn't but....)

- start with a COBOL/VSAM codebase that's REALLY old
- end with both a COBOL/VSAM and a COBOL/DB2 system, which differ only in
where the data lives. code, including copybooks, remains the same.

mostly the system does periodic (daily/weekly/<etc>) batch runs.

the VSAM order file has 99 line items. this gets converted to an
Order table with 99 line_items.

beyond the need to fit the copybook definition, it is asserted that a
two table (Order and Order_line) implementation will be too slow.

this rumination was motivated by reading an article linked to from an
earlier thread, which discussed join implementation on 390. i suspect what
it had to say applies generally. what caught me was, as i understood it,
that nested table reads (i.e. nested loop joins) are most often used. if this is true, and it seems
that hash joins are only more efficient on equality constraints, then is
there a known analysis which at least mitigates that reading?

what (hypothetically) we tend to do is put each table in a tablespace.
for the Order/Order_line implementation, it seems logical to put them into
one tablespace, cluster Order on Order_num and Order_line on (Order_num,
line_num); and buffer the hell out of it. similarly for the indexes.
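
to make that concrete, a rough DDL sketch of what i'm picturing (the
tablespace/database/bufferpool names and the extra columns are made up,
and ORDER is a reserved word, so the table is called ORDERS here):

-- one segmented tablespace holding both tables, data in bufferpool BP1
CREATE TABLESPACE ORDTS IN ORDDB
       SEGSIZE 32
       BUFFERPOOL BP1;

CREATE TABLE ORDERS
      (ORDER_NUM   INTEGER      NOT NULL,
       ORDER_DATE  DATE         NOT NULL,
       CUST_NUM    INTEGER      NOT NULL)
   IN ORDDB.ORDTS;

CREATE TABLE ORDER_LINE
      (ORDER_NUM   INTEGER      NOT NULL,
       LINE_NUM    SMALLINT     NOT NULL,
       ITEM_CODE   CHAR(8)      NOT NULL,
       QUANTITY    DECIMAL(9,2) NOT NULL)
   IN ORDDB.ORDTS;

-- clustering indexes keep orders and their lines in Order_num sequence
CREATE UNIQUE INDEX XORD1
       ON ORDERS (ORDER_NUM)
       CLUSTER BUFFERPOOL BP1;

CREATE UNIQUE INDEX XORDL1
       ON ORDER_LINE (ORDER_NUM, LINE_NUM)
       CLUSTER BUFFERPOOL BP1;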

does this sound remotely on the right track?
Nov 12 '05 #1
gn*****@rcn.com (robert) wrote in message news:<da**************************@posting.google.com>...
[quoted text snipped]


Assuming that no SQL statement ever results in a tablespace scan, it
might be OK to put Order and Order_line in one simple tablespace. But if
a tablespace scan does occur, DB2 will (unnecessarily) scan both tables,
when it could have scanned just the table it needed had they been in
separate tablespaces.

If you use a segmented tablespace, sharing a tablespace will not help
you, since the data for two tables in a single segmented tablespace will
not be on the same page (the line items and the associated order); they
will reside in different segments.

I think you are over-designing just a bit. If you use the DB2 buffer
pools effectively (this is the biggest difference between how DB2 and
plain VSAM work), I don't think you need to put the two tables in the
same tablespace.

Have one bufferpool for the catalog, indexes, and small tables that are
frequently accessed; a second bufferpool for medium and large tables; and
a third bufferpool for large decision-support tables (if you have any in
your application).

The speed of a join has nothing to do with whether the tables share a
tablespace. The speed is related to whether the required data page is
already in the bufferpool or whether it needs to be fetched from disk.
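
For example, something along these lines (a sketch only; the SEGSIZE,
tablespace, and bufferpool names are made up, and the index names refer
to the clustering indexes sketched earlier in the thread):

-- one segmented tablespace per table, data pages in BP2
CREATE TABLESPACE ORDTS  IN ORDDB SEGSIZE 32 BUFFERPOOL BP2;
CREATE TABLESPACE ORDLTS IN ORDDB SEGSIZE 32 BUFFERPOOL BP2;

-- indexes (along with the catalog and small, hot tables) go to BP1
ALTER INDEX XORD1  BUFFERPOOL BP1;
ALTER INDEX XORDL1 BUFFERPOOL BP1;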
Nov 12 '05 #2
m0****@yahoo.com (Mark) wrote in message news:<a5**************************@posting.google.com>...
[quoted text snipped]


my working assumption was that, given the amount of data, the buffers
would exhaust during this (essentially) sequential batch update process,
and that emulating, to the extent possible, contiguous data storage would
help. maybe not. but i do agree that bufferpools of adequate size are
more important than almost anything else.
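
i suppose the first step is just to make the pools big enough and watch
them during the batch window, something along these lines (sizes made up,
BP2 as per the suggestion above):

-ALTER BUFFERPOOL(BP2) VPSIZE(40000)
-DISPLAY BUFFERPOOL(BP2) DETAIL

VPSIZE is in pages; the DISPLAY DETAIL output should show how much of the
read activity during the run is synchronous versus prefetch.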
Nov 12 '05 #3
In article <da**************************@posting.google.com>,
robert <gn*****@rcn.com> wrote:
[quoted text snipped]


IF you are saying that the batch process will read each of the two tables
sequentially, in the order in which they are physically held within DB2
(bearing in mind clustering sequence, free space, insert pattern, rows
out of sequence, SQL code, runstats, program bind, etc.)

THEN I would expect that DB2 will automatically invoke sequential
pre-fetch. This means that DB2 will try to read the next pages into the
buffer pool asynchronously BEFORE the program gets there, and updated
pages will be written out to physical disc, again asynchronously with the
program processing.

This effectively (i.e. simplified) means there are three separate CPU
tasks (one read, one update, one write), and the update task is only
reading and writing data in the buffer pools. This can be very efficient.

... but if the access is random, then all performance bets are off, and
you have to be much more careful!

Hence it is critical to understand how the data will be used before the
database is designed, at least for high-performance systems.
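
One way to check what the optimizer has actually decided is to EXPLAIN
the batch cursor and look at PLAN_TABLE (a sketch only; the table names
follow the Order/Order_line example earlier in the thread, and the
PLAN_TABLE columns vary slightly by DB2 version):

EXPLAIN PLAN SET QUERYNO = 1 FOR
  SELECT O.ORDER_NUM, L.LINE_NUM, L.ITEM_CODE, L.QUANTITY
    FROM ORDERS O, ORDER_LINE L
   WHERE L.ORDER_NUM = O.ORDER_NUM
   ORDER BY O.ORDER_NUM, L.LINE_NUM;

-- METHOD 1 = nested loop join, 2 = merge scan join;
-- PREFETCH 'S' = sequential prefetch, 'L' = list prefetch
SELECT QBLOCKNO, PLANNO, TNAME, METHOD, ACCESSTYPE, ACCESSNAME, PREFETCH
  FROM PLAN_TABLE
 WHERE QUERYNO = 1
 ORDER BY QBLOCKNO, PLANNO;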

Martin

--
Martin Avison
Note that emails to News@ will be junked. Use Martin instead of News
Nov 12 '05 #4
