
DB2 shared memory and extended storage

Hi all,

I'm getting very confused about how DB2 uses shared memory and I
wonder if someone could clarify matters for me, please?

We are running 32-bit DB2 V7.2 FP9 under AIX 4.3.3 on a machine with
64 GB of memory, with a single non-partitioned database using extended
storage and with intra-parallelism enabled.

I've been experimenting with changing various parameters in an
attempt to make my buffer pool as large as possible, but I can't make
it bigger than about 700 MB and still get the database to activate.

From what I've read in the manuals and on the internet, I believe
that 32-bit DB2 should be able to use a maximum of 7 shared memory
segments (256 MB each, so about 1.75 GB) for everything except the
agents' private memory, i.e. DBM shared memory + DB global memory +
Application global memory + Agent/application shared memory. Is that
correct?

Again from what I've read it seems that:

DBM shared memory = Monitor heap (mon_heap_sz) + Audit buffer size
(audit_buf_sz) + Global control block (how big is this?) + FCM
Connection Entries (fcm_num_connect) + FCM Message Anchors
(fcm_num_anchors) + FCM Request Blocks (fcm_num_rqb) + FCM Buffers
(fcm_num_buffers) + FCM Control Block (how big?) + something for the
database agents (the manual isn't clear what)

and

Max DB global memory = Buffer pool size + Lock list (locklist) +
Database heap (dbheap) + Utility heap (util_heap_sz) + Package cache
(pckcachesz) + Extended memory cache (how big?)

and

Application global memory = app_ctl_heap_sz

and

Agent/Application shared memory = Application support layer heap
(aslheapsz) + Client I/O block (rqrioblk)

Adding all these things up gives me a total of 940 MB, which, added
to the Global Control Block, FCM Control Block, Extended memory cache
and something for the agents, should give a grand total of 1.75 GB.

So, should those last four things really account for about 800 MB of
memory, or have I missed something?
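
In case it's useful, here is the sum written out as a small Python
calculator. The configuration values are placeholders, the per-entry
size I've assumed for the FCM structures is a guess, and it leaves out
the Global Control Block, FCM Control Block, extended memory cache and
the agent allowance entirely, which is exactly the gap I'm asking
about:

# Back-of-envelope estimate of 32-bit DB2 V7 instance + database shared
# memory, compared against the ~1.75 GB ceiling (7 AIX segments x 256 MB).
PAGE = 4 * 1024  # most of these heaps are sized in 4 KB pages

# Database manager shared memory (heaps in 4 KB pages, FCM values are counts)
mon_heap_sz     = 56
audit_buf_sz    = 0
fcm_num_buffers = 4096          # each FCM buffer is 4 KB
fcm_entries     = 3 * 4096      # fcm_num_connect + fcm_num_anchors + fcm_num_rqb
FCM_ENTRY_BYTES = 128           # guessed average size per FCM entry
dbm_shared = ((mon_heap_sz + audit_buf_sz + fcm_num_buffers) * PAGE
              + fcm_entries * FCM_ENTRY_BYTES)

# Database global memory (values in 4 KB pages)
bufferpool_pages = 175_000      # roughly the 700 MB buffer pool
locklist         = 1_000
dbheap           = 2_400
util_heap_sz     = 5_000
pckcachesz       = 3_200
db_global = (bufferpool_pages + locklist + dbheap
             + util_heap_sz + pckcachesz) * PAGE

# Application global and agent/application shared memory
app_ctl_heap_sz = 128           # pages
aslheapsz       = 15            # pages
rqrioblk        = 32_767        # bytes
app_shared = (app_ctl_heap_sz + aslheapsz) * PAGE + rqrioblk

total   = dbm_shared + db_global + app_shared
ceiling = 7 * 256 * 2**20       # 7 shared memory segments of 256 MB each
print(f"estimated total: {total / 2**20:7,.0f} MB")
print(f"32-bit ceiling:  {ceiling / 2**20:7,.0f} MB")
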
The manual also states that the database global memory is affected by
the numdb parameter, but I've tried changing this from 8 to 1 with no
effect on the total shared memory used.

On a slightly related note, there is a comment in the manual that the
size of the extended storage segments (ESTORE_SEG_SZ) is operating
system dependent, but nowhere can I find anything about what a
sensible value for that parameter is. I have found that using 264
segments of 16000 pages each uses more shared memory than 66 segments
of 64000 pages each, but I don't know why, or whether it's better to
use lots of little segments rather than a few big ones!
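
For what it's worth, both layouts describe the same total amount of
extended storage, so presumably the extra shared memory comes from
some per-segment overhead. The arithmetic, as a throwaway Python
sketch:

# Both extended storage layouts describe the same total number of pages,
# so any difference in shared memory presumably comes from per-segment
# overhead rather than the data itself.
PAGE = 4 * 1024  # we use 4 KB pages

for num_segs, seg_pages in [(264, 16_000), (66, 64_000)]:
    total_pages = num_segs * seg_pages
    print(f"{num_segs:3d} segments x {seg_pages:6d} pages = "
          f"{total_pages:,} pages = {total_pages * PAGE / 2**30:.1f} GB")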

Also, I suspect I'm not actually using FCM (DB2_FORCE_FCM_BP is not
set); am I still allocating memory to FCM if the four FCM parameters
are set to non-zero values?

Cheers,

Jeremy Goodall
Nov 12 '05 #1
Jeremy, just a quick reply before I dive into this one more!
The number of segments you define should be equal to the number of BPs
that are defined to use extended storage.
The size of each segment is then the amount of storage planned for
estore divided by the number of segments.
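
In other words, as a minimal sketch (sizes in pages, numbers made up):

# Rule of thumb: one estore segment per buffer pool that uses extended
# storage, each sized as planned estore divided by the number of such BPs.
def seg_size(planned_estore_pages, num_bps_using_estore):
    return planned_estore_pages // num_bps_using_estore

print(seg_size(1_000_000, 4))  # -> 250000 pages per segment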

For the rest it seems you have the right approach but I want to look
further into it.
HTH, Pierre.

--
Pierre Saint-Jacques
IBM DB2 Certified Solutions Expert - Administration
SES Consultants Inc.

Nov 12 '05 #2
"Jeremy" <je****@home.co.uk> wrote in message
news:i2********************************@4ax.com...
Hi all,

I'm getting very confused about how DB2 uses shared memory and I
wonder if someone could clarify matters for me, please ?

We are running 32bit DB2 V7.2 FP9 under AIX 4.3.3 on a machine with 64
Gb of memory with a single non-partitioned database using extended
storage and with intra-parallelism enabled.

I've been experimenting with changing various parameters in an attempt
to make my buffer pool as large as possible but I can't make it bigger
than about 700 Mb and still get the database to activate.

From what I've read in the manuals and on the internet I believe that
DB2 should be able to use a maximum of 7 shared memory segments, which
is about 1.75 Gb of memory, that is used for everything except the
agents private memory (i.e. DBM shared memory + Db global memory +
Application global memory + Agent/application shared memory). Is that
correct ?

Again from what I've read it seems that:

DBM shared memory = Monitor heap(mon_heap_sz) + Audit buffer size
(audit_buf_sz) + Global control block(How big is this ?) + FCM
Connection Entries(fcm_num_connect) + FCM Message Anchors
(fcm_num_anchors) + FCM Request Block (fcm_num_rqb) + FCM
Buffers(fcm_num_buffers) + FCM Control Block(How big ?) + Something
for the database agents, the manual isn't clear what.

and

Max DB global memory = Buffer Pool size + Locklist(locklist) +
Database Heap(dbheap) + Utility Heap Size(util_heap_sz) + Package
Cache(pckcachesz) + Extended memory cache (How big ?)

and

Application global memory = app_ctl_heap_sz

and

Agent/Application shared memory = Application support layer
heap(aslheapsz) + Client I/O block (rqrioblk)
Adding all these things up gives me a total of 940Mb which added to
the Global Control Block, FCM Control Block, Extended memory cache and
something for the agents should give a grand total of 1.75 Gb.

So, should those last four things use about 800 Mb of memory or have I
missed something ?
The manual also states that the database global memory is affected by
the numdb parameter but I've tried changing this from 8 to 1 with no
affect on the total shared memory used.

On a slightly related note, there is a comment in the manual that the
size of the extended storage segments (ESTORE_SEG_SZ) is operating
system dependent but nowhere can I find anything about what a sensible
value for that parameter is. I have found that using 264 segments of
16000 pages each uses more shared memory than 66 segments of 64000
pages each but I don't know why or if it's better to use lots of
little segments rather than a few big ones!

Also, I suspect I'm not using FCM (DB2_FORCE_FCM_BP is not set), am I
still allocating memory to FCM if the four FCM parameters are set to
non-zero values ?

Cheers,

Jeremy Goodall


You might try running the Performance Wizard and give DB2 a total amount of
memory and then let the wizard allocate it among the components. That could
be used as a starting point for further tuning efforts.
Nov 12 '05 #3
On Fri, 17 Oct 2003 14:48:59 -0600, "Mark A" <ma@switchboard.net>
wrote:

You might try running the Performance Wizard and give DB2 a total amount of
memory and then let the wizard allocate it among the components. That could
be used as a starting point for further tuning efforts.


Mark,

Thanks for the tip. I've tried that, but the Performance Wizard just
recommends allocating 54 GB to the buffer pools, which won't work.
Cheers,

Jeremy
Nov 12 '05 #4
"> >You might try running the Performance Wizard and give DB2 a total amount
of
memory and then let the wizard allocate it among the components. That couldbe used as a starting point for further tuning efforts.


Mark,

Thanks for the tip. I've tried that but the performance wizard just
recommends allocating 54 Gb to the bufferpools which won't work,

Cheers,

Jeremy


Actually, I have only used the Performance Wizard on DB2 version 8 for
Linux. It seemed to work fairly well, but I'm not sure about your AIX
memory issues.
Nov 12 '05 #5
On Fri, 17 Oct 2003 16:23:47 -0400, "P. Saint-Jacques"
<se*****@attglobal.net> wrote:
Jeremy, just a quick reply before I dive into this one more!
The number of segments you define should be equal to the number of BPs
that are defined to use extended storage.
The size of each segment is then the amount of storage planned for
estore divided by the number of segments.

For the rest it seems you have the right approach but I want to look
further into it.
HTH, Pierre.


Pierre,

Not really, I'm afraid. We have a single buffer pool and intend
allocating up to 32 GB of extended memory.
We are using 4 KB pages, so 32 GB of storage would be over 8 million
pages, which is somewhat more than the upper limit for estore_seg_sz,
which is just over a million pages.

Should we be using the smallest number of segments then (i.e. 8
segments, each about one million pages in size), or are you saying
that a buffer pool can only use a single segment?
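
Doing the arithmetic (taking the estore_seg_sz cap as 1,048,575 pages,
which is just my reading of "just over a million" and needs checking
against the manual), it comes out at eight or nine segments depending
on the exact limit:

# Smallest number of estore segments for ~32 GB of extended storage with
# 4 KB pages, assuming a per-segment cap of 1,048,575 pages (unverified).
import math

PAGE = 4 * 1024
planned_pages = (32 * 2**30) // PAGE   # 32 GB of extended storage
SEG_SZ_CAP = 1_048_575                 # assumed estore_seg_sz upper limit

num_segs = math.ceil(planned_pages / SEG_SZ_CAP)
seg_pages = math.ceil(planned_pages / num_segs)
print(f"{planned_pages:,} pages -> at least {num_segs} segments "
      f"of about {seg_pages:,} pages each")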

Cheers,

Jeremy
Nov 12 '05 #6
If you have 32 GB and want to use it effectively, one of the following
things may be more efficient than extended storage:

1. 64-bit DB2
2. use the memory for file system caching if the database is on SMS or
DMS with files
3. consider DB2 EEE with between 8 and 16 partitions.

Nov 12 '05 #7
On Fri, 17 Oct 2003 18:46:00 -0400, Blair Adamache
<ba*******@2muchspam.yahoo.com> wrote:
If you have 32 GB and want to use it effectively, one of the following
things may be more efficient than extended storage:

1. 64-bit DB2
2. use the memory for file system caching if the database is on SMS or
DMS with files
3. consider DB2 EEE with between 8 and 16 partitions.


Blair,

Thanks for that, but we can't use 64-bit DB2 as our client application
is then unable to communicate with the database. That was our
preferred option.

Regarding your second point, we did actually work this way for some
time whilst we were on V6.1 (because of a suspected instability of
extended memory under V6.1 and AIX 4.3) and found that we were still
doing significant physical I/O. In fact, when we switched on extended
memory the system performance doubled, so my impression is that
extended memory is a much more efficient use of memory than file
system caching, at least for our particular system.

We may be forced down the EEE route but for the moment I'd prefer to
avoid the extra complexity that would involve if at all possible. I am
a little nervous about extended memory, if only because there is so
little documentation about it, but it appears to be the most sensible
way to go at present.

Just to avoid any confusion, we've actually got 64 GB on the machine
but intend using 32 GB for extended memory.

Cheers,

Jeremy
Nov 12 '05 #8
Okay - I suppose you could always move to v8 to get around the client
problem.

Nov 12 '05 #9
AIX OS (system) caching will not work if containers are DMS, so SMS
containers are to be considered if that OS capability is to be used;
DB2_MMAP_READ and DB2_MMAP_WRITE should also be set to OFF to allow
the OS cache to act as a natural extended memory, without any DB2
configuration of extended storage. But AIX 4.3.3 may show contention
on locking (see the explanation of DB2_MMAP_*: in most environments,
mmap should be used to avoid operating system locks when multiple
processes are writing to different sections of the same file). Note
that this file system contention problem has been improved (lowered)
in AIX 5.1 (see redbook SG24-5765-02, page 220, AIX 5.1.0 enhancement,
Complex inode lock).
Bernard Dhooghe
Nov 12 '05 #10
If you use DMS with files, you can benefit from OS file system caching,
I believe. The restriction you're talking about is only DMS with raw
devices.

Nov 12 '05 #11
Blair Adamache <ba*******@2muchspam.yahoo.com> wrote in message news:<bn**********@hanover.torolab.ibm.com>...
If you use DMS with files, you can benefit from OS file system caching,
I believe. The restriction you're talking about is only DMS with raw
devices.

Correct.

Bernard Dhooghe
Nov 12 '05 #12
On 28 Oct 2003 06:50:31 -0800, no***@attglobal.net (Bernard Dhooghe)
wrote:
Blair Adamache <ba*******@2muchspam.yahoo.com> wrote in message news:<bn**********@hanover.torolab.ibm.com>...
If you use DMS with files, you can benefit from OS file system caching,
I believe. The restriction you're talking about is only DMS with raw
devices.

Correct.

Bernard Dhooghe


Except that we did use DMS tablespaces without raw devices, and we
didn't seem to benefit from significant file caching.

Cheers,

Jeremy

Nov 12 '05 #13
