DB2_LARGE_PAGE_MEM

We are running DB2 8.2 (FP9) on Linux (64bit) with 64 bit instances and
we are reviewing the new features that we could activate on the 2.6
kernel. We came across an interesting db2set parameter, DB2_LARGE_PAGE_MEM, but we don't exactly understand how it needs to be set.

Does this parameter replace SHMMAX in Linux? If not, how exactly should we relate it to the bufferpool size (currently set to 320,000 pages)?

Thanks,

Yvan

Mar 6 '06 #1
Hi Yvan,

You need to be careful when playing with this parameter. By default, the OS uses 4K memory pages - using 16MB large pages for the DB2
database shared memory set (which includes the bufferpools, locklist,
package cache, etc) may help performance, but it means that you have to
carefully provision your system ahead of time, configure how many large
pages the OS should allow, etc. It also means that if you ever want to
dynamically alter your bufferpool sizes, increase your locklist size or
utility heap size, etc., then you will likely have to also reconfigure
and reboot your OS.

With that being said, to have the database shared memory set allocated
using large pages, run:
db2set DB2_LARGE_PAGE_MEM=DB
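
To double-check the value afterwards (and note that registry variable changes generally only take effect after a db2stop/db2start of the instance), you can list the registry variables that are currently set:
db2set -all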

To ensure that you have enough large pages configured on your system,
before setting this registry variable, you should activate your
database, and find out how large your database shared memory set
currently is:
db2 get db cfg show detail | grep DATABASE_MEMORY

You'll see an in-memory value which represents how big the database
shared memory set is (this should be much easier than calculating by
hand how big all the bufferpools are, the locklist size, package cache
size, etc). Multiply this by 4096 to get the number of bytes - you'll
have to configure the OS to have at least that amount of memory
dedicated to the large page pool. Changing the large page pool will
likely require you to reboot. After the reboot, set the
DB2_LARGE_PAGE_MEM registry variable as mentioned above, and activate
your database.
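
As a rough sketch of the whole sequence (the database name MYDB, the page counts, and the hugepage count below are just placeholders - substitute your own values, and check your platform's large page size in /proc/meminfo):

db2 activate db MYDB
db2 connect to MYDB
db2 get db cfg show detail | grep DATABASE_MEMORY
# suppose the in-memory value is 524288 (4K pages): 524288 * 4096 = 2 GB
grep Hugepagesize /proc/meminfo
# with a 16MB large page size, 2 GB works out to 2048 / 16 = 128 large pages
echo "vm.nr_hugepages = 140" >> /etc/sysctl.conf   # 128 plus some headroom
# reboot so the large page pool is carved out, then:
db2set DB2_LARGE_PAGE_MEM=DB
db2stop
db2start
db2 activate db MYDB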

Cheers,
Liam.

Mar 6 '06 #2
Thanks for the quick response. Based on your experience, can we really improve DB performance by going that route?

Mar 6 '06 #3
I guess it really depends :-)

If your total amount of database shared memory is fairly small (a few
hundred MBs or so), I doubt you would notice any real change in
performance. If your database is not finely tuned, you probably won't
notice that much of a difference either (i.e. if you're I/O bound, then
switching to large pages won't buy you too much). If you only have a
few agents active at any one time, you may not notice much of a
difference.

If you have a large database shared memory set though, lots of agents,
a finely tuned database, and you're mainly CPU bound, then you might be
able to see 5-10% or so performance improvement.

If you're not careful though, it's also possible that you'll degrade
overall system performance using large pages. Let's say you have 4GB
of RAM on your system, and your database shared memory set is 2GB, so
you configure 2GB of large page memory. That 2GB of large page memory
can now only be used by the database shared memory set. Using the
default 4K pages, if the OS saw that there was heavy demand for memory
in the system, and only about 1.5GB of your database shared memory set
was "hot", the other 500MB of "cold" pages could be swapped out, and
used by other processes, for private agent memory, sorts for DSS
queries, etc, which could end up giving you better overall performance.
With 2GB of large page memory that it can't touch, the OS may end up
having to swap out "warmer" pages from the remaining 2GB, leading to
more overall swapping (i.e. when it needs to read those warm pages back
into memory), degrading overall performance.

If you're looking for ways to boost performance, I would say using
large page memory for your database shared memory set would be one of
the last things you should look at. There are likely many other things
you can do that will generate better performance improvements, with
fewer risks.

Cheers,
Liam.

Mar 7 '06 #4
Ian
Liam Finnie wrote:

You need to be careful when playing with this parameter. Normally, the
OS uses default 4K memory pages - using 16MB large pages for the DB2
database shared memory set (which includes the bufferpools, locklist,
package cache, etc) may help performance,


Liam,

I thought that the point of using large pages was to allow DB2 to allocate large amounts of memory more quickly - i.e. if allocating 2GB for the DBMS, only 128 16MB pages are allocated vs. 524,288 4K pages.

However, this (DBMS) allocation only occurs when the database is activated, so I don't understand how it will improve overall database throughput.

Or am I wrong in my understanding of large page support?

Thanks,


Mar 9 '06 #5
Hi Ian,

The main benefit comes from how the OS translates virtual addresses
from a process into physical memory addresses. Most hardware platforms
use what are called TLBs - translation lookaside buffers. Entries in
the TLBs are the higher-order portion of a virtual memory address -
they do not include the low-order 12 bits for 4K pages, or the
low-order 24 bits for 16MB pages, since those will be the offset into
the physical memory page. There are typically a very small number of
these hardware TLB entries, so using large page memory allows the OS to
translate a larger range of virtual memory using the same number of TLB
entries. A TLB miss means the CPU has to stall while the virtual-to-physical translation occurs - the OS then typically takes over, fills in the required TLB entry, and things proceed. So the way large pages can improve throughput is by obtaining better TLB hit rates at runtime, reducing the likelihood of CPU stalls.
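
To put some rough numbers on that (purely illustrative - real TLB sizes vary by CPU): a TLB with 1024 entries can cover 1024 x 4KB = 4MB of virtual memory with 4K pages, but 1024 x 16MB = 16GB with 16MB pages, so scans over a large bufferpool are far more likely to hit the TLB.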

Another saving is in the size of the page translation tables for a
process. Similar to the TLB discussion above, the OS maintains the
list of all valid virtual-to-physical address translations for a
process (the TLBs only maintain a very small subset of this total).
The smaller this table is, the lower the context switch time for
processes, and the better performance you'll get.
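
Again, roughly speaking (entry sizes differ by platform): mapping a 2GB region with 4K pages takes 524,288 page table entries - on the order of 4MB of page tables if each entry is 8 bytes - while the same region mapped with 16MB pages takes only 128 entries.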

That is why I mentioned that if you only have a few database agents, or a fairly small amount of database shared memory, you probably won't see much difference from using large pages. The OS won't have to do as many context switches, and your TLB hit rates will likely be high enough that large pages won't contribute much. Further, if your
workload is I/O bound, it just means that the database agents may
finish their useful, CPU-intensive work more quickly and end up
waiting on I/O more, yielding no net performance benefit.

Cheers,
Liam.

Mar 9 '06 #6
