
DB2_LARGE_PAGE_MEM

We are running DB2 8.2 (FP9) on Linux (64-bit) with 64-bit instances, and we are reviewing the new features we could activate on the 2.6 kernel. We came across an interesting db2set parameter, DB2_LARGE_PAGE_MEM, but we don't exactly understand how it needs to be set.

Does this parameter replace SHMMAX in Linux? If not, how exactly are we supposed to relate it to the bufferpool size (currently set to 320,000 pages)?

Thanks,

Yvan

Mar 6 '06 #1
5 Replies



The Phoenix wrote:
We are running DB2 8.2 (FP9) on Linux (64-bit) with 64-bit instances, and we are reviewing the new features we could activate on the 2.6 kernel. We came across an interesting db2set parameter, DB2_LARGE_PAGE_MEM, but we don't exactly understand how it needs to be set.

Does this parameter replace SHMMAX in Linux? If not, how exactly are we supposed to relate it to the bufferpool size (currently set to 320,000 pages)?

Thanks,

Yvan


Hi Yvan,

You need to be careful when playing with this parameter. Normally, the OS uses 4K memory pages by default. Using 16MB large pages for the DB2 database shared memory set (which includes the bufferpools, locklist, package cache, etc.) may help performance, but it means that you have to carefully provision your system ahead of time, configure how many large pages the OS should allow, and so on. It also means that if you ever want to dynamically alter your bufferpool sizes, increase your locklist size or utility heap size, etc., then you will likely have to reconfigure and reboot your OS as well.

With that being said, to have the database shared memory set allocated
using large pages, run:
db2set DB2_LARGE_PAGE_MEM=DB

To ensure that you have enough large pages configured on your system,
before setting this registry variable, you should activate your
database, and find out how large your database shared memory set
currently is:
db2 get db cfg show detail | grep DATABASE_MEMORY

You'll see an in-memory value which represents how big the database shared memory set is (this should be much easier than calculating by hand how big all the bufferpools, the locklist, the package cache, etc. are). Multiply this by 4096 to get the number of bytes - you'll have to configure the OS to have at least that amount of memory dedicated to the large page pool. Changing the large page pool will likely require you to reboot. After the reboot, set the DB2_LARGE_PAGE_MEM registry variable as mentioned above, and activate your database.
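For illustration only, here is a rough sketch of that arithmetic on a 2.6 kernel. The DATABASE_MEMORY value (350000 pages), the 16MB large page size, and the group GID are assumptions for the example - check /proc/meminfo for the actual large page size on your platform:

# Hypothetical example: suppose DATABASE_MEMORY shows 350000 (4K) pages in memory:
#   350000 * 4096 = 1433600000 bytes (roughly 1.4GB)
# Check the large page size your kernel uses (16MB on ppc64, 2MB on most x86_64):
grep Hugepagesize /proc/meminfo
# For 16MB pages, 1433600000 / 16777216 is about 86 pages - round up and leave headroom:
echo 96 > /proc/sys/vm/nr_hugepages
# To keep the setting across reboots, add "vm.nr_hugepages = 96" to /etc/sysctl.conf.
# On kernels that support it, also let the instance owner's primary group use
# large page shared memory (the GID below is a placeholder):
echo <instance_group_gid> > /proc/sys/vm/hugetlb_shm_group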

Cheers,
Liam.

Mar 6 '06 #2

Thanks for the quick response. Based on your experience, can we really improve the DB performance by going that route?

Mar 6 '06 #3

I guess it really depends :-)

If your total amount of database shared memory is fairly small (a few
hundred MBs or so), I doubt you would notice any real change in
performance. If your database is not finely tuned, you probably won't
notice that much of a difference either (i.e. if you're I/O bound, then
switching to large pages won't buy you too much). If you only have a
few agents active at any one time, you may not notice much of a
difference.

If you have a large database shared memory set, though, with lots of agents, a finely tuned database, and a mainly CPU-bound workload, then you might see a performance improvement of around 5-10%.

If you're not careful though, it's also possible that you'll degrade overall system performance using large pages. Let's say you have 4GB of RAM on your system and your database shared memory set is 2GB, so you configure 2GB of large page memory. That 2GB of large page memory can now only be used by the database shared memory set. With the default 4K pages, if the OS saw heavy demand for memory in the system and only about 1.5GB of your database shared memory set was "hot", the other 500MB of "cold" pages could be swapped out and used by other processes - for private agent memory, sorts for DSS queries, etc. - which could end up giving you better overall performance. With 2GB of large page memory that it can't touch, the OS may end up having to swap out "warmer" pages from the remaining 2GB, leading to more overall swapping (i.e. when it needs to read those warm pages back into memory) and degrading overall performance.

If you're looking for ways to boost performance, I would say using
large page memory for your database shared memory set would be one of
the last things you should look at. There are likely many other things
you can do that will generate better performance improvements, with
fewer risks.

Cheers,
Liam.

Mar 7 '06 #4

Ian
Liam Finnie wrote:

You need to be careful when playing with this parameter. Normally, the OS uses 4K memory pages by default. Using 16MB large pages for the DB2 database shared memory set (which includes the bufferpools, locklist, package cache, etc.) may help performance, ...


Liam,

I thought that the point of using large pages was to allow DB2 to allocate large amounts of memory more quickly - i.e. if allocating 2GB for DBMS, that means only 128 16MB pages are allocated vs. 524,288 4K pages.

However, this (DBMS) allocation only occurs when the database is activated, so I don't understand how this will improve overall database throughput.

Or am I wrong in my understanding of large page support?

Thanks,


Mar 9 '06 #5

Hi Ian,

The main benefit comes from how the OS translates virtual addresses from a process into physical memory addresses. Most hardware platforms use what are called TLBs - translation lookaside buffers. Entries in the TLBs are the higher-order portion of a virtual memory address - they do not include the low-order 12 bits for 4K pages, or the low-order 24 bits for 16MB pages, since those are the offset into the physical memory page. There are typically a very small number of these hardware TLB entries, so using large page memory allows the OS to translate a larger range of virtual memory with the same number of TLB entries. A TLB miss means that the CPU has to stall waiting for the virtual-to-physical translation to occur - the OS then typically takes over, fills in the required TLB entry, and things proceed. So, the way large pages can improve throughput is by achieving better TLB hit rates at runtime, reducing the likelihood of CPU stalls.
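
To put rough numbers on it (the TLB entry count here is just an assumption - real values are hardware-specific), the difference in how much memory the TLB can cover looks like this:

# Purely illustrative: assume a CPU with 1024 TLB entries
#   with 4K pages:   1024 * 4KB  =  4MB of address space covered by the TLB
#   with 16MB pages: 1024 * 16MB = 16GB of address space covered by the TLB
echo "$((1024 * 4)) KB vs $((1024 * 16)) MB of TLB reach"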

Another saving is in the size of the page translation tables for a
process. Similar to the TLB discussion above, the OS maintains the
list of all valid virtual-to-physical address translations for a
process (the TLBs only maintain a very small subset of this total).
The smaller this table is, the lower the context switch time for
processes, and the better performance you'll get.
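
As a similarly rough sketch (8 bytes per page-table entry is an assumption typical of 64-bit platforms), mapping a 2GB shared memory set works out to something like:

# Rough arithmetic, assuming 8-byte page-table entries:
#   with 4K pages:   2GB / 4KB  = 524288 entries * 8 bytes, about 4MB of page tables
#   with 16MB pages: 2GB / 16MB =    128 entries * 8 bytes, about 1KB of page tables
echo "$((2147483648 / 4096 * 8)) bytes vs $((2147483648 / 16777216 * 8)) bytes"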

This is why I mentioned that if you only have a few database agents, or a fairly small amount of database shared memory, you probably won't see much difference from using large pages. The OS won't have to do as many context switches, and your TLB hit rates will likely be high enough that large pages won't contribute much. Further, if your workload is I/O bound, large pages just mean that the database agents may finish their useful, CPU-intensive work more quickly and end up waiting on I/O more, yielding no net performance benefit.

Cheers,
Liam.

Mar 9 '06 #6
