
DB cache size strategies


Here are some of my current notions on pgsql performance tuning, strictly
as it relates to pgsql tuning parameters in the context of a dedicated
linux or hpux server. I'm particularly focusing on the shared_buffers
setting. I invite any corrective or confirming feedback. I realize there
are many other hugely important performance factors outside this scope.

One key aspect of pgsql performance tuning is to adjust the memory
consumption settings (shared_buffers, sort_mem, etc) large enough to hold
as much of the database in shared memory as possible while not causing page
swap-ins. I understand that both page swap-outs and swap space usage are
normal and OK, but lots of page swap-ins are bad. In other words, for
absolute fastest performance, we want a database buffer cache hit rate as
close to 100% as possible.

Now, how do we accurately measure this cache hit rate at any one point?
Well, here's what I currently know as the best measure for a given cluster
of databases:

SELECT SUM(pg_stat_get_db_blocks_hit(d.oid))::float8
     / NULLIF(SUM(pg_stat_get_db_blocks_fetched(d.oid)), 0) AS hit_rate
FROM pg_database d;
-- the ::float8 cast avoids integer division (which would truncate the
-- ratio to 0 or 1), and NULLIF avoids a divide-by-zero on a fresh cluster

How do we identify when we have sub-optimal pgsql configuration parameters
in this regard? My understanding: to whatever extent the hit rate is
below 100%, our shared buffers are not large enough to hold the data we're
accessing.
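
To see which databases drag the aggregate down, the same counters are
exposed per database through the pg_stat_database view (assuming the stats
collector is enabled); a minimal sketch:

SELECT datname,
       blks_hit::float8 / NULLIF(blks_hit + blks_read, 0) AS hit_rate
FROM pg_stat_database
ORDER BY hit_rate;
-- blks_hit + blks_read is the total blocks fetched; the lowest hit_rate
-- rows show which databases are overflowing the cache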

How do we fix the problem? We want to increase the shared_buffers setting
and/or some combination of sort_mem, wal_buffers, and vacuum_mem, reset our
stats, and resume monitoring. I assume which of these we increase for
maximum benefit depends on the sources of the cache overflow (routine
queries, sorting, vacuuming), but that shared_buffers is the most impactful
in practice. Again, we want to do this without causing page swap-ins.
When you see swap-ins, you've gone too far. If we experience swap-ins and
a cache hit rate significantly below 100%, then we need more RAM or lower
RAM consumption.
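
For the reset-and-remeasure step, a minimal sketch (pg_stat_reset() clears
the collected statistics for the current database only, so run it in each
database you're watching):

SELECT pg_stat_reset();
-- ...let a representative workload run for a while, then re-check:
SELECT SUM(pg_stat_get_db_blocks_hit(d.oid))::float8
     / NULLIF(SUM(pg_stat_get_db_blocks_fetched(d.oid)), 0) AS hit_rate
FROM pg_database d;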

The end result is that page swap-ins (sar -W) and cache hit rate (query
above) are two key measures to use as warning bells.
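
To localize which tables are behind the misses, pg_statio_user_tables
breaks the same counters down per table (this assumes block-level stats
collection is enabled, e.g. the stats_block_level setting); a sketch:

SELECT relname, heap_blks_read, heap_blks_hit,
       heap_blks_hit::float8
         / NULLIF(heap_blks_hit + heap_blks_read, 0) AS heap_hit_rate
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;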

Glad to hear your thoughts.

Ed

Nov 22 '05 #1
On Friday January 30 2004 2:33, Ed L. wrote:

> One key aspect of pgsql performance tuning is to adjust the memory
> consumption settings (shared_buffers, sort_mem, etc) large enough to
> hold as much of the database in shared memory as possible while not
> causing page swap-ins. ... In other words, for absolute fastest
> performance, we want a database buffer cache hit rate as close to 100%
> as possible.


I'm also curious about the relationship of the DB shared buffer cache to
the linux/hpux kernel caches. In particular, if the block being retrieved
by pgsql was in the kernel's cache but not in the DB cache, thereby forcing
a read() system call, what kind of quantitative difference in performance
would one expect compared with block retrievals coming straight from the DB
cache? I would think they'd differ only by something on the order of
microseconds. Is the linux kernel disk cache normally a duplicate of much
of what is in the DB cache? For linux, does the kernel cache use only
"available" memory until a program needs it, while the pgsql DB cache
memory is guaranteed at startup?

TIA.

Ed


Nov 22 '05 #2
On Fri, Jan 30, 2004 at 03:19:56PM -0700, Ed L. wrote:
> I'm also curious about the relationship of the DB shared buffer cache to
> the linux/hpux kernel caches. In particular, if the block being
> retrieved by pgsql was in the kernel's cache but not in the DB cache,
> thereby forcing a read() system call, what kind of quantitative
> difference in performance would one expect compared with block retrievals
> coming straight from the DB cache? ... Is the linux kernel disk cache
> normally a duplicate of much of what is in the DB cache? For linux, does
> the kernel cache use only "available" memory until a program needs it,
> while the pgsql DB cache memory is guaranteed at startup?


Whenever the database needs a block not in memory, it gets loaded and ends
up in both the OS cache and the DB cache. The difference between getting a
block out of the DB cache and the OS cache is very, very small compared to
loading it off disk.

Hence, the larger your DB cache, the less overall data you can cache due to
the duplication. OTOH, postgres can only directly use the DB cache and can
only access the OS cache indirectly.

What I try to aim for is to keep the DB cache somewhere near the "working
set", i.e. the amount of space used by all the system tables and commonly
used tables in the system. It's shared by all postgres servers in a
cluster, so keep that in mind. Also the OS cache may be shared by other
processes, so your results will depend on what other things are happening.
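
(To put a rough number on that working set: pg_class.relpages is a
per-relation page count that is only as fresh as the last VACUUM/ANALYZE,
so treat this sketch as an estimate:

SELECT relname, relpages * 8 / 1024 AS approx_mb   -- 8 KB pages
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY relpages DESC
LIMIT 10;

Summing the relations you consider hot gives a starting point for sizing
the DB cache.)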

Finally, at least on Linux, the shared memory postgres uses for cache can
also be swapped out making it very difficult to determine the correct value.

Hope this helps,

--
Martijn van Oosterhout <kl*****@svana.org> http://svana.org/kleptog/

Nov 22 '05 #3
But keep in mind that all these strategies need to be reevaluated with
7.5 and its ARC.
Jan

Martijn van Oosterhout wrote:
> What I try to aim for is to keep the DB cache somewhere near the
> "working set", i.e. the amount of space used by all the system tables
> and commonly used tables in the system. ...


Nov 22 '05 #4
On Friday January 30 2004 6:06, Martijn van Oosterhout wrote:
> On Fri, Jan 30, 2004 at 03:19:56PM -0700, Ed L. wrote:
> > I'm also curious about the relationship of the DB shared buffer cache
> > to the linux/hpux kernel caches. ...
>
> Whenever the database needs a block not in memory, it gets loaded and
> ends up in both the OS cache and the DB cache. The difference between
> getting a block out of the DB cache and the OS cache is very, very small
> compared to loading it off disk.


So the OS cache and the DB cache may well have duplicate copies of the
data? With the OS cache being more susceptible to preemption by other
processes?

> Finally, at least on Linux, the shared memory postgres uses for cache
> can also be swapped out, making it very difficult to determine the
> correct value.


What would cause DB shared memory to be swapped out in linux? I thought it
was all pre-allocated at pgsql startup. Is this the "well, it's not really
pre-allocated but rather supplied on demand" story? I think I saw some
sort of kernel parameter controlling a similar (same?) policy...

TIA.


Nov 22 '05 #5
On Tue, Feb 10, 2004 at 01:20:32PM -0700, Ed L. wrote:
> > Whenever the database needs a block not in memory, it gets loaded and
> > ends up in both the OS cache and the DB cache. The difference between
> > getting a block out of the DB cache and the OS cache is very, very
> > small compared to loading it off disk.
>
> So the OS cache and the DB cache may well have duplicate copies of the
> data? With the OS cache being more susceptible to preemption by other
> processes?


Will have. Any time postgresql does a read(), the kernel is copying a
buffer from its cache to postgresql's internal cache. The kernel cache is
more susceptible to reuse by other processes, but the data is still held
twice.

> > Finally, at least on Linux, the shared memory postgres uses for cache
> > can also be swapped out, making it very difficult to determine the
> > correct value.
>
> What would cause DB shared memory to be swapped out in linux? I thought
> it was all pre-allocated at pgsql startup. Is this the "well, it's not
> really pre-allocated but rather supplied on demand" story? I think I
> saw some sort of kernel parameter controlling a similar (same?)
> policy...


Memory pressure. The same rules apply as for everything else, I believe.
If you allocate a huge amount to shared memory, fill it up, and then don't
use it for a while when the rest of the system needs the memory, it'll get
swapped out.

I generally give Postgresql about 64-128MB of shared memory, which covers
all of the system tables and the most commonly used small tables. The rest
of the memory (this is a 1GB machine) I leave for the kernel to manage for
the very large tables.

I think giving postgresql half your memory is counterproductive.

--
Martijn van Oosterhout <kl*****@svana.org> http://svana.org/kleptog/

Nov 22 '05 #6
On Tuesday February 10 2004 1:42, Martijn van Oosterhout wrote:
> I generally give Postgresql about 64-128MB of shared memory, which
> covers all of the system tables and the most commonly used small tables.
> The rest of the memory (this is a 1GB machine) I leave for the kernel to
> manage for the very large tables.

Interesting. Why leave very large tables to the kernel instead of the db
cache? Assuming a dedicated DB server and a DB smaller than available RAM,
why not give the DB enough RAM to get the entire DB into the DB cache?
(Assuming you have the RAM).

Is there a point of diminishing returns for DB cache size *prior* to
running out of data to load into, and access from, that cache? Anyone know
how to gauge that point? Maybe shared memory management overhead versus
the performance difference between DB cache reads and OS cache reads? We
have some pretty intense DB loads, but that feels like counting angels on
the head of a pin...

> I think giving postgresql half your memory is counterproductive.


I get the sense it really depends on the nature of the apps running on the
box. If it's a dedicated server (not sharing with an app, a web server,
etc), then giving it enough to get the entire data set in memory without
swapping is ideal, no matter whether that's 10MB or 10GB of DB cache. If
it's a cluster server running alongside lots of other apps that all
generally have the same priority, and assuming disk cache reads are nearly
as fast as DB cache reads, it seems the less DB cache you have, the better;
let all apps leverage the kernel cache equally. Would you agree?

One troublesome case for which I wonder if there is an easy tweak
unbeknownst to me: a huge table that gets lots of random reads and writes.
It's too big for the cache, and the volume of random queries is such that
it's a disk I/O drain. Of course, more memory or better data modeling
could help. Maybe that is the worst-case scenario.
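
(To quantify how badly such a table misses the cache, the per-table I/O
stats give a direct read; a sketch, where "big_table" is a hypothetical
name and block-level stats collection is assumed to be enabled:

SELECT heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
FROM pg_statio_user_tables
WHERE relname = 'big_table';  -- hypothetical table name

Ever-growing heap_blks_read relative to heap_blks_hit would confirm the
random reads are blowing through the cache.)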

Thx.


Nov 22 '05 #7
On Tue, 10 Feb 2004, Ed L. wrote:
> On Tuesday February 10 2004 1:42, Martijn van Oosterhout wrote:
> > I generally give Postgresql about 64-128MB of shared memory, which
> > covers all of the system tables and the most commonly used small
> > tables. The rest of the memory (this is a 1GB machine) I leave for
> > the kernel to manage for the very large tables.
>
> Interesting. Why leave very large tables to the kernel instead of the
> db cache? Assuming a dedicated DB server and a DB smaller than available
> RAM, why not give the DB enough RAM to get the entire DB into the DB
> cache? (Assuming you have the RAM).


Because the kernel is more efficient (right now) at caching large data
sets.

With the ARC cache manager that will likely wend its way into 7.5, it's
quite likely that postgresql will be able to efficiently handle a larger
cache, but it will still be a shared memory cache, and those are still
usually much slower than the kernel's cache.

Nov 22 '05 #8
On Tuesday February 10 2004 3:48, scott.marlowe wrote:
> On Tue, 10 Feb 2004, Ed L. wrote:
> > Interesting. Why leave very large tables to the kernel instead of the
> > db cache? ...
>
> Because the kernel is more efficient (right now) at caching large data
> sets.
>
> With the ARC cache manager that will likely wend its way into 7.5, it's
> quite likely that postgresql will be able to efficiently handle a larger
> cache, but it will still be a shared memory cache, and those are still
> usually much slower than the kernel's cache.


Hmmm. Others have asserted/assumed they'd be roughly equivalent. It'd be
interesting to see some real data measuring the performance of the shared
mem cache vs. kernel cache. Anyone know of existing benchmarks?

Nov 22 '05 #9
On Tue, 10 Feb 2004, Ed L. wrote:
> On Tuesday February 10 2004 3:48, scott.marlowe wrote:
> > Because the kernel is more efficient (right now) at caching large
> > data sets. ...
>
> Hmmm. Others have asserted/assumed they'd be roughly equivalent. It'd
> be interesting to see some real data measuring the performance of the
> shared mem cache vs. kernel cache. Anyone know of existing benchmarks?


Testing from various folks has shown that there is a maximum size of
somewhere around 256 meg or so where the returns diminish pretty quickly,
unless your working data sets are larger than that. I.e. setting up a
database that needs 20 megs to throw data around in with 200 megs of
buffers is usually slower than handing it 32 megs of buffer and letting
the kernel do the rest.
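
(For reference: shared_buffers in current releases is a count of 8 KB
pages, not bytes, so those figures translate to postgresql.conf entries
roughly like

shared_buffers = 4096       # 4096 * 8 KB = 32 MB
shared_buffers = 32768      # 32768 * 8 KB = 256 MB

with only one of the two actually set, of course.)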

But ARC may change all of that. I'll have to grab a CVS snapshot and see
how it behaves with large datasets and shared buffers...

Nov 22 '05 #10


