
postgresql +AMD64 +big address spaces - does it work?

If I bought one of these boxes/OS combos as a postgresql database server,
would postgresql be able to make the best use of it with a huge (e.g. 40GB)
database?

Box: HP ProLiant DL585, with 4 AMD64 CPUs and 64GB of RAM. (other
vendor options also exist)
OS: SUSE enterprise 8 linux for AMD

(links to product info at bottom)

e.g. Could postgresql make use of most of this address space to map a huge
database (say 40GB) all paged into RAM at once?

----
According to Suse, a process running on such a setup can individually
address 0.5 terabytes of address space, and at any one point, with the
hardware above, up to 64GB (minus a bit!) of this address space could be
paged into physical memory at any one time.
----
According to the docs I could find, I just need to do the following:

1. set postgresql.conf->shared_buffers to a value like (40GB/8KB),

2. Check cat /proc/sys/kernel/shmmax is big enough, or successfully increase
it to > 40GB
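
For concreteness, here's roughly what I mean (a sketch only - the
shared_buffers figure assumes the default 8KB page size, and the shmmax
value is just 40GB expressed in bytes):

    # postgresql.conf: shared_buffers is counted in 8KB pages
    shared_buffers = 5242880          # 40GB / 8KB = 5242880 pages

    # check the kernel's shared memory ceiling (in bytes)...
    cat /proc/sys/kernel/shmmax
    # ...and, as root, raise it past 40GB if needed
    echo 42949672960 > /proc/sys/kernel/shmmax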

From experience that's a long way from knowing it will work. Does anyone
have any experience at all with such big memory configurations?
Many thanks,
Andy

______
My references:
Suse: www.suse.com/amd64 and
http://www.suse.com/en/business/prod...les8_amd64.pdf
HP: http://h18004.www1.hp.com/products/s...585/index.html


Nov 23 '05 #1
"Andy B" <ab************ *********@bluey onder.co.uk> writes:
> 1. set postgresql.conf->shared_buffers to a value like (40GB/8KB),


That is the very last thing you want to do (even if you could, which you
can't). Let the OS manage the memory --- the kernel can do just fine at
caching large files in RAM. I've never seen any evidence that it helps
to set shared_buffers higher than a few tens of thousands.
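
For scale, "a few tens of thousands" of buffers is only a couple of
hundred megabytes - something like this (an illustrative figure, not a
tuned recommendation):

    shared_buffers = 20000            # 20000 * 8KB pages = ~160MB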

See the pgsql-performance archives for much more on this subject.

regards, tom lane


Nov 23 '05 #2
On Thu, 2004-07-01 at 06:41, Andy B wrote:
> If I bought one of these boxes/OS combos as a postgresql database server,
> would postgresql be able to make the best use of it with a huge (e.g. 40GB)
> database?
>
> Box: HP ProLiant DL585, with 4 AMD64 CPUs and 64GB of RAM. (other
> vendor options also exist)
> OS: SUSE enterprise 8 linux for AMD
>
> (links to product info at bottom)
>
> e.g. Could postgresql make use of most of this address space to map a huge
> database (say 40GB) all paged into RAM at once?
>
> ----
> According to Suse, a process running on such a setup can individually
> address 0.5 terabytes of address space, and at any one point, with the
> hardware above, up to 64GB (minus a bit!) of this address space could be
> paged into physical memory at any one time.
> ----
> According to the docs I could find, I just need to do the following:
>
> 1. set postgresql.conf->shared_buffers to a value like (40GB/8KB),
>
> 2. Check cat /proc/sys/kernel/shmmax is big enough, or successfully increase
> it to > 40GB
>
> From experience that's a long way from knowing it will work. Does anyone
> have any experience at all with such big memory configurations?


Sort of. PostgreSQL could use however much of it you decided to give it
as a buffer for individual queries, and the newer caching algo should
handle large amounts of data well.

However, PostgreSQL isn't currently in the business, so to speak, of
holding all the data you've accessed in the last X minutes in a great
big buffer. The reason for this is that PostgreSQL uses shared memory,
and on many architectures, while shared memory is good for letting
individual backends communicate quickly, it's not as fast at managing
really huge chunks of data.

So, PostgreSQL is designed to let your kernel do all this. The buffers
in PostgreSQL are designed to hold the current working set, no more.
The buffers in the kernel cache are designed to handle gigs of data, and
to handle it quickly. So, if it's in the kernel cache, then PostgreSQL IS
capable of utilizing the extra memory, in the sense that the kernel is
using it for cache.

All modern flavors of unix have pretty good caching built in, with
little tuning needed.
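
If you want to watch the kernel doing this, free(1) shows it directly
(just a pointer at the tool - the actual numbers will vary with your
workload):

    $ free -m
    # the "cached" column is the kernel page cache; on a quiet 64GB box
    # serving a 40GB database, most of the database ends up counted there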

Nov 23 '05 #3
> "Andy B" <ab************ *********@bluey onder.co.uk> writes:
1. set postgresql.conf->shared_buffe rs to a value like (40GB/8KB),


That is the very last thing you want to do (even if you could, which you
can't). Let the OS manage the memory --- the kernel can do just fine at
caching large files in RAM. I've never seen any evidence that it helps
to set shared_buffers higher than a few tens of thousands.


My experience is the same; keep shared_buffers fairly small.

But I'm curious why we *can't* do 40GB on a 64 bit machine? I'm testing an
AMD64 + FedoraCore2 + PG7.4 installation right now and I am expecting to
throw lots of memory at it soon...

Glen Parker

Nov 23 '05 #4
"Glen Parker" <gl******@nwlin k.com> writes:
But I'm curious why we *can't* do 40GB on a 64 bit machine?


Well, the "can't" part is simple: our code to calculate the size of our
shared-memory request uses int32 arithmetic. So we definitely can't go
higher than 4GB shared memory, and probably not higher than 2GB.

This could be fixed if anyone was sufficiently motivated, although you'd
have to be careful about maintaining portability to machines that don't
think size_t is 8 bytes, or don't have any 8-byte integer type at all.

However, given the complete lack of evidence that it's useful to boost
shared_buffers above the few-hundred-meg range, I can't see anyone
spending time on doing it...
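
To make the arithmetic problem concrete, here's a toy illustration (not
the actual Postgres code, just the same int32 overflow in miniature):

    #include <stdio.h>

    int main(void)
    {
        int nbuffers = 5242880;         /* 40GB worth of 8KB buffers */
        int request = nbuffers * 8192;  /* int32 multiply overflows here:
                                           technically undefined behavior;
                                           on typical hardware it wraps,
                                           and 40GB wraps to exactly 0 */
        printf("%d\n", request);
        return 0;
    }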

The more interesting part of this discussion is why there isn't any such
evidence. You can find this question beat to death in the PG list
archives, but a quick sketch is:

1. PG's algorithms for managing its buffers are not obviously better
than those commonly found in kernels. In particular there are several
situations in which PG does linear scans of all available buffers.
This is OK for NBuffers counts up to some thousands, but we would need
some serious work to make NBuffers in the millions work well.

2. The whole argument for buffering disk pages in memory is very
dependent on the assumption that your buffers are actually in memory.
However on most Unixen there is *no* guarantee that the shared memory
we request from the kernel will not get swapped out --- and in fact the
larger the shmem request we make, the more likely this will happen.
A disk buffer that gets swapped to swap space is completely
counterproductive, as it's costing you at least double the I/O work
compared to just re-reading the original file. (There's a quick vmstat
check for this below.) So in practice it's better to keep the
shared-buffer arena small enough that all of it is "hot" (heavily used)
and not likely to get seen as a swap candidate by

3. A large fixed-size shared-buffer arena is the worst of all possible
worlds in terms of dynamic memory management. The real-world situation
is that RAM has to be shared among PG shared buffers, private memory of
PG backend processes, and (usually) workspace of other non-Postgres
processes that are running on the server machine. The kernel is in a
far better position than we are to evaluate these competing demands and
make the right adjustments to changing situations. The kernel can
easily drop cached disk pages from its buffers to allocate more RAM to
process workspace, or the reverse when process demands drop; but we
can't change the size of the shared-buffer arena on the fly.
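
Incidentally, you can catch the pathology in point 2 in the act with
vmstat:

    $ vmstat 5
    # watch the si/so (swap-in/swap-out) columns: nonzero values while the
    # database is busy mean parts of the shared-buffer arena are living on
    # swap, costing double the I/O of just re-reading the table files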

Bottom line: let the kernel manage as much memory as possible.

regards, tom lane


Nov 23 '05 #5
Hello again and thanks to everyone for the replies so far.

Tom, and all, I hear what you are all saying, and furthermore, in cases
where the amount of RAM is much smaller than the database size, I agree
totally. However, I'm *only* talking about a particular scenario which, till
now, has really only been true of small databases, and that is:
--------------------------------
RAM - (total size of database) = (easily enough RAM for everything else to
run smoothly on the server).
--------------------------------

I get the feeling that, regardless of 64-bit support or not, the
*concept* of a database which just happens to all easily fit within RAM
isn't one that gets the thumbs down...

...and I think I must be missing something!

If you bear with me, could I just give you a better idea of where I'm coming
from, because I think I may have missed an important concept, or might be
able to sway you. (I don't mind which way this goes, other than my server
dreams will be in tatters if you're right and I'm confused, so be gentle
with me!)

Before I start, just to repeat that though I have only been using
postgresql for 18 months and haven't run a huge database before, I do have a
large amount of experience of designing memory-intensive multi-threaded real
time applications running in a unix virtual memory environment (specifically
SGI IRIX boxes), and am very comfortable with the postgresql implementation
model. (and I did choose it over MySQL, so I am a fan.)

Ok!

---------------------------
Where I'm coming from!
---------------------------

I am developing a postgresql based national website, which hopefully will
have a million or two users if successful (though hopefully not all at once
:) )

The details aren't important, other than to say that the functionality I'm
planning is quite advanced - it's more of a web application than website as
such, and the database is going to get an absolute hammering. Most
importantly, a large percentage of the data within the database is going to
be 'active' (i.e. the target of queries) on a minute by minute basis.

Luckily for me, it isn't due to go live just yet :)

There is now 'proper' 64 bit support in Linux for both Intel and AMD and
we've already seen a few linux boxes with half a terabyte of RAM - shareable
as a single address space. (e.g. boxes like the Altix 3000 from SGI) Even
commodity server boxes (such as the HP DL585) now provide the environment
for multiple threads to share a single huge address space backed up by up to
64GB of RAM, accessible by 4 CPUs. I'm not fully up to speed on the hardware
side, but this is only going one way: up!

So, with this new hardware and 64 bit OSs, it is theoretically possible to
dramatically reduce the IO bottlenecks on a database server by simply
sticking in enough RAM so that the database ends up in RAM, leaving most of
the IO being 'O' for syncing updates.

We're talking about a High Performance Database setup, in other words.

Surely this is a very desirable setup for someone like me?
____

Here's how it could work:

1. The postgresql postmaster (is that right?) - the process responsible for
setting up the shared buffer cache - does what it normally does, except
that it works with 64 bit addressing so your cache can be 40MB, or 200GB.
(CPU limited to 64GB currently in most 64bit processor systems but this will
change)

2. You setup the buffer cache size so that it could easily hold *the entire
database*. (Or the active part... the decision of how big is subject to all
the current guidelines. The only difference is that there is no 2GB limit).

e.g. on my 64GB system, my entire database is 16GB and I instruct postgresql
to setup a 20GB shared buffer cache, in order to allow for headroom for
updates.

3. At the postgresql level, database pages ('buffers?') get cached, and stay
in the buffer cache, because there is enough space for everything to reside
in the postgresql cache and more.

4. Additionally, at the virtual memory level, Linux isn't going to kick
pages of this huge shared buffer cache out to disk, because there's plenty
of RAM kicking around for everything else, and in fact, the target for such
a machine is for it to be running at close to zero swap activity. We're
talking about a high performance system after all, and swap activity is the
last thing you want in a high performance setup.

Note that Linux is still managing *all the memory*. I'm not interfering at
all with virtual memory. (Though IRIX's mpin() is very handy... I must find
out if Linux has a similar call - see the sketch after this list.)

5. Assuming your database *does* easily fit in the shared buffer cache,
queries will run like the wind, limited only by CPU availability and the
performance of the front end web servers + connections to deliver queries
and accept the results.

6. You get more bang for your buck, especially in systems with a heavy load
of concurrent postgresql backend activity, which would normally put enormous
strain on the IO system.

7. An extra benefit is that the database application developer can
contemplate query plans which, under the small cache, big database scenario
are just not sensible in a high load environment. (Can you tell that I've
already done this?!)
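
Regarding the mpin() aside in point 4: I gather the POSIX mlock()
family is the Linux analogue, though I haven't verified this myself.
Conceptually, pinning the arena would look something like this sketch -
names and sizes are illustrative only, and it needs root or a generous
RLIMIT_MEMLOCK:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t arena = 64UL * 1024 * 1024;   /* 64MB for the demo */
        void *buf = malloc(arena);
        if (buf == NULL)
            return 1;

        /* lock the pages into physical RAM so the VM won't swap them */
        if (mlock(buf, arena) != 0) {
            perror("mlock");
            free(buf);
            return 1;
        }

        /* ... use the pinned buffer ... */

        munlock(buf, arena);
        free(buf);
        return 0;
    }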
-------------------------------------------------

Anyway - current implementation aside, what theoretically is stopping you
all from jumping with joy at the prospect of running nationwide databases
from RAM?

Thanks for your input so far, and if you think I should move this thread
elsewhere, I'd be happy to.

I'd also be interested in helping with making the changes to allow the
shared buffer cache to be 64 bit friendly, though I've never helped on an
open source project, and my previous mistakes have only affected hundreds of
people, not millions ;)

Regards,
Andy


Nov 23 '05 #6
> I get the feeling that, regardless of 64-bit support or not, the
> *concept* of a database which just happens to all easily fit within RAM
> isn't one that gets the thumbs down...


Oops, I meant to say '*is*' one that gets the thumbs down...
Nov 23 '05 #7
Ok - just to end this thread, I think I understand what I was missing.

I'll stop this thread, and just comment on my first thread.

Thank you everyone who helped
Nov 23 '05 #8
"Andy B" <ab************ *********@bluey onder.co.uk> writes:
I get the feeling that, that regardless 64bit support or not, that the
*concept* of a database which just happens to all easily fit within RAM
isn't one that gets the thumbs down...


Oops, I meant to say '*is*' one that gets the thumbs down...


No, to the contrary, having all the data fit easily within RAM is
quite desirable. One of my coworkers is investigating the
entertaining possibility of hooking up SSDs to some of our systems to
entirely eliminate disk I/O for WAL. (The fun part then is to see
what more can be done with another 3GB of space on the SSD that can
eliminate a bunch more I/O :-)...)

The thing that gets "thumbs down" is the notion of trying to _force_
that to take place by maximizing shared buffer size, based on the
assumption that such a strategy is optimal for the purpose.

By all means, get a big AMD box with plenty of RAM; just _don't_
assume that tweaking shared buffers is the "majick trick" that will
make it all work 50x faster.
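
If anyone wants to try the WAL-on-SSD experiment themselves, the usual
trick is simply to relocate pg_xlog onto the fast device and symlink it
back (a sketch - /mnt/ssd and $PGDATA are placeholders for your own
paths):

    $ pg_ctl stop
    $ mv $PGDATA/pg_xlog /mnt/ssd/pg_xlog
    $ ln -s /mnt/ssd/pg_xlog $PGDATA/pg_xlog
    $ pg_ctl start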
--
output = reverse("moc.enworbbc" "@" "enworbbc")
http://cbbrowne.com/info/internet.html
"A lot of people come to this newsgroup and do nothing but complain
about Lisp. I think maybe they are such heavy complainers that they
think they read comp.lain.lisp." -- Erik Naggum
Nov 23 '05 #9
On Tue, 06 Jul 2004 15:27:54 -0400, Chris Browne <cb******@acm.org> wrote:
"Andy B" <ab************ *********@bluey onder.co.uk> writes:
I get the feeling that, that regardless 64bit support or not, that the
*concept* of a database which just happens to all easily fit within RAM
isn't one that gets the thumbs down...


Oops, I meant to say '*is*' one that gets the thumbs down...


No, to the contrary, having all the data fit easily within RAM is
quite desirable. One of my coworkers is investigating the
entertaining possibility of hooking up SSDs to some of our systems to
entirely eliminate disk I/O for WAL. (The fun part then is to see
what more can be done with another 3GB of space on the SSD that can
eliminate a bunch more I/O :-)...)


My experience in tuning Oracle and DB2 is that temp tablespaces are
what belong there. Things that are used when doing joins of large
tables. Honestly, though, it'll depend on your exact application
model. I've also had good luck putting indexes on SSD, because often
you do 3-4 hits of the index before a single seek on the data table.
So the reduction in access time is proportionally larger. I've not
done the numbers recently, but when 10K RPM drives were "the big
thing," I had excellent luck leaving my data tables on 7200rpm drives
that were high-capacity and moving the indexes to 10K drives.

This is also a place where the trade-offs between duplexing (mirroring
with read-balancing) and RAID-5 can become huge.

Chris
--
| Christopher Petrilli
| pe******@gmail.com


Nov 23 '05 #10
