how much ram do i give postgres?

I know this is kinda a debate, but how much ram do I give postgres?
I've seen many places say around 10-15% or some say 25%....... If all
this server is doing is running postgres, why can't I give it 75%+?
Should the limit be as much as possible as long as the server doesn't
use any swap?

Any thoughts would be great, but I'd like to know why.

Thanks.

-Josh

Nov 23 '05 #1
Josh Close <na****@gmail.com> writes:
I know this is kinda a debate, but how much ram do I give postgres?
I've seen many places say around 10-15% or some say 25%....... If all
this server is doing is running postgres, why can't I give it 75%+?
Should the limit be as much as possible as long as the server doesn't
use any swap?


The short answer is no; the sweet spot for shared_buffers is usually on
the order of 10000 buffers, and trying to go for "75% of RAM" isn't
going to do anything except hurt. For the long answer see the
pgsql-performance list archives.
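
As an illustrative sketch only (assuming the default 8 kB block size, so 10000 buffers
is roughly 80 MB), that translates into a postgresql.conf line like:

    shared_buffers = 10000        # working area for the backends, not a data cache

with the rest of the RAM left to the operating system's file cache.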

regards, tom lane

Nov 23 '05 #2
On Tue, 19 Oct 2004 17:42:16 -0400, Tom Lane <tg*@sss.pgh.pa.us> wrote:
The short answer is no; the sweet spot for shared_buffers is usually on
the order of 10000 buffers, and trying to go for "75% of RAM" isn't
going to do anything except hurt. For the long answer see the
pgsql-performance list archives.

regards, tom lane


Well, I didn't find a whole lot in the list archives, so I emailed
that list with a few more questions. My postgres server is just
crawling right now :(

-Josh

Nov 23 '05 #3
On 19 Oct 2004 at 17:35, Josh Close wrote:
Well, I didn't find a whole lot in the list archives, so I emailed
that list with a few more questions. My postgres server is just
crawling right now :(


Unlike many other database engines, the shared buffers in Postgres are
not a private cache of the database data. They are a working area shared
between all the backend processes. This needs to be tuned for the number
of connections and the overall workload, *not* for the amount of your database
that you want to keep in memory. There is still lots of debate about what
the "sweet spot" is. Maybe there isn't one, but it's not normally 75% of
RAM.

If anything, the effective_cache_size needs to be 75% of (available)
RAM as this is telling Postgres the amount of your database the *OS* is
likely to cache in memory.
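
As a rough sketch (numbers purely illustrative, assuming a machine with about 1 GB of
RAM and the 8 kB page size):

    effective_cache_size = 98304    # ~768 MB, about 75% of 1 GB, counted in 8 kB pages

This setting doesn't allocate anything; it just tells the planner how much caching to
expect from the OS.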

Having said that, I think you will need to define "crawling". Is it
updates/inserts that are slow? This may be triggers/rules/referential
integrity checking etc that is slowing it. If it is selects that are slow, this
may be incorrect indexes or sub-optimal queries. You need to show us
what you are trying to do and what the results are.

Regards,
Gary.

Nov 23 '05 #4
On Wed, 20 Oct 2004 08:00:55 +0100, Gary Doades <gp*@gpdnet.co.uk> wrote:
Unlike many other database engines, the shared buffers in Postgres are
not a private cache of the database data. They are a working area shared
between all the backend processes. This needs to be tuned for the number
of connections and the overall workload, *not* for the amount of your database
that you want to keep in memory. There is still lots of debate about what
the "sweet spot" is. Maybe there isn't one, but it's not normally 75% of
RAM.

If anything, the effective_cache_size needs to be 75% of (available)
RAM as this is telling Postgres the amount of your database the *OS* is
likely to cache in memory.

Having said that, I think you will need to define "crawling". Is it
updates/inserts that are slow? This may be triggers/rules/referential
integrity checking etc that is slowing it. If it is selects that are slow, this
may be incorrect indexes or sub-optimal queries. You need to show us
what you are trying to do and what the results are.


It's slow due to several things happening all at once. There are a lot
of inserts and updates happening. There is periodically a bulk insert
of 500k - 1 million rows. I'm doing a vacuum analyze every hour due to
the number of transactions happening, and a vacuum full every night.
All this has caused selects to be very slow. At times, a
"select count(1)" from a table will take several minutes. I don't think
selects would have to wait on locks held by inserts/updates, would they?

I would just like to do anything possible to help speed this up.

-Josh

Nov 23 '05 #5
It's slow due to several things happening all at once. There are a lot
of inserts and updates happening. There is periodically a bulk insert
of 500k - 1 million rows. I'm doing a vacuum analyze every hour due to
the number of transactions happening, and a vacuum full every night.
All this has caused selects to be very slow. At times, a
"select count(1)" from a table will take several minutes. I don't think
selects would have to wait on locks held by inserts/updates, would they?

I would just like to do anything possible to help speed this up.

If there are really many rows in the table, select count(1) will be a
bit slow, because PostgreSQL uses a sequential scan to count the rows.
For other kinds of queries, check whether there is an index on the
search condition, or use the EXPLAIN command to see the query plan;
that would help greatly.

By the way, what version of PostgreSQL are you running? Older versions
(< 7.4?) still suffer from index space bloat.
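
For example (table name hypothetical), you can confirm what the planner is doing with:

    EXPLAIN ANALYZE SELECT count(1) FROM some_big_table;

If the plan is a Seq Scan over millions of rows, the count is simply paying the cost of
reading the whole table.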

regards

Laser

Nov 23 '05 #6
On Wed, 2004-10-20 at 07:25, Josh Close wrote:
It's slow due to several things happening all at once. There are a lot
of inserts and updates happening. There is periodically a bulk insert
of 500k - 1 million rows. I'm doing a vacuum analyze every hour due to
the number of transactions happening, and a vacuum full every night.
All this has caused selects to be very slow. At times, a
"select count(1)" from a table will take several minutes. I don't think
selects would have to wait on locks held by inserts/updates, would they?


1: Is the bulk insert being done inside of a single transaction, or as
individual inserts?

2: Are your fsm settings high enough for an hourly vacuum to be
effective?

3: How selective is the where clause for your select count(1) query? If
there is no where clause, or the where clause isn't very selective, then
there will be a sequential scan every time. Since PostgreSQL has to hit
the table after using an index anyway, if it's going to retrieve a fair
percentage of a table it just goes right to a seq scan, which, for
PostgreSQL, is the right thing to do.

Post "explain analyze" of your slowest queries to the performance list
if you can.
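
On point 1 above, a minimal sketch of the difference (hypothetical table name):

    BEGIN;
    INSERT INTO log_data (id, msg) VALUES (1, 'first');
    INSERT INTO log_data (id, msg) VALUES (2, 'second');
    -- ...many more rows...
    COMMIT;

Without the BEGIN/COMMIT, each INSERT is its own transaction and pays a commit (and
fsync) per row, which is dramatically slower for bulk loads.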

Nov 23 '05 #7
On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe <sm******@qwest.net> wrote:
1: Is the bulk insert being done inside of a single transaction, or as
individual inserts?
The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
rows at a time, then disconnects, reconnects, and copies 100k more,
and repeats 'till done. There are no indexes on the tables that the
copy is being done into either, so it won't be slowed down by that at
all.
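
(For reference, each batch is a single COPY statement, roughly like this, with
hypothetical table and column names:

    COPY staging_rows (id, logged_at, payload) FROM STDIN;

followed by about 100,000 tab-separated data lines and a terminating "\." line.)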

2: Are your fsm settings high enough for an hourly vacuum to be
effective?
What is fsm? I'll tell you when I find that out.

3: How selective is the where clause for your select count(1) query? If
there is no where clause, or the where clause isn't very selective, then
there will be a sequential scan every time. Since PostgreSQL has to hit
the table after using an index anyway, if it's going to retrieve a fair
percentage of a table it just goes right to a seq scan, which, for
PostgreSQL, is the right thing to do.
There was no where clause.

Post "explain analyze" of your slowest queries to the performance list
if you can.


I don't think it's a query problem ( but I could optimize them more
I'm sure ), 'cause the same query takes a long time when there are
other queries happening, and not long at all when nothing else is
going on.

-Josh

Nov 23 '05 #8
On 20 Oct 2004 at 11:37, Josh Close wrote:
On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe <sm******@qwest.net> wrote:
1: Is the bulk insert being done inside of a single transaction, or as
individual inserts?


The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
rows at a time, then disconnects, reconnects, and copies 100k more,
and repeats 'till done. There are no indexes on the tables that the
copy is being done into either, so it won't be slowed down by that at
all.


What about triggers? Also constraints (check constraints, integrity
constraints)? All these will slow the inserts/updates down.

If you have integrity constraints make sure you have indexes on the
referenced columns in the referenced tables and make sure the data
types are the same.
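
As a sketch (table and column names hypothetical): if detail.master_id references
master(id), an index on the foreign-key column lets those checks do an index lookup
instead of a sequential scan when rows in master are updated or deleted:

    CREATE INDEX detail_master_id_idx ON detail (master_id);

and detail.master_id should have the same type as master.id (e.g. both int4).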

How long does 100,000 rows take to insert exactly?

How many updates are you performing each hour?

Regards,
Gary.

Nov 23 '05 #9
On Wed, Oct 20, 2004 at 08:25:22 -0500,
Josh Close <na****@gmail.com> wrote:

It's slow due to several things happening all at once. There are a lot
of inserts and updates happening. There is periodically a bulk insert
of 500k - 1 million rows. I'm doing a vacuum analyze every hour due to
the number of transactions happening, and a vacuum full every night.
All this has caused selects to be very slow. At times, a
"select count(1)" from a table will take several minutes. I don't think
selects would have to wait on locks held by inserts/updates, would they?


You might not need to do the vacuum fulls that often. If your hourly
vacuums have a high enough fsm setting, they should keep the database
from continually growing in size. At that point daily vacuum fulls are
overkill, and if they are slowing down stuff you want to run quickly,
you should cut back on them.
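
As an illustrative sketch only (the right value depends on how many pages each hourly
cycle touches), raising the free space map in postgresql.conf looks like:

    max_fsm_pages = 200000    # default is 20000; changing it requires a server restart

Running a database-wide VACUUM VERBOSE as superuser is one way to get a feel for how
much free space the map actually needs to track.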

Nov 23 '05 #10
On Wed, 20 Oct 2004 18:47:25 +0100, Gary Doades <gp*@gpdnet.co.uk> wrote:
What about triggers? Also constraints (check constraints, integrity
constraints)? All these will slow the inserts/updates down.
No triggers or constraints. There are some foreign keys, but the
tables that take the inserts don't have anything on them, not even
indexes, which helps keep the inserts fast.

If you have integrity constraints make sure you have indexes on the
referenced columns in the referenced tables and make sure the data
types are the same.

How long does 100,000 rows take to insert exactly?
I believe with the bulk inserts, 100k only takes a couple mins.

How many updates are you performing each hour?
I'm not sure about this. Is there a pg stats table I can look at to
find this out? I suppose I could do a count on the time stamp
also. I'll let you know when I find out.
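
(For what it's worth, if row-level statistics are enabled in postgresql.conf
(stats_start_collector and stats_row_level), the per-table counters can be read with
something like:

    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del FROM pg_stat_user_tables;

otherwise a count on the timestamp column is a reasonable fallback.)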

Regards,
Gary.

Nov 23 '05 #11
On Wed, 20 Oct 2004 13:35:43 -0500, Bruno Wolff III <br***@wolff.to> wrote:
You might not need to do the vacuum fulls that often. If your hourly
vacuums have a high enough fsm setting, they should keep the database
from continually growing in size. At that point daily vacuum fulls are
overkill, and if they are slowing down stuff you want to run quickly,
you should cut back on them.


I have the vacuum_mem set at 32M right now. I haven't changed the fsm
settings at all though.

-Josh

Nov 23 '05 #12
On 20 Oct 2004 at 13:34, Josh Close wrote:
How long does 100,000 rows take to insert exactly?


I believe with the bulk inserts, 100k only takes a couple mins.


Hmm, that seems a bit slow. How big are the rows you are inserting? Have you checked
the cpu and IO usage during the inserts? You will need to do some kind of cpu/IO
monitoring to determine where the bottleneck is.

What hardware is this on? Sorry if you specified it earlier, I can't seem to find mention of
it.

Cheers,
Gary.

Nov 23 '05 #13
On Wed, 20 Oct 2004 19:59:38 +0100, Gary Doades <gp*@gpdnet.co.uk> wrote:
Hmm, that seems a bit slow. How big are the rows you are inserting? Have you checked
the cpu and IO usage during the inserts? You will need to do some kind of cpu/IO
monitoring to determine where the bottleneck is.
The bulk inserts don't take full cpu. Between 40% and 80%. On the
other hand, a select will take 99% cpu.

What hardware is this on? Sorry if you specified it earlier, I can't seem to find mention of
it.


It's on a P4 HT with 1,128 megs ram.

-Josh

Nov 23 '05 #14
On 20 Oct 2004 at 14:09, Josh Close wrote:
On Wed, 20 Oct 2004 19:59:38 +0100, Gary Doades <gp*@gpdnet.co.uk> wrote:
Hmm, that seems a bit slow. How big are the rows you are inserting? Have you checked
the cpu and IO usage during the inserts? You will need to do some kind of cpu/IO
monitoring to determine where the bottleneck is.


The bulk inserts don't take full cpu. Between 40% and 80%. On the
other hand, a select will take 99% cpu.


Is this the select count(1) query? Please post an explain analyze for this and any other "slow"
queries.

I would expect the selects to take 99% cpu if all the data you were trying to select was
already in memory. Is this the case in general? I can do a "select count(1)" on a 500,000
row table in about 1 second on an Athlon 2800+ if all the data is cached. It takes about 25
seconds if it has to fetch it from disk.

I have just done a test inserting (via COPY) 149,000 rows into a table with 23
columns: mostly numeric, some int4, 4 timestamps. This took 28 seconds on my
Windows XP desktop, Athlon 2800+, 7200 rpm SATA disk, Postgres 8.0 beta 2. It used
around 20% to 40% cpu during the copy. The only index was the int4 primary key,
nothing else.

How does this compare?
What hardware is this on? Sorry if you specified it earlier, I can't seem to find mention of
it.


It's on a P4 HT with 1,128 megs ram.


Disk system??

Regards,
Gary.

Nov 23 '05 #15
On Wed, 20 Oct 2004 20:49:54 +0100, Gary Doades <gp*@gpdnet.co.uk> wrote:
Is this the select count(1) query? Please post an explain analyze for this and any other "slow"
queries.
I think it took so long 'cause it wasn't cached. The second time I ran
it, it took less than a second. How can you tell if something is
cached? Is there a way to see what's in cache?

I would expect the selects to take 99% cpu if all the data you were trying to select was
already in memory. Is this the case in general? I can do a "select count(1)" on a 500,000
row table in about 1 second on an Athlon 2800+ if all the data is cached. It takes about 25
seconds if it has to fetch it from disk.
I think that's what's going on here.

I have just done a test inserting (via COPY) 149,000 rows into a table with 23
columns: mostly numeric, some int4, 4 timestamps. This took 28 seconds on my
Windows XP desktop, Athlon 2800+, 7200 rpm SATA disk, Postgres 8.0 beta 2. It used
around 20% to 40% cpu during the copy. The only index was the int4 primary key,
nothing else.
Well, there are 3 text columns or so, and that's why the COPY takes
longer than yours. That hasn't been a big issue though. It copies fast
enough.

How does this compare?

Disk system??
It's an IDE RAID 1 config, I believe. So it's not too fast. It will
soon be on a SCSI RAID 5 array. That should help speed some things up
also.

Regards,
Gary.


What about the postgresql.conf settings? This is what I have and why.

shared_buffers = 21250

This is 174 megs, which is 15% of total ram. I read somewhere that it
should be between 12-15% of total ram.

sort_mem = 32768

This is default.

vacuum_mem = 32768

This is 32 megs. I put it that high because of something I read here
http://www.varlena.com/varlena/Gener...bits/perf.html

#max_fsm_pages = 20000

Default. I would think this could be upped more, but I don't know how much.

effective_cache_size = 105750

This is 846 megs ram which is 75% of total mem. I put it there 'cause
of a reply I got on the performance list.

I made all these changes today, and haven't had much of a chance to
speed test postgres since.

Any thoughts on these settings?

-Josh

Nov 23 '05 #16
On 20 Oct 2004 at 15:36, Josh Close wrote:
On Wed, 20 Oct 2004 20:49:54 +0100, Gary Doades <gp*@gpdnet.co.uk> wrote:
Is this the select count(1) query? Please post an explain analyze for this and any other "slow"
queries.
I think it took so long 'cause it wasn't cached. The second time I ran
it, it took less than a second. How can you tell if something is
cached? Is there a way to see what's in cache?


No. The OS caches the data as read from the disk. If you need the data to be in memory
for performance then you need to make sure you have enough available RAM to hold
your typical result sets if possible.
What about the postgresql.conf settings? This is what I have and why.

sort_mem = 32768

This is default.


This is not the default. The default is 1000. You are telling Postgres to use 32 megs for
*each* sort that is taking place. If you have several queries each performing large sorts
you can quickly eat up available RAM this way. If you will only have a small number of
concurrent queries performing sorts then this may be OK. Don't forget, a single query
can perform more than one sort operation. If you have 10 large sorts happening at the
same time, you can eat up to 320 megs this way!
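
A sketch of one way to keep that risk down while still allowing the big sorts (values
illustrative; sort_mem is in kilobytes):

    sort_mem = 4096           # postgresql.conf: a modest 4 MB per sort by default

    SET sort_mem = 32768;     -- raise it only in the session running the big query

Since sort_mem can be changed per session, the large value doesn't have to apply to
every backend.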

You will need to tell us the number of updates/deletes you are having. This will
determine the vacuum needs. If the bulk of the data is inserted you may only need to
analyze frequently, not vacuum.

In order to get more help you will need to supply the update/delete frequency and the
explain analyze output from your queries.

Regards,
Gary.

Nov 23 '05 #17
On Wed, 20 Oct 2004 23:43:54 +0100, Gary Doades <gp*@gpdnet.co.uk> wrote:
You will need to tell us the number of updates/deletes you are having. This will
determine the vacuum needs. If the bulk of the data is inserted you may only need to
analyze frequently, not vacuum.

In order to get more help you will need to supply the update/delete frequency and the
explain analyze output from your queries.


I will have to gather this information for you.

-Josh

Nov 23 '05 #18
