
how much ram do i give postgres?

I know this is kinda a debate, but how much ram do I give postgres?
I've seen many places say around 10-15% or some say 25%....... If all
this server is doing is running postgres, why can't I give it 75%+?
Should the limit be as much as possible as long as the server doesn't
use any swap?

Any thoughts would be great, but I'd like to know why.

Thanks.

-Josh


Nov 23 '05 #1
Josh Close <na****@gmail.com> writes:
> I know this is kinda a debate, but how much ram do I give postgres?
> I've seen many places say around 10-15% or some say 25%....... If all
> this server is doing is running postgres, why can't I give it 75%+?
> Should the limit be as much as possible as long as the server doesn't
> use any swap?


The short answer is no; the sweet spot for shared_buffers is usually on
the order of 10000 buffers, and trying to go for "75% of RAM" isn't
going to do anything except hurt. For the long answer see the
pgsql-performance list archives.
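A rough postgresql.conf sketch of that advice (the value is illustrative,
assuming the default 8 kB buffer page size, and is not taken from this
thread):

    shared_buffers = 10000      # ~80 MB of shared buffer space
    # leave the rest of RAM to the operating system's filesystem cache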

regards, tom lane


Nov 23 '05 #2
On Tue, 19 Oct 2004 17:42:16 -0400, Tom Lane <tg*@sss.pgh.pa.us> wrote:
> The short answer is no; the sweet spot for shared_buffers is usually on
> the order of 10000 buffers, and trying to go for "75% of RAM" isn't
> going to do anything except hurt. For the long answer see the
> pgsql-performance list archives.
>
> regards, tom lane


Well, I didn't find a whole lot in the list archives, so I emailed
that list with a few more questions. My postgres server is just
crawling right now :(

-Josh


Nov 23 '05 #3
On 19 Oct 2004 at 17:35, Josh Close wrote:
> Well, I didn't find a whole lot in the list archives, so I emailed
> that list with a few more questions. My postgres server is just
> crawling right now :(


Unlike many other database engines, the shared buffers of Postgres are
not a private cache of the database data. They are a working area shared
between all the backend processes. This needs to be tuned for the number
of connections and overall workload, *not* the amount of your database
that you want to keep in memory. There is still lots of debate about what
the "sweet spot" is. Maybe there isn't one, but it's not normally 75% of
RAM.

If anything, the effective_cache_size needs to be 75% of (available)
RAM, as this is telling Postgres the amount of your database the *OS* is
likely to cache in memory.
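For example (hypothetical 2 GB machine, 8 kB pages; the numbers are
illustrative, not taken from this thread):

    shared_buffers = 10000              # working area shared by the backends
    effective_cache_size = 196608       # ~1.5 GB, i.e. ~75% of RAM, in 8 kB pages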

Having said that, I think you will need to define "crawling". Is it
updates/inserts that are slow? This may be triggers/rules/referential
integrity checking etc that is slowing it. If it is selects that are slow, this
may be incorrect indexes or sub-optimal queries. You need to show us
what you are trying to do and what the results are.

Regards,
Gary.

Nov 23 '05 #4
On Wed, 20 Oct 2004 08:00:55 +0100, Gary Doades <gp*@gpdnet.co.uk> wrote:
> Unlike many other database engines, the shared buffers of Postgres are
> not a private cache of the database data. They are a working area shared
> between all the backend processes. This needs to be tuned for the number
> of connections and overall workload, *not* the amount of your database
> that you want to keep in memory. There is still lots of debate about what
> the "sweet spot" is. Maybe there isn't one, but it's not normally 75% of
> RAM.
>
> If anything, the effective_cache_size needs to be 75% of (available)
> RAM, as this is telling Postgres the amount of your database the *OS* is
> likely to cache in memory.
>
> Having said that, I think you will need to define "crawling". Is it
> updates/inserts that are slow? This may be triggers/rules/referential
> integrity checking etc that is slowing it. If it is selects that are slow, this
> may be incorrect indexes or sub-optimal queries. You need to show us
> what you are trying to do and what the results are.


It's slow due to several things happening all at once. There are a lot
of inserts and updates happening. There is periodically a bulk insert
of 500k - 1 million rows. I'm doing a vacuum analyze every
hour due to the amount of transactions happening, and a vacuum full
every night. All this has caused selects to be very slow. At times, a
"select count(1)" from a table will take several minutes. I don't think
selects would have to wait on locks held by inserts/updates, would they?

I would just like to do anything possible to help speed this up.

-Josh


Nov 23 '05 #5
> It's slow due to several things happening all at once. There are a lot
> of inserts and updates happening. There is periodically a bulk insert
> of 500k - 1 million rows. I'm doing a vacuum analyze every
> hour due to the amount of transactions happening, and a vacuum full
> every night. All this has caused selects to be very slow. At times, a
> "select count(1)" from a table will take several minutes. I don't think
> selects would have to wait on locks held by inserts/updates, would they?
>
> I would just like to do anything possible to help speed this up.

If there are really many rows in the table, select count(1) will be a
bit slow, because PostgreSQL uses a sequential scan to count the rows.
If the query is of another kind, check whether there is an index on the
search condition, or use the EXPLAIN command to see the query plan;
that would help greatly.

By the way, what version of PostgreSQL are you running? Older versions
(< 7.4) still suffer from index space bloat.

regards

Laser


Nov 23 '05 #6
On Wed, 2004-10-20 at 07:25, Josh Close wrote:
> It's slow due to several things happening all at once. There are a lot
> of inserts and updates happening. There is periodically a bulk insert
> of 500k - 1 million rows. I'm doing a vacuum analyze every
> hour due to the amount of transactions happening, and a vacuum full
> every night. All this has caused selects to be very slow. At times, a
> "select count(1)" from a table will take several minutes. I don't think
> selects would have to wait on locks held by inserts/updates, would they?


1: Is the bulk insert being done inside of a single transaction, or as
individual inserts?

2: Are your fsm settings high enough for an hourly vacuum to be
effective?

3: How selective is the where clause for your select count(1) query? If
there is no where clause, or the where clause isn't very selective, then
there will be a sequential scan every time. Since PostgreSQL has to hit
the table after using an index anyway, if it's going to retrieve a fair
percentage of a table it goes right to a seq scan, which for
PostgreSQL is the right thing to do.

Post "explain analyze" of your slowest queries to the performance list
if you can.
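For example (hypothetical table and column names, not from this thread):

    EXPLAIN ANALYZE SELECT count(1) FROM orders;
        -- no WHERE clause: expect a Seq Scan over the whole table
    EXPLAIN ANALYZE SELECT count(1) FROM orders WHERE customer_id = 42;
        -- a selective WHERE clause can use an index, if one exists on customer_id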

Nov 23 '05 #7
On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe <sm******@qwest.net> wrote:
> 1: Is the bulk insert being done inside of a single transaction, or as
> individual inserts?
The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
rows at a time, then disconnects, reconnects, and copies 100k more,
and repeats 'till done. There are no indexes on the tables that the
copy is being done into either, so it won't be slowed down by that at
all.
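A sketch of that kind of bulk load (hypothetical table and file names,
not from this thread):

    -- in psql, stream one 100,000-row tab-delimited chunk into the table
    \copy import_staging FROM 'chunk_0001.tsv'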

> 2: Are your fsm settings high enough for an hourly vacuum to be
> effective?
What is fsm? I'll tell you when I find that out.

> 3: How selective is the where clause for your select count(1) query? If
> there is no where clause, or the where clause isn't very selective, then
> there will be a sequential scan every time. Since PostgreSQL has to hit
> the table after using an index anyway, if it's going to retrieve a fair
> percentage of a table it goes right to a seq scan, which for
> PostgreSQL is the right thing to do.
There was no where clause.

Post "explain analyze" of your slowest queries to the performance list
if you can.


I don't think it's a query problem (but I could optimize them more,
I'm sure), 'cause the same query takes a long time when there are
other queries happening, and not long at all when nothing else is
going on.

-Josh


Nov 23 '05 #8
On 20 Oct 2004 at 11:37, Josh Close wrote:
> On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe <sm******@qwest.net> wrote:
> > 1: Is the bulk insert being done inside of a single transaction, or as
> > individual inserts?
>
> The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
> rows at a time, then disconnects, reconnects, and copies 100k more,
> and repeats 'till done. There are no indexes on the tables that the
> copy is being done into either, so it won't be slowed down by that at
> all.


What about triggers? Also constraints (check constraints, integrity
constraints)? All of these will slow the inserts/updates down.

If you have integrity constraints, make sure you have indexes on the
referenced columns in the referenced tables and make sure the data
types are the same.
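A sketch of the kind of schema Gary means (hypothetical tables, not from
this thread):

    -- hypothetical parent/child tables with matching data types
    CREATE TABLE customers (customer_id integer PRIMARY KEY);
    CREATE TABLE orders (
        order_id    integer PRIMARY KEY,
        customer_id integer REFERENCES customers (customer_id)
    );
    -- the referenced column (customers.customer_id) is a primary key, so it
    -- already has an index; indexing the referencing column helps too, since
    -- deletes/updates on customers otherwise have to scan orders
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);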

How long does 100,000 rows take to insert exactly?

How many updates are you performing each hour?

Regards,
Gary.


Nov 23 '05 #9
On Wed, Oct 20, 2004 at 08:25:22 -0500,
Josh Close <na****@gmail.com> wrote:
> It's slow due to several things happening all at once. There are a lot
> of inserts and updates happening. There is periodically a bulk insert
> of 500k - 1 million rows. I'm doing a vacuum analyze every
> hour due to the amount of transactions happening, and a vacuum full
> every night. All this has caused selects to be very slow. At times, a
> "select count(1)" from a table will take several minutes. I don't think
> selects would have to wait on locks held by inserts/updates, would they?


You might not need to do the vacuum fulls that often. If your hourly
vacuums have a high enough fsm setting, they should be keeping the database
from continually growing in size. At that point daily vacuum fulls are
overkill, and if they are slowing down things you want to run quickly,
you should cut back on them.
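A sketch of the settings involved (illustrative values, not from this
thread; on 7.x/8.0 these live in postgresql.conf and take effect on
restart):

    max_fsm_pages = 500000        # should cover the pages freed between vacuums
    max_fsm_relations = 1000      # number of tables and indexes tracked

On these versions a database-wide VACUUM VERBOSE reports free space map
usage at the end of its output, which helps pick the numbers.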


Nov 23 '05 #10
