
postmaster(s) have high load average

I have one process that writes a single float into each of 300 columns once per
second. I then run four processes, from remote computers, that query a small
subset of the latest row.
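
(For reference, a minimal sketch of this pattern; the table and column names
are illustrative, not my actual schema:)

CREATE TABLE samples (ts timestamp DEFAULT now(), c1 float4, c2 float4, /* ... */ c300 float4);
-- writer, once per second:
INSERT INTO samples (c1, c2 /* ... */) VALUES (1.23, 4.56 /* ... */);
-- each reader, polling a small subset of the latest row:
SELECT ts, c1, c2 FROM samples ORDER BY ts DESC LIMIT 1;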

I have even commented out everything in the query programs, so all they do
is sleep, and the associated postmaster processes still suck up 15% - 20% CPU each.

The computer is a P4 w/ 1 GB of memory, and all disk access is local. It runs
RH9 with the stock postgresql-7.3 installed.

I have searched the documentation and tech sites high and low for ideas.
Here is what top shows:
17:36:27 up 31 days, 6:07, 13 users, load average: 4.11, 2.48, 1.62
107 processes: 99 sleeping, 8 running, 0 zombie, 0 stopped
CPU states: 22.3% user 76.0% system 0.0% nice 0.0% iowait 1.5% idle
Mem: 1030408k av, 976792k used, 53616k free, 0k shrd, 178704k buff
715252k actv, 33360k in_d, 22348k in_c
Swap: 2048248k av, 91308k used, 1956940k free 589572k cached
PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND
23389 cjw 16 0 2896 2752 2132 R 18.2 0.2 0:44 0 postmaster
23388 cjw 16 0 2896 2752 2132 S 18.0 0.2 0:45 0 postmaster
23391 cjw 16 0 2896 2752 2132 S 18.0 0.2 0:43 0 postmaster
23366 cjw 16 0 3788 3644 2560 S 17.8 0.3 2:32 0 postmaster
23392 cjw 16 0 2896 2752 2132 R 16.2 0.2 0:05 0 postmaster
--
--Chris

How is it one careless match can start a forest fire, but it takes a
whole box to start a campfire?

Nov 11 '05 #1
Martijn van Oosterhout wrote:
Have you run VACUUM and/or VACUUM FULL and/or ANALYZE recently?

a) Yes. I have it run ANALYZE every 30 minutes or every 1600 record
additions (the 30-minute schedule is sketched below). Records are never
updated or deleted, so I assume I don't need VACUUM.

b) It happens even at startup, when there are fewer than 100 records in
the database.

c) Would this even matter for clients that only connect but NEVER make
any requests from the database?
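
(The 30-minute half of the schedule in (a) could be driven by a cron entry
like the one below; the database and table names are just placeholders:)

*/30 * * * * psql -d mydb -c 'ANALYZE samples;' > /dev/null 2>&1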

--Chris

Nov 11 '05 #2
Oops! cj*@ucar.edu (Chris Webster) was seen spray-painting on a wall:
Martijn van Oosterhout wrote:
Have you run VACUUM and/or VACUUM FULL and/or ANALYZE recently?
a) Yes. I have it run ANALYZE every 30 minutes or every 1600 record
additions. Records are never updated or deleted, so I assume I don't
need VACUUM.


You only really need to run ANALYZE when the statistical
characteristics of the data change; as the database grows, those are
fairly likely to stabilize somewhat, so you can ANALYZE less
frequently over time...

Have you verified that nothing has gotten touched? Run a VACUUM
VERBOSE and see what it does... Note that if you ever get cases where
records are added but rolled back due to some later part of a
transaction failing, that too will lead to dead tuples...
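
(For instance, an insert that gets rolled back still leaves a dead row
version behind until a vacuum reclaims it; the table name below is just an
example:)

BEGIN;
INSERT INTO samples (c1) VALUES (1.0);
ROLLBACK;
-- the aborted row still occupies space until vacuumed:
VACUUM VERBOSE samples;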
b) It happens even at startup, when there are fewer than 100 records
in the database.

c) Would this even matter for clients that only connect but NEVER
make any requests from the database?


Run VACUUM VERBOSE on it; you'll no doubt see that some internal
tables such as pg_activity, pg_statistic, and such have a lot of dead
tuples. Establishing a connection leads to _some_ DB activity, and
probably a dead tuple or two; every time you ANALYZE, you create a
bunch of dead tuples since old statistics are "killed off."
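
(Something along these lines will show the dead-tuple situation for a catalog
table; run it as the database owner or a superuser:)

VACUUM VERBOSE pg_statistic;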
--
If this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me
http://www3.sympatico.ca/cbbrowne/sap.html
"Rules of Optimization:
Rule 1: Don't do it.
Rule 2 (for experts only): Don't do it yet."
-- M.A. Jackson
Nov 11 '05 #3
> Run VACUUM VERBOSE on it; you'll no doubt see that some internal
tables such as pg_activity, pg_statistic, and such have a lot of dead
tuples. Establishing a connection leads to _some_ DB activity, and
probably a dead tuple or two; every time you ANALYZE, you create a
bunch of dead tuples since old statistics are "killed off."


What? Does this mean that the system tables need to be routinely vacuumed
too? If so, what is the recommended procedure?

thx
cl.

---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match

Nov 11 '05 #4
In an attempt to throw the authorities off his trail, cl******@hotmail.com ("Claudio Lapidus") transmitted:
What? Does this mean that the system tables need to be routinely vacuumed
too? If so, what is the recommended procedure?


On some 7.2 systems I work with, a smattering of system tables are
vacuumed hourly, along with the application tables that are known to be good
"fodder" for the purpose.

In 7.3 and 7.4, the "contrib" application pg_autovacuum can do the
trick, vacuuming anything that reaches its insert/delete/update thresholds,
and doing so more or less as often as necessary.

If you haven't got a cron job looking something like:

0 0 * * * vacuumdb -a -z > /dev/null 2> /dev/null

then you should probably add that, at least.
--
wm(X,Y):-write(X),write('@'),write(Y). wm('aa454','freenet.carleton.ca').
http://cbbrowne.com/info/multiplexor.html
"A hack is a terrible thing to waste, please give to the
implementation of your choice..." -- GJC
Nov 11 '05 #5

On Sat, 2003-08-09 at 21:25, Christopher Browne wrote:
In 7.3 and 7.4, the "contrib" application pg_autovacuum can do the
trick, vacuuming anything that reaches its insert/delete/update thresholds,
and doing so more or less as often as necessary.
Actually pg_autovacuum is not included with 7.3, but works just fine
once you get it compiled.
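
(One possible route, assuming you have a configured 7.4 source tree to borrow
contrib/pg_autovacuum from; the path below is illustrative:)

cd postgresql-7.4/contrib/pg_autovacuum
make
make install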
If you haven't got a cron job looking something like:

0 0 * * * vacuumdb -a -z > /dev/null 2> /dev/null

then you should probably add that, at least.


It might be better to have it vacuum only the few specific tables that cause
most of your problems.
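
(Something like the following pair of cron entries, for example; the database
name and table choices are just placeholders:)

0 * * * * psql -d mydb -c 'VACUUM ANALYZE pg_statistic;' > /dev/null 2>&1
5 * * * * psql -d mydb -c 'VACUUM ANALYZE samples;' > /dev/null 2>&1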

Nov 11 '05 #6
