
PHP + Postgres: More than 1000 postmasters produce 70.000 contextswitches

Hello,
we installed a new PostgreSQL 7.4.0 server on a SuSE 9 system. It is used as
part of an extranet based on Apache+PHP, and apart from an LDAP server no
other services are running. The machine has two Xeon 2 GHz CPUs and 2 GB RAM.
When we migrated all applications from two other PostgreSQL 7.2 servers to
the new one, we ran into heavy load problems.
At first there were problems with too much allocated shared memory, as the
system was swapping 5-10 MB/sec. So we reconfigured shared_buffers down to
2048, which should mean 2 MB (Linux = 1 kB per buffer) per process.
We also brought sort_mem and vacuum_mem back down to sort_mem=512 and
vacuum_mem=8192 to reduce memory usage, although we have
kernel.shmall = 1342177280 and kernel.shmmax = 1342177280.

Currently I have limited max_connections to 800, because any larger value
results in a system load of 60+ and at least 20,000 context switches.

My problem is that our Apache produces far more than 800 open connections,
because we use more than 15 different databases and Apache seems to keep
open a connection to every database the same httpd process has connected to
before. For now I have solved it in a very dirty way: I limited the number
and the lifetime of each httpd process with these values:
MaxKeepAliveRequests 10
KeepAliveTimeout 2
MaxClients 100
MaxRequestsPerChild 300
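To see why even these limits may not be enough, here is a rough worst-case sketch (an illustration, not a measurement): with persistent connections, each Apache child can cache one connection per distinct database it has ever served, so the total backend count can approach MaxClients times the number of databases.

```python
# Worst-case persistent-connection arithmetic for the setup described above.
max_clients = 100      # Apache MaxClients from the config above
databases = 15         # "> 15 diff. databases" from the post
max_connections = 800  # current PostgreSQL limit

# Each httpd child may hold one persistent link per database it has served.
worst_case = max_clients * databases
print(worst_case)                    # 1500
print(worst_case > max_connections)  # True: the cap can still be exceeded
```

So even with MaxClients held down to 100, the persistent-connection pool can in principle outgrow max_connections = 800.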

We use PHP 4.3.4 and PHP 4.2.3 on the webservers. php.ini says:
[PostgresSQL]
; Allow or prevent persistent links.
pgsql.allow_persistent = On
; Maximum number of persistent links. -1 means no limit.
pgsql.max_persistent = -1
; Maximum number of links (persistent+non persistent). -1 means no limit.
pgsql.max_links = -1

We have now been running for days with an extremely unstable database
backend... Are 1,000 processes the natural limit for PostgreSQL on Linux?
Can we achieve more efficient connection pooling/reuse?

Thanks a lot; any help and every idea is welcome,
Andre

BTW: Does anyone know of commercial administration trainings in Germany,
near Duesseldorf?
---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to ma*******@postgresql.org

Nov 22 '05 #1
On Friday 20 February 2004 15:32, Gellert, Andre wrote:
> At first there were problems with too much allocated shared memory, as the
> system was swapping 5-10 MB/sec. So we reconfigured shared_buffers down to
> 2048, which should mean 2 MB (Linux = 1 kB per buffer) per process.

Actually it's probably 8 kB each = 16 MB, and that memory is shared between
*all* the backends, not allocated per process. You probably want something a
fair bit larger than this. Go to
http://www.varlena.com/varlena/Gener...bits/index.php
and read the sections on performance tuning and on the annotated
postgresql.conf.
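The 16 MB figure above is just the buffer count times PostgreSQL's default 8 kB block size; a quick sketch of the arithmetic:

```python
# shared_buffers is allocated once in shared memory and shared by all
# backends; each buffer is one disk block (8 kB by default, BLCKSZ).
shared_buffers = 2048
block_size = 8 * 1024          # bytes

total_bytes = shared_buffers * block_size
print(total_bytes // (1024 * 1024))  # 16 (MB total, not 2 MB per process)
```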
> We also brought sort_mem and vacuum_mem back down to sort_mem=512 and
> vacuum_mem=8192 to reduce memory usage, although we have
> kernel.shmall = 1342177280 and kernel.shmmax = 1342177280.

You can probably put vacuum_mem back up.

> Currently I have limited max_connections to 800, because any larger value
> results in a system load of 60+ and at least 20,000 context switches.

That might be your shared_buffers being too low, but we'll let someone else
comment.
> My problem is that our Apache produces far more than 800 open connections,
> because we use more than 15 different databases and Apache seems to keep
> open a connection to every database the same httpd process has connected
> to before. For now I have solved it in a very dirty way: I limited the
> number and the lifetime of each httpd process with these values:
>
> MaxKeepAliveRequests 10
> KeepAliveTimeout 2
> MaxClients 100
> MaxRequestsPerChild 300

You do want to limit MaxRequestsPerChild if you're using persistent
connections. The problem seems to be with your PHP, though.
> We use PHP 4.3.4 and PHP 4.2.3 on the webservers. php.ini says:
>
> [PostgresSQL]
> ; Allow or prevent persistent links.
> pgsql.allow_persistent = On
> ; Maximum number of persistent links. -1 means no limit.
> pgsql.max_persistent = -1
> ; Maximum number of links (persistent+non persistent). -1 means no limit.
> pgsql.max_links = -1

So you let PHP open persistent connections to PostgreSQL and place no limit
on the number of different connections open at any one time? Turn the
persistent connections off - you'll probably find your problems go away.
> We have now been running for days with an extremely unstable database
> backend... Are 1,000 processes the natural limit for PostgreSQL on Linux?
> Can we achieve more efficient connection pooling/reuse?

You probably can pool your connections better, but it is difficult to say
without knowing what your PHP is doing.

--
Richard Huxton
Archonet Ltd


Nov 22 '05 #2
Well, it seems it is better for your application to limit PHP's persistent
connection pool as a quick measure. Try setting these values to something
sensible for you:

; Maximum number of persistent links. -1 means no limit.
pgsql.max_persistent = 20
; Maximum number of links (persistent+non persistent). -1 means no limit.
pgsql.max_links = 30

Or just disable persistent connections altogether, and see whether that
results in better performance:

; Allow or prevent persistent links.
pgsql.allow_persistent = Off

In the long term, look for a better connection pooling mechanism; I'm sure
you'll find something for PHP too (I'm not using PHP - maybe somebody else
on the list can help?).

Cheers,
Csaba.

Nov 22 '05 #3

Have you tested it with regular pg_connect instead of pg_pconnect? While
many people expect pconnects to be faster, they often actually make the
system slower than plain connects when they leave the database with lots of
open idle connections.

You might want to look into some of the connection pooling options out
there that work with PHP. Persistent connections work well only for a
smaller number of hard-working threads, and not so well for a large number
of connections of which only a few are actually hitting the db at the same
time. This becomes especially bad in your situation, where it sounds like
you have multiple databases to connect to, so PHP is keeping multiple
backends alive for each front-end thread.

Nov 22 '05 #4
On 20 Feb 2004, Csaba Nagy wrote:
> Well, it seems it is better for your application to limit PHP's persistent
> connection pool as a quick measure.
> Try setting these values to something sensible for you:
>
> ; Maximum number of persistent links. -1 means no limit.
> pgsql.max_persistent = 20

Please note that pgsql.max_persistent is PER Apache/PHP backend process.

http://www.php.net/manual/en/ref.pgsql.php
QUOTE:
pgsql.max_persistent integer

The maximum number of persistent Postgres connections per process.
UNQUOTE:

> ; Maximum number of links (persistent+non persistent). -1 means no limit.
> pgsql.max_links = 30

This one too is per process.
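Because the limit is per process, the effective system-wide ceiling is the per-process limit times the number of Apache children; a quick sketch using the value of 20 suggested above and an assumed 100 httpd processes:

```python
# pgsql.max_persistent limits links PER Apache/PHP process, so the
# system-wide ceiling is that limit times the number of httpd children.
per_process_limit = 20   # suggested pgsql.max_persistent value above
httpd_processes = 100    # assumed, e.g. Apache MaxClients

ceiling = per_process_limit * httpd_processes
print(ceiling)  # 2000 - still above a max_connections of 800
```

So the per-process setting alone does not guarantee staying under max_connections; the number of httpd processes has to be factored in as well.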
> Or just disable persistent connections altogether, and see whether that
> results in better performance:

That is my recommendation.


Nov 22 '05 #5
