
more than one instance of pgpool for a backend?

Hi,

pgpool seems to be very nice. I will use it in a production environment as soon
as possible, but I have a question regarding pgpool:

I have four different database/user combinations which should each have a
different number of possible connections.

Let my database allow 80 concurrent connections; I want to divide them like this:
admin@db1 10
user@db1 40
admin@db2 5
user@db2 25

At the moment I run four different instances of Apache with PHP and connect
via pg_pconnect. The MaxClients directive of each instance is set to the
corresponding value above, so if all connections of one pool are busy you can't
even connect to that Apache and may get a timeout. That's not nice, but it
keeps my database from overloading and still leaves enough resources for the
other database/user combinations. (OT: I would love to hear from someone
running the perchild MPM successfully on Apache 2; at the moment I need four
Apaches on four different ports just to configure MaxClients per pool.)
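For reference, each of the four Apaches is configured roughly like this (just a
sketch; the port and file name are examples, not my real setup):

    # httpd-admin-db1.conf -- one of four Apache prefork configs
    Listen 8081
    MaxClients 10        # matches the 10 connections reserved for admin@db1
    # ... the usual PHP and VirtualHost setup ...

The other three configs differ only in the Listen port and in MaxClients
(40, 5 and 25 respectively).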

How can I achieve this with pgpool? Is it possible to run four pgpools for one
backend, i.e. run pgpool on ports 9000-9003, each configured to one of the
values above, and have one database cluster handle all the pgpool connections?
Can I still use synchronous replication and so on?

My first guess is that it should work, since pgpool handles all connections via
independent preforked children and it should not matter whether a child is
forked from one parent or another. But as I don't know all the internals, I
would like to hear an expert opinion.

kind regards,
janning
