Bytes IT Community

Increasing Max Connections Mac OS 10.3

I installed Postgres 7.4.1 on a dual processor G5 running Mac OS
10.3.2. I'm trying to increase the max_connections to 300 and running
into some trouble. If anyone could shed some light, I'd greatly
appreciate it.

Here's part of my postgresql.conf:

# - Connection Settings -
tcpip_socket = true
max_connections = 300

# - Memory -
shared_buffers = 5000       # min 16, at least max_connections*2, 8KB each
#sort_mem = 1024            # min 64, size in KB
#vacuum_mem = 8192          # min 1024, size in KB

I've tried increasing the shmmax in /etc/rc to a really high number,
but I'm still getting errors when I try to start postgres with pg_ctl:

2004-02-09 11:07:24 FATAL: could not create shared memory segment:
Invalid argument
DETAIL: Failed system call was shmget(key=5432001, size=47030272,
03600).
HINT: This error usually means that PostgreSQL's request for a shared
memory segment exceeded your kernel's SHMMAX parameter. You can either
reduce the request size or reconfigure the kernel with larger SHMMAX.
To reduce the request size (currently 47030272 bytes), reduce
PostgreSQL's shared_buffers parameter (currently 5000) and/or its
max_connections parameter (currently 300).
If the request size is already small, it's possible that it is
less than your kernel's SHMMIN parameter, in which case raising the
request size or reconfiguring SHMMIN is called for.

Here's the relevant part of my /etc/rc:

# System tuning
sysctl -w kern.maxvnodes=$(echo $(sysctl -n hw.physmem) '33554432 / 512 * 1024 $
sysctl -w kern.sysv.shmmax=500772160
sysctl -w kern.sysv.shmmin=1
sysctl -w kern.sysv.shmmni=32
sysctl -w kern.sysv.shmseg=8
sysctl -w kern.sysv.shmall=65536

I had previously tried the shmmax settings I'd seen on the forums, but
those also gave me an error.
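[Editor's note: as a rough cross-check of the numbers in the HINT above, the request size can be estimated from the config values. The ~20 KB per-connection overhead below is an assumption inferred from the error message, not PostgreSQL's exact formula.]

```shell
# Back-of-the-envelope estimate of PostgreSQL 7.4's shared memory
# request: 8 KB per shared_buffer plus roughly 20 KB of per-connection
# overhead (the overhead figure is an assumption, not the exact formula).
shared_buffers=5000
max_connections=300
request=$(( shared_buffers * 8192 + max_connections * 20480 ))
echo "estimated request: $request bytes"   # close to the 47030272 in the log

# The request must fit under the kernel's SHMMAX:
shmmax=500772160
[ "$request" -le "$shmmax" ] && echo "fits under SHMMAX"
```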
Nov 22 '05 #1
15 Replies


On Monday February 9 2004 9:22, Joe Lester wrote:

> I've tried increasing the shmmax in /etc/rc to a really high number,
> but I'm still getting errors when I try to start postgres with pg_ctl:
>
> 2004-02-09 11:07:24 FATAL: could not create shared memory segment:
> Invalid argument
> sysctl -w kern.sysv.shmmax=500772160
> sysctl -w kern.sysv.shmall=65536

You probably need to increase shmall as well.

Nov 22 '05 #2

Joe Lester <jo********@sweetwater.com> writes:
> I installed Postgres 7.4.1 on a dual processor G5 running Mac OS
> 10.3.2. I'm trying to increase the max_connections to 300 and running
> into some trouble.

Hmm, it WorksForMe (TM). You did reboot after changing /etc/rc, no?
Try "sysctl -a | grep sysv" to verify that the settings took effect.

Note that there's not much percentage in setting shmmax higher than
shmall * pagesize. I see hw.pagesize = 4096 according to sysctl,
which means your shmall=65536 constrains the total allocation to 256MB,
so setting shmmax to 500M doesn't do anything...

regards, tom lane
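[Editor's note: the shmall ceiling Tom describes can be verified with quick shell arithmetic, using the values from this thread.]

```shell
# Total SysV shared memory is capped at shmall (in pages) * page size,
# no matter how large shmmax is. With the poster's settings:
shmall=65536     # pages
pagesize=4096    # bytes, from hw.pagesize
echo $(( shmall * pagesize ))   # 268435456 bytes = the 256MB ceiling
```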


Nov 22 '05 #3

Joe Lester <jo********@sweetwater.com> writes:
> That's odd. It's giving me a -1 for the shmmax value. I assume that's
> NOT normal. Why would that be?

It's not --- you should get back the same value you set. I speculate
that you tried to set a value that exceeded some internal sanity check
in the kernel. I wouldn't be too surprised if the kernel rejects values
larger than available RAM, for instance.

> > Note that there's not much percentage in setting shmmax higher than
> > shmall * pagesize.
>
> I'm not quite clear on this. Does this mean that shmmax and shmall
> should be set to the same value?

shmmax is the limit on a single shmget() request, in bytes. shmall is
the limit on total shared-memory allocation across all active shmget()
requests. So there's certainly no point in making the former larger
than the latter. Assuming that you only intend to have a single
Postgres postmaster requesting shared memory (I'm not sure whether there
are any components of OS X that request shared memory --- X11 might),
there's not much point in making the former smaller than the latter
either. Bear in mind though that shmall is measured in 4K pages, not in
bytes. Thus the OS X factory-default settings of 4M and 1024 are in
fact both enforcing a 4MB limit.

regards, tom lane
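[Editor's note: the factory-default equivalence Tom mentions checks out numerically.]

```shell
# OS X factory defaults: shmmax = 4194304 bytes, shmall = 1024 pages.
# Because shmall counts 4 KB pages, both settings enforce the same 4 MB:
echo $(( 1024 * 4096 ))   # 4194304, exactly the default shmmax
```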


Nov 22 '05 #4

> Joe Lester <jo********@sweetwater.com> writes:
> > I installed Postgres 7.4.1 on a dual processor G5 running Mac OS
> > 10.3.2. I'm trying to increase the max_connections to 300 and running
> > into some trouble.
>
> Hmm, it WorksForMe (TM). You did reboot after changing /etc/rc, no?

Yes, I did a "Restart".

> Try "sysctl -a | grep sysv" to verify that the settings took effect.

That's odd. It's giving me a -1 for the shmmax value. I assume that's
NOT normal. Why would that be?

[lester2:~] joe% sysctl -a | grep sysv
kern.sysv.shmmax: -1
kern.sysv.shmmin: 1
kern.sysv.shmmni: 32
kern.sysv.shmseg: 8
kern.sysv.shmall: 50772160
kern.sysv.semmni: 87381
kern.sysv.semmns: 87381
kern.sysv.semmnu: 87381
kern.sysv.semmsl: 87381
kern.sysv.semume: 10

> Note that there's not much percentage in setting shmmax higher than
> shmall * pagesize. I see hw.pagesize = 4096 according to sysctl,
> which means your shmall=65536 constrains the total allocation to 256MB,
> so setting shmmax to 500M doesn't do anything...

I'm not quite clear on this. Does this mean that shmmax and shmall
should be set to the same value? Could anyone share with me their own
settings for shmmax and shmall?

Thanks.


Nov 22 '05 #5

> Joe Lester <jo********@sweetwater.com> writes:
> > That's odd. It's giving me a -1 for the shmmax value. I assume that's
> > NOT normal. Why would that be?
>
> It's not --- you should get back the same value you set. I speculate
> that you tried to set a value that exceeded some internal sanity check
> in the kernel. I wouldn't be too surprised if the kernel rejects values
> larger than available RAM, for instance.

I tried a few different things to try to get the shmmax value to be
something other than 4194304 (the default in /etc/rc).

First, I restarted my mac, then, as the root user...

I tried setting it to a "high" number:
[lester2:~] root# sysctl -w kern.sysv.shmmax=9194304
kern.sysv.shmmax: -1

No luck. It set it back to -1.

Then I tried setting it to a "low" number:
[lester2:~] root# sysctl -w kern.sysv.shmmax=3194304
kern.sysv.shmmax: -1

Still no action.

Then I tried setting it to 4194304 (the default in /etc/rc):
[lester2:~] root# sysctl -w kern.sysv.shmmax=4194304
kern.sysv.shmmax: -1 -> 4194304

It took this time! BUT... I need to increase the number because my
postgres error log is telling me that it needs to be at least 4620288:

DETAIL: Failed system call was shmget(key=5432001, size=4620288,
03600).
HINT: This error usually means that PostgreSQL's request for a shared
memory segment exceeded your kernel's SHMMAX parameter. You can either
reduce the request size or reconfigure the kernel with larger SHMMAX.
To reduce the request size (currently 4620288 bytes), reduce
PostgreSQL's shared_buffers parameter (currently 300) and/or its
max_connections parameter (currently 100).

Any ideas? Again, I am running Mac OS 10.3.2 and Postgres 7.4.1 on a
dual processor G5. Thanks.

Nov 22 '05 #6

Joe Lester <jo********@sweetwater.com> writes:
> I tried a few different things to try to get the shmmax value to be
> something other than 4194304 (the default in /etc/rc). First, I
> restarted my mac, then, as the root user...

You can't change shmmax on-the-fly in OS X --- that's why it's set up in
/etc/rc before the system is fully operational. AFAIK the *only* way to
change these parameters is edit /etc/rc and reboot.

regards, tom lane
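[Editor's note: concretely, that means editing the "System tuning" block in /etc/rc to something like the following. The values are illustrative examples, and a reboot is required for them to take effect.]

```shell
# Example /etc/rc "System tuning" block (illustrative values):
sysctl -w kern.sysv.shmmax=167772160   # 160 MB max single segment
sysctl -w kern.sysv.shmmin=1
sysctl -w kern.sysv.shmmni=32
sysctl -w kern.sysv.shmseg=8
sysctl -w kern.sysv.shmall=65536       # in 4 KB pages: 256 MB total
```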


Nov 22 '05 #7

So the odd thing is, I came in this morning, reset my settings in
/etc/rc and it took this time! I am a little baffled since I tried
these same settings yesterday with no success. It's probably just me
being goofy. Here are the settings that took:

[lester2:~] joe% sysctl -a | grep sysv
kern.sysv.shmmax: 167772160
kern.sysv.shmmin: 1
kern.sysv.shmmni: 32
kern.sysv.shmseg: 8
kern.sysv.shmall: 65536

With shmmax set to 167772160 I am able to launch the server with 240
connections (set in postgresql.conf). That's a good thing. What worries
me though is that my postgres log keeps printing out messages, one
after the other, like:

2004-02-10 08:46:01 LOG: out of file descriptors: Too many open files;
release and retry
2004-02-10 08:46:01 LOG: out of file descriptors: Too many open files;
release and retry
[... the same message repeats, several times per second ...]

Even though I'm getting these messages in my log, all the queries I
send to the server seem to be working. I'm using libpq to access the
server from a Cocoa client. My program seems to be working just fine in
spite of the "out of file descriptors" warnings I'm getting in the
postgres log. It makes me kind of hesitant to try it out with 240
users. What should I do to get rid of these log messages? I'm running
Postgres 7.4.1 on a dual G5 running Mac OS 10.3.2. Thanks.

On Feb 9, 2004, at 8:12 PM, Tom Lane wrote:
> Joe Lester <jo********@sweetwater.com> writes:
> > I tried a few different things to try to get the shmmax value to be
> > something other than 4194304 (the default in /etc/rc).
> >
> > First, I restarted my mac, then, as the root user...
>
> You can't change shmmax on-the-fly in OS X --- that's why it's set up
> in /etc/rc before the system is fully operational. AFAIK the *only*
> way to change these parameters is edit /etc/rc and reboot.



Nov 22 '05 #8

Joe Lester <jo********@sweetwater.com> writes:
> [ lots of ]
> 2004-02-10 08:46:01 LOG: out of file descriptors: Too many open files;
> release and retry

Sounds like you need to reduce max_files_per_process. Also look at
increasing the kernel's limit on number of open files (I remember seeing
it in sysctl's output yesterday, but I forget what it's called).

> Even though I'm getting these messages in my log, all the queries I
> send to the server seem to be working.


The Postgres server itself will generally survive this condition
(because it usually has other open files it can close). However,
everything else on the system is likely to start falling over :-(.
You don't want to run with the kernel file table completely full.

I'd suggest setting max_files_per_process to something like 50 to 100,
and making sure that the kernel's limit is max_files_per_process *
max_connections plus plenty of slop for the rest of the system.

regards, tom lane
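[Editor's note: Tom's budget works out as follows; the "slop" figure below is an illustrative guess, not a recommendation from the thread.]

```shell
# Kernel open-file budget sketch: every backend may hold up to
# max_files_per_process descriptors, so the kernel's table must cover
# max_files_per_process * max_connections plus headroom for the rest
# of the system ("slop" is an arbitrary illustration).
max_files_per_process=100
max_connections=300
slop=2048
echo "kern.maxfiles >= $(( max_files_per_process * max_connections + slop ))"
```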


Nov 22 '05 #9

Joe,

I've run into this on my iBook too. The default open-files limit is
set very low. On my system (10.3.2), it's 256 for the postgres user.
You can raise it to something higher, like 2048, with the ulimit
command. I have "ulimit -n unlimited" in my .bash_profile:

ibook:~ root# su - postgres
ibook:~ postgres$ ulimit -a
open files (-n) 256
ibook:~ postgres$ ulimit -n unlimited
ibook:~ postgres$ ulimit -a
open files (-n) 10240
ibook:~ postgres$
regards,

On Feb 10, 2004, at 8:04 AM, Tom Lane wrote:
> Joe Lester <jo********@sweetwater.com> writes:
> > [ lots of ]
> > 2004-02-10 08:46:01 LOG: out of file descriptors: Too many open
> > files; release and retry
>
> Sounds like you need to reduce max_files_per_process. Also look at
> increasing the kernel's limit on number of open files (I remember
> seeing it in sysctl's output yesterday, but I forget what it's called).
>
> > Even though I'm getting these messages in my log, all the queries I
> > send to the server seem to be working.
>
> The Postgres server itself will generally survive this condition
> (because it usually has other open files it can close). However,
> everything else on the system is likely to start falling over :-(.
> You don't want to run with the kernel file table completely full.
>
> I'd suggest setting max_files_per_process to something like 50 to 100,
> and making sure that the kernel's limit is max_files_per_process *
> max_connections plus plenty of slop for the rest of the system.
>
> regards, tom lane
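[Editor's note: one way to try Brian's suggestion without touching .bash_profile is to raise the limit in the shell that launches the postmaster. The 2048 value and the data directory path are illustrative.]

```shell
# Inspect and raise the per-process open-file limit for this shell;
# the kernel may clamp or reject the new value (e.g. per
# kern.maxfilesperproc), hence the fallback message.
echo "current limit: $(ulimit -n)"
ulimit -n 2048 2>/dev/null || echo "could not raise limit"
echo "limit now: $(ulimit -n)"
# A postmaster started from this same shell inherits the new limit, e.g.:
#   pg_ctl -D /usr/local/pgsql/data start
```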


Nov 22 '05 #10

Brian Hirt <bh***@mobygames.com> writes:
> I've run into this on my ibook too. The default number of files is
> set very low by default. On my system 10.3.2, it's 256 for the
> postgres user. You can raise it to something higher like 2048 with
> the ulimit command. i have ulimit -n unlimited in my .bash_profile


Hmm, I hadn't even thought about ulimit. I thought those settings were
per-process, not per-user. If they are per-user they could be
problematic.

regards, tom lane


Nov 22 '05 #11

On Feb 10, 2004, at 10:57 AM, Tom Lane wrote:
> Hmm, I hadn't even thought about ulimit. I thought those settings were
> per-process, not per-user. If they are per-user they could be
> problematic.


Not sure if it's per user or per process. After I did "ulimit -n
unlimited" the problem Joe describes went away for me. I also lowered
max_files_per_process to 1000. My database has a large number of files
in it, a few thousand, so I assumed one of the back end processes was
going over the limit.

--brian

Nov 22 '05 #12

Joe Lester <jo********@sweetwater.com> writes:
> Would this be kern.maxfiles?

Sounds like what you want. There's probably no need to reduce
maxfilesperproc (and thereby constrain every process, not only PG
backends). You can set PG's max_files_per_process instead.

> Is it OK to set these before starting the server? Or should I set them
> in /etc/rc?


Damifino. Try a manual sysctl -w and see if it takes ... if not, you
probably have to set it in /etc/rc.

regards, tom lane


Nov 22 '05 #13

Brian Hirt <bh***@mobygames.com> writes:
> ... after i did ulimit -n
> unlimited the problem joe describes went away for me.


Hmm. Postgres assumes it can use the smaller of max_files_per_process
and sysconf(_SC_OPEN_MAX). From what you describe, I suspect that OSX's
sysconf call ignores the "ulimit -n" restriction and thus encourages us
to think we can use more than we really can. If that's the correct
explanation then the LOG messages are just a cosmetic problem (as long
as kern.maxfiles comfortably exceeds max_connections times ulimit -n).

I wonder whether we should also probe getrlimit(RLIMIT_NOFILE)? Anyone
have an idea whether that returns different limits than sysconf()?

regards, tom lane
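[Editor's note: the two numbers Tom contrasts can be eyeballed from a shell. getconf reports sysconf(_SC_OPEN_MAX); whether it honors the rlimit varies by platform, which is exactly the discrepancy suspected here.]

```shell
# Compare the shell's rlimit on open files against what sysconf reports.
# If getconf's number is larger than ulimit's, a process trusting
# sysconf(_SC_OPEN_MAX) can overestimate how many files it may open.
echo "ulimit -n:        $(ulimit -n)"
echo "getconf OPEN_MAX: $(getconf OPEN_MAX)"
```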


Nov 22 '05 #14

Would this be kern.maxfiles? There's also one called
kern.maxfilesperproc.

Is it OK to set these before starting the server? Or should I set them
in /etc/rc?

On Feb 10, 2004, at 10:04 AM, Tom Lane wrote:
> Also look at increasing the kernel's limit on number of open files
> (I remember seeing it in sysctl's output yesterday, but I forget
> what it's called).



Nov 22 '05 #15

On 2/10/04 12:28 PM, Tom Lane wrote:
> Joe Lester <jo********@sweetwater.com> writes:
> > Would this be kern.maxfiles?
>
> Sounds like what you want. There's probably no need to reduce
> maxfilesperproc (and thereby constrain every process, not only PG
> backends). You can set PG's max_files_per_process instead.
>
> > Is it OK to set these before starting the server? Or should I set
> > them in /etc/rc?
>
> Damifino. Try a manual sysctl -w and see if it takes ... if not, you
> probably have to set it in /etc/rc.


You can set some of this stuff in /etc/sysctl.conf, which is sourced from
/etc/rc if it exists. The file format looks like this:

kern.maxproc=1000
kern.maxprocperuid=512
...

It just passes that stuff to sysctl -w (after skipping comments, etc.) See
the /etc/rc file for details.

Unfortunately, I've found that putting the shared memory settings in
/etc/sysctl.conf does not work. Instead, I put them in /etc/rc directly.
They're actually already there, just above the part that sources
/etc/sysctl.conf. (Look for the phrase "System tuning" in a comment.) I
commented those out and put my own settings in their place. That seems to
work for me.

-John


Nov 22 '05 #16
