Bytes | Software Development & Data Engineering Community
Increasing Max Connections Mac OS 10.3

I installed Postgres 7.4.1 on a dual processor G5 running Mac OS
10.3.2. I'm trying to increase the max_connections to 300 and running
into some trouble. If anyone could shed some light, I'd greatly
appreciate it.

Here's part of my postgresql.conf:

# - Connection Settings -
tcpip_socket = true
max_connections = 300

# - Memory -
shared_buffers = 5000 # min 16, at least max_connections*2, 8KB each
#sort_mem = 1024 # min 64, size in KB
#vacuum_mem = 8192 # min 1024, size in KB
I've tried increasing the shmmax in /etc/rc to a really high number,
but I'm still getting errors when I try to start postgres with pg_ctl:

2004-02-09 11:07:24 FATAL: could not create shared memory segment:
Invalid argument
DETAIL: Failed system call was shmget(key=5432001, size=47030272,
03600).
HINT: This error usually means that PostgreSQL's request for a shared
memory segment exceeded your kernel's SHMMAX parameter. You can either
reduce the request size or reconfigure the kernel with larger SHMMAX.
To reduce the request size (currently 47030272 bytes), reduce
PostgreSQL's shared_buffers parameter (currently 5000) and/or its
max_connections parameter (currently 300).
If the request size is already small, it's possible that it is
less than your kernel's SHMMIN parameter, in which case raising the
request size or reconfiguring SHMMIN is called for.
Here's the relevant part of my /etc/rc:

# System tuning
sysctl -w kern.maxvnodes=$(echo $(sysctl -n hw.physmem) '33554432 / 512 * 1024 $
sysctl -w kern.sysv.shmmax=500772160
sysctl -w kern.sysv.shmmin=1
sysctl -w kern.sysv.shmmni=32
sysctl -w kern.sysv.shmseg=8
sysctl -w kern.sysv.shmall=65536
I had previously tried the shmmax settings I'd seen on the forums but
those also gave me an error.
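As a rough sketch of where that 47030272-byte request comes from (this is an approximation, not the exact PostgreSQL 7.4 formula): the shared-memory segment is dominated by shared_buffers at 8KB apiece, with the remainder going to per-connection state and bookkeeping.

```shell
#!/bin/sh
# Approximate sizing sketch for the shmget() request above.
# Assumption: the segment is mostly the buffer pool plus overhead.
shared_buffers=5000
max_connections=300

buffer_bytes=$((shared_buffers * 8192))
echo "buffer pool alone: $buffer_bytes bytes"      # 40960000

# The actual request was 47030272 bytes, so the remaining ~6MB is
# per-connection and bookkeeping overhead:
overhead=$((47030272 - buffer_bytes))
echo "overhead: $overhead bytes"                   # 6070272

# SHMMAX must be at least the full request size for startup to succeed.
```

So whatever SHMMAX ends up being, it has to clear the full request size printed in the HINT, not just shared_buffers * 8KB.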
Nov 22 '05 #1
On Monday February 9 2004 9:22, Joe Lester wrote:
> I've tried increasing the shmmax in /etc/rc to a really high number,
> but I'm still getting errors when I try to start postgres with pg_ctl:
>
> 2004-02-09 11:07:24 FATAL: could not create shared memory segment:
> Invalid argument
>
> sysctl -w kern.sysv.shmmax=500772160
> sysctl -w kern.sysv.shmall=65536

You probably need to increase shmall as well.
---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to ma*******@postgresql.org

Nov 22 '05 #2
Joe Lester <jo********@sweetwater.com> writes:
> I installed Postgres 7.4.1 on a dual processor G5 running Mac OS
> 10.3.2. I'm trying to increase the max_connections to 300 and running
> into some trouble.

Hmm, it WorksForMe (TM). You did reboot after changing /etc/rc, no?
Try "sysctl -a | grep sysv" to verify that the settings took effect.

Note that there's not much percentage in setting shmmax higher than
shmall * pagesize. I see hw.pagesize = 4096 according to sysctl,
which means your shmall=65536 constrains the total allocation to 256MB,
so setting shmmax to 500M doesn't do anything...

regards, tom lane


Nov 22 '05 #3
Joe Lester <jo********@sweetwater.com> writes:
> That's odd. It's giving me a -1 for the shmmax value. I assume that's
> NOT normal. Why would that be?

It's not --- you should get back the same value you set. I speculate
that you tried to set a value that exceeded some internal sanity check
in the kernel. I wouldn't be too surprised if the kernel rejects values
larger than available RAM, for instance.

>> Note that there's not much percentage in setting shmmax higher than
>> shmall * pagesize.
>
> I'm not quite clear on this. Does this mean that shmmax and shmall
> should be set to the same value?

shmmax is the limit on a single shmget() request, in bytes. shmall is
the limit on total shared-memory allocation across all active shmget()
requests. So there's certainly no point in making the former larger
than the latter. Assuming that you only intend to have a single
Postgres postmaster requesting shared memory (I'm not sure whether there
are any components of OS X that request shared memory --- X11 might),
there's not much point in making the former smaller than the latter
either. Bear in mind though that shmall is measured in 4K pages, not in
bytes. Thus the OS X factory-default settings of 4M and 1024 are in
fact both enforcing a 4MB limit.

regards, tom lane
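The units trap described above can be made concrete with a little arithmetic: shmmax is counted in bytes, shmall in 4KB pages, so the factory defaults of 4194304 and 1024 describe the same 4MB ceiling.

```shell
#!/bin/sh
# shmmax is in bytes; shmall is in 4KB pages. The OS X factory
# defaults of shmmax=4194304 and shmall=1024 enforce the same 4MB cap.
pagesize=4096
default_shmmax=4194304     # bytes
default_shmall=1024        # pages

shmall_bytes=$((default_shmall * pagesize))
echo "shmall in bytes: $shmall_bytes"          # 4194304, same as shmmax

# Consistent settings keep shmmax <= shmall * pagesize. With Joe's
# shmall=65536, the real total ceiling is:
echo "ceiling: $((65536 * pagesize)) bytes"    # 268435456 (256MB)
```

This is why setting shmmax to 500MB while leaving shmall at 65536 pages accomplishes nothing: the 256MB total cap binds first.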


Nov 22 '05 #4
> Joe Lester <jo********@sweetwater.com> writes:
>> I installed Postgres 7.4.1 on a dual processor G5 running Mac OS
>> 10.3.2. I'm trying to increase the max_connections to 300 and running
>> into some trouble.
>
> Hmm, it WorksForMe (TM). You did reboot after changing /etc/rc, no?

Yes, I did a "Restart".

> Try "sysctl -a | grep sysv" to verify that the settings took effect.

That's odd. It's giving me a -1 for the shmmax value. I assume that's
NOT normal. Why would that be?

[lester2:~] joe% sysctl -a | grep sysv
kern.sysv.shmmax: -1
kern.sysv.shmmin: 1
kern.sysv.shmmni: 32
kern.sysv.shmseg: 8
kern.sysv.shmall: 50772160
kern.sysv.semmni: 87381
kern.sysv.semmns: 87381
kern.sysv.semmnu: 87381
kern.sysv.semmsl: 87381
kern.sysv.semume: 10

> Note that there's not much percentage in setting shmmax higher than
> shmall * pagesize. I see hw.pagesize = 4096 according to sysctl,
> which means your shmall=65536 constrains the total allocation to 256MB,
> so setting shmmax to 500M doesn't do anything...

I'm not quite clear on this. Does this mean that shmmax and shmall
should be set to the same value? Could anyone share with me their own
settings for shmmax and shmall?

Thanks.


Nov 22 '05 #5
> Joe Lester <jo********@sweetwater.com> writes:
>> That's odd. It's giving me a -1 for the shmmax value. I assume that's
>> NOT normal. Why would that be?
>
> It's not --- you should get back the same value you set. I speculate
> that you tried to set a value that exceeded some internal sanity check
> in the kernel. I wouldn't be too surprised if the kernel rejects values
> larger than available RAM, for instance.

I tried a few different things to try to get the shmmax value to be
something other than 4194304 (the default in /etc/rc).

First, I restarted my mac, then, as the root user...

I tried setting it to a "high" number:

[lester2:~] root# sysctl -w kern.sysv.shmmax=9194304
kern.sysv.shmmax: -1

No luck. It set it back to -1.

Then I tried setting it to a "low" number:

[lester2:~] root# sysctl -w kern.sysv.shmmax=3194304
kern.sysv.shmmax: -1

Still no action.

Then I tried setting it to 4194304 (the default in /etc/rc):

[lester2:~] root# sysctl -w kern.sysv.shmmax=4194304
kern.sysv.shmmax: -1 -> 4194304

It took this time! BUT... I need to increase the number because my
postgres error log is telling me that it needs to be at least 4620288:

DETAIL: Failed system call was shmget(key=5432001, size=4620288,
03600).
HINT: This error usually means that PostgreSQL's request for a shared
memory segment exceeded your kernel's SHMMAX parameter. You can either
reduce the request size or reconfigure the kernel with larger SHMMAX.
To reduce the request size (currently 4620288 bytes), reduce
PostgreSQL's shared_buffers parameter (currently 300) and/or its
max_connections parameter (currently 100).

Any ideas? Again, I am running Mac OS 10.3.2 and Postgres 7.4.1 on a
dual processor G5. Thanks.

Nov 22 '05 #6
Joe Lester <jo********@sweetwater.com> writes:
> I tried a few different things to try to get the shmmax value to be
> something other than 4194304 (the default in /etc/rc). First, I
> restarted my mac, then, as the root user...

You can't change shmmax on-the-fly in OS X --- that's why it's set up in
/etc/rc before the system is fully operational. AFAIK the *only* way to
change these parameters is to edit /etc/rc and reboot.

regards, tom lane


Nov 22 '05 #7
So the odd thing is, I came in this morning, reset my settings in
/etc/rc and it took this time! I am a little baffled since I tried
these same settings yesterday with no success. It's probably just me
being goofy. Here are the settings that took:

[lester2:~] joe% sysctl -a | grep sysv
kern.sysv.shmmax: 167772160
kern.sysv.shmmin: 1
kern.sysv.shmmni: 32
kern.sysv.shmseg: 8
kern.sysv.shmall: 65536

With shmmax set to 167772160 I am able to launch the server with 240
connections (set in postgresql.conf). That's a good thing. What worries
me though is that my postgres log keeps printing out messages, one
after the other, like:

2004-02-10 08:46:01 LOG: out of file descriptors: Too many open files;
release and retry
2004-02-10 08:46:01 LOG: out of file descriptors: Too many open files;
release and retry
2004-02-10 08:46:02 LOG: out of file descriptors: Too many open files;
release and retry
[... the same message repeats many times per second ...]

Even though I'm getting these messages in my log, all the queries I
send to the server seem to be working. I'm using libpq to access the
server from a Cocoa client. My program seems to be working just fine in
spite of the "out of file descriptors" warnings I'm getting in the
postgres log. It makes me kind of hesitant to try it out with 240
users. What should I do to get rid of these log messages? I'm running
Postgres 7.4.1 on a dual G5 running Mac OS 10.3.2. Thanks.

On Feb 9, 2004, at 8:12 PM, Tom Lane wrote:
> Joe Lester <jo********@sweetwater.com> writes:
>> I tried a few different things to try to get the shmmax value to be
>> something other than 4194304 (the default in /etc/rc).
>>
>> First, I restarted my mac, then, as the root user...
>
> You can't change shmmax on-the-fly in OS X --- that's why it's set up
> in /etc/rc before the system is fully operational. AFAIK the *only*
> way to change these parameters is to edit /etc/rc and reboot.
>
> regards, tom lane


Nov 22 '05 #8
Joe Lester <jo********@sweetwater.com> writes:
> [ lots of ]
> 2004-02-10 08:46:01 LOG: out of file descriptors: Too many open files;
> release and retry

Sounds like you need to reduce max_files_per_process. Also look at
increasing the kernel's limit on number of open files (I remember seeing
it in sysctl's output yesterday, but I forget what it's called).

> Even though I'm getting these messages in my log, all the queries I
> send to the server seem to be working.

The Postgres server itself will generally survive this condition
(because it usually has other open files it can close). However,
everything else on the system is likely to start falling over :-(.
You don't want to run with the kernel file table completely full.

I'd suggest setting max_files_per_process to something like 50 to 100,
and making sure that the kernel's limit is max_files_per_process *
max_connections plus plenty of slop for the rest of the system.

regards, tom lane
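The budget rule above can be sketched as simple arithmetic (the slop value here is an illustrative assumption, not a recommendation):

```shell
#!/bin/sh
# File-descriptor budget per Tom's rule of thumb: the kernel limit
# should cover max_files_per_process * max_connections, plus headroom
# for everything else on the box. The slop value is an assumption.
max_files_per_process=100
max_connections=300
system_slop=2000

needed=$((max_files_per_process * max_connections + system_slop))
echo "kern.maxfiles should be at least: $needed"   # 32000
```

With 300 connections each allowed 100 files, anything much below ~32000 for kern.maxfiles risks filling the kernel file table.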


Nov 22 '05 #9
Joe,

I've run into this on my iBook too. The default number of open files is
set very low. On my system (10.3.2), it's 256 for the postgres user.
You can raise it to something higher, like 2048, with the ulimit
command. I have "ulimit -n unlimited" in my .bash_profile:

ibook:~ root# su - postgres
ibook:~ postgres$ ulimit -a
open files (-n) 256
ibook:~ postgres$ ulimit -n unlimited
ibook:~ postgres$ ulimit -a
open files (-n) 10240
ibook:~ postgres$

regards,
regards,

On Feb 10, 2004, at 8:04 AM, Tom Lane wrote:
> Sounds like you need to reduce max_files_per_process. Also look at
> increasing the kernel's limit on number of open files (I remember
> seeing it in sysctl's output yesterday, but I forget what it's called).
> [...]
> I'd suggest setting max_files_per_process to something like 50 to 100,
> and making sure that the kernel's limit is max_files_per_process *
> max_connections plus plenty of slop for the rest of the system.


Nov 22 '05 #10
Brian Hirt <bh***@mobygames.com> writes:
> I've run into this on my ibook too. The default number of files is
> set very low by default. On my system 10.3.2, it's 256 for the
> postgres user. You can raise it to something higher like 2048 with
> the ulimit command. i have ulimit -n unlimited in my .bash_profile

Hmm, I hadn't even thought about ulimit. I thought those settings were
per-process, not per-user. If they are per-user they could be
problematic.

regards, tom lane


Nov 22 '05 #11
On Feb 10, 2004, at 10:57 AM, Tom Lane wrote:
> Hmm, I hadn't even thought about ulimit. I thought those settings were
> per-process, not per-user. If they are per-user they could be
> problematic.

Not sure if it's per user or per process. After I did "ulimit -n
unlimited" the problem Joe describes went away for me. I also lowered
max_files_per_process to 1000. My database has a large number of files
in it, a few thousand, so I assumed one of the back-end processes was
going over the limit.

--brian

Nov 22 '05 #12
Joe Lester <jo********@sweetwater.com> writes:
> Would this be kern.maxfiles?

Sounds like what you want. There's probably no need to reduce
maxfilesperproc (and thereby constrain every process, not only PG
backends). You can set PG's max_files_per_process instead.

> Is it OK to set these before starting the server? Or should I set them
> in /etc/rc?

Damifino. Try a manual sysctl -w and see if it takes ... if not, you
probably have to set it in /etc/rc.

regards, tom lane


Nov 22 '05 #13
Brian Hirt <bh***@mobygames.com> writes:
> ... after i did ulimit -n
> unlimited the problem joe describes went away for me.

Hmm. Postgres assumes it can use the smaller of max_files_per_process
and sysconf(_SC_OPEN_MAX). From what you describe, I suspect that OS X's
sysconf call ignores the "ulimit -n" restriction and thus encourages us
to think we can use more than we really can. If that's the correct
explanation then the LOG messages are just a cosmetic problem (as long
as kern.maxfiles comfortably exceeds max_connections times ulimit -n).

I wonder whether we should also probe getrlimit(RLIMIT_NOFILE)? Anyone
have an idea whether that returns different limits than sysconf()?

regards, tom lane
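The two limits in question can be inspected from the shell: getconf OPEN_MAX reports the value sysconf(_SC_OPEN_MAX) would return, while "ulimit -n" reports the shell's RLIMIT_NOFILE soft limit. A quick probe (exact values are system-dependent):

```shell
#!/bin/sh
# Compare the sysconf-visible limit with the shell's rlimit.
# If the two disagree, that would explain the cosmetic LOG messages:
# PG thinks it can open more files than the rlimit actually allows.
sysconf_limit=$(getconf OPEN_MAX)
rlimit_soft=$(ulimit -n)

echo "sysconf(_SC_OPEN_MAX): $sysconf_limit"
echo "ulimit -n (RLIMIT_NOFILE): $rlimit_soft"
```

On a system where the hypothesis above holds, the first number would stay fixed while "ulimit -n" moves with the per-user setting.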


Nov 22 '05 #14
Would this be kern.maxfiles? There's also one called
kern.maxfilesperproc.

Is it OK to set these before starting the server? Or should I set them
in /etc/rc?

On Feb 10, 2004, at 10:04 AM, Tom Lane wrote:
> Also look at increasing the kernel's limit on number of open files
> (I remember seeing it in sysctl's output yesterday, but I forget
> what it's called).



Nov 22 '05 #15
On 2/10/04 12:28 PM, Tom Lane wrote:
> Joe Lester <jo********@sweetwater.com> writes:
>> Would this be kern.maxfiles?
>
> Sounds like what you want. There's probably no need to reduce
> maxfilesperproc (and thereby constrain every process, not only PG
> backends). You can set PG's max_files_per_process instead.
>
>> Is it OK to set these before starting the server? Or should I set them
>> in /etc/rc?
>
> Damifino. Try a manual sysctl -w and see if it takes ... if not, you
> probably have to set it in /etc/rc.

You can set some of this stuff in /etc/sysctl.conf, which is sourced from
/etc/rc if it exists. The file format looks like this:

kern.maxproc=1000
kern.maxprocperuid=512
...

It just passes that stuff to sysctl -w (after skipping comments, etc.).
See the /etc/rc file for details.

Unfortunately, I've found that putting the shared memory settings in
/etc/sysctl.conf does not work. Instead, I put them in /etc/rc directly.
They're actually already there, just above the part that sources
/etc/sysctl.conf. (Look for the phrase "System tuning" in a comment.) I
commented those out and put my own settings in their place. That seems to
work for me.

-John
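Putting the thread's advice together, a hypothetical replacement for the stock "System tuning" block in /etc/rc might look like this. The numbers are illustrative only; note that shmall is chosen so that shmmax equals shmall * pagesize (167772160 / 4096 = 40960), keeping the two limits consistent per Tom's earlier explanation:

```shell
# Hypothetical /etc/rc override (comment out the stock lines first).
# shmmax is in bytes; shmall is in 4KB pages; shmmax == shmall * 4096.
sysctl -w kern.sysv.shmmax=167772160
sysctl -w kern.sysv.shmmin=1
sysctl -w kern.sysv.shmmni=32
sysctl -w kern.sysv.shmseg=8
sysctl -w kern.sysv.shmall=40960
```

This is just a configuration sketch; adjust the sizes to your RAM and to the request size PostgreSQL prints in its startup HINT.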


Nov 22 '05 #16
