
Ulimit and Kernel Param Settings

Hello All,

Brief info on the system:

DB2 version : v8.1.0.121, FixPak 13
OS : RHEL 3 2.4.21-37.0.1
Memory : 8 GB
Swap : 2 GB

The issue is that the instance cannot be started after a reboot of the
server. It fails with the following error message:

SQL1220N The database manager shared memory set cannot be allocated.

Explanation:

The database manager could not allocate its shared memory set. The
cause of this error may be insufficient memory resources either for the
database manager or the environment in which its
operation is being attempted. Memory resources that can cause this
error include:

o The number of shared memory identifiers allocated in the system

o The size of the shared memory segment

o The amount of paging or swapping space available in the system

o The amount of physical memory available in the system

User Response:

One or more of the following:

o Validate that sufficient memory resources are available to satisfy
the database manager's requirements, and those of the other programs
running on the system.

o On Linux 32-bit, increase the kernel parameter shmmax to 256 MB. On
Linux 64-bit, increase the kernel parameter shmmax to 1GB.

o Reduce the database manager's memory requirement for this memory set by
reducing the database manager configuration parameters which affect it.
These are: maxagents, maxdari, and numdb.

o Where appropriate, stop other programs using the system.
------------------------------------------------------------------------------------------------------------------------
So I started by reducing maxagents, which worked! But the strange part
was that maxagents was initially set to 400, and I had to reduce it to
300 to bring the instance up. Once the instance was up, I updated
maxagents back to 400, and a subsequent db2stop and db2start worked fine.

Also, I noticed in the diag log that DB2 had made the following changes:

kernel.semmni to 1024
kernel.msgmni to 1024
kernel.shmmax to 268435456 (256MB)

So my understanding of the issue is that it is related to the kernel
parameters and not to maxagents. I am just wondering whether these
parameters have to be set manually in /etc/sysctl.conf, and if so,
whether these would be ideal values considering the amount of memory and
swap space available. Any thoughts on this will be greatly appreciated.
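For reference, persisting those values would look like the fragment below. This is only a sketch using the exact numbers from the diag log above (268435456 bytes = 256 MB); whether they are ideal for 8 GB of RAM is precisely the open question.

```
# /etc/sysctl.conf -- values taken from the DB2 diag log above.
# Load without a reboot via: sysctl -p
kernel.semmni = 1024
kernel.msgmni = 1024
kernel.shmmax = 268435456
```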

Thanks a lot !

Nov 27 '06 #1
8 Replies


aj
I used to have issues like this with Informix, of all things. Memory
segments would not clear out, even after the DB was offline. Perhaps
this is the root cause?

Have a look at the ipcs and ipcrm commands.
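For example, to inspect and clean up leftover System V shared memory segments (the segment id in the comment is hypothetical; take a real one from the ipcs output, and run as the segment owner or root):

```shell
# List current shared memory segments; stale DB2 segments show up
# here even after db2stop if they were never released.
ipcs -m

# Remove a leftover segment by its shmid from the listing above:
# ipcrm -m 123456
```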

HTH

cheers
aj

Nov 27 '06 #2

I had done an ipclean before bringing up the instance; it did not help.

However, here is the ipcs -l output for the system before and after the
instance came up, respectively:

Before the instance came up :

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 32768
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

------ Messages: Limits --------
max queues system wide = 16
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384

After the instance came up :

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 262144
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 1024
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

------ Messages: Limits --------
max queues system wide = 1024
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384

I was just curious whether manually keeping these modified entries in
/etc/sysctl.conf will fix the initial glitch in bringing up the
instance, or whether I should be increasing these maximum values still
further.

Thanks a lot !
Nov 27 '06 #3

See the "DB2 for Linux Howto" here:

http://tldp.org/HOWTO/DB2-HOWTO/prerequisites.html
--
Haider
Nov 28 '06 #4

That was a neat piece of information. Thanks a lot, Haider!

Nov 28 '06 #5


Hello,

It sounds like you probably resolved your problem, but here's some
extra background in case you're curious.

You were hitting a problem with the kernel.shmmax setting. The reason
the problem "goes away" after bringing the database up successfully once
is that DB2 modifies the current shmmax setting internally, but only
after the database manager shared memory segment is allocated. Once
you reboot, though, shmmax reverts to its original setting, and the
first time you bring the instance back up you'll hit the shmmax issue
again. Permanently updating kernel.shmmax to 256 MB (1 GB on 64-bit
Linux) will ensure you don't hit this same problem again (the other
kernel parameters listed are good recommendations as well).
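The byte values and the live setting are easy to double-check (the /proc path is standard Linux, not DB2-specific):

```shell
# 256 MB and 1 GB expressed in bytes; the first matches the
# 268435456 value DB2 wrote to the diag log.
echo $((256 * 1024 * 1024))     # 268435456
echo $((1024 * 1024 * 1024))    # 1073741824

# Current live shmmax; this reverts at reboot unless it is also
# set in /etc/sysctl.conf.
cat /proc/sys/kernel/shmmax
```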

This kernel.shmmax issue has been fixed in DB2 9.

Cheers,
Liam.

Nov 28 '06 #6

Hi Liam,

I have somehow figured out that the issue is with the shmmax kernel
parameter, and that it will be good to keep it at 256 MB in sysctl.
However, I would like to know whether 256 MB will be good enough, or
whether I should set it slightly higher.

cheers !


Nov 28 '06 #7

P: n/a
Hi Liam,

I have some how figured out that its the issue is with the shmax kernel
parameter. And it will be good to keep it at 256MB in the sysctl.
However, I would like to know, whether 256MB will be good enough or
should i be setting it slightly higher.

cheers !


Liam Finnie wrote:
db2_d_b_a wrote:
I had done a ipclean before bringing up the instance. it did not help.

however, this is the ipcs -l output for the system before and after the
instance came up respectively:

Before the instance came up :

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 32768
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

------ Messages: Limits --------
max queues system wide = 16
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384

After the instance came up :

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 262144
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 1024
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

------ Messages: Limits --------
max queues system wide = 1024
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384

Was just curious to know, if manually keeping these modified entries in
/etc/sysctl.conf will help the initial glitch in bringing up the
instance. Or should I be increasing these max values still further
more.

Thanks a lot !
aj wrote:
I used to have issues like this w/ Informix of all things. Mem
segments would not clear out, even after the DB was offline. Perhaps
this is the root cause?
>
Have a look at the ipcs and ipcrm command.
>
HTH
>
cheers
aj
>
db2_d_b_a wrote:
[original SQL1220N post snipped; quoted in full above]
So I started by reducing maxagents, which worked! But the strange part
was that maxagents was initially set to 400, and I had to reduce it to
300 to bring the instance up. Once the instance was up, I updated
maxagents back to 400, and a subsequent db2stop and db2start worked
fine.

Also, I noticed in the diag log the following changes had been made by
db2:

kernel.semmni to 1024
kernel.msgmni to 1024
kernel.shmmax to 268435456 (256MB)

So my understanding is that the issue is related to the kernel
parameters and not to the number of maxagents. I am just wondering
whether the parameters have to be set manually in /etc/sysctl.conf
and, if so, whether these would be ideal values given the amount of
memory and swap space available. Any thoughts on this will be greatly
appreciated.

Thanks a lot !

Hello,

It sounds like you probably resolved your problem, but here's some
extra background in case you're curious.

You were hitting a problem with the kernel.shmmax setting. The reason
the problem "goes away" after bringing the database up successfully once
is that DB2 modifies the current shmmax setting internally, but only
after the database manager shared memory segment is allocated. Once
you reboot, though, you'll see that shmmax reverts to its original
setting, and the first time you bring the instance back up you'll hit
the shmmax issue again. Permanently updating kernel.shmmax to 256MB
(1GB on 64-bit Linux) will ensure you don't hit this same problem
again (the other kernel parameters listed are good recommendations as
well).

This kernel.shmmax issue has been fixed in DB2 9.

Cheers,
Liam.
Nov 28 '06 #8
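The reboot behaviour described above can be checked and addressed from a root shell. This is a sketch using standard sysctl usage of that era, with the 256 MB value from this thread:

```shell
# Inspect the live limit (in bytes) -- after a reboot this falls back to
# the kernel default unless /etc/sysctl.conf overrides it:
cat /proc/sys/kernel/shmmax

# Raise it for the running kernel only (lost on the next reboot):
sysctl -w kernel.shmmax=268435456

# Persist it across reboots, then reload so the running kernel
# matches the file:
echo "kernel.shmmax = 268435456" >> /etc/sysctl.conf
sysctl -p
```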


db2_d_b_a wrote:
Hi Liam,

I have somehow figured out that the issue is with the shmmax kernel
parameter, and it will be good to keep it at 256MB in sysctl.
However, I would like to know whether 256MB will be good enough or
whether I should set it slightly higher.

cheers !


Liam Finnie wrote:
[previous post snipped; quoted in full above]
Hello,

Having it set to 256 MB is good enough to prevent failures.
Internally, if DB2 needs a piece of shared memory larger than 256 MB,
it will convert that into multiple 256MB segments. If you set
kernel.shmmax larger than the biggest piece of shared memory you will
need (likely your database shared memory segment), then we will try to
allocate the entire piece of shared memory using a single segment. So,
there could be a very slight performance/resource benefit to setting
shmmax higher.

Cheers,
Liam.

Nov 29 '06 #9
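The segment-splitting behaviour described above can be sketched numerically. The helper below is a hypothetical model, not DB2's actual allocator: it just counts how many shared memory segments a request costs when each segment is capped at shmmax.

```python
import math

MB = 1024 * 1024

def segments_needed(request_bytes, shmmax_bytes):
    """Hypothetical model of the splitting described above: a request
    larger than shmmax is satisfied with multiple segments, each no
    bigger than shmmax bytes."""
    return math.ceil(request_bytes / shmmax_bytes)

# A 900 MB database shared memory set against the 256 MB shmmax:
print(segments_needed(900 * MB, 256 * MB))    # 4 segments
# With shmmax raised to 1 GB, the same request fits in one segment:
print(segments_needed(900 * MB, 1024 * MB))   # 1 segment
```

This is the "very slight performance/resource benefit" trade-off: a larger shmmax means fewer, larger segments for the same total allocation.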
