Bytes | Developer Community
Daemon Server, Forking, Defunct Processes

I am writing a server daemon that forks a child for every incoming
request. The child process terminates when the client closes the
connection. My problem is that I am unsure how to prevent the child
process from becoming defunct. Here is an over-simplified main
function...

#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {

    // Daemonize
    pid_t pid;
    if ((pid = fork()) != 0) {
        exit(0);
    }
    setsid();
    if ((pid = fork()) != 0) {
        exit(0);
    }

    // Assume this gets the socket descriptor, binds
    // the socket and listens on the given port.
    int server_socket = create_server_socket(1234);

    int client_socket;

    while (1) {
        // Assume this accepts a request to the server socket
        client_socket = accept_socket_request(server_socket);

        if (!fork()) {
            // Child

            // Assume this initiates a text interaction
            // with the client, looping until the client
            // enters 'exit'. Then it returns.
            session();

            // Close the client connection
            shutdown(client_socket, SHUT_RDWR);
            close(client_socket);

            // The child exits
            exit(0);

        } else {

            // Parent: release our copy of the client socket,
            // then collect any child that has already exited
            close(client_socket);
            waitpid(-1, NULL, WNOHANG);
        }
    }

    // Close the server socket
    shutdown(server_socket, SHUT_RDWR);
    close(server_socket);
    return 0;
}

What is happening is that when a client disconnects and its child
exits, the parent doesn't find out until the next connection is made.
I think this is because of my placement of waitpid(). The result is
that a defunct child sits around until a new connection arrives and
the next pass through the loop reaps it, so there is always at least
one defunct process lying around. How can I remedy this?

Thanks,
Scott
Sep 11 '08 #1
>I am writing a server daemon that forks a child for every incoming
>request. The child process terminates when the client closes the
>connection. My problem is that I am unsure how to prevent the child
>process from becoming defunct.
This problem is more appropriate for comp.unix.programmer.

A child process *ALWAYS* becomes defunct, at least briefly.
>Here is an over-simplified main
>function...

>int main(void) {
>
>// Daemonize
>int pid;
>if ((pid = fork()) != 0) {
>exit(0);
>}
>setsid();
>if ((pid = fork()) != 0) {
>exit(0);
>}
>
>// Assume this gets the socket descriptor, binds
>// the socket and listens on given port.
>int server_socket = create_server_socket(1234);
>
>int client_socket;
>
>while (1) {
>// Assume this accepts a request to the server socket
>client_socket = accept_socket_request(server_socket);
>
>if (!fork()) {
>// Child
>
>// Assume this initiates a text interaction
>// with the client, looping until the client
>// enters 'exit'. Then it returns.
>session();
>
>// Close the client connection
>shutdown(client_socket, 2);
>
>// The child exits
>exit(0);
>
>} else {
>
>// Parent
>waitpid(-1, NULL, WNOHANG);
You might want to loop here until waitpid returns something other than
a process ID, to clean up multiple children at one time.
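That loop might look like the following (a sketch only; the helper name reap_children() is mine, and it discards exit statuses, matching the NULL status argument above):

```c
#include <errno.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Collect every child that has already exited, without blocking.
 * With WNOHANG, waitpid() returns a pid for each reaped child, 0 when
 * children remain but none has exited yet, and -1 (ECHILD) when there
 * are no children left at all. */
static void reap_children(void) {
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}
```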
> }
>}
>
>// Close the server socket
>shutdown(server_socket, 2);
>return 0;
>}

>What is happening is that if a client terminates a child process the
>parent isn't aware until a new connection/process is made. I think
>this is because of my placement of waitpid(). The result is that
>there is a defunct child sitting around until a new connection is
>made, then the defunct process is cleaned up.
>There is always at
>least 1 defunct process lying around.
Please explain why this is a problem. (Ever-increasing numbers of
defunct children are a problem, but that's not the issue here).
>How can I remedy this?
If you do not care about getting the status of a terminated child,
nor do you want to keep track of which children have terminated,
something like:

signal(SIGCHLD, SIG_IGN);
OR
signal(SIGCLD, SIG_IGN);

in the parent may be appropriate (chances are your system has only
one of these signals). Beware that the ignored disposition is
inherited across fork(): a child that forks children of its own and
doesn't expect SIGCHLD to be ignored can malfunction, so set the
signal handling back to SIG_DFL after the fork() in the child
process.
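A minimal self-contained sketch of this approach (the function name demo_autoreap() is mine, not part of the original code; it forks one throwaway child the way the server forks per connection):

```c
#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork one child with SIGCHLD ignored and report whether the kernel
 * discarded it automatically. Returns 0 on success, -1 otherwise. */
static int demo_autoreap(void) {
    signal(SIGCHLD, SIG_IGN);      /* we never intend to wait for children */
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* Child: undo the inherited disposition before doing anything
         * that might fork, as advised above. */
        signal(SIGCHLD, SIG_DFL);
        _exit(0);
    }
    sleep(1);                      /* give the child time to terminate */
    /* Nothing to collect: the child was reaped automatically, so
     * waitpid() fails with ECHILD instead of finding a zombie. */
    return (waitpid(pid, NULL, WNOHANG) == -1 && errno == ECHILD) ? 0 : -1;
}
```

With this in place no defunct entry lingers between connections, since the kernel discards each child as soon as it exits.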

Sep 11 '08 #2
On 11 Sep 2008 at 4:40, Gordon Burditt wrote:
>>There is always at least 1 defunct process lying around. How can I
>>remedy this?
>
>If you do not care about getting the status of a terminated child,
>nor do you want to keep track of which children have terminated,
>something like:
>
>signal(SIGCHLD, SIG_IGN);
>OR
>signal(SIGCLD, SIG_IGN);
>
>in the parent may be appropriate (chances are your system has only
>one of these signals).
It may be worth pointing out that pre-2001 versions of POSIX allowed
terminated child processes to become zombies even if the disposition of
SIGCHLD is set to SIG_IGN - I seem to recall that Linux up to 2.4.* had
this behavior.

Sep 11 '08 #3
Scottman wrote:
>I am writing a server daemon that forks a child for every incoming
>request. The child process terminates when the client closes the
>connection. My problem is that I am unsure how to prevent the
>child process from becoming defunct. Here is an over-simplified
>main function...
The C language doesn't contain forks, child processes, etc. That
makes this off-topic on c.l.c. Try comp.unix.programmer.

--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
Try the download section.
Sep 11 '08 #4

This discussion thread is closed. Replies have been disabled for this discussion.