
How to tell if a forked process is done?

Howdy,

I want to know how to tell if a forked process is done.

Actually, my real question is that I want to run a shell script inside
of a python script, and after the shell script has finished running, I
want to do more stuff *conditional* on the fact that the shell script
has finished running, inside the same python script.

The only way I can think of is to fork a process and then call the
shell script, as in:
pid = os.fork()
if pid == 0:
    os.execl("./shellscript_name.sh", "shellscript_name.sh")
but how can I know if the shell script is finished?

In sum, my two questions are:
1. How can I know if a forked shell script is finished?
2. How can I run a shell script inside a python script without
forking a new process, so that I can know the shell script is done
from within the same python script?

thanks in advance,

John
Jul 18 '05 #1


You can just do os.system("some command"), it will block until the shell
command is done, and return the exit code. If you need more control,
perhaps look into the popen2 module and the Popen3/4 classes inside it.
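
For example, a minimal, untested sketch ("./myscript.sh" is just a
placeholder for your own script):

    import os

    # os.system() blocks until the command finishes and returns the
    # raw wait()-style status on Unix.
    status = os.system("./myscript.sh")
    if os.WIFEXITED(status):
        print "script exited with code", os.WEXITSTATUS(status)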

--
Nick Welch aka mackstann | mack @ incise.org | http://incise.org
Help stamp out and abolish redundancy.

Jul 18 '05 #2

In article <87**************************@posting.google.com >,
jo****@ugcs.caltech.edu (John Lin) wrote:
I want to know how to tell if a forked process is done.

Actually, my real question is that I want to run a shell script inside
of a python script, and after the shell script has finished running, I
want to do more stuff *condition* on the fact that the shell script
has finished running, inside the same python script.

The only way I can think of is to fork a process and then call the
shell script, as in:
pid = os.fork()
if pid == 0:
    os.execl("./shellscript_name.sh", "shellscript_name.sh")
but how can I know if the shell script is finished?

In sum, my two questions are:
1. How can I know if a forked shell script is finished?
2. How can I run a shell script inside a python script without
forking a new process, so that I can know the shell script is done
from within the same python script?


The simplest way to do what you appear to want is
os.system("shellscript_path")

If any of the command line is actually going to come
from input data, or you have some other reason to prefer
an argument list like exec, see os.spawnv and similar,
with os.P_WAIT as first parameter. Or if your next
question is going to be how to read from a pipe between
the two processes, see os.popen() for starters.

All of these functions will fork a process, but they use
waitpid or some similar function to suspend the calling
process until the fork exits. See man 2 waitpid, which
is available from python as posix.waitpid() or os.waitpid().
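
A rough sketch of the two variants mentioned above (the script name is
only a placeholder):

    import os

    # Argument-list style: with P_WAIT, spawnv() blocks and returns the
    # child's exit code directly.
    rc = os.spawnv(os.P_WAIT, "/bin/sh", ["sh", "myscript.sh"])
    print "spawnv returned", rc

    # Pipe style: read the script's output; close() waits for the child
    # and returns None on success, or the exit status otherwise.
    pipe = os.popen("./myscript.sh")
    output = pipe.read()
    status = pipe.close()
    if status is None:
        print "script succeeded; output was:", output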

Donn Cave, do**@u.washington.edu
Jul 18 '05 #3

At some point, jo****@ugcs.caltech.edu (John Lin) wrote:
Howdy,

I want to know how to tell if a forked process is done.

Actually, my real question is that I want to run a shell script inside
of a python script, and after the shell script has finished running, I
want to do more stuff *condition* on the fact that the shell script
has finished running, inside the same python script.

The only way I can think of is to fork a process and then call the
shell script, as in:
pid = os.fork()
if pid == 0:
    os.execl("./shellscript_name.sh", "shellscript_name.sh")
but how can I know if the shell script is finished?


Look up os.wait and os.waitpid in the Python Library Reference. Or,
for this case, use os.system().
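
To stay with the fork()-based approach from the original post, the
pattern looks roughly like this (untested sketch, script name is a
placeholder):

    import os

    pid = os.fork()
    if pid == 0:
        # Child: replace ourselves with the shell script.
        try:
            os.execl("./shellscript_name.sh", "shellscript_name.sh")
        finally:
            os._exit(127)      # reached only if execl() itself failed
    # Parent: block until that particular child terminates.
    waited_pid, status = os.waitpid(pid, 0)
    if os.WIFEXITED(status):
        print "script exited with code", os.WEXITSTATUS(status)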

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke
|cookedm(at)physics(dot)mcmaster(dot)ca
Jul 18 '05 #4

On 23 Sep 2003 16:47:40 -0700, in article
<87**************************@posting.google.com >, John Lin wrote:
The only way I can think of is to fork a process and then call the
shell script, as in:
pid = os.fork()
[...]
In sum, my two questions are:
1. How can I know if a forked shell script is finished?
Under Unix, there's a wait() system call to go along with fork(). I'll bet
Python has something similar.
2. How can I run a shell script inside a python script without
forking a new process, so that I can know the shell script is done
from within the same python script?


Something like system()?
Jul 18 '05 #5

[John Lin]
I want to know how to tell if a forked process is done.


A few people suggested `os.system()' already, and I presume this is what
you want and need.

In the less usual case where you want concurrency between Python and the
forked shell command, and only want to check later whether the forked
process is done, the usual way is to send signal zero to the child using
`os.kill()'. Signal zero does no damage if your forked process is still
running, but if the process no longer exists, the parent gets an
exception from the `os.kill()', which you may intercept. That tells you
whether the child is still running or finished.
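
As a rough illustration of that trick (assuming `pid' came from an
earlier os.fork()):

    import os, errno

    def child_is_alive(pid):
        # Signal 0 sends nothing; it only performs the error checking.
        try:
            os.kill(pid, 0)
        except OSError, e:
            if e.errno == errno.ESRCH:     # no such process
                return False
            raise
        return True

(Note that a child which has exited but has not yet been reaped with
wait() still passes this check; see the later replies in this thread.)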

--
François Pinard http://www.iro.umontreal.ca/~pinard

Jul 18 '05 #6

François Pinard wrote:
In the less usual case you want concurrency between Python and the
forked shell command, for only later checking if the forked process
is done, the usual way is to send a zero signal to the child using
`os.kill()'. The zero signal would not do any damage in case your
forked process is still running. But if the process does not exist,
the parent will get an exception for the `os.kill()', which you may
intercept. So you know if the child is running or finished.
This will yield a false positive, and potential damage, if the OS has
spawned another process with the same pid as the task you wanted to
supervise, running under your uid.
// Klaus

--<> unselfish actions pay back better

Jul 18 '05 #7


os.waitpid() can tell whether a child has exited, and return its status
if so. It can either enter a blocking wait, or it can return
immediately.

>>> pid = os.spawnv(os.P_NOWAIT, "/bin/sleep", ["sleep", "30"])

With WNOHANG, it returns immediately. The returned pid is 0 to show
that the process has not exited yet.

>>> os.waitpid(pid, os.WNOHANG)
(0, 0)

Wait for the process to return. The second number is related to the
exit status and should be managed with os.WEXITSTATUS() etc.

>>> os.waitpid(pid, 0)
(29202, 0)

Waiting again produces a traceback (no danger from another process
created with the same pid):

>>> os.waitpid(pid, 0)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OSError: [Errno 10] No child processes

If you want to use an argument list instead of an argument string, but
always want to wait for the program to complete, use os.spawn* with
P_WAIT.

Jeff

Jul 18 '05 #8


[Klaus Alexander Seistrup]
François Pinard wrote:
In the less usual case you want concurrency between Python and the
forked shell command, for only later checking if the forked process
is done, the usual way is to send a zero signal to the child using
`os.kill()'. The zero signal would not do any damage in case your
forked process is still running. But if the process does not exist,
the parent will get an exception for the `os.kill()', which you may
intercept. So you know if the child is running or finished.


This will yield a false positive and potential damage if the OS has
spawned another process with the same pid, and running under your uid,
as the task you wanted to supervise.


Granted in theory, yet this does not seem to be considered a real
problem in practice. To generate another process with the same pid, the
system would need to generate so many intermediate processes that the
process counter would overflow and come back to its current value. The
`kill(pid, 0)' trick is still the way people seem to do it.

Do you know anything reasonably simple, safer, and that does the job?

--
François Pinard http://www.iro.umontreal.ca/~pinard

Jul 18 '05 #10


On Wed, 24 Sep 2003 06:27:41 +0000 (UTC), rumours say that Klaus
Alexander Seistrup <sp**@magnetic-ink.dk> might have written:
This will yield a false positive and potential damage if the OS has
spawned another process with the same pid, and running under your uid,
as the task you wanted to supervise.


This *could* happen (I have seen 16-bit-pid systems with a couple of
stupid processes spawning children too rapidly), but if you are not root
and you are sure that your own processes do not breed like rabbits, then
you can be almost sure that os.kill(pid, 0) will throw an EPERM if your
child has died.
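
In other words, either errno then means the child itself is gone; a
small sketch of that check:

    import os, errno

    def child_gone(pid):
        try:
            os.kill(pid, 0)
        except OSError, e:
            # ESRCH: no process with that pid at all.
            # EPERM: the pid exists but belongs to someone else, i.e.
            # our child died and the pid has been reused.
            return e.errno in (errno.ESRCH, errno.EPERM)
        return False
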
--
TZOTZIOY, I speak England very best,
Microsoft Security Alert: the Matrix began as open source.
Jul 18 '05 #12

On Wed, 24 Sep 2003 09:52:45 -0400, rumours say that François Pinard
<pi****@iro.umontreal.ca> might have written:
This will yield a false positive and potential damage if the OS has
spawned another process with the same pid, and running under your uid,
as the task you wanted to supervise.


Granted in theory, yet this does not seem to be considered a real
problem in practice. To generate another process with the same pid, the
system would need to generate so many intermediate processes that the
process counter would overflow and come back to its current value. The
`kill(pid, 0)' trick is still the way people seem to do it.

Do you know anything reasonably simple, safer, and that does the job?


Not that simple: a semaphore.
Not that safe: the existence of a semaphore file.

kill(pid,0) is ok; but a pipe and select would be useful too, safe and
relatively simple (for old Unix programmers at least :).
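
A sketch of the pipe idea (names invented here): the parent keeps the
read end; when the child exits, its copy of the write end is closed
and select()/read() see end-of-file.

    import os, select

    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                 # child
        os.close(r)              # keep only the write end
        # ... do the real work here, e.g. exec the shell script ...
        os._exit(0)              # exiting closes w and wakes the parent
    os.close(w)                  # parent keeps only the read end
    # select() reports r readable once the child has exited (EOF),
    # or times out after 5 seconds if it is still running.
    readable, _, _ = select.select([r], [], [], 5.0)
    if readable and os.read(r, 1) == "":
        print "child is done"
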
--
TZOTZIOY, I speak England very best,
Microsoft Security Alert: the Matrix began as open source.
Jul 18 '05 #13

François Pinard <pi****@iro.umontreal.ca> wrote:
[Klaus Alexander Seistrup]
This will yield a false positive and potential damage if the OS has
spawned another process with the same pid, and running under your uid,
as the task you wanted to supervise.

[François Pinard]
Granted in theory, yet this does not seem to be considered a real
problem in practice. To generate another process with the same pid, the
system would need to generate so many intermediate processes that the
process counter would overflow and come back to its current value.

There is at least one Unix that reuses process ids immediately when
they are freed. A vague memory says that it is AIX that does
this, but I'm not sure; it could be some of the BSD dialects too.

And most other Unices start to reuse process ids *long* before
they reach 2^31.

However, in this very case, *that* isn't a problem. The process
id won't be free to reuse until the parent has called wait(2) to
reap its child. On the other hand, that means that kill(pid, 0)
won't signal an error even after the child has died; the zombie
is still there...

>>> import os, time
>>> def f():
...     child = os.fork()
...     if child == 0:
...         time.sleep(10)
...         print "Exit:", time.ctime()
...         os._exit(0)
...     else:
...         return child
...
>>> p = f(); print time.ctime()
Wed Sep 24 21:52:38 2003
>>> os.kill(p, 0)
Exit: Wed Sep 24 21:53:38 2003
>>> os.kill(p, 0)    # Note no error
>>> os.kill(p, 0)    # Still no error
>>> os.wait()
(30242, 0)
>>> os.kill(p, 0)    # *Now* we get an error
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OSError: [Errno 3] No such process

The `kill(pid, 0)' trick is still the way people seem to do it.

If they do so when waiting for a child process to exit, they will
have problems...

Do you know anything reasonably simple, safer, and that does the job?

See my .signature. :-)

--
Thomas Bellman, Lysator Computer Club, Linköping University, Sweden
"Life IS pain, highness. Anyone who tells ! bellman @ lysator.liu.se
differently is selling something." ! Make Love -- Nicht Wahr!
Jul 18 '05 #14

In article <bk**********@news.island.liu.se>,
Thomas Bellman <be*****@lysator.liu.se> wrote:

....
There is at least one Unix that reuses process ids immediately when
they are freed. A vague memory says that it is AIX that does
this, but I'm not sure; it could be some of the BSD dialects too.

Not AIX 4 or 5, and no BSD I've seen; cursory inspection suggests
it is not the case with FreeBSD 5.1 nor MacOS X 10.2.6.
However, in this very case, *that* isn't a problem. The process
id won't be free to reuse until the parent has called wait(2) to
reap its child. On the other hand, that means that kill(pid, 0)
won't signal an error even after the child has died; the zombie
is still there...


Good points.

Donn Cave, do**@u.washington.edu
Jul 18 '05 #15

François Pinard wrote:
Do you know anything reasonably simple, safer, and that does the job?
I'd always use fork() + wait() so I know I'm killing the right process if
I have to kill something. Killing blindly is gambling.
// Klaus

--<> unselfish actions pay back better

Jul 18 '05 #16

Christos TZOTZIOY Georgiou wrote:
if you are not root and you are sure that your own processes
do not breed like rabbits, then you can be almost sure that
os.kill(pid, 0) will throw an EPERM if your child has died.
That's too many ifs for my taste.

Your own processes needn't breed like rabbits - others' processes
can breed like rabbits, too, and if you're unlucky, your next
spawn will have the same pid as a previous process of yours.

Anyway you look at it, killing blindly is bad programming practice.
// Klaus

--<> unselfish actions pay back better

Jul 18 '05 #17

On Wed, 24 Sep 2003 22:30:36 +0000 (UTC), rumours say that Klaus
Alexander Seistrup <sp**@magnetic-ink.dk> might have written:
if you are not root and you are sure that your own processes
do not breed like rabbits, then you can be almost sure that
os.kill(pid, 0) will throw an EPERM if your child has died.
That's too many ifs to my taste.


I agree with that --that's what the 'almost' was about. But...
Your own processes needn't breed like rabbits - other's processes
can breed like rabbits, too, and if you're unlucky, your next
spawn will have the same pid as a previous process of yours.
....here there is a little inconsistency with the flow of this thread; I
discussed the chance of /another user's process/ using the pid of a
child between two kill(pid,0) attempts, not /another child/, because the
/original point/ was: parent process spawns a single child and then
checks for its existence (see also my mentioning of EPERM). If anybody
mentioned multiple spawning of the parent process, I'm afraid the post
didn't show up in my newsreader. Thus "your next spawn" seems not
relevant.
Anyway you look at it, killing blindly is bad programming practice.


Yes, it is; I have used kill(pid,0) in the past, aware that it's a quick
and dirty solution. Semaphores are much safer in such a situation.
--
TZOTZIOY, I speak England very best,
Microsoft Security Alert: the Matrix began as open source.
Jul 18 '05 #18

Klaus Alexander Seistrup wrote:
Anyway you look at it, killing blindly is bad programming practice.


But he's killing with a signal of 0. From kill(2):

If sig is 0, then no signal is sent, but error checking is
still performed.

It's perfectly reasonable behavior to kill a process with a 0 signal; it
does no harm.

--
Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ I love mankind; it's people I can't stand.
\__/ Charles Schultz
Jul 18 '05 #19

Christos TZOTZIOY Georgiou wrote:
Thus "your next spawn" seems not relevant.
I beg to differ. By "your next spawn" I'm not talking about spawning
from the current process. It could be a job on another terminal, a
cron job, etc.
killing blindly is bad programming practice.


Yes, it is; I have used kill(pid,0) in the past, aware that it's a
quick and dirty solution. Semaphores are much more safe in such a
situation.


I fully agree.
// Klaus

--<> unselfish actions pay back better

Jul 18 '05 #20

Erik Max Francis wrote:
killing blindly is bad programming practice.
But he's killing with a signal of 0. From kill(2):

If sig is 0, then no signal is sent, but error checking is
still performed.

It's perfectly reasonable behavior to kill a process with a 0 signal;
it does no harm.


I overlooked that detail, thanks for correcting me.
// Klaus

--<> unselfish actions pay back better

Jul 18 '05 #21

In article <3F***************@alcyone.com>,
Erik Max Francis <ma*@alcyone.com> wrote:
Klaus Alexander Seistrup wrote:
Anyway you look at it, killing blindly is bad programming practice.


But he's killing with a signal of 0. From kill(2):

If sig is 0, then no signal is sent, but error checking is
still performed.

It's perfectly reasonable behavior to kill a process with a 0 signal; it
does no harm.


I think it would be reasonable to posit an implied context
for discussion of any programming technique, that said
technique would be deployed for some purpose.

If that is not too bold of an assumption, I think it follows
that our standard for a good programming practice has to be
a little more stringent than just whether deployment of the
technique causes any harm. In this case, for example, the
proposed technique is to use the information returned from kill(0)
to decide whether some process is still alive, and according
to our theory of purposeful programming, we may guess that
the program then acts on the basis of that decision.

If kill(0) actually does not reliably indicate that the process
is alive because process IDs are not unique over time, then
it is arguably harmful as a programming practice even if it
is harmless as a system call.

What actually happens with processes that exited but haven't
been reaped by their parent with wait(2) or similar? Seems
to vary quite a bit, here's what I found -

FreeBSD 5.1 - No such process, ps PID=0, owned by me
Linux 2.4 - kill -0 works
MacOS X/Darwin - No such process, ps original PID, owner root

Donn Cave, do**@u.washington.edu
Jul 18 '05 #22
