On Sep 24, 6:27 pm, Tim Arnold <a_j...@bellsouth.net> wrote:
I have a bunch of processes to run, and each one needs its own working
directory. I'd also like to know when all of the processes are
finished.

(1) First thought was threads, until I saw that os.chdir was
process-global.
(2) Next thought was fork, but I don't know how to signal when each
child is finished.
(3) Current thought is to break the process from a method into an
external script and call the script in separate threads. This is the
only way I can see to give each process a separate dir (the external
process fixes that) and to find out when each process is finished (the
thread fixes that).

Am I missing something? Is there a better way? I hate to rewrite this
method as a script, since I've got a lot of object metadata that I'll
have to regenerate with each call of the script.
Use subprocess; it supports a cwd argument to provide the given
directory as the child's working directory.
Help on class Popen in module subprocess:

class Popen(__builtin__.object)
 |  Methods defined here:
 |
 |  __del__(self)
 |
 |  __init__(self, args, bufsize=0, executable=None, stdin=None,
 |      stdout=None, stderr=None, preexec_fn=None, close_fds=False,
 |      shell=False, cwd=None, env=None, universal_newlines=False,
 |      startupinfo=None, creationflags=0)
 |      Create new Popen instance.
You want to provide the cwd argument above.
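For instance, here is a minimal sketch of that cwd argument in action (the child command and the temp directory are just illustrative): the child sees the directory you pass, while the parent's own working directory is untouched.

```python
import os
import subprocess
import sys
import tempfile

# Minimal sketch: cwd= sets the child's working directory
# without changing the parent's own directory.
child_dir = tempfile.mkdtemp()
p = subprocess.Popen(
    [sys.executable, '-c', 'import os; print(os.getcwd())'],
    cwd=child_dir,
    stdout=subprocess.PIPE,
)
out, _ = p.communicate()
# The child prints child_dir; os.getcwd() in the parent is unchanged.
```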
Then once you have launched all your n processes, run through a loop
waiting for each one to finish.
# cmds is a list of dicts giving the command for each process to run
# and what its cwd should be
runs = []
for c in cmds:
    run = subprocess.Popen(c['cmd'], cwd=c['cwd'])  # ... other args as needed
    runs.append(run)

# Now wait for all the processes to finish
for run in runs:
    run.wait()
Note that if any of the processes generate a lot of stdout/stderr, you
will get a deadlock in the above loop. In that case you may want to go
for threads, or use run.poll() and read the output from your child
processes yourself.
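A sketch of the threads variant (the helper name and the `cmds` contents here are my own, not from the post): one thread per child, each draining its process's output with communicate(), so no pipe ever fills up and blocks the child.

```python
import subprocess
import sys
import threading

# Sketch: one thread per child. communicate() reads all of the
# child's stdout/stderr and then waits, so a chatty child can
# never deadlock on a full pipe.
def run_one(cmd, cwd, results, key):
    p = subprocess.Popen(cmd, cwd=cwd,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    results[key] = (p.returncode, out, err)

# `cmds` is hypothetical, shaped like the dicts in the loop above.
cmds = [{'cmd': [sys.executable, '-c', 'print("hi")'], 'cwd': '.'}]
results = {}
threads = [threading.Thread(target=run_one,
                            args=(c['cmd'], c['cwd'], results, i))
           for i, c in enumerate(cmds)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # every child has finished once all joins return
```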
Karthik
thanks for any suggestions,
--Tim Arnold