multiple processes, private working directories

I have a bunch of processes to run and each one needs its own working
directory. I'd also like to know when all of the processes are finished.

(1) First thought was threads, until I saw that os.chdir was process-global.
(2) Next thought was fork, but I don't know how to signal when each child is
finished.
(3) Current thought is to break the process from a method into an external
script and call the script in separate threads. This is the only way I can
see to give each process a separate dir (the external process fixes that),
and I can find out when each process is finished (the thread fixes that).

Am I missing something? Is there a better way? I hate to rewrite this method
as a script since I've got a lot of object metadata that I'll have to
regenerate with each call of the script.

thanks for any suggestions,
--Tim Arnold
Sep 25 '08 #1
6 Replies
r0g
Tim Arnold wrote:
> I have a bunch of processes to run and each one needs its own working
> directory. I'd also like to know when all of the processes are finished.
> [snip]
(1) + avoid os.chdir and maintain hard paths to all files/folders (a rough
sketch of this is below)? or
(2) + sockets? or
(2) + polling your system's task list?
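A minimal sketch of option (1), assuming each job only needs to know where
its files live rather than actually chdir'ing; the directory and file names
here are invented:

import os
import threading

def do_work(workdir):
    # Build every file path explicitly instead of relying on os.chdir,
    # so the process-global current directory never has to change.
    src = os.path.join(workdir, 'input.txt')
    dst = os.path.join(workdir, 'output.txt')
    text = open(src).read()
    open(dst, 'w').write(text.upper())

dirs = ['/data/job1', '/data/job2']        # hypothetical job directories
threads = [threading.Thread(target=do_work, args=(d,)) for d in dirs]
for t in threads:
    t.start()
for t in threads:
    t.join()    # join() returns once that job is finished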
Sep 25 '08 #2
On Sep 24, 9:27 pm, Tim Arnold <a_j...@bellsouth.net> wrote:
> I have a bunch of processes to run and each one needs its own working
> directory. I'd also like to know when all of the processes are finished.
> [snip]
1. Does the work in the different directories really have to be done
concurrently? You say you'd like to know when each thread/process was
finished, suggesting that they are not server processes but rather
accomplish some limited task.

2. If the answer to 1 is yes: all that os.chdir gives you is an implicit
global variable. Is that convenience really worth a multi-process
architecture? Would it not be easier to just work with explicit path names
instead? You could store the path of the per-thread working directory in an
instance of threading.local - for example:
import threading

t = threading.local()

class Worker(threading.Thread):
    def __init__(self, path):
        threading.Thread.__init__(self)
        self._path = path

    def run(self):
        # Assign inside run() so the value lands in this worker thread's
        # thread-local storage, not in the main thread's.
        t.path = self._path

The thread-specific value of t.path would then be available to all classes
and functions running within that thread.
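For instance (directory names invented), code running inside a worker thread
can read t.path without it being passed down explicitly - a small usage
sketch building on the class above:

def report_cwd():
    # No path argument needed: t.path is looked up per thread.
    print('working in %s' % t.path)

class ChapterWorker(Worker):
    # Hypothetical subclass, for illustration only.
    def run(self):
        Worker.run(self)    # stores this worker's path in t.path
        report_cwd()        # sees only this worker's directory

workers = [ChapterWorker('/build/chap01'), ChapterWorker('/build/chap02')]
for w in workers:
    w.start()
for w in workers:
    w.join()                # everything is finished once each join() returns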
Sep 25 '08 #3
On Sep 24, 6:27 pm, Tim Arnold <a_j...@bellsouth.net> wrote:
> I have a bunch of processes to run and each one needs its own working
> directory. I'd also like to know when all of the processes are finished.
> [snip]
Use subprocess; it supports a cwd argument to provide the given
directory as the child's working directory.

Help on class Popen in module subprocess:

class Popen(__builtin__.object)
 |  Methods defined here:
 |
 |  __del__(self)
 |
 |  __init__(self, args, bufsize=0, executable=None, stdin=None,
 |      stdout=None, stderr=None, preexec_fn=None, close_fds=False,
 |      shell=False, cwd=None, env=None, universal_newlines=False,
 |      startupinfo=None, creationflags=0)
 |      Create new Popen instance.

You want to provide the cwd argument above.
Then once you have launched all your n processes, run through a loop
waiting for each one to finish.

# cmds is a list of dicts providing details on what processes to run
# and what each one's cwd should be

runs = []
for c in cmds:
    run = subprocess.Popen(c['cmd'], cwd=c['cwd'])  # ... plus any other args
    runs.append(run)

# Now wait for all the processes to finish
for run in runs:
    run.wait()

Note that if you capture stdout/stderr with pipes and any of the processes
generates a lot of output, the above loop can deadlock once a pipe buffer
fills. In that case you may want to go for threads, or use run.poll() and
read the output from your child processes yourself.
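A minimal sketch of one way around that, assuming you do want to capture the
output: let communicate() drain the pipes for you. The commands and
directories here are invented.

import subprocess

# Hypothetical jobs: each entry names a command and the directory to run it in.
cmds = [{'cmd': ['make', 'all'], 'cwd': '/tmp/job1'},
        {'cmd': ['make', 'all'], 'cwd': '/tmp/job2'}]

runs = []
for c in cmds:
    p = subprocess.Popen(c['cmd'], cwd=c['cwd'],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    runs.append(p)

# communicate() reads stdout/stderr to EOF and then waits for the process,
# so a full pipe buffer can never block the wait indefinitely.  A child with
# huge output may still pause until its turn to be drained comes around.
for p in runs:
    out, err = p.communicate()
    print(p.returncode, len(out), len(err))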

Karthik

Sep 25 '08 #4
On Sep 24, 9:27 pm, Tim Arnold <a_j...@bellsouth.net> wrote:
> (2) Next thought was fork, but I don't know how to signal when each child is
> finished.
Consider the multiprocessing module, which is available in Python 2.6; it
began its life as a third-party module that acts like the threading module
but uses processes. I think you can still run it as a third-party module
in 2.5.
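A minimal sketch of that idea (directory names invented); because each Pool
worker is a separate process, it can call os.chdir without affecting the
others, and map() returns only when every job is finished:

import os
from multiprocessing import Pool

def build(workdir):
    # Safe here: this chdir only affects the current worker *process*.
    os.chdir(workdir)
    # ... do the real work in workdir ...
    return workdir, 'done'

if __name__ == '__main__':
    dirs = ['/build/chap01', '/build/chap02', '/build/chap03']
    pool = Pool()                     # defaults to one worker per CPU
    results = pool.map(build, dirs)   # blocks until every job has finished
    pool.close()
    pool.join()
    print(results)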
Carl Banks
Sep 25 '08 #5
"Tim Arnold" <a_****@bellsouth.netwrote in message
news:57**********************************@l43g2000 hsh.googlegroups.com...
> I have a bunch of processes to run and each one needs its own working
> directory. I'd also like to know when all of the processes are finished.
Thanks for the ideas everyone--I now have some new tools in the toolbox.
The task is to use pdflatex to compile a bunch of (>100) chapters and to know
when the book is complete (i.e. the book pdf is done and the separate
chapter pdfs are finished). I have to wait for that before I start some
postprocessing and reporting chores.

My original scheme was to use a class to manage the builds with threads,
calling pdflatex within each thread. Since pdflatex really does need to be
in the directory with the source, I had a problem.

I'm reading now about Python's multiprocessing capability, but I think I can
use Karthik's suggestion to call pdflatex in subprocess with the cwd set.
That seems like the simple solution at this point, but I'm going to give
Cameron's pipes suggestion a go as well.
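A rough sketch of that plan, assuming each chapter lives in its own directory
containing a chapter.tex (the layout and file names here are invented):

import os
import subprocess

# Invented layout: book/chap01/chapter.tex, book/chap02/chapter.tex, ...
chapter_dirs = [os.path.join('book', d) for d in sorted(os.listdir('book'))
                if os.path.isdir(os.path.join('book', d))]

procs = []
for cwd in chapter_dirs:
    # cwd= makes pdflatex run as if we had cd'd into the chapter directory.
    p = subprocess.Popen(['pdflatex', '-interaction=nonstopmode', 'chapter.tex'],
                         cwd=cwd)
    procs.append(p)

# Block until every chapter build has finished, then start postprocessing.
for p in procs:
    p.wait()
print('all chapters built; postprocessing can start')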

In any case, it's clear I need to rethink the problem. Thanks to everyone
for helping me get past my brain-lock.

--Tim Arnold
Sep 26 '08 #6
On Sep 25, 8:16 am, "Tim Arnold" <tim.arn...@sas.com> wrote:
"Tim Arnold" <a_j...@bellsouth.netwrote in message

news:57**********************************@l43g2000 hsh.googlegroups.com...
I have a bunch of processes to run and each one needs its own working
directory. I'd also like to know when all of the processes are
finished.

Thanks for the ideas everyone--I now have some news tools in the toolbox.
The task is to use pdflatex to compile a bunch of (>100) chapters and know
when the book is complete (i.e. the book pdf is done and the separate
chapter pdfs are finished. I have to wait for that before I start some
postprocessing and reporting chores.

My original scheme was to use a class to manage the builds with threads,
calling pdflatex within each thread. Since pdflatex really does need to be
in the directory with the source, I had a problem.

I'm reading now about python's multiprocessing capabilty, but I think I can
use Karthik's suggestion to call pdflatex in subprocess with the cwd set.
That seems like the simple solution at this point, but I'm going to give
Cameron's pipes suggestion a go as well.

In any case, it's clear I need to rethink the problem. Thanks to everyone
for helping me get past my brain-lock.

--Tim Arnold
I still don't see why this should be done concurrently. Do you have >100
processors available? I also happen to be writing a book in LaTeX these
days. I have one master document and pull in all chapters using \include,
and pdflatex is only ever run on the master document. For a quick preview
of the chapter I'm currently working on, I just use \includeonly - it
compiles in no time at all.

How do you manage to get consistent page numbers and cross-references if
you process all chapters separately, and even in _parallel_? That just
doesn't look right to me.

Sep 26 '08 #7
