Due to a memory leak (a bug, I guess) in pyraf (or rather in IRAF), I have to fork an iterative process that goes through hundreds of image frames and does unspeakable things to them. The child process builds a dictionary of image information for each image and is supposed to send it to the parent through pickle.dump. However, the child process hangs at pickle.dump and nothing happens, with no error messages. The dictionary that is supposed to be dumped into the pipe is rather large, with possibly thousands of entries (about 2000-4000 is normal).
Is there some size limit on what you can dump into the pipe, and if so, how do I increase it? I know nearly nothing about pipes and forks, so any help is appreciated.
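To check whether a raw size limit is even plausible, I tried measuring how many bytes a pipe accepts before a write would block. This is just a minimal sketch (`pipe_capacity` is my own helper name, not anything from the standard library), using a non-blocking write end so a full buffer raises an error instead of hanging:

```python
import errno
import fcntl
import os

def pipe_capacity():
    """Count how many bytes a fresh pipe accepts before a write would block."""
    read_fd, write_fd = os.pipe()
    # Make the write end non-blocking so a full pipe buffer raises
    # EAGAIN instead of blocking forever.
    fcntl.fcntl(write_fd, fcntl.F_SETFL, os.O_NONBLOCK)
    total = 0
    chunk = b"x" * 1024
    try:
        while True:
            total += os.write(write_fd, chunk)
    except OSError as e:
        if e.errno not in (errno.EAGAIN, errno.EWOULDBLOCK):
            raise
    os.close(read_fd)
    os.close(write_fd)
    return total

print(pipe_capacity())  # commonly prints 65536 on Linux
```

If the buffer really is on the order of 64 KiB, a 2000-4000 entry dictionary could easily exceed it, so the write side would block until something reads from the other end.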
Here's the code. You can't run it without the rest of the functions, but maybe you'll see something horribly wrong with my forking? The line "finished dumping" never gets printed; it just hangs at "Dumping". I first thought the child was somehow unaware of the parent's import of pickle, which is why I import it again in the child, but that makes no difference. Also, I call os.execv("/bin/true",["true"]) at the end of the child because when IRAF closes in the child it apparently kills the child prematurely, and then the code crashes because there's no child. But please ignore the IRAF/pyraf part; just tell me if there's some reason dump would hang in the child the way this is written.
Thanks in advance.
imagedict = {}
for im in infodict:
    Receive, Send = os.pipe()
    pid = os.fork()
    if pid != 0:
        # parent
        os.close(Send)
        Receive = os.fdopen(Receive)
        os.waitpid(pid, 0)
        imagedict[im] = pickle.load(Receive)
    else:
        # child
        os.close(Receive)
        Send = os.fdopen(Send, 'w')
        import pickle
        tmpdict = veryLargeProcedure(im)
        print "Dumping"
        pickle.dump(tmpdict, Send)
        print "finished dumping"
        Send.close()
        os.execv("/bin/true", ["true"])
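For comparison, here is a stripped-down fork/pipe/pickle round trip that does complete for me even with a payload well over 64 KiB (Python 3 syntax; `run_child_and_collect` and `build_payload` are names I made up for this sketch). The one structural difference from my loop above is that the parent reads from the pipe before calling waitpid, so maybe the ordering matters:

```python
import os
import pickle

def run_child_and_collect(build_payload):
    """Fork, run build_payload() in the child, and pickle the result
    back to the parent through a pipe. The parent drains the pipe
    *before* waitpid, so the child can finish writing even when the
    payload is larger than the pipe buffer."""
    receive_fd, send_fd = os.pipe()
    pid = os.fork()
    if pid == 0:
        # child: close the read end, write the pickled payload, exit hard
        os.close(receive_fd)
        with os.fdopen(send_fd, "wb") as send:
            pickle.dump(build_payload(), send)
        os._exit(0)
    # parent: close the write end, read everything, then reap the child
    os.close(send_fd)
    with os.fdopen(receive_fd, "rb") as receive:
        result = pickle.load(receive)
    os.waitpid(pid, 0)
    return result

# a payload comfortably larger than a typical 64 KiB pipe buffer
payload = {i: "x" * 100 for i in range(4000)}
assert run_child_and_collect(lambda: dict(payload)) == payload
print("round trip ok")
```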