Hi,
I am trying to communicate with a subprocess via the subprocess
module. Consider the following example:
>>> from subprocess import Popen, PIPE
>>> Popen("""python -c 'input("hey")'""", shell=True)
<subprocess.Popen object at 0x729f0>
>>> hey
Here "hey" is immediately printed to the stdout of my interpreter; I did
not type it in. But I want to read the output into a string,
so I do
>>> x = Popen("""python -c 'input("hey\n")'""", shell=True, stdout=PIPE, bufsize=2**10)
>>> x.stdout.read(1)
# blocks forever
Is it possible to read to and write to the std streams of a
subprocess? What am I doing wrong?
Regards,
-Justin
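For what it's worth, when the exchange can be batched (one write, then read everything), Popen.communicate() sidesteps this class of deadlock entirely: it writes stdin, closes it, and reads stdout to EOF. A minimal Python 3 sketch; the child command here is only a stand-in:

```python
from subprocess import Popen, PIPE

# One-shot dialogue: communicate() writes everything to the child's
# stdin, closes it, then reads its stdout to EOF - no manual
# read()/write() ordering, so no deadlock.
p = Popen(["python3", "-c", "print('hey:', input())"],
          stdin=PIPE, stdout=PIPE)
out, _ = p.communicate(b"42\n")
# out is now b'hey: 42\n'
```

This only helps for a single round trip; for a real back-and-forth dialogue it does not apply, which is what the rest of this thread is about.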
<ba**********@googlemail.com> wrote:
Is it possible to read to and write to the std streams of a
subprocess? What am I doing wrong?
I think this problem lies deeper - there have been a lot of
complaints about blocking and data getting stuck in pipes
and sockets...
I have noticed that the Python file objects seem to be
inherently half duplex, but I am not sure if it is python
or the underlying OS. (Suse 10 in my case)
You can fix it by unblocking using the fcntl module,
but then all your accesses have to be in try - except
clauses.
It may be worth making some sort of FAQ on this
subject, as it appears from time to time.
The standard advice has been to use file.flush()
after file.write(), but if you are threading and
have called file.read(n), then the flushing does
not help - this is why I say that the file object
seems to be inherently half duplex.
It makes perfect sense, of course, if the file is a
real disk file, as you have to finish the read before
you can move the heads to do the write- but for
pipes, sockets and RS-232 serial lines it does not
make so much sense.
Does anybody know where it comes from -
Python, the various OSses, or C?
- Hendrik
Hi,
Thanks for your answer. I had a look into the fcntl module and tried
to unlock the output-file, but
>>> fcntl.lockf(x.stdout, fcntl.LOCK_UN)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: [Errno 9] Bad file descriptor
I wonder why it does work with sys.stdin. It's really a pity; it's
the first time Python has not worked as expected. =/
Flushing the stdin did not help either.
Regards,
-Justin
En Wed, 28 Feb 2007 18:27:43 -0300, <ba**********@googlemail.com> escribió:
Hi,
I am trying to communicate with a subprocess via the subprocess
module. Consider the following example:
>>> from subprocess import Popen, PIPE
>>> Popen("""python -c 'input("hey")'""", shell=True)
<subprocess.Popen object at 0x729f0>
>>> hey
Here "hey" is immediately printed to the stdout of my interpreter; I did
not type it in. But I want to read the output into a string,
so I do
>>> x = Popen("""python -c 'input("hey\n")'""", shell=True, stdout=PIPE, bufsize=2**10)
>>> x.stdout.read(1)
# blocks forever
Blocks, or is the child process waiting for you to input something in
response?
Is it possible to read to and write to the std streams of a
subprocess? What am I doing wrong?
This works for me on Windows XP. Note that I'm using a tuple with
arguments, and raw_input instead of input (just to avoid a traceback on
stderr)
>>> pyx = Popen(("python", "-c", "raw_input('hey')"), shell=True, stdout=PIPE)
>>> pyx.stdout.read(1)
1234
'h'
>>> pyx.stdout.read()
'ey'
I typed that 1234 (response to raw_input).
You may need to use python -u, or redirect stderr too - but what is your
real problem?
--
Gabriel Genellina
Okay, here is what I want to do:
I have a C Program that I have the source for and want to hook with
python into that. What I want to do is: run the C program as a
subprocess.
The C program gets its "commands" from its stdin and sends its state
to stdout. Thus I have some kind of dialogue over stdin and stdout.
So, once I start the C Program from the shell, I immediately get its
output in my terminal. If I start it from a subprocess in python and
use python's sys.stdin/sys.stdout as the subprocess' stdout/stdin I
also get it immediately.
BUT If I use PIPE for both (so I can .write() on the stdin and .read()
from the subprocess' stdout stream (better: file descriptor)) reading
from the subprocess stdout blocks forever. If I write something onto
the subprocess' stdin that causes it to somehow proceed, I can read
from its stdout.
Thus a useful dialogue is not possible.
Regards,
-Justin
ba**********@googlemail.com wrote:
Okay, here is what I want to do:
I have a C Program that I have the source for and want to hook with
python into that. What I want to do is: run the C program as a
subprocess.
The C program gets its "commands" from its stdin and sends its state
to stdout. Thus I have some kind of dialogue over stdin and stdout.
So, once I start the C Program from the shell, I immediately get its
output in my terminal. If I start it from a subprocess in python and
use python's sys.stdin/sys.stdout as the subprocess' stdout/stdin I
also get it immediately.
BUT If I use PIPE for both (so I can .write() on the stdin and .read()
from the subprocess' stdout stream (better: file descriptor)) reading
from the subprocess stdout blocks forever. If I write something onto
the subprocess' stdin that causes it to somehow proceed, I can read
from its stdout.
Thus a useful dialogue is not possible.
Regards,
-Justin
Have you considered using pexpect: http://pexpect.sourceforge.net/ ?
George
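To give an idea of what the pexpect approach looks like: pexpect runs the child under a pseudo-terminal, so the program believes it is talking to a user and line-buffers its output. A rough sketch - pexpect is a third-party package, and the child command below is only a stand-in:

```python
import pexpect  # third-party: pip install pexpect

# Run a stand-in child under a pty so its stdio is line buffered,
# exactly as if it were started from an interactive shell.
child = pexpect.spawn("python3", ["-c", "print('resp:', input('hey '))"],
                      timeout=5)
child.expect("hey ")      # wait until the prompt appears
child.sendline("42")      # answer it
child.expect("resp: 42")  # read the child's reply
child.close()
```

expect() takes a regular expression and blocks (with a timeout) until it matches, which replaces the fragile "read exactly n bytes" guessing game.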
En Thu, 01 Mar 2007 14:42:00 -0300, <ba**********@googlemail.com> escribió:
BUT If I use PIPE for both (so I can .write() on the stdin and .read()
from the subprocess' stdout stream (better: file descriptor)) reading
from the subprocess stdout blocks forever. If I write something onto
the subprocess' stdin that causes it to somehow proceed, I can read
from its stdout.
On http://docs.python.org/lib/popen2-flow-control.html there are some
notes on possible flow control problems you may encounter.
If you have no control over the child process, it may be safer to use a
different thread for reading its output.
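A sketch of that reader-thread idea (Python 3 here; the echoing child is only a stand-in): a daemon thread drains the pipe and hands lines to a Queue, so the main thread can poll with a timeout instead of blocking.

```python
import queue
import subprocess
import threading

def reader(pipe, q):
    # Drain the child's stdout so the parent never blocks in read();
    # any blocking happens in this thread instead.
    for line in iter(pipe.readline, b""):
        q.put(line)
    pipe.close()

proc = subprocess.Popen(
    ["python3", "-c", "print('hello', flush=True)"],  # stand-in child
    stdout=subprocess.PIPE)
q = queue.Queue()
threading.Thread(target=reader, args=(proc.stdout, q), daemon=True).start()

line = q.get(timeout=5)  # raises queue.Empty if the child stays silent
proc.wait()
```

The timeout on q.get() is what you gain: the parent can decide to write to the child, or give up, instead of sitting in a blocked read() forever.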
--
Gabriel Genellina
<ba**********@googlemail.com> wrote:
Hi,
Thanks for your answer. I had a look into the fcntl module and tried
to unlock the output-file, but
>fcntl.lockf(x.stdout, fcntl.LOCK_UN)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: [Errno 9] Bad file descriptor
I wonder why it does work with sys.stdin. It's really a pity; it's
the first time Python has not worked as expected. =/
Flushing the stdin did not help either.
It's "block", not "lock", and one uses file.flush() after file.write() -
so stdin is the wrong side: you have to push, you can't pull...
Here is the unblock function I use - it comes from the internet,
possibly from this group, but I have forgotten who wrote it.
# Some magic to make a file non blocking - from the internet
import fcntl, os

def unblock(f):
    """Given file 'f', set its O_NONBLOCK flag."""
    flags = fcntl.fcntl(f.fileno(), fcntl.F_GETFL)
    fcntl.fcntl(f.fileno(), fcntl.F_SETFL, flags | os.O_NONBLOCK)
hope this helps - note that f is not the file's name but the
object you get back when you write:
f = open(...
- Hendrik
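Used on a subprocess pipe, that idea looks roughly like this (modern Python raises BlockingIOError where older versions raised IOError with EAGAIN; the sleeping child is a stand-in for a program that has produced no output yet):

```python
import fcntl
import os
import subprocess

# Stand-in for a silent child: it produces no output at all.
proc = subprocess.Popen(["python3", "-c", "import time; time.sleep(30)"],
                        stdout=subprocess.PIPE)

# Put the read end of the pipe into non-blocking mode.
fd = proc.stdout.fileno()
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    chunk = os.read(fd, 1024)
except BlockingIOError:   # EAGAIN: nothing to read right now
    chunk = None          # so we are free to go and write instead

proc.kill()
proc.wait()
```

This is the try/except access pattern Hendrik describes: a read that would block returns control immediately instead of hanging the whole program.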
<ba**********@googlemail.com> wrote:
8<------------------
The C programm gets its "commands" from its stdin and sends its state
to stdout. Thus I have some kind of dialog over stdin.
So, once I start the C Program from the shell, I immediately get its
output in my terminal. If I start it from a subprocess in python and
use python's sys.stdin/sys.stdout as the subprocess' stdout/stdin I
also get it immediately.
so why don't you just write to your stdout and read from your stdin?
>
BUT If I use PIPE for both (so I can .write() on the stdin and .read()
This confuses me - I assume you mean write to the c program's stdin?
from the subprocess' stdout stream (better: file descriptor)) reading
from the subprocess stdout blocks forever. If I write something onto
the subprocess' stdin that causes it to somehow proceed, I can read
from its stdout.
This sounds like the c program is getting stuck waiting for input...
>
Thus a useful dialogue is not possible.
If you are both waiting for input, you have a Mexican standoff...
And if you are using threads, and you have issued a .read() on
a file, then a .write() to the same file, even followed by a .flush()
will not complete until after the completion of the .read().
So in such a case you have to unblock the file, and do the .read() in
a try - except clause, to "free up" the "file driver" so that the .write()
can complete.
But I am not sure if this is in fact your problem, or if it is just normal
synchronisation hassles...
- Hendrik
If you are both waiting for input, you have a Mexican standoff...
That is not the problem. The problem is that the buffers are not
flushed correctly. It's a dialogue, so nothing complicated. But Python
does not get what the subprocess sends to the subprocess' standard
out - not every time, anyway.
I'm quite confused, but hopefully will understand what's going on and
come back here.
ba**********@googlemail.com writes:
So, once I start the C Program from the shell, I immediately get its
output in my terminal. If I start it from a subprocess in python and
use python's sys.stdin/sys.stdout as the subprocess' stdout/stdin I
also get it immediately.
If stdout is connected to a terminal, it's usually line buffered, so the
buffer is flushed whenever a newline is written.
BUT If I use PIPE for both (so I can .write() on the stdin and .read()
from the subprocess' stdout stream (better: file descriptor)) reading
from the subprocess stdout blocks forever. If I write something onto
the subprocess' stdin that causes it to somehow proceed, I can read
from its stdout.
When stdout is not connected to a terminal, it's usually fully buffered,
so that nothing is actually written to the file until the buffer
overflows or until it's explicitly flushed.
If you can modify the C program, you could force its stdout stream to be
line buffered. Alternatively, you could call fflush on stdout whenever
you're about to read from stdin. If you can't modify the C program you
may have to resort to e.g. pseudo ttys to trick it into believing that
its stdout is a terminal.
Bernhard
--
Intevation GmbH http://intevation.de/
Skencil http://skencil.org/
Thuban http://thuban.intevation.org/
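The pseudo-tty trick Bernhard mentions can be done with the standard pty module alone (Unix only). A sketch, with a stand-in child that is fully buffered when writing to a pipe but line buffered when it sees a terminal:

```python
import os
import pty
import subprocess

# Give the child a pty as its stdout: the C library sees a terminal
# and line-buffers, so each line arrives as soon as it is printed.
master, slave = pty.openpty()
child = subprocess.Popen(
    ["python3", "-c", "print('hey'); input()"],  # stand-in child
    stdin=subprocess.PIPE, stdout=slave, stderr=slave)
os.close(slave)               # keep only the master end in the parent

data = os.read(master, 1024)  # arrives immediately (pty maps \n to \r\n)
child.stdin.write(b"\n")      # let the child's input() return
child.stdin.flush()
child.wait()
```

Note the terminal driver translates the child's "\n" into "\r\n", and echo is on by default, so what you read back is not byte-for-byte what the child wrote; pexpect handles these details for you.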
In article <ma***************************************@python.org>,
"Gabriel Genellina" <ga*******@yahoo.com.ar> wrote:
En Thu, 01 Mar 2007 14:42:00 -0300, <ba**********@googlemail.com> escribió:
BUT If I use PIPE for both (so I can .write() on the stdin and .read()
from the subprocess' stdout stream (better: file descriptor)) reading
from the subprocess stdout blocks forever. If I write something onto
the subprocess' stdin that causes it to somehow proceed, I can read
from its stdout.
On http://docs.python.org/lib/popen2-flow-control.html there are some
notes on possible flow control problems you may encounter.
It's a nice summary of one problem, a deadlock due to full pipe
buffer when reading from two pipes. The proposed simple solution
depends too much on the cooperation of the child process to be
very interesting, though. The good news is that there is a real
solution and it isn't terribly complex, you just have to use select()
and UNIX file descriptor I/O. The bad news is that while this is
a real problem, it isn't the one commonly encountered by first
time users of popen.
The more common problem, where you're trying to have a dialogue
over pipes with a program that wasn't written specifically to
support that, is not solvable per se - I mean, you have to use
another device (pty) or redesign the application.
If you have no control over the child process, it may be safer to use a
different thread for reading its output.
Right - `I used threads to solve my problem, and now I have two
problems.' It can work for some variations on this problem, but
not the majority of them.
Donn Cave, do**@u.washington.edu
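The select() approach Donn describes, in a minimal Python 3 form (Unix only - select() works on pipe file descriptors there, but not on Windows; the child is a stand-in):

```python
import select
import subprocess

proc = subprocess.Popen(
    ["python3", "-c", "print('ready', flush=True); input()"],  # stand-in
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# Wait until the child's stdout is actually readable instead of
# blocking blindly in read(); the timeout prevents a deadlock.
readable, _, _ = select.select([proc.stdout], [], [], 5.0)
line = proc.stdout.readline() if readable else None

proc.stdin.write(b"\n")  # now answer the child
proc.stdin.flush()
proc.wait()
```

With more than one pipe (stdout and stderr, say) you would pass both to select() and service whichever is ready, which is exactly the full-pipe-buffer deadlock fix the quoted docs page only gestures at.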
En Fri, 02 Mar 2007 14:38:59 -0300, Donn Cave <do**@u.washington.edu>
escribió:
In article <ma***************************************@python.org>,
"Gabriel Genellina" <ga*******@yahoo.com.ar> wrote:
>On http://docs.python.org/lib/popen2-flow-control.html there are some notes on possible flow control problems you may encounter.
It's a nice summary of one problem, a deadlock due to full pipe
buffer when reading from two pipes. The proposed simple solution
depends too much on the cooperation of the child process to be
very interesting, though. The good news is that there is a real
solution and it isn't terribly complex, you just have to use select()
and UNIX file descriptor I/O. The bad news is that while this is
a real problem, it isn't the one commonly encountered by first
time users of popen.
More bad news: you can't use select() with file handles on Windows.
>If you have no control over the child process, it may be safer to use a different thread for reading its output.
Right - `I used threads to solve my problem, and now I have two
problems.' It can work for some variations on this problem, but
not the majority of them.
Any pointers on what kind of problems may happen, and usual solutions for
them?
On Windows one could use asynchronous I/O, or I/O completion ports, but
neither of these are available directly from Python. So using a separate
thread for reading may be the only solution, and I can't see why it is so
bad. (Apart from buffering on the child process, which you can't control
anyway).
--
Gabriel Genellina
In article <ma***************************************@python.org>,
"Gabriel Genellina" <ga*******@yahoo.com.ar> wrote:
En Fri, 02 Mar 2007 14:38:59 -0300, Donn Cave <do**@u.washington.edu>
escribió:
In article <ma***************************************@python.org>,
"Gabriel Genellina" <ga*******@yahoo.com.ar> wrote:
On http://docs.python.org/lib/popen2-flow-control.html there are some
notes on possible flow control problems you may encounter.
It's a nice summary of one problem, a deadlock due to full pipe
buffer when reading from two pipes. The proposed simple solution
depends too much on the cooperation of the child process to be
very interesting, though. The good news is that there is a real
solution and it isn't terribly complex, you just have to use select()
and UNIX file descriptor I/O. The bad news is that while this is
a real problem, it isn't the one commonly encountered by first
time users of popen.
More bad news: you can't use select() with file handles on Windows.
Bad news about UNIX I/O on Microsoft Windows is not really news.
I am sure I have heard of some event handling function analogous
to select, but don't know if it's a practical solution here.
If you have no control over the child process, it may be safer to use a
different thread for reading its output.
Right - `I used threads to solve my problem, and now I have two
problems.' It can work for some variations on this problem, but
not the majority of them.
Any pointers on what kind of problems may happen, and usual solutions for
them?
On Windows one could use asynchronous I/O, or I/O completion ports, but
neither of these are available directly from Python. So using a separate
thread for reading may be the only solution, and I can't see why it is so
bad. (Apart from buffering on the child process, which you can't control
anyway).
I wouldn't care to get into an extensive discussion of the general
merits and pitfalls of threads. Other than that ... let's look at
the problem:
- I am waiting for child process buffered output
- I have no control over the child process
Therefore I spawn a thread to do this waiting, so the parent thread
can continue about its business. But assuming that its business
eventually does involve this dialogue with the child process, it
seems that I have not resolved that problem at all, I've only added
to it. I still have no way to get the output.
Now if you want to use threads because you're trying to use Microsoft
Windows as some sort of a half-assed UNIX, that's a different issue
and I wouldn't have any idea what's best.
Donn Cave, do**@u.washington.edu