Bytes IT Community

Working around buffering issues when writing to pipes

Keywords: subprocess stdout stderr unbuffered pty tty pexpect flush setvbuf

I'm trying to find a solution to <URL:http://bugs.python.org/issue1241>. In
short: unless specifically told not to, normal C stdio will use full output
buffering when connected to a pipe. It will use default (typically
unbuffered) output when connected to a tty/pty.

This is why subprocess.Popen() won't work with the following program when
stdout and stderr are pipes:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int i;
    for (i = 0; i < 5; i++)
    {
        printf("stdout ding %d\n", i);
        fprintf(stderr, "stderr ding %d\n", i);
        sleep(1);
    }
    return 0;
}

Then

subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

is read using polling, but the pipes return no data until the program
exits. As expected, specifying a bufsize of 0 or 1 in the Popen() call has
no effect: bufsize only changes buffering on the parent's side of the pipe,
not the child's stdio buffering. See the end of this mail for example
Python scripts.

Unfortunately, modifying the child program, either to flush stdout/stderr
explicitly or to call setvbuf(stdout, NULL, _IONBF, 0), is an undesired
workaround in this case.

I went with pexpect and it works well, except that I must handle stdout and
stderr separately. There seems to be no way to do this with pexpect, or am
I mistaken?

Are there alternative ways of solving this? Perhaps some way of using a
pty directly?

I'm on Linux, and portability to other platforms is not a requirement.
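For what it's worth, one Linux-only option that leaves the child program
untouched is GNU coreutils' stdbuf, which LD_PRELOADs a shim that calls
setvbuf() in the child before main() runs. A minimal sketch in Python 3
(popen_unbuffered is a name I made up for illustration; this only works
for dynamically linked children that use C stdio):

```python
import subprocess

def popen_unbuffered(argv, **kwargs):
    """Wrap argv with stdbuf so the child's stdio is unbuffered.

    stdbuf (GNU coreutils) preloads a shim that calls setvbuf() in the
    child before main() runs: -o0 disables stdout buffering and -e0
    disables stderr buffering. Hypothetical helper, not an existing API.
    """
    return subprocess.Popen(['stdbuf', '-o0', '-e0'] + list(argv),
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            **kwargs)
```

Used in place of the Popen() call above, e.g.
popen_unbuffered(['/tmp/slow']), the pipes then deliver each line as it
is printed.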

s
#Test script using subprocess:
import subprocess
cmd = '/tmp/slow' # The C program shown above
print 'Starting...'
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while True:
    lineout = proc.stdout.readline()
    lineerr = proc.stderr.readline()
    exitcode = proc.poll()
    if (not lineout and not lineerr) and exitcode is not None: break
    if lineout: print lineout.strip()
    if lineerr: print 'ERR: ' + lineerr.strip()
print 'Done.'
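As an aside, the loop above can also stall because readline() on one pipe
blocks while the other pipe has data waiting; that is a separate problem
from the child's buffering, but a select()-based loop avoids it. A Python 3
sketch (read_both is a hypothetical name):

```python
import os
import select
import subprocess

def read_both(argv):
    """Read stdout and stderr as data arrives, using select() so a
    blocked read on one pipe never stalls the other."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    prefix = {proc.stdout.fileno(): '', proc.stderr.fileno(): 'ERR: '}
    buf = {fd: b'' for fd in prefix}     # partial lines per fd
    lines = []
    open_fds = set(prefix)
    while open_fds:
        ready, _, _ = select.select(list(open_fds), [], [])
        for fd in ready:
            data = os.read(fd, 4096)
            if not data:                 # EOF on this pipe
                if buf[fd]:              # flush any unterminated tail
                    lines.append(prefix[fd] + buf[fd].decode())
                    buf[fd] = b''
                open_fds.discard(fd)
                continue
            buf[fd] += data
            while b'\n' in buf[fd]:
                line, buf[fd] = buf[fd].split(b'\n', 1)
                lines.append(prefix[fd] + line.decode())
    proc.wait()
    return lines
```

This does not fix the buffering itself: with the C program above the data
still arrives all at once, but at least neither pipe can block the other.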
#Test script using pexpect, merging stdout and stderr:
import sys
import pexpect
cmd = '/home/sveniu/dev/logfetch/bin/slow'
print 'Starting...'
child = pexpect.spawn(cmd, timeout=100, maxread=1, logfile=sys.stdout)
try:
    while True: child.expect('\n')
except pexpect.EOF: pass
print 'Done.'
Jun 27 '08 #1


sven _ <sv******@gmail.com> wrote:

> In short: unless specifically told not to, normal C stdio will use
> full output buffering when connected to a pipe. It will use default
> (typically unbuffered) output when connected to a tty/pty.

Wrong. Standard output to a terminal is typically line-buffered.
(Standard error is never buffered by default, per the ISO C standard.)

> This is why subprocess.Popen() won't work with the following program
> when stdout and stderr are pipes:

Yes, obviously.

> I went with pexpect and it works well, except that I must handle
> stdout and stderr separately. There seems to be no way to do this with
> pexpect, or am I mistaken?

You could do it by changing the command you pass to pexpect.spawn so
that it redirects its stderr to (say) a pipe. Doing this is messy,
though -- you'd probably need to set the pipe's write-end fd in an
environment variable or something.

It's probably better to use os.pipe and pty.fork.
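That suggestion might be sketched like this in Python 3 (run_with_pty is a
hypothetical name, not an existing API): the child's stdout stays attached
to the pty slave, so its stdio remains line-buffered, while stderr is
dup2()'d onto an ordinary pipe so the two streams can be read separately.

```python
import os
import pty
import select

def run_with_pty(argv):
    """Run argv with stdout on a pty (stdio stays line-buffered) and
    stderr on a regular pipe, reading both streams as data arrives."""
    err_r, err_w = os.pipe()
    pid, master_fd = pty.fork()
    if pid == 0:                    # child: all std fds are the pty slave
        os.close(err_r)
        os.dup2(err_w, 2)           # move stderr onto the pipe
        os.execvp(argv[0], argv)    # never returns on success
    os.close(err_w)
    out_chunks, err_chunks = [], []
    open_fds = {master_fd, err_r}
    while open_fds:
        ready, _, _ = select.select(list(open_fds), [], [])
        for fd in ready:
            try:
                data = os.read(fd, 1024)
            except OSError:         # Linux pty master raises EIO at EOF
                data = b''
            if not data:
                open_fds.discard(fd)
                os.close(fd)
                continue
            (out_chunks if fd == master_fd else err_chunks).append(data)
    os.waitpid(pid, 0)
    return b''.join(out_chunks), b''.join(err_chunks)
```

Note that the pty line discipline translates the child's "\n" on stdout
into "\r\n", while the stderr pipe passes bytes through untouched.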

As mentioned above, the inferior process's output will still be
line-buffered unless it does something special to change this.

-- [mdw]
Jun 27 '08 #2
