
Using Python/CGI to stream large tar files on the fly??

Hi,

I'm running a CGI script written in Python that tars up the photos in user-selected directories (chosen via a form). It's running on Apache 1.3.31.1 on Solaris 5.8.

It works well for a small number of files, but when I try it on large archives (typically over 10MB, though this seems to vary) my download stops short. Occasionally I get an error message, usually a broken pipe (IOError 32), but sometimes I see nothing at all (though that may just be a flushing issue with stderr on my webspace, or the broken pipe may be a red herring).

My understanding is that to get a broken pipe, either the client must close its download request prematurely (definitely not the case here), or Apache/Python is closing stdout on its side. I get no timeout error in the browser - it just stops dead, thinking the download is complete - and the tar is always corrupted and well short of the size I'd expect.

I would have thought that the stdout pipe to the user would stay open as long as data was being streamed to it? Is it possible that my webspace provider has set some time limit? Could some sort of buffer under- or over-run be occurring on the stdout stream?
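For what it's worth, one way to rule out stdio buffering and to make the broken pipe show up explicitly is to flush sys.stdout after each file and trap EPIPE around the writes. A minimal sketch in the same Python 2 style (the stream_tar helper is illustrative, not part of my script):

import errno
import sys
import tarfile

def stream_tar( paths ):

    # Write the tar straight to stdout, flushing after each member so nothing
    # sits in a stdio buffer, and report a broken pipe explicitly to stderr.
    tar = tarfile.open( mode="w", fileobj=sys.stdout )
    try:
        for path in paths:
            tar.add( path )
            sys.stdout.flush()
        tar.close()
        sys.stdout.flush()
    except IOError, e:
        if e.errno == errno.EPIPE:
            print >> sys.stderr, "Broken pipe while streaming the archive"
        else:
            raise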

I've tried various other ways of solving the problem. I've ruled out using zips, as I cannot build the archive in memory (using cStringIO or similar) and then stream it as one complete string, due to runtime memory limitations on my webspace (only 30MB as far as I can see!). The only other thing I can think of is to write a temporary file to disk and let the user download that - again, not practical as disk space is limited, and it's a bit ugly too.

The function causing the problem is below; it's pretty self-explanatory. If anyone has any ideas what might be causing the problem I'd be very grateful - it's driving me round the bend!

Thanks,

Phil.


import glob
import os
import signal
import string
import sys
import tarfile

def download( folders ):

    print "Content-type: application/x-tar"
    print "Content-Disposition: attachment; filename=\"Download.tar\"\n"

    # Despite the name, this is a tar archive written straight to stdout.
    parentZipFile = tarfile.open( '', "w", sys.stdout )

    #signal.signal( signal.SIGPIPE, signal.SIG_DFL )

    for folder in folders:

        photoDir = os.path.join( folder, "photos" )
        if os.path.isdir( photoDir ):

            # We have photos!
            photos = glob.glob( photoDir + "/*.jpg" )
            photos += glob.glob( photoDir + "/*.JPG" )
            for photo in photos:

                # Store each photo under its last three path components.
                parentZipFile.add( photo, string.join( photo.split( "/" )[-3:], "/" ) )

    parentZipFile.close()
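For comparison, tarfile also has an explicit streaming write mode, "w|", which is meant for non-seekable file objects such as a pipe. A minimal sketch of the same loop using it (just a variant to try, not a known fix):

import glob
import os
import string
import sys
import tarfile

def downloadStreamed( folders ):

    print "Content-type: application/x-tar"
    print "Content-Disposition: attachment; filename=\"Download.tar\"\n"

    # "w|" writes strictly sequentially and never seeks on the file object.
    tar = tarfile.open( mode="w|", fileobj=sys.stdout )

    for folder in folders:
        photoDir = os.path.join( folder, "photos" )
        if os.path.isdir( photoDir ):
            photos = glob.glob( photoDir + "/*.jpg" ) + glob.glob( photoDir + "/*.JPG" )
            for photo in photos:
                # Same archive layout as before: last three path components.
                tar.add( photo, string.join( photo.split( "/" )[-3:], "/" ) )
                sys.stdout.flush()

    tar.close()
    sys.stdout.flush()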
May 29 '07 #1
2 Replies


To clarify, this is definitely not a *byte* limit on streaming.

From my nice fast connection at work I have no problem downloading 54MB, for example, but if I crank it up to a 150-odd MB tar it falls over in exactly the same way.

With larger archives I can see, as it tries to extract, that the extraction initially goes fine - and from what I can tell the end of the file is hit unexpectedly.

This does look like a timeout on the Apache server.
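One way to confirm it is time-based rather than size-based would be to count bytes and elapsed seconds as the archive streams and log them to stderr. A rough sketch (the LoggingWriter wrapper is just illustrative):

import sys
import time

class LoggingWriter:
    """File-like wrapper that counts bytes written and logs progress to stderr."""

    def __init__( self, fileobj, every=100 ):
        self.fileobj = fileobj
        self.bytes = 0
        self.writes = 0
        self.start = time.time()
        self.every = every

    def write( self, data ):
        self.fileobj.write( data )
        self.bytes += len( data )
        self.writes += 1
        if self.writes % self.every == 0:
            print >> sys.stderr, "%d bytes after %.1f seconds" % ( self.bytes, time.time() - self.start )

    def tell( self ):
        # tarfile may ask for the current offset; bytes written so far is correct here.
        return self.bytes

    def flush( self ):
        self.fileobj.flush()

# Usage: pass LoggingWriter( sys.stdout ) to tarfile.open() in place of sys.stdout,
# then compare where the last stderr line lands against the wall-clock time at
# which the download dies.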

I was wondering if anyone could clarify, and perhaps suggest a workaround?

Thanks again,

Phil.
May 30 '07 #2

bartonc
Can't help with CGI scripts, but I can help with posting code:
We use [code] tags, which preserve the indentation of your code.
Great work-around on your part using nested [indent] tags, though.
It's all right there on the right-hand side of the page when posting or replying: 4 little things to keep in mind, in * GUIDELINES...
May 30 '07 #3
