
Using Python/CGI to stream large tar files on the fly with Apache??

Hi,

First of all, sorry for the double post - this is on the Python page too, but as far as I can see this is an Apache issue now. Mods - feel free to delete the similarly titled posts from me on Python if this is the case (I can't seem to do it myself!).

Anyway,

I'm running a CGI script written in Python that tars up the photos in user-selected directories (chosen via a form). It's running on Apache 1.3.31.1 on Solaris 5.8.

It works well for a small number of files, but when I try to use it for large archives (typically over 10MB, though this seems to vary) my download stops short. Occasionally I get an error message, usually a Broken Pipe (IOError 32), but sometimes I see nothing at all (though that may just be a flushing issue with stderr on my webspace, in which case the Broken Pipe could be a red herring).

My understanding is that a broken pipe means either the client closed its download request prematurely (definitely not the case here), or Apache/Python closed stdout on its side. I get no timeout error in the browser - it just stops dead, thinking the download is complete - and the tar is always corrupted and well short of the size I'd expect.

I would have thought that the stdout pipe to the user would stay open as long as data was being streamed to it? Is it possible that my webspace provider has set some time limit? Could some sort of buffer under- or over-run be occurring on the stdout stream?

I've tried various other approaches. I've ruled out zips because I cannot create the archive object in memory (using cStringIO or similar) before streaming it as one complete string, due to runtime memory limitations on my webspace (only 30MB as far as I can see!). The only other thing I can think of is to write a temporary file to disk and let the user download that - again, not practical as disk space is limited, and it's a bit ugly too.

The function causing the problem is below; it's pretty self-explanatory. If anyone has any ideas what might be causing the problem I'd be very grateful - it's driving me round the bend!

Thanks,

Phil.

import os
import sys
import glob
import string
import tarfile

def download( folders ):

    # HTTP headers; the trailing "\n" supplies the blank line that ends them.
    print "Content-type: application/x-tar"
    print "Content-Disposition: attachment; filename=\"Download.tar\"\n"

    # Stream the tar archive straight to stdout rather than building it in memory.
    parentZipFile = tarfile.open( '', "w", sys.stdout )

    #signal.signal( signal.SIGPIPE, signal.SIG_DFL )

    for folder in folders:

        photoDir = os.path.join( folder, "photos" )
        if os.path.isdir( photoDir ):

            # We have photos!
            photos = glob.glob( photoDir + "/*.jpg" )
            photos += glob.glob( photoDir + "/*.JPG" )
            for photo in photos:

                # Store each photo under the last three components of its path.
                parentZipFile.add( photo, string.join( photo.split( "/" )[-3:], "/" ) )

    parentZipFile.close()
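A minimal diagnostic variant of the function above (a sketch only, assuming Python 2 and the same tarfile-to-stdout approach; the function name is made up): it flushes stdout after every file and writes each filename with an elapsed-time stamp to stderr, which should show whether the stream dies after a fixed amount of data or after a fixed amount of time.

import os
import sys
import glob
import time
import tarfile

def download_with_logging( folders ):
    print "Content-type: application/x-tar"
    print "Content-Disposition: attachment; filename=\"Download.tar\"\n"

    tar = tarfile.open( '', "w", sys.stdout )
    start = time.time()

    for folder in folders:
        photoDir = os.path.join( folder, "photos" )
        if not os.path.isdir( photoDir ):
            continue
        for photo in glob.glob( photoDir + "/*.jpg" ) + glob.glob( photoDir + "/*.JPG" ):
            tar.add( photo, "/".join( photo.split( "/" )[-3:] ) )
            sys.stdout.flush()   # push the archive data out immediately
            # elapsed seconds and filename, so the log shows where and when it dies
            sys.stderr.write( "%7.1fs  %s\n" % ( time.time() - start, photo ) )
            sys.stderr.flush()

    tar.close()
    sys.stdout.flush()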
__________
Second message from original post:


To clarify, this is definitely not a *byte* limit on streaming.

From my nice fast connection at work I have no problem downloading 54MB, for example, but if I crank it up to a 150-odd MB tar it falls over in exactly the same way.

With larger archives I can see that extraction starts off fine, but from what I can tell the end of the file is hit unexpectedly.

This does look like a timeout on the Apache server. Is there one specific to CGI scripts, or would it be the universal Timeout directive in httpd.conf?

I was wondering if anyone could confirm this and perhaps suggest a workaround. As the page is running on a managed webserver, I'm not likely to be able to demand that they change defaults that affect all users (I'm probably not that persuasive, and I appreciate that some of the limits are set with good reason).


Thanks again,

Phil.
May 30 '07 #1
3 Replies


Motoma
I hope you don't mind, I am going to ask some questions which you may have already answered (just to clarify in my mind).

My understanding of the situation (please correct me if I am wrong):
Your program is creating the headers, tar'ing the file, then printing the tar'ed data to stdout.
The size of the file that causes an error is dependent on the speed of the connection.

My questions:
How are you receiving the error messages?
You alluded to the fact that the broken pipe error was not the only one you receive; what are the others?
Does the Apache log give any clues as to why things might be shutting down?

Typically, web hosting companies put restrictions on how much memory a process is allowed to take up. I believe the directive for this is RLimitMEM. If what I have stated above is true, this may be the cause of your problem: a slow connection might be causing the output buffer to exceed the upper memory limit. What you may want to try is writing the tar to a file and then performing a header redirect. Using the command line for tar may work better in this situation, as it will likely avoid the memory limit issue you seem to be having.
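One way to read the command-line-tar suggestion, sketched here under the assumption that Python 2.4+'s subprocess module is available and that the system tar accepts the traditional "cf -" invocation (the function name is made up): let tar write the archive straight to the client, so nothing large accumulates inside the Python process at all.

import os
import sys
import subprocess

def stream_tar_from_shell( folders ):
    print "Content-type: application/x-tar"
    print "Content-Disposition: attachment; filename=\"Download.tar\"\n"
    sys.stdout.flush()   # make sure the headers go out before tar starts writing

    # "cf -" tells tar to write the archive to its stdout, which is wired
    # directly to the client's connection; the exact invocation may differ
    # between Solaris tar and GNU tar.
    photoDirs = [ os.path.join( f, "photos" ) for f in folders ]
    proc = subprocess.Popen( [ "tar", "cf", "-" ] + photoDirs, stdout=sys.stdout )
    proc.wait()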
May 30 '07 #2

> I hope you don't mind, I am going to ask some questions which you may have already answered (just to clarify in my mind).
No not at all - thanks for the reply and see my answers below:

> My understanding of the situation (please correct me if I am wrong):
> Your program is creating the headers, tar'ing the file, then printing the tar'ed data to stdout.
> The size of the file that causes an error is dependent on the speed of the connection.
Spot on. I'm just printing a standard tar HTTP response header to stdout, followed by a tar file that is streamed to stdout on the fly, adding one jpeg file at a time.

The amount of data I ultimately receive at the client end is dependent on the speed of the connection rather than on the amount of data I transfer. For any given connection speed there is a threshold over which I get an incomplete, and thus invalid, tar.

> My questions:
> How are you receiving the error messages?
> You alluded to the fact that the broken pipe error was not the only one you receive; what are the others?
> Does the Apache log give any clues as to why things might be shutting down?
This is a bit ugly, but I'm restricted to using web hosting to demonstrate the problem - the script works fine from an Apache setup at home over localhost (of course the speeds are huge there too, because there is no real network transfer to speak of).
The downside of this is that I cannot get my hands on the access_log and error_log files to see exactly what is going on. Instead, at the start of my CGI script I redirect stderr to a file in my webspace, which I can then look through in a browser.
When I look at this file after a failure I have seen Python's IOError 32 raised - which is a broken pipe. However, sometimes I seem to capture nothing: I've just tried again now and I see an incomplete list of filenames (I also write these to stderr for logging) and then... nothing. I'm reluctant to explain this, but perhaps Apache raises SIGPIPE, SIGTERM and SIGKILL in quick succession, before my stderr is flushed to the file, when I hit some preallocated time limit. But I am speculating now.
I've made the file I'm redirecting stderr to unbuffered, but of course this doesn't change the size of the stderr buffer itself. If I catch an exception I may be able to write some generic exception information out, flush it and then re-raise the exception (granted, I haven't tried this), but what I really need to see is Python's last barf as Python intended me to see it.
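A minimal sketch of that idea, assuming Python 2 and a writable location inside the webspace (the log path and wrapper name here are made up): a line-buffered log plus an explicit traceback dump and flush should get the final error into the file even if the process is killed straight afterwards.

import sys
import traceback

LOG_PATH = "/path/to/webspace/cgi-errors.log"   # hypothetical location

def run_with_error_log( main ):
    log = open( LOG_PATH, "a", 1 )   # line-buffered
    sys.stderr = log                 # anything written to stderr now lands in the file
    try:
        main()
    except Exception:
        # Write and flush the full traceback before re-raising, so the log
        # survives even if the interpreter dies immediately afterwards.
        traceback.print_exc( file=log )
        log.flush()
        raise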

I do know from trying other methods that if I were getting an out-of-memory error I would expect Python to raise that specifically, rather than a broken pipe - at least I saw that frequently when I hit the 30MB memory limit while originally trying to create a zip object in memory and stream it as one big string to the client.


> Typically, web hosting companies put restrictions on how much memory a process is allowed to take up. I believe the directive for this is RLimitMEM. If what I have stated above is true, this may be the cause of your problem: a slow connection might be causing the output buffer to exceed the upper memory limit. What you may want to try is writing the tar to a file and then performing a header redirect. Using the command line for tar may work better in this situation, as it will likely avoid the memory limit issue you seem to be having.
This sounds feasible. Writing the tar file to my local filespace is no problem - are you suggesting I then shell a Unix command to pipe this to stdout? I don't think I've ever tried a header redirect.

Thanks again for your help,

Phil
May 30 '07 #3

Motoma
Now that is one thing I hadn't thought of: page execution timeout. I don't know off the top of my head what setting you could check for this, but you could probably Google it as easily as I can. You may want to try a few timing tests with different network connection speeds, paying attention to how long it takes before the transfer crashes. If you find it to be a consistent number (300 seconds is a common one), you may have found your problem.

I believe all you would need to do is this (pseudocode, of course):
import os

# Build the archive on disk with the system tar, then redirect the client to it.
# (Note: the -z flag needs GNU tar; Solaris' bundled tar may not support it.)
os.system( "tar -czf outfile.tar.gz /path/to/my/jpegs/*.jpg" )

# The extra "\n" leaves the blank line that terminates the CGI headers.
print "Location: outfile.tar.gz\n"
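A slightly fuller sketch of the same redirect approach, under the assumption that the archive can be written somewhere below the web root and that a URL path such as /downloads/ maps to that directory (both the output directory and the URL prefix here are made up):

import os
import subprocess

def tar_and_redirect( folders, out_dir, url_prefix="/downloads/" ):
    out_name = "Download.tar"
    out_path = os.path.join( out_dir, out_name )

    # Build the archive on disk with the system tar rather than in Python's memory.
    photoDirs = [ os.path.join( f, "photos" ) for f in folders ]
    subprocess.call( [ "tar", "cf", out_path ] + photoDirs )

    # Point the client at the finished file; print's own newline plus the "\n"
    # leaves the blank line that ends the CGI headers.
    print "Location: %s%s\n" % ( url_prefix, out_name )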
May 31 '07 #4
