
Problem with urllib.urlretrieve

Hi,

I am writing a program to download all images from a specified site.
It already works with most sites, but in some cases, like
www.slashdot.org, it only downloads 1 KB of the image. That 1 KB is an
HTML page with a 503 error.

What can I do to really get those images?

Thanks

Your help is appreciated.
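
For context, here is a minimal sketch of the kind of call the question
describes (Python 2-era urllib, matching the subject line; the image
URL is a made-up placeholder, not a real one). On a site that blocks
such requests, the saved file turns out to be the server's HTML error
page instead of the image:

    # Minimal sketch (Python 2 urllib, as in the question's subject).
    # NOTE: the URL below is a hypothetical placeholder.
    import urllib

    url = "http://www.slashdot.org/some/image.gif"
    # urlretrieve saves whatever the server returns -- if the server
    # answers with a 503 error page, that HTML lands in the file.
    urllib.urlretrieve(url, "image.gif")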
Jul 18 '05 #1
1 Reply


On 11 Jun 2004 16:01:01 -0700, ra*****@gmail.com (ralobao) wrote:
> I am writing a program to download all images from a specified site.
> It already works with most sites, but in some cases, like
> www.slashdot.org, it only downloads 1 KB of the image. That 1 KB is
> an HTML page with a 503 error.
>
> What can I do to really get those images?
I did something like this a while ago. I used websucker.py from the
Python distribution's Tools/ directory, and then added some
conditionals to tell it to only create files for certain extensions.
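
Not websucker.py's actual code, but a sketch of the kind of
conditional meant here: check a URL's extension against a whitelist of
image types before saving anything:

    # Sketch only -- not websucker.py's actual code.
    import os

    IMAGE_EXTENSIONS = ('.gif', '.jpg', '.jpeg', '.png')

    def is_image(url):
        # Compare the extension case-insensitively.
        return os.path.splitext(url)[1].lower() in IMAGE_EXTENSIONS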

As to why it fails in your case (/me puts on psychic hat): I'm
guessing Slashdot does something to stop people from deep-linking its
image files, to block leeches.
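
If that is the cause, one common workaround (a sketch only; the URL is
again a hypothetical placeholder, and there is no guarantee this
satisfies every site's checks) is to send a browser-like User-Agent
plus a Referer from the same site, since hot-link filters usually key
on those two headers:

    # Sketch of a workaround using Python 2's urllib2: supply a
    # browser-like User-Agent and an on-site Referer, the two headers
    # most hot-link filters inspect. URL is a hypothetical placeholder.
    import urllib2

    req = urllib2.Request("http://www.slashdot.org/some/image.gif")
    req.add_header("User-Agent", "Mozilla/5.0 (compatible; imagefetcher)")
    req.add_header("Referer", "http://www.slashdot.org/")

    data = urllib2.urlopen(req).read()
    f = open("image.gif", "wb")
    f.write(data)
    f.close()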
<{{{*>


Jul 18 '05 #2
