Bytes | Developer Community
Problem with urllib.urlretrieve

Hi,

I am writing a program to download all images from a specified site.
It already works with most sites, but in some cases, such as
www.slashdot.org, it downloads only 1 KB of each image. That 1 KB is an
HTML page with a 503 error.
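A quick way to confirm what is happening is to sniff the first bytes of the saved file: a real image starts with a known magic signature, while a 503 error page starts with HTML markup. This helper is a sketch not taken from the thread; the signature table covers only common web image formats.

```python
# Magic-byte signatures for common web image formats.
IMAGE_SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def sniff_image(data):
    """Return the image type of `data`, or None if it does not look like an image."""
    for sig, kind in IMAGE_SIGNATURES.items():
        if data.startswith(sig):
            return kind
    return None

# A saved 503 page starts with markup, not an image signature:
print(sniff_image(b"<html><head><title>503"))  # None
print(sniff_image(b"\x89PNG\r\n\x1a\n...."))   # png
```

Running this on the first few bytes of each downloaded file would flag the 1 KB "images" as not being images at all.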

What can I do to actually get those images?

Thanks

Your help is appreciated.
Jul 18 '05 #1
In reply to ralobao's question above (ra*****@gmail.com, 11 Jun 2004 16:01:01 -0700):
I did something like this a while ago. I used websucker.py in the
Tools/ directory, and then added some conditionals to tell it to only
create files for certain extensions.

As to why it fails in your case (/me puts on psychic hat), I'm guessing
Slashdot does something to stop people from deep-linking its image
files, to block leeches.
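One common way sites did this is by rejecting requests whose User-Agent header looks like a script (Python's urllib announces itself as "Python-urllib/x.y"), which would produce exactly a blanket 503. A minimal sketch of a workaround, assuming the block is User-Agent based; it is shown with the Python 3 `urllib.request` API, and the User-Agent string and example URL are illustrative assumptions, not values from the thread:

```python
import urllib.request

def make_request(url):
    """Build a request with a browser-like User-Agent (value is an assumption)."""
    return urllib.request.Request(
        url,
        headers={"User-Agent": "Mozilla/5.0 (compatible; image-fetcher)"},
    )

def fetch_image(url, path):
    """Download url to path; raises urllib.error.HTTPError on a 4xx/5xx response."""
    with urllib.request.urlopen(make_request(url)) as resp, open(path, "wb") as out:
        out.write(resp.read())

# Example (hypothetical URL):
# fetch_image("https://example.org/pic.gif", "pic.gif")
```

In the Python 2 urllib of this thread's era, the rough equivalent was subclassing `urllib.FancyURLopener`, overriding its `version` attribute, and calling that opener's `retrieve` method instead of `urllib.urlretrieve`. Note that some sites return 503 for other reasons (genuine overload, rate limiting), in which case changing the User-Agent will not help.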
Jul 18 '05 #2
