
urllib.urlretrieve problem


Hello Everybody,

I've got a small problem with urlretrieve.
Even passing a bad URL to urlretrieve doesn't raise an exception. Or does
it?

If yes, what exception is it? And how do I use it in my program? I've
searched a lot but haven't found anything helpful.

Example:
import urllib

try:
    urllib.urlretrieve("http://security.debian.org/pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb")
except IOError, X:
    DoSomething(X)
except OSError, X:
    DoSomething(X)

urllib.urlretrieve doesn't raise an exception even though there is no
package named libparl5.6.

Please Help!

rrs
--
Ritesh Raj Sarraf
RESEARCHUT -- http://www.researchut.com
Gnupg Key ID: 04F130BC
"Stealing logic from one person is plagiarism, stealing from many is
research".

Jul 18 '05 #1
8 Replies


I noticed you hadn't gotten a reply. When I execute this, it puts the following
in the retrieved file:

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<HTML><HEAD>
<TITLE>404 Not Found</TITLE>
</HEAD><BODY>
<H1>Not Found</H1>
The requested URL /pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb was not found on this server.<P>
</BODY></HTML>

You will probably need to use something else to first determine if the URL
actually exists.
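
For instance, here is a minimal sketch of such an up-front check using httplib
(the url_exists helper is just illustrative, not something from this thread):

import httplib, urlparse

def url_exists(url):
    # issue a HEAD request and treat only a 200 as "it's there"
    scheme, netloc, path, params, query, frag = urlparse.urlparse(url)
    conn = httplib.HTTPConnection(netloc)
    conn.request("HEAD", path)
    status = conn.getresponse().status
    conn.close()
    return status == 200

print url_exists("http://security.debian.org/pool/updates/main/p/perl/"
                 "libparl5.6_5.6.1-8.9_i386.deb")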

Larry Bates

Jul 18 '05 #2

Mertz's "Text Processing in Python" book has a good discussion of
trapping 403s and 404s.

http://gnosis.cx/TPiP/


Jul 18 '05 #3


Larry Bates wrote:
> You will probably need to use something else to first determine if the URL
> actually exists.


I'm happy that at least someone responded as this was my first post to the
python mailing list.

I'm coding a program for offline package management.
The link that I provided could be made obsolete by newer packages. That is where
my problem is. I wanted to know how to get an exception raised here so that,
depending on the type of exception, I could make my program act accordingly.

For example, for a temporary name resolution failure, Python raises an
exception, which I've handled well. The problem lies with obsolete URLs,
where no exception is raised and I end up with a 404 error page as my
data.

Can we have an exception for that? Or can we get a return status from
urllib.urlretrieve to know whether it downloaded the desired file?
I think my problem is fixable with urllib.urlopen; I just find
urllib.urlretrieve more convenient and want to know if it can be done with
it.

Thanks for responding.

rrs

Jul 18 '05 #4

> Can we have an exception for that? Or can we get a return status from
> urllib.urlretrieve to know whether it downloaded the desired file?


It makes no sense having urllib generate exceptions for such a case. From
its point of view, things worked perfectly - it got a result. No network error
or anything of the sort.

It's your application that is not happy with the result - but it has to
figure that out by itself.

You could, for instance, try to see what kind of result you got using the
Unix file command - it will tell you that you received an HTML file, not a
.deb.

Or check the mimetype returned - it's text/html in your error case,
and most probably something like application/octet-stream otherwise.
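
A minimal sketch of that mimetype check (the local filename is just an
example): urllib.urlretrieve returns a (filename, headers) tuple, and the
headers object is a mimetools.Message, so gettype() gives the content type.

import urllib

url = ("http://security.debian.org/pool/updates/main/p/perl/"
       "libparl5.6_5.6.1-8.9_i386.deb")
# urlretrieve returns (local_filename, headers); headers is a
# mimetools.Message whose gettype() yields e.g. 'text/html'
filename, headers = urllib.urlretrieve(url, "libparl.deb")
if headers.gettype() == 'text/html':
    print "got an HTML page back - probably an error page, not the package"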

Regards,

Diez

Jul 18 '05 #5

Diez> It makes no sense having urllib generate exceptions for such a
Diez> case. From its point of view, things worked perfectly - it got a
Diez> result. No network error or anything of the sort.

You can subclass FancyURLopener and define a method to handle 404s, 403s,
401s, etc. There should be no need to resort to grubbing around with file
extensions and such.

Skip
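
A minimal sketch of what Skip describes (the class and file names are
illustrative): subclass urllib.FancyURLopener and override
http_error_default, so HTTP error statuses raise instead of being
silently returned as page content.

import urllib

class RaisingOpener(urllib.FancyURLopener):
    # treat any unhandled HTTP error status as a failure instead of
    # quietly handing back the error page
    def http_error_default(self, url, fp, errcode, errmsg, headers):
        raise IOError('http error', errcode, errmsg, headers)

opener = RaisingOpener()
try:
    opener.retrieve("http://security.debian.org/pool/updates/main/p/perl/"
                    "libparl5.6_5.6.1-8.9_i386.deb", "libparl.deb")
except IOError, x:
    print "download failed:", x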

Jul 18 '05 #6

from urllib2 import urlopen

# someURL is a placeholder; unlike urllib, urllib2.urlopen raises an
# exception for 404s and the like
try:
    urlopen(someURL)
except IOError, errobj:
    if hasattr(errobj, 'reason'):
        # URLError: server doesn't exist, is down, DNS problem,
        # or we don't have an internet connection
        print "server unreachable:", errobj.reason
    if hasattr(errobj, 'code'):
        # HTTPError: we got a response, but with an error status
        print errobj.code

Jul 18 '05 #7


Diez B. Roggisch wrote:
> Or check the mimetype returned - it's text/html in your error case,
> and most probably something like application/octet-stream otherwise.


Also be aware that many webservers (especially IIS ones) are configured
to return some kind of custom page instead of a stock 404, and you
might be getting a 200 status code even though the page you requested
is not there. So depending on what site you are scraping, you might
have to read the page you got back to figure out if it's what you
wanted.
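
One way to do that last check for this particular case - a sketch assuming
the download should be a Debian package (a .deb is an ar archive, so a real
one starts with the magic bytes "!<arch>\n"; the filename is illustrative):

def looks_like_deb(filename):
    # .deb packages are ar archives; genuine ones begin with "!<arch>\n"
    f = open(filename, 'rb')
    magic = f.read(8)
    f.close()
    return magic == '!<arch>\n'

if not looks_like_deb("libparl.deb"):
    print "that's not a .deb - probably an error page"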

-- Wade Leftwich
Ithaca, NY

Jul 18 '05 #8


Diez B. Roggisch wrote:
> You could, for instance, try to see what kind of result you got using the
> Unix file command - it will tell you that you received an HTML file, not a
> .deb.
>
> Or check the mimetype returned - it's text/html in your error case,
> and most probably something like application/octet-stream otherwise.


Using the Unix file command is not possible at all. The whole goal of the
program is to help people get their packages downloaded on some other
(high-speed) machine, which could be running Windows, Mac OS X, Linux, et
cetera. That is why I'm sticking strictly to Python libraries.

The second suggestion sounds good. I'll look into that.

Thanks,

rrs

Jul 18 '05 #9
