
A problem while using urllib

P: n/a
Hi,
I'm using urllib to grab URLs from the web. Here is the workflow of
my program:

1. Get the base URL and the maximum number of URLs from the user
2. Call the filter to validate the base URL
3. Read the source of the base URL and grab all the URLs from the "href"
attribute of "a" tags
4. Call the filter to validate every URL grabbed
5. Repeat steps 3-4 until the number of URLs grabbed reaches the limit
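
In outline, the loop looks roughly like this (a simplified sketch, not
my real code; filter_ok stands in for the filter described below):

----------------------------------------------------
import re
import urllib2

def filter_ok(url):
    # Stand-in for the real filter (which also tries to connect).
    return url.startswith("http://")

def crawl(base_url, max_urls):
    seen, queue = set(), [base_url]
    while queue and len(seen) < max_urls:
        url = queue.pop(0)
        if url in seen or not filter_ok(url):
            continue
        seen.add(url)
        page = urllib2.urlopen(url)
        try:
            html = page.read()
        finally:
            page.close()
        # Grab every href="..." from <a> tags (crude but adequate here).
        queue.extend(re.findall(r'<a\s[^>]*href="([^"]+)"', html, re.I))
    return seen
----------------------------------------------------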

In the filter there is a method like this:

--------------------------------------------------
# check whether the url can be connected
def filteredByConnection(self, url):
    assert url

    try:
        webPage = urllib2.urlopen(url)
    except urllib2.HTTPError:   # subclass of URLError, so catch it first
        self.logGenerator.log("Error: " + url + " not found")
        return False
    except urllib2.URLError:
        self.logGenerator.log("Error: " + url + " <urlopen error timed out>")
        return False
    self.logGenerator.log("Connecting " + url + " succeeded")
    webPage.close()
    return True
----------------------------------------------------

But every time, after about 70 to 75 URLs have been tested this way,
the program crashes and all the remaining URLs raise urllib2.URLError
until the program exits. I have tried many ways to work around it,
such as using urllib instead, or putting a sleep(1) in the filter (I
thought the sheer number of URLs was crashing the program), but none
of them works. BTW, if I set the URL at which the program crashed as
the base URL, the program still crashes around the 70th-75th URL. How
can I solve this problem? Thanks for your help.

Regards,
Johnny

Oct 11 '05 #1
11 Replies


Johnny Lee <jo************@hotmail.com> wrote:
[...]
But every time, after about 70 to 75 URLs have been tested this way,
the program crashes and all the remaining URLs raise urllib2.URLError
until the program exits. [...] How can I solve this problem?

Sure looks like a resource leak somewhere (probably leaving a file open
until your program hits some wall of maximum simultaneously open files),
but I can't reproduce it here (MacOSX, tried both Python 2.3.5 and
2.4.1). What version of Python are you using, and on what platform?
Maybe a simple Python upgrade might fix your problem...
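
In the meantime it's worth making sure the response object is closed on
every path, including errors. A sketch of your filter along those lines
(same structure as yours, with urllib2 imported and logGenerator as in
your code; the try blocks are nested because try/except/finally can't
be combined before 2.5):

----------------------------------------------------
def filteredByConnection(self, url):
    assert url
    webPage = None
    try:
        try:
            webPage = urllib2.urlopen(url)
            self.logGenerator.log("Connecting " + url + " succeeded")
            return True
        except urllib2.HTTPError:   # subclass of URLError: catch it first
            self.logGenerator.log("Error: " + url + " not found")
            return False
        except urllib2.URLError:
            self.logGenerator.log("Error: " + url + " <urlopen error timed out>")
            return False
    finally:
        if webPage is not None:
            webPage.close()   # runs on success and failure alike
----------------------------------------------------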
Alex
Oct 11 '05 #2

Alex Martelli wrote:
[...]
What version of Python are you using, and on what platform?
Maybe a simple Python upgrade might fix your problem...

Thanks for the info you provided. I'm using 2.4.1 on Cygwin under WinXP.
If you want to reproduce the problem, I can send you the source.

This morning I found that this is caused by urllib2. When I use urllib
instead of urllib2, it doesn't crash any more. But the problem is that
I want to catch HTTP 404 errors, which FancyURLopener handles silently
inside urllib.open(), so I can't catch them.
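
(A workaround along these lines should be possible: subclass
FancyURLopener and override its default error handler so HTTP errors
propagate again instead of being swallowed. A sketch, with the handler
signature as documented for urllib; the URL is just an example:)

----------------------------------------------------
import urllib

class RaisingOpener(urllib.FancyURLopener):
    # FancyURLopener normally turns HTTP errors into a fake page;
    # raising here makes 404s and friends catchable again.
    def http_error_default(self, url, fp, errcode, errmsg, headers):
        if fp is not None:
            fp.close()
        raise IOError("http error", errcode, errmsg)

opener = RaisingOpener()
try:
    page = opener.open("http://www.example.com/no-such-page")
    try:
        data = page.read()
    finally:
        page.close()
except IOError, e:
    print "Failed:", e
----------------------------------------------------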

Regards,
Johnny

Oct 12 '05 #3

Johnny Lee wrote:
[...]
I'm using 2.4.1 on Cygwin under WinXP. If you want to reproduce the
problem, I can send you the source.
[...]
But the problem is that I want to catch HTTP 404 errors, which
FancyURLopener handles silently inside urllib.open(), so I can't catch
them.

I'm using exactly that configuration, so if you let me have that source
I could take a look at it for you.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Oct 12 '05 #4

Steve Holden wrote:
[...]
I'm using exactly that configuration, so if you let me have that source
I could take a look at it for you.

I've sent the source, thanks for your help.

Regards,
Johnny

Oct 12 '05 #5

Johnny Lee wrote:
[...]
I've sent the source, thanks for your help.

Preliminary result, in case this rings bells with people who use urllib2
quite a lot. I modified the error case to report the actual message
returned with the exception, and I'm seeing things like:

http://www.holdenweb.com/./Python/webframeworks.html
Message: <urlopen error (120, 'Operation already in progress')>
Start process
http://www.amazon.com/exec/obidos/AS...steveholden-20
Error: IOError while parsing
http://www.amazon.com/exec/obidos/AS...steveholden-20
Message: <urlopen error (120, 'Operation already in progress')>
.
.
.

So at least we know now what the error is, and it looks like some sort
of resource limit (though why only on Cygwin beats me) ... anyone,
before I start some serious debugging?
Oct 12 '05 #6

Steve Holden wrote:
[...]
So at least we know now what the error is, and it looks like some sort
of resource limit (though why only on Cygwin beats me) ... anyone,
before I start some serious debugging?

I realized after this post that WingIDE doesn't run under Cygwin, so I
modified the code further to raise an error and give us a proper
traceback. I also tested the program under the standard Windows 2.4.1
release, where it didn't fail, so I conclude you have unearthed a Cygwin
socket bug. Here's the traceback:

End process http://www.holdenweb.com/contact.html
Start process http://freshmeat.net/releases/192449
Error: IOError while parsing http://freshmeat.net/releases/192449
Message: <urlopen error (120, 'Operation already in progress')>
Traceback (most recent call last):
  File "Spider_bug.py", line 225, in ?
    spider.run()
  File "Spider_bug.py", line 143, in run
    self.grabUrl(tempUrl)
  File "Spider_bug.py", line 166, in grabUrl
    webPage = urllib2.urlopen(url).read()
  File "/usr/lib/python2.4/urllib2.py", line 130, in urlopen
    return _opener.open(url, data)
  File "/usr/lib/python2.4/urllib2.py", line 358, in open
    response = self._open(req, data)
  File "/usr/lib/python2.4/urllib2.py", line 376, in _open
    '_open', req)
  File "/usr/lib/python2.4/urllib2.py", line 337, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.4/urllib2.py", line 1021, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.4/urllib2.py", line 996, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error (120, 'Operation already in progress')>

Looking at that part of the source of urllib2 we see:

headers["Connection"] = "close"
try:
h.request(req.get_method(), req.get_selector(), req.data,
headers)
r = h.getresponse()
except socket.error, err: # XXX what error?
raise URLError(err)

So my conclusion is that there's something in the Cygwin socket module
that causes problems not seen under other platforms.

I couldn't find any obviously-related error in the Python bug tracker,
and I have copied this message to the Cygwin list in case someone there
knows what the problem is.

Before making any kind of bug submission you should really see if you
can build a program shorter than the existing 220+ lines that
demonstrates the bug, but it does look to me as though your program
should work (as indeed it does on other platforms).
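
Something as small as this loop might already do it (a sketch; any
reachable URL will do):

----------------------------------------------------
import urllib2

url = "http://www.python.org/"
for i in range(200):
    try:
        r = urllib2.urlopen(url)
        r.read()
        r.close()
    except urllib2.URLError, e:
        # On the failing platform this should trip around iteration 70-75.
        print "failed at iteration", i, "-", e
        break
----------------------------------------------------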

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Oct 12 '05 #7

Steve Holden wrote:
[...]
So my conclusion is that there's something in the Cygwin socket module
that causes problems not seen under other platforms.
[...]

But if you change urllib2 to urllib, it works under Cygwin. Are they
using different mechanisms to connect to the page?

Oct 12 '05 #8

Johnny Lee wrote:
[...]
But if you change urllib2 to urllib, it works under Cygwin. Are they
using different mechanisms to connect to the page?

I haven't looked into it that deeply. Perhaps you can take a look at the
library code to see what the differences are?
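
They do take different paths: urllib goes through FancyURLopener and
the old httplib.HTTP interface, while urllib2 routes each request
through a chain of handler objects over httplib.HTTPConnection. A
quick way to see urllib2's default chain (a sketch):

----------------------------------------------------
import urllib2

opener = urllib2.build_opener()
for handler in opener.handlers:
    # Prints ProxyHandler, HTTPHandler, HTTPRedirectHandler, etc.
    print handler.__class__.__name__
----------------------------------------------------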

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Oct 12 '05 #9

Steve Holden <st***@holdenweb.com> writes:
[...]
So my conclusion is that there's something in the Cygwin socket module
that causes problems not seen under other platforms.

I couldn't find any obviously-related error in the Python bug tracker,
and I have copied this message to the Cygwin list in case someone there
knows what the problem is.
[...]


I don't *think* this is related, but just in case:

http://python.org/sf/1208304
John

Oct 13 '05 #10

John J. Lee wrote:
[...]
I don't *think* this is related, but just in case:

http://python.org/sf/1208304

Good catch, John, I suspect this is a possibility so I've added the
following note:

"""The Windows 2.4.1 build doesn't show this error, but the Cygwin 2.4.1
build does still have uncollectable objects after a urllib2.urlopen(),
so there may be a platform dependency here. No 2.4.2 on Cygwin yet, so
nothing conclusive as lsof isn't available."""
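
If uncollectable cycles really are what keeps the sockets alive,
forcing a collection between fetches would be a cheap experiment (a
workaround sketch, not a fix):

----------------------------------------------------
import gc
import urllib2

def fetch(url):
    page = urllib2.urlopen(url)
    try:
        return page.read()
    finally:
        page.close()
        gc.collect()   # break reference cycles so the socket is really freed
----------------------------------------------------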

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Oct 14 '05 #11

Steve Holden wrote:
[...]
"""The Windows 2.4.1 build doesn't show this error, but the Cygwin 2.4.1
build does still have uncollectable objects after a urllib2.urlopen(),
so there may be a platform dependency here. No 2.4.2 on Cygwin yet, so
nothing conclusive as lsof isn't available."""


Maybe it really is a platform-dependency problem. Take a look at this
brief example (not using urllib, but it shows the platform dependency
of Python):

Here is the snapshot from DOS:
-----------------------
D:\>python
ActivePython 2.4.1 Build 247 (ActiveState Corp.) based on
Python 2.4.1 (#65, Jun 20 2005, 17:01:55) [MSC v.1310 32 bit (Intel)]
on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> f = open("t", "r")
>>> f.tell()
0L
>>> f.readline()
'http://cn.realestate.yahoo.com\n'
>>> f.tell()
28L
--------------------------

Here is a snapshot from Cygwin:
---------------------------
Johnny Lee@esmcn-johnny /cygdrive/d
$ python
Python 2.4.1 (#1, May 27 2005, 18:02:40)
[GCC 3.3.3 (cygwin special)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> f = open("t", "r")
>>> f.tell()
0L
>>> f.readline()
'http://cn.realestate.yahoo.com\n'
>>> f.tell()
31L
--------------------------------
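
(The differing tell() values look like a text-mode artifact: in "r"
mode the Windows C runtime translates line endings, so tell() need not
be a plain byte offset, while Cygwin's POSIX layer reports raw bytes.
Opening the file in binary mode should give consistent offsets on
both; a quick check:)

----------------------------------------------------
f = open("t", "rb")          # binary mode: no line-ending translation
line = f.readline()
print len(line), f.tell()    # should print the same pair on DOS and Cygwin
f.close()
----------------------------------------------------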

Oct 14 '05 #12
