Bytes | Software Development & Data Engineering Community
A problem while using urllib

Hi,
I am using urllib to grab URLs from the web. Here is the workflow of
my program:

1. Get the base URL and the maximum number of URLs from the user
2. Call the filter to validate the base URL
3. Read the source of the base URL and grab all the URLs from the "href"
attribute of each "a" tag
4. Call the filter to validate every URL grabbed
5. Repeat steps 3-4 until the number of URLs grabbed reaches the limit

The filter contains a method like this:

--------------------------------------------------
# check whether the url can be connected to
def filteredByConnection(self, url):
    assert url

    try:
        webPage = urllib2.urlopen(url)
    except urllib2.HTTPError:
        # HTTPError is a subclass of URLError, so it must be caught first
        self.logGenerator.log("Error: " + url + " not found")
        return False
    except urllib2.URLError:
        self.logGenerator.log("Error: " + url + " <urlopen error timed out>")
        return False
    self.logGenerator.log("Connecting " + url + " succeeded")
    webPage.close()
    return True
----------------------------------------------------
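One subtlety in a filter like this: urllib2.HTTPError is a subclass of urllib2.URLError, so the order of the except clauses matters. If the URLError clause is listed first, it also swallows HTTP errors and the HTTPError branch never runs. A minimal, self-contained illustration of the rule, with plain stand-in classes rather than the urllib2 exceptions:

```python
class BaseError(Exception):          # stands in for urllib2.URLError
    pass

class DerivedError(BaseError):       # stands in for urllib2.HTTPError
    pass

def classify(exc_type):
    # except clauses are tried top to bottom, so the subclass must be
    # listed before its base class for its handler to be reachable
    try:
        raise exc_type()
    except DerivedError:
        return "derived"
    except BaseError:
        return "base"
```

With the two clauses swapped, classify(DerivedError) would fall into the base handler, just as an HTTPError would be reported as a generic URLError.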

But every time, after 70 to 75 URLs have been tested this way, the
program breaks down and every URL left raises urllib2.URLError until the
program exits. I tried many ways to work around it: using urllib
instead, putting a sleep(1) in the filter (I thought the sheer number of
URLs was crashing the program), but none of them works. BTW, if I set
the URL at which the program crashed as the new base URL, the program
still crashes at around the 70th-75th URL. How can I solve this
problem? Thanks for your help.

Regards,
Johnny

Oct 11 '05 #1
Johnny Lee <jo************@hotmail.com> wrote:
[...]


Sure looks like a resource leak somewhere (probably leaving a file open
until your program hits some wall of maximum simultaneously open files),
but I can't reproduce it here (MacOSX, tried both Python 2.3.5 and
2.4.1). What version of Python are you using, and on what platform?
Maybe a simple Python upgrade might fix your problem...
Alex
Oct 11 '05 #2

Alex Martelli wrote:
[...]


Thanks for the info you provided. I'm using 2.4.1 on Cygwin under WinXP.
If you want to reproduce the problem, I can send you the source.

This morning I found that this is caused by urllib2. When I use urllib
instead of urllib2, it doesn't crash any more. But the problem is that I
want to catch the HTTP 404 error, and that is handled internally by
FancyURLopener in urllib.urlopen(), so I can't catch it.
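One way to get the 404 back out of urllib is to subclass FancyURLopener and override its default error handler, which normally hands back the server's error page as if it were a valid result. A sketch (the StrictURLopener name is made up here; the import fallback is only so the snippet also runs on modern Pythons, where the class moved to urllib.request):

```python
try:
    from urllib import FancyURLopener          # Python 2, as in this thread
except ImportError:
    from urllib.request import FancyURLopener  # Python 3 location

class StrictURLopener(FancyURLopener):
    # FancyURLopener silently returns the server's error page for 404s;
    # overriding http_error_default restores raise-on-error behaviour
    def http_error_default(self, url, fp, errcode, errmsg, headers):
        if fp:
            fp.close()
        raise IOError("http error", errcode, errmsg)
```

StrictURLopener().open(url) then raises IOError for a 404 instead of returning the error page, and the filter can catch and log it.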

Regards,
Johnny

Oct 12 '05 #3
Johnny Lee wrote:
[...]

I'm using exactly that configuration, so if you let me have that source
I could take a look at it for you.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Oct 12 '05 #4

Steve Holden wrote:
[...]

I've sent the source, thanks for your help.

Regards,
Johnny

Oct 12 '05 #5
Johnny Lee wrote:
[...]
Preliminary result, in case this rings bells with people who use urllib2
quite a lot. I modified the error case to report the actual message
returned with the exception and I'm seeing things like:

http://www.holdenweb.com/./Python/webframeworks.html
Message: <urlopen error (120, 'Operation already in progress')>
Start process
http://www.amazon.com/exec/obidos/AS...steveholden-20
Error: IOError while parsing
http://www.amazon.com/exec/obidos/AS...steveholden-20
Message: <urlopen error (120, 'Operation already in progress')>
Oct 12 '05 #6
Steve Holden wrote:
[...]

So at least we know now what the error is, and it looks like some sort
of resource limit (though why only on Cygwin beats me) ... anyone,
before I start some serious debugging?

I realized after this post that WingIDE doesn't run under Cygwin, so I
modified the code further to raise an error and give us a proper
traceback. I also tested the program under the standard Windows 2.4.1
release, where it didn't fail, so I conclude you have unearthed a Cygwin
socket bug. Here's the traceback:

End process http://www.holdenweb.com/contact.html
Start process http://freshmeat.net/releases/192449
Error: IOError while parsing http://freshmeat.net/releases/192449
Message: <urlopen error (120, 'Operation already in progress')>
Traceback (most recent call last):
  File "Spider_bug.py", line 225, in ?
    spider.run()
  File "Spider_bug.py", line 143, in run
    self.grabUrl(tempUrl)
  File "Spider_bug.py", line 166, in grabUrl
    webPage = urllib2.urlopen(url).read()
  File "/usr/lib/python2.4/urllib2.py", line 130, in urlopen
    return _opener.open(url, data)
  File "/usr/lib/python2.4/urllib2.py", line 358, in open
    response = self._open(req, data)
  File "/usr/lib/python2.4/urllib2.py", line 376, in _open
    '_open', req)
  File "/usr/lib/python2.4/urllib2.py", line 337, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.4/urllib2.py", line 1021, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.4/urllib2.py", line 996, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error (120, 'Operation already in progress')>

Looking at that part of the source of urllib2 we see:

    headers["Connection"] = "close"
    try:
        h.request(req.get_method(), req.get_selector(), req.data, headers)
        r = h.getresponse()
    except socket.error, err: # XXX what error?
        raise URLError(err)

So my conclusion is that there's something in the Cygwin socket module
that causes problems not seen under other platforms.

I couldn't find any obviously-related error in the Python bug tracker,
and I have copied this message to the Cygwin list in case someone there
knows what the problem is.

Before making any kind of bug submission you should really see if you
can build a program shorter than the existing 220+ lines to demonstrate
the bug, but it does look to me like your program should work (as indeed
it does on other platforms).

regards
Steve

Oct 12 '05 #7

Steve Holden wrote:
[...]


But if you change urllib2 to urllib, it works under Cygwin. Are they
using different mechanisms to connect to the page?

Oct 12 '05 #8
Johnny Lee wrote:
[...]

I haven't looked into it that deeply. Perhaps you can take a look at the
library code to see what the differences are?

regards
Steve

Oct 12 '05 #9
Steve Holden <st***@holdenweb.com> writes:
[...]

I don't *think* this is related, but just in case:

http://python.org/sf/1208304
John

Oct 13 '05 #10
John J. Lee wrote:
[...]

Good catch, John, I suspect this is a possibility so I've added the
following note:

"""The Windows 2.4.1 build doesn't show this error, but the Cygwin 2.4.1
build does still have uncollectable objects after a urllib2.urlopen(),
so there may be a platform dependency here. No 2.4.2 on Cygwin yet, so
nothing conclusive as lsof isn't available."""
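Until the leak is pinned down, one defensive workaround is to make sure every response gets closed even when read() fails, and to nudge the garbage collector between fetches so uncollectable response objects release their sockets sooner. A sketch only; the fetch helper and its opener argument are hypothetical, not part of urllib2:

```python
import gc

def fetch(url, opener):
    # opener is any callable returning a file-like response object,
    # e.g. urllib2.urlopen; close the response even if read() raises,
    # then collect so lingering objects give back their descriptors
    page = opener(url)
    try:
        data = page.read()
    finally:
        page.close()
    gc.collect()
    return data
```

This doesn't fix the underlying Cygwin behaviour, but it keeps the number of simultaneously open descriptors as low as the program can manage.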

regards
Steve

Oct 14 '05 #11

Steve Holden wrote:
[...]


Maybe it's really a problem of platform dependency. Take a look at this
brief example (not using urllib, but it shows the platform dependency of
Python):

Here is a snapshot from DOS:
-----------------------
D:\>python
ActivePython 2.4.1 Build 247 (ActiveState Corp.) based on
Python 2.4.1 (#65, Jun 20 2005, 17:01:55) [MSC v.1310 32 bit (Intel)]
on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> f = open("t", "r")
>>> f.tell()
0L
>>> f.readline()
'http://cn.realestate.yahoo.com\n'
>>> f.tell()
28L

--------------------------

Here is a snapshot from Cygwin:
---------------------------
Johnny Lee@esmcn-johnny /cygdrive/d
$ python
Python 2.4.1 (#1, May 27 2005, 18:02:40)
[GCC 3.3.3 (cygwin special)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> f = open("t", "r")
>>> f.tell()
0L
>>> f.readline()
'http://cn.realestate.yahoo.com\n'
>>> f.tell()
31L

--------------------------------
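The discrepancy is a text-mode artifact: for files opened in text mode, tell() is only guaranteed to return an opaque position cookie, not a character count, and the value varies with the platform's newline handling. Opening the file in binary mode gives true byte offsets that agree everywhere. A small sketch, assuming a file "t" with the same single line the sessions above read:

```python
# write the line in binary mode so the file contents are identical
# on every platform (30 URL bytes plus one newline byte)
f = open("t", "wb")
f.write("http://cn.realestate.yahoo.com\n".encode("ascii"))
f.close()

# binary mode performs no newline translation, so tell() reports
# the real byte offset after the readline
f = open("t", "rb")
line = f.readline()
offset = f.tell()   # 31 on every platform
f.close()
```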

Oct 14 '05 #12
