Hi,
I was using urllib2 to grab URLs from the web. Here is the workflow of
my program:
1. Get the base URL and the maximum number of URLs from the user
2. Call the filter to validate the base URL
3. Read the source of the base URL and grab all the URLs from the "href"
attribute of each "a" tag
4. Call the filter to validate every URL grabbed
5. Repeat steps 3-4 until the number of URLs grabbed reaches the limit
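For readers following along, the loop described above can be sketched roughly like this. Fetching is stubbed out as a plain function argument, and all names here are invented for illustration, not taken from the actual program:

```python
import re
from collections import deque

# Naive href extractor; a real spider would use an HTML parser
HREF_RE = re.compile(r'<a\s[^>]*href="([^"]+)"', re.IGNORECASE)

def extract_links(html):
    """Grab every URL in the href attribute of an <a> tag."""
    return HREF_RE.findall(html)

def crawl(base_url, max_urls, fetch, is_valid):
    """Breadth-first crawl: fetch pages, collect hrefs, stop at max_urls.

    fetch(url) -> html source; is_valid(url) -> bool (the "filter").
    """
    seen = set()
    queue = deque([base_url])
    collected = []
    while queue and len(collected) < max_urls:
        url = queue.popleft()
        if url in seen or not is_valid(url):
            continue
        seen.add(url)
        collected.append(url)
        for link in extract_links(fetch(url)):
            queue.append(link)
    return collected
```

With a dictionary of canned pages standing in for the network, `crawl("http://a", 2, pages.get, lambda u: True)` walks the link graph breadth-first and stops after two URLs.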
In the filter there is a method like this:
--------------------------------------------------
    # Check whether the URL can be connected to
    def filteredByConnection(self, url):
        assert url
        try:
            webPage = urllib2.urlopen(url)
        except urllib2.HTTPError:
            # HTTPError is a subclass of URLError, so it must be
            # caught first or the URLError clause will swallow it
            self.logGenerator.log("Error: " + url + " not found")
            return False
        except urllib2.URLError:
            self.logGenerator.log("Error: " + url + " <urlopen error timed out>")
            return False
        self.logGenerator.log("Connecting " + url + " succeeded")
        webPage.close()
        return True
--------------------------------------------------
But every time, after 70 to 75 URLs have been tested this way, the
program breaks down, and all the remaining URLs raise urllib2.URLError
until the program exits. I have tried many ways to work around it:
using urllib, putting a sleep(1) in the filter (I thought the sheer
number of URLs was crashing the program). But none of them works.
BTW, if I set the URL at which the program crashed as the base URL, the
program still crashes around the 70th-75th URL. How can I solve this
problem? Thanks for your help.
Regards,
Johnny
Johnny Lee <jo************ @hotmail.com> wrote:
[...]
Sure looks like a resource leak somewhere (probably leaving a file open
until your program hits some wall of maximum simultaneously open files),
but I can't reproduce it here (MacOSX, tried both Python 2.3.5 and
2.4.1). What version of Python are you using, and on what platform?
Maybe a simple Python upgrade might fix your problem...
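One classic way such a leak happens is an opened page that never gets close()d because a later read() raises partway through: each failure then strands a file descriptor until the process hits its open-file limit. A defensive pattern is to close in a finally block; this sketch passes the opener in as a plain callable (a stand-in for urllib2.urlopen) so it can be shown self-contained:

```python
def fetch_page(open_fn, url):
    """Open url, read it, and guarantee the handle is closed
    even if read() raises partway through.

    Without the finally, every failed read() leaks one file
    descriptor, and the process eventually hits its limit.
    """
    handle = open_fn(url)
    try:
        return handle.read()
    finally:
        handle.close()
```

In the real spider this would be `fetch_page(urllib2.urlopen, url)`; the try/finally is what matters, not the wrapper's name.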
Alex
Alex Martelli wrote:
[...]
Thanks for the info you provided. I'm using 2.4.1 on Cygwin on WinXP.
If you want to reproduce the problem, I can send the source to you.
This morning I found that this is caused by urllib2. When I use urllib
instead of urllib2, it doesn't crash any more. But the problem is that I
want to catch the HTTP 404 error, which is handled internally by
FancyURLopener in urllib.open(), so I can't catch it.
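Incidentally, there is a second trap in the filter method above: in urllib2, HTTPError is a subclass of URLError, so an `except urllib2.URLError` clause listed first catches 404s too and the HTTPError clause never runs. The hierarchy is easy to verify offline; this sketch uses the Python 3 spellings, where the same classes live in urllib.error:

```python
# Python 3 spelling of urllib2's exception classes
from urllib.error import URLError, HTTPError

def classify(exc):
    """Return which handler runs when handlers are in the correct order."""
    try:
        raise exc
    except HTTPError as e:   # must come first: HTTPError subclasses URLError
        return "http_error_%d" % e.code
    except URLError:
        return "url_error"
```

Swapping the two except clauses makes `classify` report "url_error" for a 404 as well, which is exactly the silent misrouting in the original filter.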
Regards,
Johnny
Johnny Lee wrote:
[...]
I'm using exactly that configuration, so if you let me have that source
I could take a look at it for you.
regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/
Steve Holden wrote:
[...]
I've sent the source, thanks for your help.
Regards,
Johnny
Johnny Lee wrote:
[...] I've sent the source, thanks for your help.
[...]
Preliminary result, in case this rings bells with people who use urllib2
quite a lot. I modified the error case to report the actual message
returned with the exception, and I'm seeing things like:

http://www.holdenweb.com/./Python/webframeworks.html
Message: <urlopen error (120, 'Operation already in progress')>
Start process http://www.amazon.com/exec/obidos/AS...steveholden-20
Error: IOError while parsing http://www.amazon.com/exec/obidos/AS...steveholden-20
Message: <urlopen error (120, 'Operation already in progress')>
Steve Holden wrote:
[...]
So at least we know now what the error is, and it looks like some sort of resource limit (though why only on Cygwin beats me) ... anyone, before I start some serious debugging?
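As an aside, numbers like the 120 above are raw errno values, and they differ across platforms. The message 'Operation already in progress' is the standard strerror text for EALREADY (a non-blocking socket reporting that a connect is still pending), which suggests Cygwin's errno table puts EALREADY at 120. Rather than hard-coding the number, the symbolic name can be looked up portably:

```python
import errno
import os

def describe(err_number):
    """Map a numeric errno to its symbolic name and message text."""
    name = errno.errorcode.get(err_number, "UNKNOWN")
    return "%s (%d): %s" % (name, err_number, os.strerror(err_number))
```

For example, `describe(errno.EALREADY)` names the error regardless of which number the platform assigns it.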
I realized after this post that WingIDE doesn't run under Cygwin, so I
modified the code further to raise an error and give us a proper
traceback. I also tested the program under the standard Windows 2.4.1
release, where it didn't fail, so I conclude you have unearthed a Cygwin
socket bug. Here's the traceback:
End process http://www.holdenweb.com/contact.html
Start process http://freshmeat.net/releases/192449
Error: IOError while parsing http://freshmeat.net/releases/192449
Message: <urlopen error (120, 'Operation already in progress')>
Traceback (most recent call last):
  File "Spider_bug.py", line 225, in ?
    spider.run()
  File "Spider_bug.py", line 143, in run
    self.grabUrl(tempUrl)
  File "Spider_bug.py", line 166, in grabUrl
    webPage = urllib2.urlopen(url).read()
  File "/usr/lib/python2.4/urllib2.py", line 130, in urlopen
    return _opener.open(url, data)
  File "/usr/lib/python2.4/urllib2.py", line 358, in open
    response = self._open(req, data)
  File "/usr/lib/python2.4/urllib2.py", line 376, in _open
    '_open', req)
  File "/usr/lib/python2.4/urllib2.py", line 337, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.4/urllib2.py", line 1021, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.4/urllib2.py", line 996, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error (120, 'Operation already in progress')>
Looking at that part of the source of urllib2 we see:
        headers["Connection"] = "close"
        try:
            h.request(req.get_method(), req.get_selector(), req.data,
                      headers)
            r = h.getresponse()
        except socket.error, err: # XXX what error?
            raise URLError(err)
So my conclusion is that there's something in the Cygwin socket module
that causes problems not seen under other platforms.
I couldn't find any obviously-related error in the Python bug tracker,
and I have copied this message to the Cygwin list in case someone there
knows what the problem is.
Before making any kind of bug submission you should really see if you
can build a program shorter than the existing 220+ lines to demonstrate
the bug, but it does look to me like your program should work (as indeed
it does on other platforms).
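A stripped-down repro might be nothing more than the open/close loop itself. Here is one possible shape (the URL and attempt count are placeholders, and the opener is passed in as a parameter so the loop can be exercised without network access; the `except ... as` spelling is the modern syntax, not Python 2.4's):

```python
def stress_open(open_fn, url, attempts):
    """Open and close the same URL repeatedly, reporting the first
    attempt that fails.

    If a descriptor leak is the culprit, the failure should appear
    at roughly the same attempt number on every run.
    """
    for i in range(1, attempts + 1):
        try:
            handle = open_fn(url)
        except IOError as err:  # urllib2.URLError subclasses IOError
            return (i, err)     # first failing attempt and its error
        handle.close()
    return (None, None)

if __name__ == "__main__":
    import urllib2  # the module under suspicion
    print(stress_open(urllib2.urlopen, "http://www.python.org/", 100))
```

If this small loop also dies around attempt 70-75 under Cygwin, that is a far better bug-report attachment than the full spider.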
regards
Steve
Steve Holden wrote:
[...]
But if you change urllib2 to urllib, it works under Cygwin. Are they
using different mechanisms to connect to the page?
Johnny Lee wrote:
[...]
But if you change urllib2 to urllib, it works under cygwin. Are they using different mechanism to connect to the page?
I haven't looked into it that deeply. Perhaps you can take a look at the
library code to see what the differences are?
regards
Steve
Steve Holden <st***@holdenweb.com> writes:
[...]
So my conclusion is that there's something in the Cygwin socket module that causes problems not seen under other platforms.
I couldn't find any obviously-related error in the Python bug tracker, and I have copied this message to the Cygwin list in case someone there knows what the problem is.
[...]
I don't *think* this is related, but just in case: http://python.org/sf/1208304
John