urllib (54, 'Connection reset by peer') error

Hi,

I have a small Python script to fetch some pages from the internet.
There are a lot of pages, so I am looping through them and downloading
each one with urlretrieve() from the urllib module.

The problem is that after 110 pages or so the script hangs, and then I
get the following traceback:
Traceback (most recent call last):
  File "volume_archiver.py", line 21, in <module>
    urllib.urlretrieve(remotefile,localfile)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/urllib.py", line 89, in urlretrieve
    return _urlopener.retrieve(url, filename, reporthook, data)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/urllib.py", line 222, in retrieve
    fp = self.open(url, data)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/urllib.py", line 190, in open
    return getattr(self, name)(url)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/urllib.py", line 328, in open_http
    errcode, errmsg, headers = h.getreply()
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/httplib.py", line 1195, in getreply
    response = self._conn.getresponse()
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/httplib.py", line 924, in getresponse
    response.begin()
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/httplib.py", line 385, in begin
    version, status, reason = self._read_status()
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/httplib.py", line 343, in _read_status
    line = self.fp.readline()
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/socket.py", line 331, in readline
    data = recv(1)
IOError: [Errno socket error] (54, 'Connection reset by peer')
My script code is as follows:
-----------------------------------------
import os
import urllib

volume_number = 149  # The volumes number 150 to 544

while volume_number < 544:
    volume_number = volume_number + 1
    localfile = '/Users/Chris/Desktop/Decisions/' + str(volume_number) + '.html'
    remotefile = 'http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=us&navby=vol&vol=' + str(volume_number)
    print 'Getting volume number:', volume_number
    urllib.urlretrieve(remotefile, localfile)

print 'Download complete.'
-----------------------------------------

Once I get the error, running the script again doesn't do much good.
It usually gets two or three pages and then hangs again.

What is causing this?


Jun 27 '08 #1
On Jun 13, 4:21 pm, chrispoliq...@gmail.com wrote:
> What is causing this?
The server is causing it; you could just alter your code:

import os
import urllib
import time

volume_number = 149  # The volumes number 150 to 544
localfile = '/Users/Chris/Desktop/Decisions/%s.html'
remotefile = 'http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=us&navby=vol&vol=%s'

while volume_number < 544:
    volume_number += 1
    print 'Getting volume number:', volume_number
    try:
        urllib.urlretrieve(remotefile % volume_number, localfile % volume_number)
    except IOError:
        volume_number -= 1
        time.sleep(5)

print 'Download complete.'

That way, if the attempt fails, it rolls back the volume number, pauses
for a few seconds, and tries again.
Jun 27 '08 #2
It means your client received a TCP segment with the reset (RST) bit
set. The 'peer' will toss one your way if it determines that a
connection is no longer valid or if it receives a bad sequence number.
If I had to hazard a guess, I'd say it's probably a network device on
the server side trying to stop you from running a mass download
(especially if it's easily repeatable and happens at about the same
byte range).
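
If you want to react only to that particular reset rather than to any
IOError, something along these lines should work. It is just a sketch:
the fetch() helper is a name I made up, and the way it digs the errno
out is an assumption based on the traceback above (urllib appears to
wrap the socket error as IOError('socket error', socket.error(...))).

import errno
import socket
import urllib

def fetch(url, path):
    # Hypothetical helper: True on success, False if the peer reset the
    # connection; anything else is re-raised.
    try:
        urllib.urlretrieve(url, path)
        return True
    except IOError, e:
        # Assumption from the traceback: e.args looks like
        # ('socket error', socket.error(54, 'Connection reset by peer')).
        inner = e.args[1] if len(e.args) > 1 else None
        if isinstance(inner, socket.error) and inner.args and inner.args[0] == errno.ECONNRESET:
            return False
        raise

A retry loop like the one above could then call fetch() and only sleep
and retry when it returns False.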

-Jeff


On Fri, Jun 13, 2008 at 10:21 AM, <ch***********@gmail.com> wrote:
> What is causing this?
Jun 27 '08 #3
Thanks for the help. The error handling worked to a certain extent
but after a while the server does seem to stop responding to my
requests.

I have a list of about 7,000 links to pages I want to parse the HTML
of (it's basically a web crawler) but after a certain number of
urlretrieve() or urlopen() calls the server just stops responding.
Anyone know of a way to get around this? I don't own the server so I
can't make any modifications on that side.
Jun 27 '08 #4
ch***********@gmail.com wrote:
> Anyone know of a way to get around this? I don't own the server so I
> can't make any modifications on that side.
I think someone's already mentioned this, but it's almost
certainly explicit or implicit throttling on the remote server.
If you're pulling 7,000 pages from a single server you need to
be sure that you're within the Terms of Use of that service, or
at the least contact the maintainers as a courtesy to confirm
that this is acceptable.

If you don't, you may well cause your IP block to be banned on
their network, which could affect others as well as yourself.

TJG
Jun 27 '08 #5
Tim Golden wrote:
> If you're pulling 7,000 pages from a single server you need to
> be sure that you're within the Terms of Use of that service, or
> at the least contact the maintainers as a courtesy to confirm
> that this is acceptable.
Interestingly, "lp.findlaw.com" doesn't have any visible terms of service.
The information being downloaded is case law, which is public domain, so
there's no copyright issue. Some throttling and retry is needed to slow
down the process, but it should be fixable.

Try this: put in the retry code someone else suggested. Use a variable
retry delay, and wait one retry delay between downloading files. Whenever
a download fails, double the retry delay and try
again; don't let it get bigger than, say, 256 seconds. When a download
succeeds, halve the retry delay, but don't let it get smaller than 1 second.
That will make your downloader self-tune to the throttling imposed by
the server.
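
A rough sketch of that scheme, reusing the loop from the earlier reply
(the constants and variable names here are just illustrative, and it
hasn't been run against findlaw):

import time
import urllib

MIN_DELAY = 1.0      # never go faster than one request per second
MAX_DELAY = 256.0    # never back off for more than about four minutes

delay = MIN_DELAY
volume_number = 149  # volumes 150 to 544
while volume_number < 544:
    volume_number += 1
    localfile = '/Users/Chris/Desktop/Decisions/%d.html' % volume_number
    remotefile = ('http://caselaw.lp.findlaw.com/scripts/getcase.pl?'
                  'court=us&navby=vol&vol=%d' % volume_number)
    while True:
        time.sleep(delay)                       # wait one delay between downloads
        try:
            urllib.urlretrieve(remotefile, localfile)
        except IOError:
            delay = min(delay * 2, MAX_DELAY)   # failure: double the delay, capped
        else:
            delay = max(delay / 2, MIN_DELAY)   # success: halve it, floored
            break

print 'Download complete.'

The inner loop keeps retrying the same volume with an ever-longer pause
until it succeeds, then speeds back up gradually, which is the
self-tuning behaviour described above.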

John Nagle
Jun 27 '08 #6
