Bytes | Software Development & Data Engineering Community
urllib2 - iteration over non-sequence

I'm trying to get urllib2 to work on my server, which runs Python
2.2.1. When I run the following code:

import urllib2
for line in urllib2.urlopen('www.google.com'):
    print line

I always get the error:

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: iteration over non-sequence

Does anyone have any answers?

Jun 9 '07 #1
rp*****@gmail.com wrote:
I'm trying to get urllib2 to work on my server, which runs Python
2.2.1. When I run the following code:

import urllib2
for line in urllib2.urlopen('www.google.com'):
    print line

I always get the error:

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: iteration over non-sequence

Does anyone have any answers?
I ran your code:

>>> import urllib2
>>> urllib2.urlopen('www.google.com')
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
  File "C:\Python25\lib\urllib2.py", line 121, in urlopen
    return _opener.open(url, data)
  File "C:\Python25\lib\urllib2.py", line 366, in open
    protocol = req.get_type()
  File "C:\Python25\lib\urllib2.py", line 241, in get_type
    raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: www.google.com

Note the traceback.

You need to call it with the scheme (the URL type, e.g. http://) in front of the address:

>>> import urllib2
>>> urllib2.urlopen('http://www.google.com')
<addinfourl at 27659320 whose fp = <socket._fileobject object at 0x01A51F48>>

Python's interactive mode is very useful for tracking down this type
of problem.
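As an aside, a small guard like the one below (a sketch; the helper name is mine, not anything in urllib2) avoids that ValueError by prepending a default scheme whenever the caller leaves it off:

```python
def with_scheme(url, default='http'):
    """Prepend a scheme (e.g. 'http://') if the URL doesn't have one."""
    if '://' not in url:
        return '%s://%s' % (default, url)
    return url

print(with_scheme('www.google.com'))         # http://www.google.com
print(with_scheme('http://www.google.com'))  # already has a scheme: unchanged
```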

-Larry
Jun 9 '07 #2
Thanks for the reply, Larry, but I am still having trouble. If I
understand you correctly, you are just suggesting that I add http://
in front of the address? However, when I run this:

>>> import urllib2
>>> site = urllib2.urlopen('http://www.google.com')
>>> for line in site:
...     print line

I still get the message:

TypeError: iteration over non-sequence
  File "<stdin>", line 1
TypeError: iteration over non-sequence

Jun 9 '07 #3
rp*****@gmail.com wrote:
Thanks for the reply, Larry, but I am still having trouble. If I
understand you correctly, you are just suggesting that I add http://
in front of the address? However, when I run this:

>>> import urllib2
>>> site = urllib2.urlopen('http://www.google.com')
>>> for line in site:
...     print line

I still get the message:

TypeError: iteration over non-sequence
  File "<stdin>", line 1
TypeError: iteration over non-sequence
Newer versions of Python implement an iterator that *reads* the
contents of a file object and supplies the lines to you one-by-one in
a loop. However, you explicitly said which version of Python you are
using, and it predates that iteration support on these file-like objects.

So... you must explicitly read the contents of the file-like object
yourself, and loop through the lines yourself. However, fear not --
it's easy. The socket._fileobject object provides a method "readlines"
that reads the *entire* contents of the object and returns a list of
lines, and you can iterate through that list of lines. Like this:

import urllib2
url = urllib2.urlopen('http://www.google.com')
for line in url.readlines():
    print line
url.close()
Gary Herron


Jun 9 '07 #4
Gary Herron wrote:
So... you must explicitly read the contents of the file-like object
yourself, and loop through the lines yourself. However, fear not --
it's easy. The socket._fileobject object provides a method "readlines"
that reads the *entire* contents of the object and returns a list of
lines, and you can iterate through that list of lines. Like this:

import urllib2
url = urllib2.urlopen('http://www.google.com')
for line in url.readlines():
    print line
url.close()
This is really wasteful, as there's no point in reading in the whole
file before iterating over it. To get the same effect as file iteration
in later versions, use the .xreadlines method::

    for line in aFile.xreadlines():
        ...

--
Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 20 N 121 53 W && AIM, Y!M erikmaxfrancis
If you flee from terror, then terror continues to chase you.
-- Benjamin Netanyahu
Jun 10 '07 #5
Erik Max Francis <ma*@alcyone.com> writes:
This is really wasteful, as there's no point in reading in the whole
file before iterating over it. To get the same effect as file
iteration in later versions, use the .xreadlines method::

for line in aFile.xreadlines():
...
Ehhh, a heck of a lot of web pages don't have any newlines, so you end
up getting the whole file anyway, with that method. Something like

for line in iter(lambda: aFile.read(4096), ''): ...

may be best.
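The iter(callable, sentinel) idiom above works on any object with a read(size) method. Here is a self-contained illustration using an in-memory io.StringIO as a stand-in for the socket file object (io.StringIO is my substitution for the demo, not part of the original setup):

```python
import io

# Stand-in for the file-like object returned by urlopen: a long
# "page" with no newlines at all, so readlines() would hand back one
# giant line while chunked reads stay bounded.
f = io.StringIO('x' * 10000)

chunks = []
# iter(callable, sentinel) calls f.read(4096) repeatedly until it
# returns the sentinel '' (end of data), yielding one chunk per call.
for chunk in iter(lambda: f.read(4096), ''):
    chunks.append(chunk)

print([len(c) for c in chunks])  # [4096, 4096, 1808]
```

No chunk ever exceeds 4096 characters, regardless of how the page is laid out.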
Jun 10 '07 #6
Paul Rubin wrote:
Erik Max Francis <ma*@alcyone.com> writes:
> This is really wasteful, as there's no point in reading in the whole
> file before iterating over it. To get the same effect as file
> iteration in later versions, use the .xreadlines method::
>
>     for line in aFile.xreadlines():
>         ...

Ehhh, a heck of a lot of web pages don't have any newlines, so you end
up getting the whole file anyway, with that method. Something like

for line in iter(lambda: aFile.read(4096), ''): ...

may be best.
Certainly there are cases where xreadlines or read(bytecount) is
reasonable, but only if the total page size is *very* large. But for
most web pages, you guys are just nit-picking (or showing off) to
suggest that the full read implemented by readlines is wasteful.
Moreover, the original problem was with sockets -- which don't have
xreadlines; that seems to be a method on regular file objects.

For simplicity, I'd still suggest my original use of readlines. If
and when you find you are downloading web pages with sizes that are
putting a serious strain on your memory footprint, then one of the other
suggestions might be indicated.

Gary Herron


Jun 10 '07 #7
Gary Herron <gh*****@islandtraining.com> writes:
For simplicity, I'd still suggest my original use of readlines. If
and when you find you are downloading web pages with sizes that are
putting a serious strain on your memory footprint, then one of the other
suggestions might be indicated.
If you know in advance that the page you're retrieving will be
reasonable in size, then using readlines is fine. If you don't know
in advance what you're retrieving (e.g. you're working on a crawler)
you have to assume that you'll hit some very large pages with
difficult construction.
Jun 10 '07 #8
Gary Herron wrote:
Certainly there are cases where xreadlines or read(bytecount) is
reasonable, but only if the total page size is *very* large. But for
most web pages, you guys are just nit-picking (or showing off) to
suggest that the full read implemented by readlines is wasteful.
Moreover, the original problem was with sockets -- which don't have
xreadlines; that seems to be a method on regular file objects.

For simplicity, I'd still suggest my original use of readlines. If
and when you find you are downloading web pages with sizes that are
putting a serious strain on your memory footprint, then one of the other
suggestions might be indicated.
It isn't nit-picking to point out that you're making something that will
consume vastly more memory than it could possibly need. And insisting
that pages aren't _always_ huge is just a silly cop-out; of course pages
get very large.

There is absolutely no reason to read the entire file into memory (which
is what you're doing) before processing it. This is a good example of
the principle that there should be one obvious right way to do it -- and
it isn't to read the whole thing in first for no reason whatsoever other
than to avoid an `x`.

--
Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 20 N 121 53 W && AIM, Y!M erikmaxfrancis
The more violent the love, the more violent the anger.
-- _Burmese Proverbs_ (tr. Hla Pe)
Jun 10 '07 #9
Paul Rubin wrote:
If you know in advance that the page you're retrieving will be
reasonable in size, then using readlines is fine. If you don't know
in advance what you're retrieving (e.g. you're working on a crawler)
you have to assume that you'll hit some very large pages with
difficult construction.
And that's before you even mention the point that, depending on the
application, it could easily open yourself up to a DOS attack.

There's premature optimization, and then there's premature completely
obvious and pointless waste. This falls in the latter category.

Besides, someone was asking for/needing an older equivalent to iterating
over a file. That's obviously .xreadlines, not .readlines.
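The denial-of-service concern raised above can be handled with an explicit size cap while reading in chunks (a sketch; the function name and the default 1 MiB cap are my own choices for illustration, demonstrated on an in-memory file):

```python
import io

def read_capped(f, cap=1 << 20, chunk_size=4096):
    """Read all of a file-like object in chunks, raising if the total
    exceeds `cap` characters instead of exhausting memory."""
    pieces, total = [], 0
    for chunk in iter(lambda: f.read(chunk_size), ''):
        total += len(chunk)
        if total > cap:
            raise ValueError('response exceeded cap of %d' % cap)
        pieces.append(chunk)
    return ''.join(pieces)

print(len(read_capped(io.StringIO('x' * 5000))))  # 5000
```

An oversized response fails fast instead of ballooning the process's memory footprint.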

--
Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 20 N 121 53 W && AIM, Y!M erikmaxfrancis
The more violent the love, the more violent the anger.
-- _Burmese Proverbs_ (tr. Hla Pe)
Jun 10 '07 #10
On Sun, 10 Jun 2007 02:54:47 -0300, Erik Max Francis <ma*@alcyone.com>
wrote:

Gary Herron wrote:
> Certainly there are cases where xreadlines or read(bytecount) is
> reasonable, but only if the total page size is *very* large. But for
> most web pages, you guys are just nit-picking (or showing off) to
> suggest that the full read implemented by readlines is wasteful.
> Moreover, the original problem was with sockets -- which don't have
> xreadlines; that seems to be a method on regular file objects.
There is absolutely no reason to read the entire file into memory (which
is what you're doing) before processing it. This is a good example of
the principle of there is one obvious right way to do it -- and it isn't
to read the whole thing in first for no reason whatsoever other than to
avoid an `x`.
The problem is (and you appear not to have noticed this) that the
object returned by urlopen does NOT have an xreadlines() method; and
even if it had, a lot of pages don't contain any '\n', so using
xreadlines would read the whole page into memory anyway.

Python 2.2 (the version that the OP is using) did include an xreadlines
module (now defunct), but in this case it is painfully slow -- perhaps
it tries to read the source one character at a time.

So the best way would be to use (as Paul Rubin already said):

for line in iter(lambda: f.read(4096), ''):
    print line
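If line-by-line processing is still wanted on top of the chunked read, the chunks can be re-split with a small carry-over buffer (a sketch; the generator name is mine, and io.StringIO stands in for the urlopen object):

```python
import io

def iter_lines(f, chunk_size=4096):
    """Yield lines from a file-like object while holding at most one
    chunk plus one partial line in memory at a time."""
    buf = ''
    for chunk in iter(lambda: f.read(chunk_size), ''):
        buf += chunk
        lines = buf.split('\n')
        buf = lines.pop()          # keep the trailing partial line
        for line in lines:
            yield line
    if buf:
        yield buf                  # data after the last newline

f = io.StringIO('first\nsecond\nthird')
print(list(iter_lines(f)))  # ['first', 'second', 'third']
```

This gives the convenience of line iteration with the bounded memory use of the chunked read.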

--
Gabriel Genellina

Jun 10 '07 #11
