scraping nested tables with BeautifulSoup

I'm trying to get the data in the "Central London Property Price Guide"
box on the left-hand side of this page:
http://www.findaproperty.com/regi0018.html

I have managed to get the data :) but when I start looking for tables I
only get tables of depth 1. How do I go about accessing the inner tables?
The same happens for links...

This is what I've got so far:

import sys
from urllib import urlopen
from BeautifulSoup import BeautifulSoup

data = urlopen('http://www.findaproperty.com/regi0018.html').read()
soup = BeautifulSoup(data)

for tables in soup('table'):
    table = tables('table')
    if not table: continue
    print table  # this returns only 1 table

    # this doesn't work at all
    nested_table = table('table')
    print nested_table

all suggestions welcome

Apr 4 '06 #1
Go********@gmail.com wrote:
I'm trying to get the data in the "Central London Property Price Guide"
box on the left-hand side of this page:
http://www.findaproperty.com/regi0018.html

I have managed to get the data :) but when I start looking for tables I
only get tables of depth 1. How do I go about accessing the inner tables?
The same happens for links...

This is what I've got so far:

import sys
from urllib import urlopen
from BeautifulSoup import BeautifulSoup

data = urlopen('http://www.findaproperty.com/regi0018.html').read()
soup = BeautifulSoup(data)

for tables in soup('table'):
    table = tables('table')
    if not table: continue
    print table  # this returns only 1 table


There's something fishy here. soup('table') should yield all the tables
in the document, even nested ones. For example, this program:

data = '''
<body>
<table width='100%'>
<tr><td>
<TABLE WIDTH='150'>
<tr><td>Stuff</td></tr>
</table>
</td></tr>
</table>
</body>
'''

from BeautifulSoup import BeautifulSoup as BS

soup = BS(data)
for table in soup('table'):
    print table.get('width')

prints:

100%
150
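
An aside (not from the original reply): with the BeautifulSoup 3-style API
used throughout this thread, calling the soup object like a function is
shorthand for the recursive findAll search, so the loop above can also be
written explicitly as:

# Equivalent spelling of soup('table'): an explicit recursive search.
for table in soup.findAll('table'):
    print table.get('width')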

Another tidbit - if I open the page in Firefox and save it, then open
that file into BeautifulSoup, it finds 25 tables and this code finds the
table you want:

from BeautifulSoup import BeautifulSoup
data2 = open('regi0018-firefox.html')
soup = BeautifulSoup(data2)

print len(soup('table'))

priceGuide = soup('table', dict(bgcolor="#e0f0f8", border="0",
                                cellpadding="2", cellspacing="2", width="150"))[1]
print priceGuide.tr

prints:

25
<tr><td bgcolor="#e0f0f8" valign="top"><font face="Arial"
size="2"><b>Central London Property Price Guide</b></font></td></tr>
Looking at the saved file, Firefox has clearly done some cleanup. So I
think you have to look at why BS is not processing the original data the
way you want. It seems to be choking on something.

Kent
Apr 4 '06 #2
Hey Kent,

Thanks for your reply. How exactly did you save the file in Firefox? If
I save the file locally I get the same error.

print len(soup('table')) gives me 4 instead of 25.

Apr 4 '06 #3
Go********@gmail.com wrote:
Hey Kent,

Thanks for your reply. How exactly did you save the file in Firefox? If
I save the file locally I get the same error.


I think I right-clicked on the page and chose "Save page as..."

Here is a program that shows where BS is choking. It finds the last leaf
node in the parse data by descending the last child of each node:

from urllib import urlopen
from BeautifulSoup import BeautifulSoup

data = urlopen('http://www.findaproperty.com/regi0018.html').read()
soup = BeautifulSoup(data)

tag = soup
while hasattr(tag, 'contents') and tag.contents:
    tag = tag.contents[-1]

print type(tag)
print tag

It prints:
<class 'BeautifulSoup.NavigableString'>

<!/BUTTONS>

<TABLE BORDER=0 CELLSPACING=0 CELLPADDING=2 WIDTH=100% BGCOLOR=F0F0F0>
<TD ALIGN=left VALIGN=top>
<snip lots more>

So for some reason BS thinks that everything from <!BUTTONS> to the end
is a single string.

Kent
Apr 4 '06 #4
So it must be the malformed HTML comments that are confusing BS. I might
try different methods to see if I get the same problem...
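
One quick way to test that theory (a hypothetical sketch, not from the
original post): strip the non-standard <!FOO>-style pseudo-tags out of the
raw markup and see whether the table count changes:

import re
from urllib import urlopen
from BeautifulSoup import BeautifulSoup

data = urlopen('http://www.findaproperty.com/regi0018.html').read()

# Hypothetical check: remove the <!FOO> / <!/FOO> pseudo-tags entirely,
# then compare how many tables BeautifulSoup finds before and after.
cleaned = re.sub(r'<!(?!--)[^>]+>', '', data)

print len(BeautifulSoup(data)('table')), len(BeautifulSoup(cleaned)('table'))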

thanks

Apr 4 '06 #5
Go********@gmail.com wrote:
Hey Kent,

Thanks for your reply. How exactly did you save the file in Firefox? If
I save the file locally I get the same error.


The Firefox version, among other things, turns all the funky <!FOO> and
<!/FOO> tags into comments. Here is a way to do the same thing with BS:

import re
from urllib import urlopen
from BeautifulSoup import BeautifulSoup, BeautifulStoneSoup

# This tells BS to turn <!FOO> into <!-- FOO --> which allows it
# to do a better job parsing this data
fixExclRe = re.compile(r'<!(?!--)([^>]+)>')
BeautifulStoneSoup.PARSER_MASSAGE.append( (fixExclRe, r'<!-- \1 -->') )

data = urlopen('http://www.findaproperty.com/regi0018.html').read()
soup = BeautifulSoup(data)

priceGuide = soup('table', dict(bgcolor="e0f0f8", border="0",
                                cellpadding="2", cellspacing="2", width="150"))[1]
print priceGuide
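
As a quick sanity check (not in the original reply), the massage pattern can
be exercised on a made-up sample string to see exactly which rewrite it
performs:

import re

fixExclRe = re.compile(r'<!(?!--)([^>]+)>')
sample = '<!BUTTONS><td>x</td><!/BUTTONS><!-- a real comment -->'
print fixExclRe.sub(r'<!-- \1 -->', sample)

# prints: <!-- BUTTONS --><td>x</td><!-- /BUTTONS --><!-- a real comment -->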
Kent
Apr 4 '06 #6
Thanks Kent, that works perfectly. How can I strip all the HTML and easily
create a dictionary of {location: price}?

Apr 4 '06 #7
Go********@gmail.com wrote:
Thanks Kent, that works perfectly. How can I strip all the HTML and easily
create a dictionary of {location: price}?


This should help:

prices = priceGuide.table

for tr in prices:
    print tr.a.string, tr.a.findNext('font').string
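
Building on that loop (a sketch, not from the original reply; it assumes
every row of interest holds an <a> tag for the location followed by a <font>
tag holding the price), the {location: price} dictionary asked about above
could be collected like this:

price_guide = {}
for tr in prices('tr'):                 # walk the rows explicitly
    if tr.a is None:                    # skip rows that have no link
        continue
    location = tr.a.string
    price = tr.a.findNext('font').string
    price_guide[location] = price

print price_guide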

Kent
Apr 4 '06 #8
