
beautifulsoup vs. tidy

hi...

never used perl, but i have an issue trying to resolve some html that
appears to be "dirty/malformed" regarding the overall structure. in
researching validators, i came across the beautifulsoup app and wanted to
know if anybody could give me pros/cons of the app as it relates to any of
the other validation apps...

the issue i'm facing involves parsing some websites, so i'm trying to
extract information based on the DOM/XPath functions.. i'm using perl to
handle the extraction....

thanks

-bruce
be*******@earthlink.net

Jul 1 '06 #1
bruce wrote:
hi...

never used perl, but i have an issue trying to resolve some html that
appears to be "dirty/malformed" regarding the overall structure. in
researching validators, i came across the beautifulsoup app and wanted to
know if anybody could give me pros/cons of the app as it relates to any of
the other validation apps...

the issue i'm facing involves parsing some websites, so i'm trying to
extract information based on the DOM/XPath functions.. i'm using perl to
handle the extraction....


1.) XPath is not a good idea at all with "malformed" HTML, or perhaps
with web pages in general.
2.) BeautifulSoup is not a validator, but it works well with bad HTML
(see the sketch after this list). Also look at Mechanize and ClientForm.
3.) XMLStarlet is a good XML validator
(http://xmlstar.sourceforge.net/). It's not Python, but you don't need
to care about the language it is written in.
4.) For a simple HTML validator, just use http://validator.w3.org/
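
For illustration, a minimal sketch of point 2 -- BeautifulSoup building a
usable tree from markup that would choke a strict parser. This is Python 2
to match the thread's era, with the old BeautifulSoup 3 import (the modern
package is bs4, imported as "from bs4 import BeautifulSoup"):

from BeautifulSoup import BeautifulSoup  # modern: from bs4 import BeautifulSoup

# Deliberately broken HTML: unclosed <p> tags and an unclosed <b>.
dirty = '<html><body><p>first<p>second<b>bold</body>'

soup = BeautifulSoup(dirty)     # never raises on bad markup
print len(soup.findAll('p'))    # 2 -- both paragraphs were recovered
print soup.prettify()           # the repaired, well-nested markup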

Jul 1 '06 #2

bruce wrote:
hi...

never used perl, but i have an issue trying to resolve some html that
appears to be "dirty/malformed" regarding the overall structure. in
researching validators, i came across the beautifulsoup app and wanted to
know if anybody could give me pros/cons of the app as it relates to any of
the other validation apps...

I'm not too sure of what you are after. You mention tidy in the subject,
which made me think that maybe you were trying to generate well-formed
HTML from malformed webpages that browsers can nonetheless interpret.
If that is the case then try HTML tidy:
http://www.w3.org/People/Raggett/tidy/

- Pad.

Jul 1 '06 #3
bruce wrote:
that's exactly what i'm trying to accomplish... i've used tidy, but it
still seems to generate warnings...

initFile -> tidy -> cleanFile -> perl app (using xpath/libxml)

the xpath/libxml functions in the perl app complain about the file.


what exactly do they complain about?

</F>

Jul 1 '06 #4
Ravi Teja wrote:

1.) XPath is not a good idea at all with "malformed" HTML or perhaps
web pages in general.


import libxml2dom
import urllib
f = urllib.urlopen("http://wiki.python.org/moin/")
s = f.read()
f.close()
# s contains HTML not XML text
d = libxml2dom.parseString(s, html=1)
# get the community-related links
for label in d.xpath("//li[.//a/text() = 'Community']//li//a/text()"):
    print label.nodeValue

Of course, lxml should be able to do this kind of thing as well. I'd be
interested to know why this "is not a good idea", though.
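
For comparison, here is a rough lxml equivalent of the snippet above. This
is only a sketch: it assumes lxml's HTML-aware parser (the lxml.html module,
which arrived in later lxml releases) and that the wiki's markup still
matches that XPath expression:

import urllib
import lxml.html

s = urllib.urlopen("http://wiki.python.org/moin/").read()
d = lxml.html.fromstring(s)  # forgiving, HTML-aware parse
for label in d.xpath("//li[.//a/text() = 'Community']//li//a/text()"):
    print label              # text() matches come back as plain strings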

Paul

Jul 1 '06 #5
bruce wrote:
that's exactly what i'm trying to accomplish... i've used tidy, but it
still seems to generate warnings...

initFile -> tidy -> cleanFile -> perl app (using xpath/libxml)

the xpath/libxml functions in the perl app complain about the file. my
thought is that tidy isn't cleaning enough, or that the perl xpath/libxml
functions are too strict!

Clean HTML is not valid XML. If you want to process the output with an
XML library, you'll need to tell Tidy to output XHTML; then it should
be valid for XML processing.
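
For illustration, a minimal sketch of that pipeline driven from Python,
assuming HTML Tidy is installed and on the PATH ("dirty.html" is a
placeholder filename). -asxhtml asks Tidy for XHTML output, and -numeric
rewrites entities as numeric references so the XML parser doesn't need the
XHTML DTD:

import subprocess
from lxml import etree

p = subprocess.Popen(["tidy", "-q", "-numeric", "-asxhtml", "dirty.html"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
# Tidy exits 0 for clean input and 1 for warnings only; 2 means real errors.
if p.returncode < 2:
    doc = etree.fromstring(out)  # a strict XML parse now succeeds
    print doc.tag                # {http://www.w3.org/1999/xhtml}html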

Of course, BeautifulSoup is also a very nice library if you need to
extract some information but don't necessarily require XML processing
to do it.

-- Matt Good

Jul 1 '06 #6

Paul Boddie wrote:
Ravi Teja wrote:

1.) XPath is not a good idea at all with "malformed" HTML or perhaps
web pages in general.

import libxml2dom
import urllib
f = urllib.urlopen("http://wiki.python.org/moin/")
s = f.read()
f.close()
# s contains HTML not XML text
d = libxml2dom.parseString(s, html=1)
# get the community-related links
for label in d.xpath("//li[.//a/text() = 'Community']//li//a/text()"):
    print label.nodeValue
I wasn't aware that your module does html as well.
Of course, lxml should be able to do this kind of thing as well. I'd be
interested to know why this "is not a good idea", though.
No reason that you don't know already.

http://www.boddie.org.uk/python/HTML.html

"If the document text is well-formed XML, we could omit the html
parameter or set it to have a false value."

XML parsers are not required to be forgiving in order to be regarded as compliant.
And much HTML out there is not well formed.

Jul 1 '06 #7
Ravi Teja wrote:
>Of course, lxml should be able to do this kind of thing as well. I'd be
interested to know why this "is not a good idea", though.

No reason that you don't know already.

http://www.boddie.org.uk/python/HTML.html

"If the document text is well-formed XML, we could omit the html
parameter or set it to have a false value."

XML parsers are not required to be forgiving to be regarded compliant.
And much HTML out there is not well formed.
so? once you run it through an HTML-aware parser, the *resulting*
structure is well formed.

a site generator->converter->xpath approach is no less reliable than any
other HTML-scraping approach.
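
To illustrate, a minimal sketch with lxml: the strict XML parser rejects
tag soup outright, while an HTML-aware parse repairs it into a well-formed
tree that XPath is then happy with:

from lxml import etree

bad = "<p>unclosed paragraph<br>"

# A strict XML parse refuses the tag soup...
try:
    etree.fromstring(bad)
except etree.XMLSyntaxError, e:
    print "XML parser rejects it:", e

# ...while an HTML-aware parse repairs it, and XPath works on the result.
doc = etree.fromstring(bad, parser=etree.HTMLParser())
print doc.xpath("//p/text()")  # ['unclosed paragraph']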

</F>

Jul 2 '06 #8
bruce wrote:
hi paddy...

that's exactly what i'm trying to accomplish... i've used tidy, but it
still seems to generate warnings...

initFile -> tidy -> cleanFile -> perl app (using xpath/libxml)

the xpath/libxml functions in the perl app complain about the file. my
thought is that tidy isn't cleaning enough, or that the perl xpath/libxml
functions are too strict!

which is why i decided to see if anyone on the python side has
experienced/solved this problem..

FWIW here's my usual approach:

http://copia.ogbuji.net/blog/2005-07-22/Beyond_HTM

Personally, I avoid Tidy. I've too often seen it crash or hang on
really bad HTML. TagSoup seems to be built like a tank. I've also
never seen BeautifulSoup choke, but I don't use it as much as TagSoup.

--
Uche Ogbuji Fourthought, Inc.
http://uche.ogbuji.net http://fourthought.com
http://copia.ogbuji.net http://4Suite.org
Articles: http://uche.ogbuji.net/tech/publications/

Jul 3 '06 #9
