
BeautifulSoup vs. real-world HTML comments

The syntax that browsers understand as HTML comments is much less
restrictive than what BeautifulSoup understands. I keep running into
sites with formally incorrect HTML comments which are parsed happily
by browsers. Here's yet another example, this one from
"http://www.webdirector y.com". The page starts like this:
<!Hello there! Welcome to The Environment Directory!>
<!Not too much exciting HTML code here but it does the job! >
<!See ya, - JD >

<HTML><HEAD>
<TITLE>Environment Web Directory</TITLE>
Those are, of course, invalid HTML comments. But Firefox, IE, etc. handle them
without problems.

BeautifulSoup can't parse this page usefully at all.
It treats the entire page as a text chunk. It's actually
HTMLParser that parses comments, so this is really an HTMLParser
level problem.
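
To make the failure concrete, here is a minimal event-logging sketch against
Python's html.parser (recent versions recover from a bare <!...> by treating
it as a "bogus comment" per HTML5; the older HTMLParser/sgmllib described
above is stricter, which is precisely the problem):

from html.parser import HTMLParser

class EventLogger(HTMLParser):
    # Report each parse event so we can see where the pseudo-comments go.
    def handle_comment(self, data):
        print("comment:", data)
    def handle_starttag(self, tag, attrs):
        print("start tag:", tag)
    def handle_data(self, data):
        if data.strip():
            print("data:", data.strip())

page = """<!Hello there! Welcome to The Environment Directory!>
<HTML><HEAD>
<TITLE>Environment Web Directory</TITLE>"""

EventLogger().feed(page)
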
John Nagle
Apr 4 '07 #1
On Apr 4, 2:08 pm, John Nagle <n...@animats.com> wrote:
The syntax that browsers understand as HTML comments is much less
restrictive than what BeautifulSoup understands. I keep running into
sites with formally incorrect HTML comments which are parsed happily
by browsers. Here's yet another example, this one from
"http://www.webdirector y.com". The page starts like this:

<!Hello there! Welcome to The Environment Directory!>
<!Not too much exciting HTML code here but it does the job! >
<!See ya, - JD >

<HTML><HEAD>
<TITLE>Environment Web Directory</TITLE>

Those are, of course, invalid HTML comments. But Firefox, IE, etc. handle them
without problems.

BeautifulSoup can't parse this page usefully at all.
It treats the entire page as a text chunk. It's actually
HTMLParser that parses comments, so this is really an HTMLParser
level problem.
Google for a program called "tidy". Install it, and run it as a
filter on any HTML you download. "tidy" has invested in it quite a
bit of work understanding common bad HTML and how browsers deal with
it. It would be pointless to duplicate that work in the Python
standard library; let HTMLParser be small and tight, and outsource the
handling of floozy input to a dedicated program.
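
For what it's worth, the filter step can be as simple as this sketch
(assumes the tidy executable is on your PATH; the flags are standard tidy
options, though versions vary):

import subprocess

def tidy_html(raw_html):
    # Pipe the page through the external `tidy` program and return the
    # cleaned-up markup. tidy exits nonzero when the input has problems,
    # which is the normal case here, so no check=True.
    proc = subprocess.run(
        ["tidy", "-quiet", "-asxhtml",
         "--show-warnings", "no", "--force-output", "yes"],
        input=raw_html, capture_output=True, text=True)
    return proc.stdout
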
Carl Banks

Apr 4 '07 #2
Carl Banks wrote:
On Apr 4, 2:08 pm, John Nagle <n...@animats.com> wrote:
>BeautifulSoup can't parse this page usefully at all.
It treats the entire page as a text chunk. It's actually
HTMLParser that parses comments, so this is really an HTMLParser
level problem.

Google for a program called "tidy". Install it, and run it as a
filter on any HTML you download. "tidy" has invested in it quite a
bit of work understanding common bad HTML and how browsers deal with
it. It would be pointless to duplicate that work in the Python
standard library; let HTMLParser be small and tight, and outsource the
handling of floozy input to a dedicated program.
Well, BeautifulSoup is just such a dedicated library. However, it defers its
handling of comments to HTMLParser. That's the problem.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

Apr 4 '07 #3

Carl Banks wrote:
On Apr 4, 2:08 pm, John Nagle <n...@animats.com> wrote:
The syntax that browsers understand as HTML comments is much less
restrictive than what BeautifulSoup understands. I keep running into
sites with formally incorrect HTML comments which are parsed happily
by browsers. Here's yet another example, this one from
"http://www.webdirector y.com". The page starts like this:

<!Hello there! Welcome to The Environment Directory!>
<!Not too much exciting HTML code here but it does the job! >
<!See ya, - JD >

<HTML><HEAD>
<TITLE>Environment Web Directory</TITLE>

Those are, of course, invalid HTML comments. But Firefox, IE, etc. handle them
without problems.

BeautifulSoup can't parse this page usefully at all.
It treats the entire page as a text chunk. It's actually
HTMLParser that parses comments, so this is really an HTMLParser
level problem.

Google for a program called "tidy". Install it, and run it as a
filter on any HTML you download. "tidy" has invested in it quite a
bit of work understanding common bad HTML and how browsers deal with
it. It would be pointless to duplicate that work in the Python
standard library; let HTMLParser be small and tight, and outsource the
handling of floozy input to a dedicated program.
That's a good suggestion. In fact it looks like there's a Python API
for tidy:
http://utidylib.berlios.de/
Tried it, seems to get rid of <! comments just fine.
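
For anyone landing here later, usage is roughly the following sketch (the
parseString call and the underscore-for-hyphen option spelling are from
memory of the uTidylib docs, so verify against them):

import tidy  # uTidylib, http://utidylib.berlios.de/

raw = open("page.html").read()
# Keyword options mirror tidy's own option names (assumption:
# hyphens map to underscores, e.g. output-xhtml -> output_xhtml).
cleaned = tidy.parseString(raw, output_xhtml=1, tidy_mark=0)
print(str(cleaned))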

Apr 4 '07 #4
Carl Banks wrote:
On Apr 4, 2:08 pm, John Nagle <n...@animats.com> wrote:
> The syntax that browsers understand as HTML comments is much less
restrictive than what BeautifulSoup understands. I keep running into
sites with formally incorrect HTML comments which are parsed happily
by browsers. Here's yet another example, this one from
"http://www.webdirector y.com". The page starts like this:

<!Hello there! Welcome to The Environment Directory!>
<!Not too much exciting HTML code here but it does the job! >
<!See ya, - JD >

<HTML><HEAD>
<TITLE>Environment Web Directory</TITLE>

Those are, of course, invalid HTML comments. But Firefox, IE, etc. handle them
without problems.

BeautifulSoup can't parse this page usefully at all.
It treats the entire page as a text chunk. It's actually
HTMLParser that parses comments, so this is really an HTMLParser
level problem.

Google for a program called "tidy". Install it, and run it as a
filter on any HTML you download. "tidy" has invested in it quite a
bit of work understanding common bad HTML and how browsers deal with
it. It would be pointless to duplicate that work in the Python
standard library; let HTMLParser be small and tight, and outsource the
handling of floozy input to a dedicated program.

eGenix have produced the mxTidy library that handily incorporates these
features in a way that makes them easy for Python programmers to use.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
Recent Ramblings http://holdenweb.blogspot.com

Apr 4 '07 #5
On Apr 4, 2:43 pm, Robert Kern <robert.k...@gmail.com> wrote:
Carl Banks wrote:
On Apr 4, 2:08 pm, John Nagle <n...@animats.com> wrote:
BeautifulSoup can't parse this page usefully at all.
It treats the entire page as a text chunk. It's actually
HTMLParser that parses comments, so this is really an HTMLParser
level problem.
Google for a program called "tidy". Install it, and run it as a
filter on any HTML you download. "tidy" has invested in it quite a
bit of work understanding common bad HTML and how browsers deal with
it. It would be pointless to duplicate that work in the Python
standard library; let HTMLParser be small and tight, and outsource the
handling of floozy input to a dedicated program.

Well, BeautifulSoup is just such a dedicated library.
No, not really.
However, it defers its
handling of comments to HTMLParser. That's the problem.
Well, it's up to the writers of Beautiful Soup to decide how much bad
HTML they want to accept. ISTM they're happy to live with the
limitations of HTMLParser, meaning that they do not consider Beautiful
Soup to be a library dedicated to reading every piece of bad HTML out
there.

(Though it's not like I read their mailing list. Maybe they aren't
happy with HTMLParser.)
Carl Banks

Apr 4 '07 #6
John Nagle wrote:
The syntax that browsers understand as HTML comments is much less
restrictive than what BeautifulSoup understands. I keep running into
sites with formally incorrect HTML comments which are parsed happily
by browsers. Here's yet another example, this one from
"http://www.webdirector y.com". The page starts like this:
<!Hello there! Welcome to The Environment Directory!>
<!Not too much exciting HTML code here but it does the job! >
<!See ya, - JD >
Anything based on libxml2 and its HTML parser will handle such broken
HTML just fine, even if they just ignore such erroneous attempts at
comments, discarding them as the plain nonsense they clearly are.
Certainly, libxml2dom seems to deal with the page:

import libxml2dom
d = libxml2dom.parseURI("http://www.webdirectory.com", html=1,
                       htmlencoding="iso-8859-1")

I guess lxml and the original libxml2 bindings work at least as well.
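
A roughly equivalent sketch with lxml (untested here; lxml.html is built on
the same libxml2 parser, and parse() accepts a URL directly):

from lxml import html

# libxml2's HTML parser recovers from the bogus "<!...>" lines
# instead of giving up on the page.
doc = html.parse("http://www.webdirectory.com")
print(doc.getroot().findtext(".//title"))
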
Note that some browsers won't be as happy if you give them such
content as XHTML.

Paul

Apr 4 '07 #7
Carl Banks wrote:
On Apr 4, 2:43 pm, Robert Kern <robert.k...@gmail.com> wrote:
>Carl Banks wrote:
>>On Apr 4, 2:08 pm, John Nagle <n...@animats.com> wrote:
BeautifulSoup can't parse this page usefully at all.
It treats the entire page as a text chunk. It's actually
HTMLParser that parses comments, so this is really an HTMLParser
level problem.
Google for a program called "tidy". Install it, and run it as a
filter on any HTML you download. "tidy" has invested in it quite a
bit of work understanding common bad HTML and how browsers deal with
it. It would be pointless to duplicate that work in the Python
standard library; let HTMLParser be small and tight, and outsource the
handling of floozy input to a dedicated program.
Well, BeautifulSoup is just such a dedicated library.

No, not really.
Yes, it is. Whether it succeeds in all particulars is beside the point. The
only mission of BeautifulSoup is to handle bad HTML. That tidy doesn't
successfully handle some other subset of bad HTML doesn't mean it's not a
dedicated program for handling bad HTML.
>However, it defers its
handling of comments to HTMLParser. That's the problem.

Well, it's up to the writers of Beautiful Soup to decide how much bad
HTML they want to accept. ISTM they're happy to live with the
limitations of HTMLParser, meaning that they do not consider Beautiful
Soup to be a library dedicated to reading every piece of bad HTML out
there.
Sorry, let me be clearer: the problem is that BeautifulSoup hasn't overridden
SGMLParser's handling of comments (not HTMLParser's, sorry) the way it has
overridden many other parts of SGMLParser. Yes, any fix should go into
BeautifulSoup and not SGMLParser.

All it takes is someone to code up their desired behavior for these perverse
comments and submit it to Leonard Richardson.
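
For the curious, an untested sketch of what such a patch might look like
against the BeautifulSoup 3 / sgmllib of this era (the class name and the
recovery rule are illustrative, not anything Leonard has blessed):

import sgmllib
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3.x on Python 2

class ForgivingSoup(BeautifulSoup):
    def parse_declaration(self, i):
        # Try the stock handling first; if the bare "<!...>" makes it
        # blow up, recover the way browsers do: treat everything up to
        # the next ">" as a comment.
        try:
            return BeautifulSoup.parse_declaration(self, i)
        except sgmllib.SGMLParseError:
            j = self.rawdata.find('>', i)
            if j < 0:
                return -1  # declaration incomplete; wait for more data
            self.handle_comment(self.rawdata[i + 2:j])
            return j + 1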

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

Apr 4 '07 #8
On Apr 4, 4:55 pm, Robert Kern <robert.k...@gmail.com> wrote:
Carl Banks wrote:
On Apr 4, 2:43 pm, Robert Kern <robert.k...@gmail.com> wrote:
Carl Banks wrote:
On Apr 4, 2:08 pm, John Nagle <n...@animats.com> wrote:
BeautifulSoup can't parse this page usefully at all.
It treats the entire page as a text chunk. It's actually
HTMLParser that parses comments, so this is really an HTMLParser
level problem.
Google for a program called "tidy". Install it, and run it as a
filter on any HTML you download. "tidy" has invested in it quite a
bit of work understanding common bad HTML and how browsers deal with
it. It would be pointless to duplicate that work in the Python
standard library; let HTMLParser be small and tight, and outsource the
handling of floozy input to a dedicated program.
Well, BeautifulSoup is just such a dedicated library.
No, not really.

Yes, it is. Whether it succeeds in all particulars is beside the point. The
only mission of BeautifulSoup is to handle bad HTML.
I think the authors of BeautifulSoup have the right to decide what
their own mission is.
Carl Banks

Apr 4 '07 #9
Carl Banks wrote:
On Apr 4, 4:55 pm, Robert Kern <robert.k...@gmail.com> wrote:
>Carl Banks wrote:
>>On Apr 4, 2:43 pm, Robert Kern <robert.k...@gmail.com> wrote:
Carl Banks wrote:
On Apr 4, 2:08 pm, John Nagle <n...@animats.com> wrote:
>BeautifulSoup can't parse this page usefully at all.
>It treats the entire page as a text chunk. It's actually
>HTMLParser that parses comments, so this is really an HTMLParser
>level problem.
Google for a program called "tidy". Install it, and run it as a
filter on any HTML you download. "tidy" has invested in it quite a
bit of work understanding common bad HTML and how browsers deal with
it. It would be pointless to duplicate that work in the Python
standard library; let HTMLParser be small and tight, and outsource the
handling of floozy input to a dedicated program.
Well, BeautifulSoup is just such a dedicated library.
No, not really.
Yes, it is. Whether it succeeds in all particulars is beside the point. The
only mission of BeautifulSoup is to handle bad HTML.

I think the authors of BeautifulSoup have the right to decide what
their own mission is.
Yes, and he's stated it pretty clearly:

"""You didn't write that awful page. You're just trying to get some data out of
it. Right now, you don't really care what HTML is supposed to look like.

Neither does this parser."""

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

Apr 4 '07 #10
