
html parser?

Hi *,

is there an HTML parser available which could, e.g., extract all links from a
given text like this:
"""
<a href="foo.php?param1=test">BAR<img src="none.gif"></a>
<a href="foo2.php?param1=test&param2=test">BAR2</a>
"""

and return a set of records like this:
"""
{
['foo.php','BAR','param1','test'],
['foo2.php','BAR2','param1','test','param2','test']
}
"""

thanks,
Chris
Oct 18 '05 #1
Christoph Söllner wrote:
[original question quoted in full]

I asked the same question a week ago, and the answer I got was a really
beautiful one. :-)

http://www.crummy.com/software/BeautifulSoup/

Les

Oct 18 '05 #2
right, that's what I was looking for. Thanks very much.
Oct 18 '05 #3
* Christoph Söllner (2005-10-18 12:20 +0100)
right, that's what I was looking for. Thanks very much.


For simple things like that "BeautifulSoup" might be overkill.

import formatter, htmllib, urllib

url = 'http://python.org'

htmlp = htmllib.HTMLParser(formatter.NullFormatter())
htmlp.feed(urllib.urlopen(url).read())
htmlp.close()

print htmlp.anchorlist

and then use urlparse to parse the links/urls...
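
For instance, a quick sketch of that last step (it uses cgi.parse_qsl, which
was the stdlib's query-string splitter at the time, before it moved into
urlparse):

import cgi, urlparse

for link in htmlp.anchorlist:
    # urlparse splits a URL into six parts; we want the path and query.
    scheme, netloc, path, params, query, fragment = urlparse.urlparse(link)
    pairs = cgi.parse_qsl(query)  # e.g. [('param1', 'test'), ('param2', 'test')]
    print path, pairs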
Oct 18 '05 #4
Thorsten Kampe wrote:
For simple things like that "BeautifulSoup" might be overkill.


[HTMLParser example]

I've used SGMLParser with some success before, although the SAX-style
processing is objectionable to many people. One alternative is to use
libxml2dom [1] and to parse documents as HTML:

import libxml2dom, urllib
url = 'http://www.python.org'
doc = libxml2dom.parse(urllib.urlopen(url), html=1)
anchors = doc.xpath("//a")

Currently, the parseURI function in libxml2dom doesn't do HTML parsing,
mostly because I haven't yet figured out what combination of parsing
options have to be set to make it happen, but a combination of urllib
and libxml2dom should perform adequately. In the above example, you'd
process the nodes in the anchors list to get the desired results.
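
For instance, a minimal sketch of that processing step (assuming the standard
DOM accessors that libxml2dom provides, such as getAttribute and childNodes):

for a in anchors:
    href = a.getAttribute("href")
    # Collect the text nodes directly inside the anchor (nodeType 3 is
    # the DOM code for a text node).
    text = "".join([n.nodeValue for n in a.childNodes if n.nodeType == 3])
    print href, text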

Paul

[1] http://www.python.org/pypi/libxml2dom

Oct 18 '05 #5
Thorsten Kampe wrote:
For simple things like that "BeautifulSoup" might be overkill.
[HTMLParser example snipped]

The problem with HTMLParser is that it does not handle unclosed tags and/or
attributes given with invalid syntax.
Unfortunately, many sites on the internet serve malformed HTML pages. You
are right that BeautifulSoup is overkill
(it is rather slow), but I'm afraid it is the only fault-tolerant
solution.

Les

Oct 19 '05 #6
To extract links without the overhead of Beautiful Soup, one option is
to copy what Beautiful Soup does, and write a SGMLParser subclass that
only looks at 'a' tags. In general I think writing SGMLParser
subclasses is a big pain (which is why I wrote Beautiful Soup), but
since you only care about one type of tag it's not so difficult:

from sgmllib import SGMLParser

class LinkParser(SGMLParser):
    def __init__(self):
        SGMLParser.__init__(self)
        self.links = []
        self.currentLink = None
        self.currentLinkText = []

    def start_a(self, attrs):
        # If we encounter a nested a tag, end the current a tag and
        # start a new one.
        if self.currentLink != None:
            self.end_a()
        for attr, value in attrs:
            if attr == 'href':
                self.currentLink = value
                break
        if self.currentLink == None:
            self.currentLink = ''

    def handle_data(self, data):
        if self.currentLink != None:
            self.currentLinkText.append(data)

    def end_a(self):
        self.links.append([self.currentLink,
                           "".join(self.currentLinkText)])
        self.currentLink = None
        self.currentLinkText = []

Since this ignores any tags other than 'a', it will strip out all tags
from the text within an 'a' tag (this might be what you want, since
your example shows an 'img' tag being stripped out). It will also close
one 'a' tag when it finds another, rather than attempting to nest them.

<a href="foo.php">This text has <b>embedded HTML tags</b></a>
=>
[['foo.php', 'This text has embedded HTML tags']]

<a href="foo.php">This text has <a name="anchor">an embedded
anchor</a>.
=>
[['foo.php', 'This text has '], ['', 'an embedded anchor']]
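
For reference, a quick driver sketch running the parser over the original
example input:

parser = LinkParser()
parser.feed('<a href="foo.php?param1=test">BAR<img src="none.gif"></a>'
            '<a href="foo2.php?param1=test&param2=test">BAR2</a>')
parser.close()
print parser.links
# [['foo.php?param1=test', 'BAR'],
#  ['foo2.php?param1=test&param2=test', 'BAR2']]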

Alternatively, you can subclass a Beautiful Soup class to ignore all
tags except for 'a' tags and the tags that they contain. This will give
you the whole Beautiful Soup API, but it'll be faster because Beautiful
Soup will only build a model of the parts of your document within 'a'
tags. The following code seems to work (and it looks like a good
candidate for inclusion in the core package).

from BeautifulSoup import BeautifulStoneSoup

class StrainedStoneSoup(BeautifulStoneSoup):
    def __init__(self, interestingTags=["a"], *args):
        args = list(args)
        args.insert(0, self)
        self.interestingMap = {}
        for tag in interestingTags:
            self.interestingMap[tag] = True
        apply(BeautifulStoneSoup.__init__, args)

    def unknown_starttag(self, name, attrs, selfClosing=0):
        if self.interestingMap.get(name) or len(self.tagStack) > 1:
            BeautifulStoneSoup.unknown_starttag(self, name, attrs,
                                                selfClosing)

    def unknown_endtag(self, name):
        if len(self.tagStack) > 1:
            BeautifulStoneSoup.unknown_endtag(self, name)

    def handle_data(self, data):
        if len(self.tagStack) > 1:
            BeautifulStoneSoup.handle_data(self, data)
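
A usage sketch (hypothetical driver code; it assumes the BeautifulSoup
release of the day accepted the markup as a constructor argument and exposed
fetch() for tag searches, and note that interestingTags is the first
positional parameter, so the markup has to come after it):

markup = '<b>skipped</b><a href="foo.php?param1=test">BAR</a>'
soup = StrainedStoneSoup(["a"], markup)
# Only the 'a' tags (and whatever they contain) end up in the parse tree.
print soup.fetch('a')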

Oct 19 '05 #7
