How can I parse an HTML file and collect only the A tags? I have a
start for the code but am unable to figure out how to finish it.
HTML_parse gets the data from the URL document. Thanks for the help.
from HTMLParser import HTMLParser

def HTML_parse(data):
    parser = MyHTMLParser()
    parser.feed(data)

class MyHTMLParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        pass  # TODO: collect the A tags here
    def handle_endtag(self, tag):
        pass

def read_page(url):
    "this function returns the entire content of the specified URL document"
    import urllib
    connect = urllib.urlopen(url)
    data = connect.read()
    connect.close()
    return data
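For comparison, a minimal sketch of how the collecting part could be completed in modern Python 3, where HTMLParser lives in the html.parser module (the class and variable names here are illustrative, not from the original post):

```python
from html.parser import HTMLParser

class AnchorCollector(HTMLParser):
    """Collects the attribute lists of all <a> tags fed to it."""
    def __init__(self):
        super().__init__()
        self.anchors = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == 'a':
            self.anchors.append(dict(attrs))

parser = AnchorCollector()
parser.feed('<p><a href="/one">one</a> <a href="/two">two</a></p>')
print([a['href'] for a in parser.anchors])  # prints ['/one', '/two']
```

The same shape works in Python 2 with `from HTMLParser import HTMLParser`; only the import location and `super()` call differ.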
I do not really know what you want to do. Getting the URLs from the a
tags of an HTML file? I think the easiest method would be a regular
expression.

import urllib, sre
html = urllib.urlopen("http://www.google.com").read()
sre.findall('href="([^>]+)"', html)
['/imghp?hl=de&tab=wi&ie=UTF-8',
'http://groups.google.de/grphp?hl=de&tab=wg&ie=UTF-8',
'/dirhp?hl=de&tab=wd&ie=UTF-8',
'http://news.google.de/nwshp?hl=de&tab=wn&ie=UTF-8',
'http://froogle.google.de/frghp?hl=de&tab=wf&ie=UTF-8',
'/intl/de/options/']
sre.findall('href=[^>]+>([^<]+)</a>', html)
['Bilder', 'Groups', 'Verzeichnis', 'News', 'Froogle',
'Mehr »', 'Erweiterte Suche', 'Einstellungen',
'Sprachtools', 'Werbung', 'Unternehmensangebote',
'Alles \xfcber Google', 'Google.com in English']
Google has some strange html, href without quotation marks: <a
href=http://www.google.com/ncr>Google.com in English</a>
George wrote: How can I parse an HTML file and collect only the A tags? I have a start for the code but am unable to figure out how to finish it. HTML_parse gets the data from the URL document. Thanks for the help.
Have you tried using Beautiful Soup? http://www.crummy.com/software/BeautifulSoup/
"beza1e1" <an*************@googlemail.com> writes: I do not really know, what you want to do. Getting he urls from the a tags of a html file? I think the easiest method would be a regular expression.
I think this ranks as #2 on the list of "difficult one-day
hacks". Yeah, it's simple to write an RE that works most of the
time. It's a major PITA to write one that works in all the legal
cases. Getting one that also handles all the cases seen in the wild is
damn near impossible.

import urllib, sre
html = urllib.urlopen("http://www.google.com").read()
sre.findall('href="([^>]+)"', html)

This fails in a number of cases. Whitespace around the "=" sign for
attributes. Quotes around other attributes in the tag (required by
XHTML). '>' in the URL (legal, but discouraged). Attributes quoted
with single quotes instead of double quotes, or just unquoted. It
misses IMG SRC attributes. It hands back relative URLs as such,
instead of resolving them to the absolute URL (which requires checking
for the base URL in the HEAD), which may or may not be acceptable.
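Two of those failure modes are easy to demonstrate with the modern re module (sre was just an old alias for it); the sample HTML below is made up for illustration:

```python
import re

# The naive pattern from upthread, transcribed to re.
naive = re.compile(r'href="([^>]+)"')

# Made-up snippets exercising two of the failure modes listed above:
# single-quoted attributes, and whitespace around the "=" sign.
html = '<a href=\'/single\'>x</a> <a href = "/spaced">y</a>'

print(naive.findall(html))   # prints [] -- both hrefs are missed

# A somewhat more tolerant pattern catches these two cases, though it
# still falls short of handling everything that is legal HTML.
tolerant = re.compile(r'''href\s*=\s*["']([^"']+)["']''')
print(tolerant.findall(html))  # prints ['/single', '/spaced']
```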
Google has some strange html, href without quotation marks: <a href=http://www.google.com/ncr>Google.com in English</a>
That's not strange. That's just a bit unusual. Perfectly legal, though
- any browser (or other html processor) that fails to handle it is
broken.
<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
I think for a quick hack, this is as good as a parser. A simple parser
would miss some cases as well. REs are not really extensible though, so
your criticism is valid.
The point is what George wants to do. A mixture would be possible as
well: getting all <a ...> tags by an RE and then extracting the URL with
something like a parser.
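That mixture might look roughly like the sketch below in modern Python 3 (names are made up for illustration, and the tag-grabbing regex still inherits the '>'-in-URL weakness noted earlier in the thread):

```python
import re
from html.parser import HTMLParser

# A regex finds the raw <a ...> start tags, then a real parser is used
# on each fragment to extract the href attribute reliably.
TAG_RE = re.compile(r'<a\s[^>]*>', re.IGNORECASE)

class HrefGrabber(HTMLParser):
    """Records the href of the single <a> start tag fed to it."""
    def __init__(self):
        super().__init__()
        self.href = None

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.href = dict(attrs).get('href')

def hrefs_from(html):
    out = []
    for raw_tag in TAG_RE.findall(html):
        grabber = HrefGrabber()
        grabber.feed(raw_tag)
        if grabber.href is not None:
            out.append(grabber.href)
    return out

# The parser half copes with case and unquoted values that a pure
# regex solution would trip over.
print(hrefs_from('<a href="/a">x</a> <A HREF=/b>y</A>'))  # prints ['/a', '/b']
```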
"beza1e1" <an*************@googlemail.com> writes: I think for a quick hack, this is as good as a parser. A simple parser would miss some cases as well. RE are nearly not extendable though, so your critic is valid.
Pretty much any first attempt is going to miss some cases. There are
libraries available that have stood the test of time. Simply using one
of those is the right solution.
The point is, what George wants to do. A mixture would be possible as well: Getting all <a ...> by a RE and then extracting the url with something like a parser.
I thought the point was to extract all URLs? Those appear in
attributes of tags other than A tags. While that's a meta-problem that
requires properly configuring the parser to deal with, it's something
that's *much* simpler to do if you've got a parser that understands
the structure of HTML - you should be able to specify tag/attribute
pairs to look for - than with something that is treating it as
unstructured text.
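A collector configurable with tag/attribute pairs, as described above, might be sketched like this in modern Python 3 (the pair set and all names are illustrative):

```python
from html.parser import HTMLParser

# The caller specifies which tag/attribute pairs carry URLs; this set
# is an illustrative default, not an exhaustive one.
URL_ATTRS = {('a', 'href'), ('img', 'src'), ('script', 'src'),
             ('frame', 'src'), ('link', 'href')}

class URLCollector(HTMLParser):
    """Collects attribute values for configured (tag, attribute) pairs."""
    def __init__(self, url_attrs=URL_ATTRS):
        super().__init__()
        self.url_attrs = url_attrs
        self.urls = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if (tag, name) in self.url_attrs and value:
                self.urls.append(value)

c = URLCollector()
c.feed('<img src="/logo.png"><a href="/home">home</a>')
print(c.urls)  # prints ['/logo.png', '/home']
```

Because the parser understands structure, case, quoting style, and whitespace are all handled for free, which is the point being made above.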
<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
You may define a start_a method in MyHTMLParser, e.g.:
import htmllib
import formatter

class HTML_Parser(htmllib.HTMLParser):
    def __init__(self):
        htmllib.HTMLParser.__init__(
            self, formatter.AbstractFormatter(formatter.NullWriter()))

    def start_a(self, args):
        for key, value in args:
            if key.lower() == 'href':
                print value

html = HTML_Parser()
html.feed(open(r'a.htm', 'r').read())
html.close()
On 24 Sep 2005 10:13:30 -0700, George <bu*******@hotmail.com> wrote: How can I parse an HTML file and collect only the A tags? I have a start for the code but am unable to figure out how to finish it. HTML_parse gets the data from the URL document. Thanks for the help.

def HTML_parse(data):
    from HTMLParser import HTMLParser
    parser = MyHTMLParser()
    parser.feed(data)

class MyHTMLParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        pass
    def handle_endtag(self, tag):
        pass

def read_page(url):
    "this function returns the entire content of the specified URL document"
    import urllib
    connect = urllib.urlopen(url)
    data = connect.read()
    connect.close()
    return data
-- http://mail.python.org/mailman/listinfo/python-list
--
Best Regards,
Leo Jay
"Stephen Prinster" <pr******@mail.com> wrote: George wrote: How can I parse an HTML file and collect only that the A tags. I have a start for the code but an unable to figure out how to finish the code. HTML_parse gets the data from the URL document. Thanks for the help
Have you tried using Beautiful Soup?
http://www.crummy.com/software/BeautifulSoup/
I agree; you can do what you want in two lines:
import urllib
from BeautifulSoup import BeautifulSoup
hrefs = [link['href'] for link in BeautifulSoup(urllib.urlopen(url)).fetch('a')]
George
* George (2005-09-24 18:13 +0100) How can I parse an HTML file and collect only the A tags.
import formatter
import htmllib
import urllib
url = 'http://python.org'
htmlp = htmllib.HTMLParser(formatter.NullFormatter())
htmlp.feed(urllib.urlopen(url).read())
htmlp.close()
print htmlp.anchorlist
I'm very new to python and I have tried to read the tutorials but I am
unable to understand exactly how I must do this problem.
Specifically, the showIPnums function takes a URL as input, calls the
read_page(url) function to obtain the entire page for that URL, and
then lists, in sorted order, the IP addresses implied in the "<A
HREF=· · ·>" tags within that page.
"""
Module to print IP addresses of tags in web file containing HTML showIPnums('http://22c118.cs.uiowa.edu/uploads/easy.html')
['0.0.0.0', '128.255.44.134', '128.255.45.54']
showIPnums('http://22c118.cs.uiowa.edu/uploads/pytorg.html')
['0.0.0.0', '128.255.135.49', '128.255.244.57', '128.255.30.11',
'128.255.34.132', '128.255.44.51', '128.255.45.53',
'128.255.45.54', '129.255.241.42', '64.202.167.129']
"""
def read_page(url):
    import formatter
    import htmllib
    import urllib
    htmlp = htmllib.HTMLParser(formatter.NullFormatter())
    htmlp.feed(urllib.urlopen(url).read())
    htmlp.close()

def showIPnums(URL):
    page = read_page(URL)

if __name__ == '__main__':
    import doctest, sys
    doctest.testmod(sys.modules[__name__])
"George" <bu*******@hotmail.com> wrote: I'm very new to python and I have tried to read the tutorials but I am unable to understand exactly how I must do this problem.
Specifically, the showIPnums function takes a URL as input, calls the read_page(url) function to obtain the entire page for that URL, and then lists, in sorted order, the IP addresses implied in the "<A HREF=· · ·>" tags within that page.
""" Module to print IP addresses of tags in web file containing HTML
showIPnums('http://22c118.cs.uiowa.edu/uploads/easy.html') ['0.0.0.0', '128.255.44.134', '128.255.45.54'] showIPnums('http://22c118.cs.uiowa.edu/uploads/pytorg.html') ['0.0.0.0', '128.255.135.49', '128.255.244.57', '128.255.30.11', '128.255.34.132', '128.255.44.51', '128.255.45.53', '128.255.45.54', '129.255.241.42', '64.202.167.129']
"""
def read_page(url): import formatter import htmllib import urllib
htmlp = htmllib.HTMLParser(formatter.NullFormatter()) htmlp.feed(urllib.urlopen(url).read()) htmlp.close()
def showIPnums(URL): page=read_page(URL)
if __name__ == '__main__': import doctest, sys doctest.testmod(sys.modules[__name__])
You forgot to mention that you don't want duplicates in the result. Here's a function that passes
the doctest:
from urllib import urlopen
from urlparse import urlsplit
from socket import gethostbyname
from BeautifulSoup import BeautifulSoup
def showIPnums(url):
    """Return the unique IPs found in the anchors of the webpage at the given url.

    >>> showIPnums('http://22c118.cs.uiowa.edu/uploads/easy.html')
    ['0.0.0.0', '128.255.44.134', '128.255.45.54']
    >>> showIPnums('http://22c118.cs.uiowa.edu/uploads/pytorg.html')
    ['0.0.0.0', '128.255.135.49', '128.255.244.57', '128.255.30.11', '128.255.34.132',
    '128.255.44.51', '128.255.45.53', '128.255.45.54', '129.255.241.42', '64.202.167.129']
    """
    hrefs = set()
    for link in BeautifulSoup(urlopen(url)).fetch('a'):
        try:
            hrefs.add(gethostbyname(urlsplit(link["href"])[1]))
        except:
            pass
    return sorted(hrefs)
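For reference, a rough Python 3 equivalent without BeautifulSoup; the DNS lookup is kept separate from the parsing so the parsing half can be exercised offline (all names are illustrative):

```python
from html.parser import HTMLParser
from urllib.parse import urlsplit
from socket import gethostbyname

class AnchorHosts(HTMLParser):
    """Collects the network-location part of every <a href=...> seen."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            href = dict(attrs).get('href')
            if href:
                host = urlsplit(href)[1]   # netloc; empty for relative URLs
                if host:
                    self.hosts.add(host)

def show_ip_nums(html):
    """Resolve each collected hostname to an IP; needs network access."""
    parser = AnchorHosts()
    parser.feed(html)
    ips = set()
    for host in parser.hosts:
        try:
            ips.add(gethostbyname(host))
        except OSError:
            pass
    return sorted(ips)

# Offline demonstration of the parsing half only:
p = AnchorHosts()
p.feed('<a href="http://example.com/x">x</a> <a href="/relative">y</a>')
print(sorted(p.hosts))  # prints ['example.com']
```

Note that, as in the BeautifulSoup version, relative URLs are simply skipped rather than resolved against a base URL.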
HTH,
George