Bytes | Software Development & Data Engineering Community

Parsing an HTML a tag

How can I parse an HTML file and collect only the A tags? I have a
start for the code but am unable to figure out how to finish it.
HTML_parse gets the data from the URL document. Thanks for the help.

def HTML_parse(data):
    from HTMLParser import HTMLParser
    parser = MyHTMLParser()
    parser.feed(data)

class MyHTMLParser(HTMLParser):

    def handle_starttag(self, tag, attrs):
        pass

    def handle_endtag(self, tag):
        pass

def read_page(url):
    "this function returns the entire content of the specified URL document"
    import urllib
    connect = urllib.urlopen(url)
    data = connect.read()
    connect.close()
    return data

Sep 24 '05 #1
I do not really know what you want to do. Getting the URLs from the a
tags of an HTML file? I think the easiest method would be a regular
expression.

import urllib, sre
html = urllib.urlopen("http://www.google.com").read()
sre.findall('href="([^>]+)"', html)
['/imghp?hl=de&tab=wi&ie=UTF-8',
'http://groups.google.de/grphp?hl=de&tab=wg&ie=UTF-8',
'/dirhp?hl=de&tab=wd&ie=UTF-8',
'http://news.google.de/nwshp?hl=de&tab=wn&ie=UTF-8',
'http://froogle.google.de/frghp?hl=de&tab=wf&ie=UTF-8',
'/intl/de/options/']
sre.findall('href=[^>]+>([^<]+)</a>', html)
['Bilder', 'Groups', 'Verzeichnis', 'News', 'Froogle',
'Mehr&nbsp;&raquo;', 'Erweiterte Suche', 'Einstellungen',
'Sprachtools', 'Werbung', 'Unternehmensangebote', 'Alles \xfcber
Google', 'Google.com in English']

Google has some strange html, href without quotation marks: <a
href=http://www.google.com/ncr>Google.com in English</a>

Sep 24 '05 #2
George wrote:
How can I parse an HTML file and collect only the A tags? I have a
start for the code but am unable to figure out how to finish it.
HTML_parse gets the data from the URL document. Thanks for the help.


Have you tried using Beautiful Soup?

http://www.crummy.com/software/BeautifulSoup/
Sep 24 '05 #3
"beza1e1" <an*************@googlemail.com> writes:
I do not really know what you want to do. Getting the URLs from the a
tags of an HTML file? I think the easiest method would be a regular
expression.
I think this ranks as #2 on the list of "difficult one-day
hacks". Yeah, it's simple to write an RE that works most of the
time. It's a major PITA to write one that works in all the legal
cases. Getting one that also handles all the cases seen in the wild is
damn near impossible.
import urllib, sre
html = urllib.urlopen("http://www.google.com").read()
sre.findall('href="([^>]+)"', html)


This fails in a number of cases. Whitespace around the "=" sign for
attributes. Quotes around other attributes in the tag (required by
XHTML). '>' in the URL (legal, but discouraged). Attributes quoted
with single quotes instead of double quotes, or just unquoted. It
misses IMG SRC attributes. It hands back relative URLs as such,
instead of resolving them to the absolute URL (which requires checking
for the base URL in the HEAD), which may or may not be acceptable.
Google has some strange html, href without quotation marks: <a
href=http://www.google.com/ncr>Google.com in English</a>


That's not strange. That's just a bit unusual. Perfectly legal, though
- any browser (or other html processor) that fails to handle it is
broken.
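To illustrate those failure modes, here is a small hedged comparison (the sample HTML and the class name are invented for illustration): the stdlib HTMLParser copes with whitespace around "=", single quotes, and unquoted values, while the naive pattern from above matches none of them. The import fallback covers Python 3, where the module was renamed to html.parser.

```python
# Hedged sketch: HTMLParser handles quoting variations the naive regex misses.
try:
    from HTMLParser import HTMLParser    # Python 2
except ImportError:
    from html.parser import HTMLParser   # Python 3 renamed the module

import re

class HrefCollector(HTMLParser):
    """Collect the href attribute of every <a> tag."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.hrefs.append(value)

# Legal-but-awkward HTML: spaces around '=', single quotes, no quotes.
html = ('<a href = "/a">one</a>'
        "<a href='/b'>two</a>"
        '<a href=/c>three</a>')

collector = HrefCollector()
collector.feed(html)
print(collector.hrefs)   # the parser recovers all three hrefs

naive = re.findall(r'href="([^>]+)"', html)
print(naive)             # the naive pattern finds none of them
```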

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Sep 24 '05 #4
I think for a quick hack, this is as good as a parser. A simple parser
would miss some cases as well. REs are hardly extensible though, so
your criticism is valid.

The point is what George wants to do. A mixture would be possible as
well:
getting all <a ...> by an RE and then extracting the URL with something
like a parser.
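A minimal sketch of that mixture (the names are illustrative, and the RE still inherits the ">"-in-URL caveat raised earlier in the thread): the regular expression only isolates the <a ...> start tags, and a real parser is then trusted with the attribute syntax.

```python
# Hedged sketch of the "mixture" idea: a regex narrows the text down to
# the <a ...> start tags, then a real parser extracts the href attribute.
import re

try:
    from HTMLParser import HTMLParser    # Python 2
except ImportError:
    from html.parser import HTMLParser   # Python 3

class OneTag(HTMLParser):
    """Parse a single start tag and remember its attributes."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.attrs = {}

    def handle_starttag(self, tag, attrs):
        self.attrs = dict(attrs)

def hrefs_from(html):
    urls = []
    for tag_text in re.findall(r'<a\s[^>]*>', html, re.IGNORECASE):
        p = OneTag()
        p.feed(tag_text)             # the parser copes with quoting styles
        href = p.attrs.get('href')   # attribute names arrive lowercased
        if href is not None:
            urls.append(href)
    return urls

html = '<p><a href="/x">x</a> and <A HREF=\'/y\'>y</A></p>'
print(hrefs_from(html))
```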

Sep 24 '05 #5
"beza1e1" <an*************@googlemail.com> writes:
I think for a quick hack, this is as good as a parser. A simple parser
would miss some cases as well. REs are hardly extensible though, so
your criticism is valid.
Pretty much any first attempt is going to miss some cases. There are
libraries available that have stood the test of time. Simply using
one of those is the right solution.
The point is what George wants to do. A mixture would be possible as
well:
getting all <a ...> by an RE and then extracting the URL with something
like a parser.


I thought the point was to extract all URLs? Those appear in
attributes of tags other than A tags. While that's a meta-problem that
requires properly configuring the parser to deal with, it's something
that's *much* simpler to do if you've got a parser that understands
the structure of HTML - you should be able to specify tag/attribute
pairs to look for - than with something that is treating it as
unstructured text.
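One way such a configurable parser could look (a hedged sketch; the class name and the tag/attribute pairs are made up for illustration, written against the stdlib HTMLParser with a Python 3 import fallback):

```python
# Hedged sketch of the tag/attribute-pair idea: the caller specifies which
# (tag, attribute) combinations carry URLs, and the parser collects them.
try:
    from HTMLParser import HTMLParser    # Python 2
except ImportError:
    from html.parser import HTMLParser   # Python 3

class URLCollector(HTMLParser):
    def __init__(self, wanted):
        HTMLParser.__init__(self)
        self.wanted = wanted             # set of (tag, attribute) pairs
        self.urls = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if (tag, name) in self.wanted:
                self.urls.append(value)

p = URLCollector({('a', 'href'), ('img', 'src')})
p.feed('<a href="/page">text</a><img src="/pic.png"><link href="/style.css">')
print(p.urls)   # the link/href pair was not requested, so it is skipped
```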

<mike

--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Sep 24 '05 #6
You may define a start_a method in MyHTMLParser.

e.g.
import htmllib
import formatter

class HTML_Parser(htmllib.HTMLParser):
    def __init__(self):
        htmllib.HTMLParser.__init__(self,
            formatter.AbstractFormatter(formatter.NullWriter()))

    def start_a(self, args):
        for key, value in args:
            if key.lower() == 'href':
                print value

html = HTML_Parser()
html.feed(open(r'a.htm', 'r').read())
html.close()
On 24 Sep 2005 10:13:30 -0700, George <bu*******@hotmail.com> wrote:
How can I parse an HTML file and collect only the A tags? I have a
start for the code but am unable to figure out how to finish it.
HTML_parse gets the data from the URL document. Thanks for the help.

def HTML_parse(data):
    from HTMLParser import HTMLParser
    parser = MyHTMLParser()
    parser.feed(data)

class MyHTMLParser(HTMLParser):

    def handle_starttag(self, tag, attrs):
        pass

    def handle_endtag(self, tag):
        pass

def read_page(url):
    "this function returns the entire content of the specified URL document"
    import urllib
    connect = urllib.urlopen(url)
    data = connect.read()
    connect.close()
    return data

--
http://mail.python.org/mailman/listinfo/python-list

--
Best Regards,
Leo Jay
Sep 24 '05 #7
"Stephen Prinster" <pr******@mail.com> wrote:
George wrote:
How can I parse an HTML file and collect only the A tags? I have a
start for the code but am unable to figure out how to finish it.
HTML_parse gets the data from the URL document. Thanks for the help.


Have you tried using Beautiful Soup?

http://www.crummy.com/software/BeautifulSoup/


I agree; you can do what you want in a couple of lines:

import urllib
from BeautifulSoup import BeautifulSoup
hrefs = [link['href'] for link in BeautifulSoup(urllib.urlopen(url)).fetch('a')]

George
Sep 24 '05 #8
* George (2005-09-24 18:13 +0100)
How can I parse an HTML file and collect only the A tags?


import formatter, \
       htmllib, \
       urllib

url = 'http://python.org'

htmlp = htmllib.HTMLParser(formatter.NullFormatter())
htmlp.feed(urllib.urlopen(url).read())
htmlp.close()

print htmlp.anchorlist
Sep 24 '05 #9
I'm very new to python and I have tried to read the tutorials but I am
unable to understand exactly how I must do this problem.

Specifically, the showIPnums function takes a URL as input, calls the
read_page(url) function to obtain the entire page for that URL, and
then lists, in sorted order, the IP addresses implied in the
"<A HREF=...>" tags within that page.
"""
Module to print IP addresses of tags in web file containing HTML

>>> showIPnums('http://22c118.cs.uiowa.edu/uploads/easy.html')
['0.0.0.0', '128.255.44.134', '128.255.45.54']

>>> showIPnums('http://22c118.cs.uiowa.edu/uploads/pytorg.html')
['0.0.0.0', '128.255.135.49', '128.255.244.57', '128.255.30.11',
'128.255.34.132', '128.255.44.51', '128.255.45.53',
'128.255.45.54', '129.255.241.42', '64.202.167.129']

"""

def read_page(url):
    import formatter
    import htmllib
    import urllib

    htmlp = htmllib.HTMLParser(formatter.NullFormatter())
    htmlp.feed(urllib.urlopen(url).read())
    htmlp.close()

def showIPnums(URL):
    page = read_page(URL)

if __name__ == '__main__':
    import doctest, sys
    doctest.testmod(sys.modules[__name__])
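For what it's worth, a hedged sketch of how the missing pieces might be filled in. The helper names and the choice to resolve hosts with socket.gethostbyname are assumptions, not something from the thread; conveniently, dotted-quad IP literals pass through gethostbyname unchanged, which keeps the demo below offline.

```python
# Hedged sketch: one way to complete the exercise. The names AnchorLister
# and ip_numbers are invented; resolving hosts via gethostbyname is an
# assumed reading of "IP addresses implied in the <A HREF=...> tags".
import socket

try:
    # Python 2 module names
    from HTMLParser import HTMLParser
    from urlparse import urlparse
except ImportError:
    # Python 3 renamed both modules
    from html.parser import HTMLParser
    from urllib.parse import urlparse

class AnchorLister(HTMLParser):
    """Collect href values of <a> tags, like htmllib's anchorlist."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.anchorlist = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.anchorlist.append(value)

def ip_numbers(html):
    """Return sorted, de-duplicated IPs for the hosts linked from html."""
    parser = AnchorLister()
    parser.feed(html)
    ips = set()
    for href in parser.anchorlist:
        host = urlparse(href).hostname
        if host:
            ips.add(socket.gethostbyname(host))
    return sorted(ips)

# Demo on a literal page; the hosts are already dotted quads, so
# gethostbyname returns them unchanged and no DNS lookup happens.
page = ('<a href="http://128.255.44.134/x.html">a</a>'
        '<a href="http://128.255.45.54/">b</a>'
        '<a href="http://128.255.44.134/y.html">c</a>')
print(ip_numbers(page))
```

Wiring this up would mean having read_page return the fetched document and showIPnums pass it through something like ip_numbers.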

Sep 25 '05 #10

