Bytes | Software Development & Data Engineering Community

high performance hyperlink extraction

The following script is a high-performance link (<a
href="...">...</a>) extractor. I'm posting it to this list in the hope
that anyone interested will offer constructive
criticism/suggestions/comments/etc. Mainly I'm curious what comments
folks have on my regular expressions. Hopefully someone finds this
kind of thing as interesting as I do! :)

My design goals were as follows:
* extract links from text (most likely valid HTML)
* work faster than BeautifulSoup, sgmllib, or other markup parsing
libraries
* return accurate results

The basic idea is to:
1. find anchor ('a') tags within some HTML text that contain 'href'
attributes (I assume these are hyperlinks)
2. extract all attributes from each 'a' tag found as name, value pairs

import re
import urllib

whiteout = re.compile(r'\s+')

# grabs hyperlinks from text
href_re = re.compile(r'''
<a(?P<attrs>[^>]* # start of tag
href=(?P<delim>["']) # delimiter
(?P<link>[^"']*) # link
(?P=delim) # delimiter
[^>]*)> # rest of start tag
(?P<content>.*?) # link content
</a> # end tag
''', re.VERBOSE | re.IGNORECASE)

# grabs attribute name, value pairs
attrs_re = re.compile(r'''
(?P<name>\w+)= # attribute name
(?P<delim>["']) # delimiter
(?P<value>[^"']*) # attribute value
(?P=delim) # delimiter
''', re.VERBOSE)

def getLinks(html_data):
    newdata = whiteout.sub(' ', html_data)
    matches = href_re.finditer(newdata)
    ancs = []
    for match in matches:
        d = match.groupdict()
        a = {}
        a['href'] = d.get('link', None)
        a['content'] = d.get('content', None)
        attr_matches = attrs_re.finditer(d.get('attrs', ''))
        for attr_match in attr_matches:
            da = attr_match.groupdict()
            name = da.get('name', None)
            a[name] = da.get('value', None)
        ancs.append(a)
    return ancs

if __name__ == '__main__':
    opener = urllib.FancyURLopener()
    url = 'http://adammonsen.com/tut/libgladeTest.html'
    html_data = opener.open(url).read()
    for a in getLinks(html_data):
        print a
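For anyone reading this on Python 3, here is a roughly equivalent sketch; the regexes are unchanged, but urllib.FancyURLopener and the print statement are gone in 3, so the demo below parses an inline sample string instead of fetching a URL (the sample HTML is made up for illustration):

```python
# Python 3 sketch of the same two-regex approach from the post above.
import re

whiteout = re.compile(r'\s+')

# grabs hyperlinks from text
href_re = re.compile(r'''
    <a(?P<attrs>[^>]*              # start of tag
    href=(?P<delim>["'])           # delimiter
    (?P<link>[^"']*)               # link
    (?P=delim)                     # delimiter
    [^>]*)>                        # rest of start tag
    (?P<content>.*?)               # link content
    </a>                           # end tag
    ''', re.VERBOSE | re.IGNORECASE)

# grabs attribute name, value pairs
attrs_re = re.compile(r'''
    (?P<name>\w+)=                 # attribute name
    (?P<delim>["'])                # delimiter
    (?P<value>[^"']*)              # attribute value
    (?P=delim)                     # delimiter
    ''', re.VERBOSE)

def get_links(html_data):
    """Return a list of dicts, one per <a href=...> tag found."""
    newdata = whiteout.sub(' ', html_data)
    ancs = []
    for match in href_re.finditer(newdata):
        d = match.groupdict()
        a = {'href': d['link'], 'content': d['content']}
        for attr_match in attrs_re.finditer(d['attrs']):
            a[attr_match.group('name')] = attr_match.group('value')
        ancs.append(a)
    return ancs

if __name__ == '__main__':
    sample = '<a href="http://adammonsen.com/" title="home">Adam</a>'
    print(get_links(sample))
```

To fetch live pages, `urllib.request.urlopen(url).read().decode()` replaces the FancyURLopener call.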
--
Adam Monsen
http://adammonsen.com/

Sep 13 '05 #1

Pretty nice. However, you won't capture the increasingly common
JavaScript-driven redirections, like

<b onclick='location.href="http://www.nowhere.com"'>click me</b>

nor

<form action="http://www.yahoo.com">
<input type=submit value="clickme">
</form>

nor

<form action="http://www.yahoo.com" name=x>
<input type=button value="clickme" onclick=document.x.submit()>
</form>

I'm guessing it also won't correctly handle things like:

<a href='javascript:alert("...")'>click</a>

But you probably already knew all this, didn't you?
Anyway, my 2 cents: instead of parsing the HTML for anchor tags, you
could scan the raw text for URLs, like
http://XXXX.XXXXXX.XXX/XXX?xXXx=xXx#x

or something like that.
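To make that suggestion concrete, here is a minimal sketch of scanning raw text for absolute http(s) URLs; the character class is a rough heuristic of my own, not an RFC 3986 grammar:

```python
import re

# Rough pattern for absolute http/https URLs embedded in arbitrary text.
# This is a heuristic, not a full URL parser: it stops at whitespace,
# quotes, and angle brackets, which covers href values, onclick handlers,
# and form actions alike.
url_re = re.compile(r'https?://[^\s"\'<>]+', re.IGNORECASE)

def find_urls(text):
    return url_re.findall(text)

sample = '''<b onclick='location.href="http://www.nowhere.com"'>click me</b>
<form action="http://www.yahoo.com"><input type=submit></form>'''
print(find_urls(sample))
# → ['http://www.nowhere.com', 'http://www.yahoo.com']
```

The trade-off is precision: this picks up URLs in comments, scripts, and plain text alike, with no anchor attributes or link content attached.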
//f3l

Sep 13 '05 #2
Adam Monsen wrote:
The following script is a high-performance link (<a
href="...">...</a>) extractor. [...]
* extract links from text (most likely valid HTML) [...] import re
import urllib

whiteout = re.compile(r'\s+')

# grabs hyperlinks from text
href_re = re.compile(r'''
<a(?P<attrs>[^>]* # start of tag
href=(?P<delim>["']) # delimiter
(?P<link>[^"']*) # link
(?P=delim) # delimiter
[^>]*)> # rest of start tag
(?P<content>.*?) # link content
</a> # end tag
''', re.VERBOSE | re.IGNORECASE)
A few notes:

The single or double quote delimiters are optional in some cases
(and frequently omitted even when required by the current
standard).

Where blank-spaces may appear in HTML entities is not so clear.
To follow the standard, one would have to acquire the SGML
standard, which costs money. Popular browsers allow end tags
such as "</a >" which the RE above will reject.
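Both points are easy to confirm against the posted pattern (reproduced below in condensed form): unquoted attribute values and end tags with a space both fail to match.

```python
import re

# The href_re from the original post, condensed onto fewer lines.
href_re = re.compile(r'''
    <a(?P<attrs>[^>]*href=(?P<delim>["'])(?P<link>[^"']*)(?P=delim)[^>]*)>
    (?P<content>.*?)</a>
    ''', re.VERBOSE | re.IGNORECASE)

# Unquoted href value: accepted by browsers, missed by the pattern.
print(href_re.search('<a href=foo.html>foo</a>'))       # → None

# End tag with whitespace: accepted by browsers, missed by the pattern.
print(href_re.search('<a href="foo.html">foo</a >'))    # → None

# The quoted form the pattern was written for does match.
print(bool(href_re.search('<a href="foo.html">foo</a>')))  # → True
```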
I'm not good at reading RE's, but it looks like the first line
will greedily match the entire start tag, and then back-track to
find the href attribute. There appear to be many other good
opportunities for a cleverly-constructed input to force big-time
backtracking; for example, a '>' will end the start-tag, but
within the delimiters, it's just another character. Can anyone
show a worst-case run-time?
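One easy-to-trigger pathology can be sketched (this shows roughly quadratic, not exponential, behavior, as far as I can tell): repeated start tags with no closing </a> anywhere force the lazy (?P<content>.*?)</a> to rescan to the end of the input once per start tag before failing.

```python
import re
import time

# The href_re from the original post, condensed onto fewer lines.
href_re = re.compile(r'''
    <a(?P<attrs>[^>]*href=(?P<delim>["'])(?P<link>[^"']*)(?P=delim)[^>]*)>
    (?P<content>.*?)</a>
    ''', re.VERBOSE | re.IGNORECASE)

# n start tags, no end tag anywhere: for each <a ...>, the lazy
# (?P<content>.*?)</a> scans all the way to the end before failing,
# so total work grows roughly as n**2. Doubling n should roughly
# quadruple the elapsed time.
for n in (1000, 2000, 4000):
    data = '<a href="x">' * n
    start = time.perf_counter()
    matches = list(href_re.finditer(data))
    elapsed = time.perf_counter() - start
    print(n, len(matches), round(elapsed, 4))
```

That is a denial-of-service concern for crafted input, though ordinary pages (where most anchors are closed) stay close to linear.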

Writing a Python RE to match all and only legal anchor tags may
not be possible. Writing a regular expression to do so is
definitely not possible.

[...]

def getLinks(html_data):
    newdata = whiteout.sub(' ', html_data)
    matches = href_re.finditer(newdata)
    ancs = []
    for match in matches:
        d = match.groupdict()
        a = {}
        a['href'] = d.get('link', None)

The statement above doesn't seem necessary. The 'href' just gets
over-written below, as just another attribute.

        a['content'] = d.get('content', None)
        attr_matches = attrs_re.finditer(d.get('attrs', None))
        for match in attr_matches:
            da = match.groupdict()
            name = da.get('name', None)
            a[name] = da.get('value', None)
        ancs.append(a)
    return ancs

--
--Bryan
Sep 14 '05 #3
