Bytes | Software Development & Data Engineering Community

Using Beautiful Soup to untangle bookmarks.html

Hi,

I'm trying to use the Beautiful Soup package to parse through the
"bookmarks.html" file which Firefox exports all your bookmarks into.
I've been struggling with the documentation trying to figure out how to
extract all the urls. Has anybody got a couple of longer examples using
Beautiful Soup I could play around with?

Thanks,
Martin.

Sep 7 '06 #1
Francach wrote:
Hi,

I'm trying to use the Beautiful Soup package to parse through the
"bookmarks.html" file which Firefox exports all your bookmarks into.
I've been struggling with the documentation trying to figure out how to
extract all the urls. Has anybody got a couple of longer examples using
Beautiful Soup I could play around with?
Why do you use BeautifulSoup on that? It's generated content, and I
suppose it is well-formed, most probably even XML. So use a standard
parser here, or better yet something like lxml/ElementTree.

Diez
Sep 7 '06 #2

Diez B. Roggisch wrote:
suppose it is well-formed, most probably even xml.
Maybe not. Otherwise, why would there be a script like this one[1]?
Anyway, I found that and other scripts that work with firefox
bookmarks.html files with a quick search [2]. Perhaps you will find
something there that is helpful.

[1]:
http://www.physic.ut.ee/~kkannike/en...e/bookmarks.py
[2]: http://www.google.com/search?q=firef...ks.html+python

Waylan

Sep 7 '06 #3
Diez B. Roggisch wrote:
Francach wrote:
>Hi,

I'm trying to use the Beautiful Soup package to parse through the
"bookmarks.html" file which Firefox exports all your bookmarks into.
I've been struggling with the documentation trying to figure out how to
extract all the urls. Has anybody got a couple of longer examples using
Beautiful Soup I could play around with?


Why do you use BeautifulSoup on that? It's generated content, and I
suppose it is well-formed, most probably even XML. So use a standard
parser here, or better yet something like lxml/ElementTree.

Diez
Once upon a time I wrote some code on this subject for my own
purposes, so maybe it can be used as a starter (tested a bit, but
consider its status a kind of alpha release):

<code>
from urllib import urlopen
from sgmllib import SGMLParser

class mySGMLParserClassProvidingListOf_HREFs(SGMLParser):
    # Collects only HREFs from <a href="someURL"> links to other pages,
    # skipping references to:
    #   - internal links on the same page : "#..."
    #   - email addresses : "mailto:..."
    # and stripping any appended internal link info, so that e.g.
    # "LinkSpec#internalLinkID" will be listed as "LinkSpec" only.
    # ---
    # reset() overwrites an empty function available in the SGMLParser class
    def reset(self):
        SGMLParser.reset(self)
        self.A_HREFs = []

    # start_a() overwrites an empty function available in the SGMLParser
    # class from which this class is derived. start_a() will be called each
    # time the SGMLParser detects an <a ...> tag within the feed(ed) HTML
    # document:
    def start_a(self, tagAttributes_asListOfNameValuePairs):
        for attrName, attrValue in tagAttributes_asListOfNameValuePairs:
            if attrName == 'href':
                if attrValue[:1] != '#' and attrValue[:7] != 'mailto:':
                    if attrValue.find('#') >= 0:
                        attrValue = attrValue[:attrValue.find('#')]
                    self.A_HREFs.append(attrValue)

# ------------------------------------------------------------------------------
# Execution block:
fileLikeObjFrom_urlopen = urlopen('http://www.google.com')  # set URL
mySGMLParserClassObj_withListOfHREFs = mySGMLParserClassProvidingListOf_HREFs()
mySGMLParserClassObj_withListOfHREFs.feed(fileLikeObjFrom_urlopen.read())
mySGMLParserClassObj_withListOfHREFs.close()
fileLikeObjFrom_urlopen.close()

for href in mySGMLParserClassObj_withListOfHREFs.A_HREFs:
    print href
</code>

Claudio Grondi
Sep 7 '06 #4
waylan wrote:
Diez B. Roggisch wrote:
>suppose it is well-formed, most probably even xml.

Maybe not. Otherwise, why would there be a script like this one[1]?
Anyway, I found that and other scripts that work with firefox
bookmarks.html files with a quick search [2]. Perhaps you will find
something there that is helpful.
I have to admit I didn't check that file, and simply couldn't believe
it was as badly written as it apparently is.

I was at least able to shove it through HTMLParser, but I'm not sure
if that is of any use.

Excuse me for causing confusion.

Diez
Sep 7 '06 #5

Francach wrote:
Hi,

I'm trying to use the Beautiful Soup package to parse through the
"bookmarks.html" file which Firefox exports all your bookmarks into.
I've been struggling with the documentation trying to figure out how to
extract all the urls. Has anybody got a couple of longer examples using
Beautiful Soup I could play around with?

Thanks,
Martin.
If the only thing you want out of the document is the URLs, why not
search for href="..."? You could write a regular expression that
matches that pretty easily. I think this should just about get you
there, but my regular expressions have gotten very rusty.

/href=\".+\"/
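A minimal sketch of that idea, made non-greedy (`.+?`) so two quoted attributes on the same line don't merge into one match. The sample line is made up, but mimics Firefox's export format:

```python
import re

# Hypothetical sample line in the style of a Firefox bookmarks.html export.
line = '<DT><A HREF="http://example.com/" ADD_DATE="1157600000">Example</A>'

# Non-greedy ".+?" stops at the first closing quote; re.IGNORECASE is
# needed because Firefox writes the attribute as HREF, not href.
urls = re.findall(r'href="(.+?)"', line, re.IGNORECASE)
print(urls)
```

With the original greedy `.+`, the match would run on to the last quote on the line and swallow the ADD_DATE attribute as well.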

Sep 7 '06 #6
On 7 Sep 2006 14:30:25 -0700, Adam Jones <aj*****@gmail.com> wrote:
>
Francach wrote:
Hi,

I'm trying to use the Beautiful Soup package to parse through the
"bookmarks.html" file which Firefox exports all your bookmarks into.
I've been struggling with the documentation trying to figure out how to
extract all the urls. Has anybody got a couple of longer examples using
Beautiful Soup I could play around with?

Thanks,
Martin.

If the only thing you want out of the document is the URL's why not
search for: href="..." ? You could get a regular expression that
matches that pretty easily. I think this should just about get you
there, but my regular expressions have gotten very rusty.

/href=\".+\"/
I doubt the bookmarks file is huge, so something simple like

f = open('bookmarks.html').readlines()
data = [x for x in f if x.strip().startswith('<DT><A ')]

would get you started.

On my exported Firefox bookmarks this gives me all the URLs; they
just need to be parsed a bit more accurately. I might be tempted to
just use a couple of split()s to keep it real simple.
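That split()-based follow-up might look like the sketch below. The sample lines are invented, and it assumes HREF is the first quoted attribute on each `<DT><A ` line, as Firefox writes it:

```python
# Invented stand-ins for lines read from a Firefox bookmarks.html export.
lines = [
    '<DT><A HREF="http://python.org/" ADD_DATE="1157600000">Python</A>',
    '<DD>Some description text',
    '<DT><A HREF="http://bytes.com/" ADD_DATE="1157600001">Bytes</A>',
]

# Splitting on the double quote puts the HREF value at index 1,
# because HREF is the first quoted attribute on the line.
urls = [x.split('"')[1] for x in lines if x.strip().startswith('<DT><A ')]
print(urls)
```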

HTH
--

Tim Williams
Sep 7 '06 #7
Francach wrote:
Hi,

I'm trying to use the Beautiful Soup package to parse through the
"bookmarks.html" file which Firefox exports all your bookmarks into.
I've been struggling with the documentation trying to figure out how to
extract all the urls. Has anybody got a couple of longer examples using
Beautiful Soup I could play around with?

Thanks,
Martin.
from BeautifulSoup import BeautifulSoup
urls = [tag['href'] for tag in
        BeautifulSoup(open('bookmarks.html')).findAll('a')]

Regards,
George

Sep 8 '06 #8
Hi,

thanks for the helpful reply.
I wanted to do two things: learn to use Beautiful Soup, and bring out
all the information in the bookmarks file to import into another
application. So I need to be able to travel down the tree in the
bookmarks file. bookmarks.html seems to use header tags, which can then
contain <a> tags where the href attributes are. What I don't understand
is how to create objects which can then be used to return the
information at the next level of the tree.

Thanks again,
Martin.

George Sakkis wrote:
Francach wrote:
Hi,

I'm trying to use the Beautiful Soup package to parse through the
"bookmarks.html" file which Firefox exports all your bookmarks into.
I've been struggling with the documentation trying to figure out how to
extract all the urls. Has anybody got a couple of longer examples using
Beautiful Soup I could play around with?

Thanks,
Martin.

from BeautifulSoup import BeautifulSoup
urls = [tag['href'] for tag in
        BeautifulSoup(open('bookmarks.html')).findAll('a')]

Regards,
George
Sep 8 '06 #9
Francach wrote:
George Sakkis wrote:
Francach wrote:
Hi,
>
I'm trying to use the Beautiful Soup package to parse through the
"bookmarks.html" file which Firefox exports all your bookmarks into.
I've been struggling with the documentation trying to figure out how to
extract all the urls. Has anybody got a couple of longer examples using
Beautiful Soup I could play around with?
>
Thanks,
Martin.
from BeautifulSoup import BeautifulSoup
urls = [tag['href'] for tag in
        BeautifulSoup(open('bookmarks.html')).findAll('a')]
Hi,

thanks for the helpful reply.
I wanted to do two things - learn to use Beautiful Soup and bring out
all the information
in the bookmarks file to import into another application. So I need to
be able to travel down the tree in the bookmarks file. bookmarks seems
to use header tags which can then contain a tags where the href
attributes are. What I don't understand is how to create objects which
can then be used to return the information in the next level of the
tree.

Thanks again,
Martin.
I'm not sure I understand what you want to do. Originally you asked to
extract all URLs, and BeautifulSoup can do this for you in one line. Why
do you care about intermediate objects, or whether the anchor tags are
nested under header tags or not? Read and embrace BeautifulSoup's philosophy:
"You didn't write that awful page. You're just trying to get some data
out of it. Right now, you don't really care what HTML is supposed to
look like."

George

Sep 8 '06 #10
Hi George,

Firefox lets you group the bookmarks along with other information into
directories and sub-directories. Firefox uses header tags for this
purpose. I'd like to get this grouping information out as well.

Regards,
Martin.
George Sakkis wrote:
Francach wrote:
George Sakkis wrote:
Francach wrote:
Hi,

I'm trying to use the Beautiful Soup package to parse through the
"bookmarks.html" file which Firefox exports all your bookmarks into.
I've been struggling with the documentation trying to figure out how to
extract all the urls. Has anybody got a couple of longer examples using
Beautiful Soup I could play around with?

Thanks,
Martin.
>
from BeautifulSoup import BeautifulSoup
urls = [tag['href'] for tag in
        BeautifulSoup(open('bookmarks.html')).findAll('a')]
Hi,

thanks for the helpful reply.
I wanted to do two things - learn to use Beautiful Soup and bring out
all the information
in the bookmarks file to import into another application. So I need to
be able to travel down the tree in the bookmarks file. bookmarks seems
to use header tags which can then contain a tags where the href
attributes are. What I don't understand is how to create objects which
can then be used to return the information in the next level of the
tree.

Thanks again,
Martin.

I'm not sure I understand what you want to do. Originally you asked to
extract all URLs, and BeautifulSoup can do this for you in one line. Why
do you care about intermediate objects, or whether the anchor tags are
nested under header tags or not? Read and embrace BeautifulSoup's philosophy:
"You didn't write that awful page. You're just trying to get some data
out of it. Right now, you don't really care what HTML is supposed to
look like."

George
Sep 8 '06 #11
Francach wrote:
>
Firefox lets you group the bookmarks along with other information into
directories and sub-directories. Firefox uses header tags for this
purpose. I'd like to get this grouping information out aswell.
import libxml2dom # http://www.python.org/pypi/libxml2dom

d = libxml2dom.parse("bookmarks.html", html=1)
for node in d.xpath("html/body//dt/*[1]"):
    if node.localName == "h3":
        print "Section:", node.nodeValue
    elif node.localName == "a":
        print "Link:", node.getAttribute("href")

One exercise, using the above code as a starting point, would be to
reproduce the hierarchy exactly, rather than just showing the section
names and the links which follow them. Ultimately, you may be looking
for a way to just convert the HTML into a simple XML document or into
another hierarchical representation which excludes the HTML baggage and
details irrelevant to your problem.
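A rough sketch of that exercise, using only the standard library (the modern html.parser module rather than libxml2dom or BeautifulSoup): it tracks `<DL>` nesting depth to indent folders (`<H3>`) and links (`<A>`). The sample input is made up, and real exports also carry attributes like ADD_DATE:

```python
from html.parser import HTMLParser

class BookmarkWalker(HTMLParser):
    """Collects folder names and links, indented by <DL> nesting depth."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.depth = 0      # how many <DL> lists we are inside
        self.in_h3 = False  # True while inside a folder-name <H3> tag
        self.lines = []

    def handle_starttag(self, tag, attrs):
        if tag == 'dl':
            self.depth += 1
        elif tag == 'h3':
            self.in_h3 = True
        elif tag == 'a':
            href = dict(attrs).get('href', '')
            self.lines.append('  ' * self.depth + 'Link: ' + href)

    def handle_endtag(self, tag):
        if tag == 'dl':
            self.depth -= 1
        elif tag == 'h3':
            self.in_h3 = False

    def handle_data(self, data):
        if self.in_h3:
            self.lines.append('  ' * self.depth + 'Section: ' + data)

# Made-up miniature bookmarks file: one folder holding one link.
sample = ('<DL><DT><H3>News</H3><DL>'
          '<DT><A HREF="http://example.org/">Example</A>'
          '</DL></DL>')
walker = BookmarkWalker()
walker.feed(sample)
print('\n'.join(walker.lines))
```

Indentation here is just a display choice; the same depth counter could instead build a nested list or dict, which gets you the "simple hierarchical representation" mentioned above.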

Paul

Sep 8 '06 #12
Francach wrote:
Hi George,

Firefox lets you group the bookmarks along with other information into
directories and sub-directories. Firefox uses header tags for this
purpose. I'd like to get this grouping information out aswell.

Regards,
Martin.
Here's what I came up with:
http://rafb.net/paste/results/G91EAo70.html. Tested only on my
bookmarks; see if it works for you.

For each subfolder there is a recursive call that walks the respective
subtree, so it's probably not the most efficient solution, but I
couldn't think of any one-pass way to do it using BeautifulSoup.

George

Sep 8 '06 #13
Hallo George,

thanks a lot! This is exactly the direction I had in mind.
Your script demonstrates nicely how Beautiful Soup works.

Regards,
Martin.

George Sakkis wrote:
Francach wrote:
Hi George,

Firefox lets you group the bookmarks along with other information into
directories and sub-directories. Firefox uses header tags for this
purpose. I'd like to get this grouping information out aswell.

Regards,
Martin.

Here's what I came up with:
http://rafb.net/paste/results/G91EAo70.html. Tested only on my
bookmarks; see if it works for you.

For each subfolder there is a recursive call that walks the respective
subtree, so it's probably not the most efficient solution, but I
couldn't think of any one-pass way to do it using BeautifulSoup.

George
Sep 9 '06 #14
"George Sakkis" <ge***********@gmail.com> wrote:
>Here's what I came up with:
http://rafb.net/paste/results/G91EAo70.html. Tested only on my
bookmarks; see if it works for you.
That URL is dead. Got another?

-----
robin
noisetheatre.blogspot.com
Sep 21 '06 #15
robin wrote:
"George Sakkis" <ge***********@gmail.com> wrote:
Here's what I came up with:
http://rafb.net/paste/results/G91EAo70.html. Tested only on my
bookmarks; see if it works for you.

That URL is dead. Got another?
Yeap, try this one:
http://gsakkis-utils.googlecode.com/...s/bookmarks.py

George

Sep 21 '06 #16
