Bytes | Software Development & Data Engineering Community

BeautifulSoup

I tried using BeautifulSoup to make changes to the URL
links on HTML pages, but when the page was displayed,
it was garbled and didn't look right (even when I
hadn't actually changed anything on the page yet). I
ran these steps in Python to see what was going on:
from BeautifulSoup import BeautifulSoup
from urllib2 import build_opener, Request

req = Request('http://www.python.org/')
f = build_opener().open(req)
page = f.read()
f.close()

len(page)                    # 12040
soup = BeautifulSoup()
soup.feed(page)
page2 = soup.renderContents()
len(page2)                   # 11889
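A length change like this is typical of any parser that re-renders markup: tag case, attribute quoting, and whitespace get normalized, so the output can differ in byte length even when browsers would render it identically. Here is a stdlib-only Python 3 sketch (not BeautifulSoup itself) illustrating the effect; the `RoundTrip` class and sample markup are hypothetical:

```python
from html.parser import HTMLParser

# A minimal round-tripper: it re-emits every tag, attribute, entity, and
# text run it sees, yet the serialized output differs from the input
# because tag/attribute case and attribute quoting are normalized.
class RoundTrip(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=False)
        self.out = []
    def handle_starttag(self, tag, attrs):
        attr_str = "".join(' %s="%s"' % (k, v) for k, v in attrs)
        self.out.append("<%s%s>" % (tag, attr_str))
    def handle_endtag(self, tag):
        self.out.append("</%s>" % tag)
    def handle_data(self, data):
        self.out.append(data)
    def handle_entityref(self, name):
        self.out.append("&%s;" % name)

src = "<P ALIGN=center>Hello&nbsp;world</P>"
rt = RoundTrip()
rt.feed(src)
result = "".join(rt.out)
print(result)                  # <p align="center">Hello&nbsp;world</p>
print(len(src), len(result))   # 36 38
```

Nothing meaningful was dropped, but the lengths differ, which is the same symptom the original post describes.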

I have version 2.1 of BeautifulSoup. It seems that
other people have used BeautifulSoup and it works fine
for them, so I'm not sure what I'm doing wrong. Any
help would be appreciated, thanks.

-Steve


Aug 19 '05 #1
Steve -

Is there a chance you could post a before and after example, so we can
see just what you are trying to do instead of talking conceptually all
around it and making us guess? If you are just doing some spot
translations of specific values in an HTML file, you can probably get
away with a simple (but readable) program using pyparsing.

-- Paul

Aug 19 '05 #2
Here's a pyparsing program that reads my personal web page, and spits
out HTML with all of the HREF's reversed.

-- Paul
(Download pyparsing at http://pyparsing.sourceforge.net.)

from pyparsing import Literal, quotedString
import urllib

LT = Literal("<")
GT = Literal(">")
EQUALS = Literal("=")
htmlAnchor = LT + "A" + "HREF" + EQUALS + \
             quotedString.setResultsName("href") + GT

def convertHREF(s, l, toks):
    # do HREF conversion here - for demonstration, we will just reverse them
    print toks.href
    return "<A HREF=%s>" % toks.href[::-1]

htmlAnchor.setParseAction(convertHREF)

inputURL = "http://www.geocities.com/ptmcg"
inputPage = urllib.urlopen(inputURL)
inputHTML = inputPage.read()
inputPage.close()

print htmlAnchor.transformString(inputHTML)
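For readers without pyparsing handy, the same transformation can be sketched in stdlib-only Python 3 (a hypothetical rewrite, not the version above): find `<A HREF="...">` tags and reverse the quoted URL. Like the original, it is deliberately naive and ignores extra attributes.

```python
import re

# Match <A HREF="..."> (or single-quoted), case-insensitively, with no
# other attributes - matching the simplifying assumption of the original.
anchor = re.compile(r'<A\s+HREF\s*=\s*("[^"]*"|\'[^\']*\')\s*>', re.IGNORECASE)

def reverse_href(match):
    # group(1) includes the surrounding quotes, which survive reversal
    return '<A HREF=%s>' % match.group(1)[::-1]

html = '<A HREF="http://example.com">link</A>'
print(anchor.sub(reverse_href, html))   # <A HREF="moc.elpmaxe//:ptth">link</A>
```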

Aug 19 '05 #3
"Paul McGuire" <pt***@austin.rr.com> writes:
> Here's a pyparsing program that reads my personal web page, and spits
> out HTML with all of the HREF's reversed.
Parsing HTML isn't easy, which makes me wonder how good this solution
really is. Not meant as a comment on the quality of this code or
PyParsing, but as curiosity from someone who does a lot of [X]HTML
herding.
> -- Paul
> (Download pyparsing at http://pyparsing.sourceforge.net.)
If it were in the ports tree, I'd have grabbed it and tried it
myself. But it isn't, so I'm going to be lazy and ask. If PyParsing
really makes dealing with HTML this easy, I may package it as a port
myself.
> [pyparsing example from the previous post snipped]


How well does it deal with other attributes in front of the href, like
<A onClick="..." href="...">?

How about if my HTML has things that look like HTML in attributes,
like <TAG ATTRIBUTE="stuff<A HREF=stuff">?

Thanks,
<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 20 '05 #4
Mike -

Thanks for asking. Typically I hang back from these discussions of
parsing HTML or XML (*especially* XML), since there are already a
number of parsers out there that can handle the full language syntax.
But it seems that many people trying to parse HTML aren't interested in
fully parsing an HTML page, so much as they are trying to match some
tag pattern, to extract or modify the embedded data. In these cases,
fully comprehending HTML syntax is rarely required.

In this particular instance, the OP had started another thread in which
he was trying to extract some HTML content using regexp's, and this
didn't seem to be converging to a workable solution. When he finally
revealed that what he was trying to do was extract and modify the URLs
in a web page's HTML source, this seemed like a tractable problem for a
quick pyparsing program. In the interests of keeping things simple, I
admittedly provided a limited solution. As you mentioned, no
additional attributes are handled by this code. But many HTML scrapers
are able to make simplifying assumptions about what HTML features can
be expected, and I did not want to spend a lot of time solving problems
that may never come up.

So you asked some good questions, let me try to give some reasonable
answers, or at least responses:

1. "If it were in the ports tree, I'd have grabbed it and tried it
myself."
By "ports tree", I assume you mean some directory of your Linux
distribution. I'm sure my Linux ignorance is showing here, most of my
work occurs on Windows systems. I've had pyparsing available on SF for
over a year and a half, and I do know that it has been incorporated (by
others) into a couple of Linux distros, including Debian, Ubuntu,
Gentoo, and Fedora. If you are interested in doing a port to another
Linux, that would be great! But I was hoping that hosting pyparsing on
SF would be easy enough for most people to be able to get at it.

2. "How well does it deal with other attributes in front of the href,
like <A onClick="..." href="...">?"
*This* version doesn't deal with other attributes at all, in the
interests of simplicity. However, pyparsing includes a helper method,
makeHTMLTags(), that *does* support arbitrary attributes within an
opening HTML tag. It is called like:

anchorStart, anchorEnd = makeHTMLTags("A")

makeHTMLTags returns a pyparsing subexpression that *does* comprehend
attributes, as well as opening tags that include their own closing '/'
(indicating an empty tag body). Tag attributes are accessible by name
in the returned results tokens, without requiring setResultsName()
calls (as in the example).
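The same robustness to attribute order can be seen with the standard library's parser. This is a stdlib-only Python 3 illustration of the point (not pyparsing's makeHTMLTags): a real tag parser exposes attributes by name, so an onClick in front of href does not break extraction. The class name and sample markup are hypothetical.

```python
from html.parser import HTMLParser

# Collect href values from <A> tags, regardless of how many other
# attributes precede them or what order they appear in.
class HrefCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":                 # tag names arrive lowercased
            d = dict(attrs)            # attributes keyed by name
            if "href" in d:
                self.hrefs.append(d["href"])

c = HrefCollector()
c.feed('<A onClick="doStuff()" href="http://example.com">x</A>')
print(c.hrefs)   # ['http://example.com']
```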

3. "How about if my HTML has things that look like HTML in attributes,
like <TAG ATTRIBUTE="stuff<A HREF=stuff">?"
Well, again, the simple example won't be able to tell the difference,
and it would process the ATTRIBUTE string as a real tag. To address
this, we would expand our statement to process quoted strings
explicitly, and separately from the htmlAnchor, as in:

htmlPatterns = quotedString | htmlAnchor

and then use htmlPatterns for the transformString call:

htmlPatterns.transformString( inputHTML )
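The "match quoted strings first" trick above can also be sketched with a stdlib regex (hypothetical Python 3 code, not the pyparsing version): because the quoted-string alternative is tried first and passed through unchanged, an `<A HREF=...>` hiding inside an attribute value is never treated as a real tag.

```python
import re

# First alternative: any double-quoted string (left untouched).
# Second alternative: a bare <A HREF="..."> tag, whose URL we reverse.
pattern = re.compile(r'"[^"]*"|<A\s+HREF\s*=\s*("[^"]*")\s*>', re.IGNORECASE)

def transform(m):
    if m.group(1) is None:          # a quoted string: leave it alone
        return m.group(0)
    return '<A HREF=%s>' % m.group(1)[::-1]

html = '<TAG ATTRIBUTE="stuff<A HREF=stuff">  <A HREF="abc">'
print(pattern.sub(transform, html))
# <TAG ATTRIBUTE="stuff<A HREF=stuff">  <A HREF="cba">
```

The fake anchor inside ATTRIBUTE survives untouched; only the real anchor is transformed.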

You didn't ask, but one feature that is easy to handle is comments.
pyparsing includes some common comment syntaxes, such as cStyleComment
and htmlComment. To ignore them, one simply calls ignore() on the root
pyparsing node. In the simple example, this would look like:

htmlPatterns.ignore( htmlComment )

By adding this single statement, all HTML comments would be ignored.
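The stdlib parser offers a counterpart to the ignore(htmlComment) idea. In this Python 3 sketch (hypothetical class and markup, not pyparsing), HTMLParser routes comments to a separate handler, so commented-out markup never reaches the tag handlers:

```python
from html.parser import HTMLParser

# Tags inside <!-- ... --> go to handle_comment, not handle_starttag,
# so a commented-out link is never mistaken for a live one.
class CommentAware(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags, self.comments = [], []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)
    def handle_comment(self, data):
        self.comments.append(data)

p = CommentAware()
p.feed('<p>text</p><!-- <a href="x">old link</a> -->')
print(p.tags)       # ['p']
print(p.comments)   # [' <a href="x">old link</a> ']
```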
Writing a full HTML parser with pyparsing would be tedious, and not a
great way to spend your time, given the availability of other parsing
tools. But for simple scraping and extracting, it can be a very
efficient way to go.

-- Paul

Aug 20 '05 #5

This thread has been closed and replies have been disabled. Please start a new discussion.
