
BeautifulSoup

I tried using BeautifulSoup to make changes to the url
links on html pages, but when the page was displayed,
it was garbled up and didn't look right (even when I
didn't actually change anything on the page yet). I
ran these steps in python to see what was up:
from BeautifulSoup import BeautifulSoup
from urllib2 import build_opener, Request

req = Request('http://www.python.org/')
f = build_opener().open(req)
page = f.read()
f.close()

len(page)
12040

soup = BeautifulSoup()
soup.feed(page)
page2 = soup.renderContents()
len(page2)
11889

I have version 2.1 of BeautifulSoup. It seems that
other people have used BeautifulSoup and it works fine
for them, so I'm not sure what I'm doing wrong. Any
help would be appreciated, thanks.

-Steve


Aug 19 '05 #1
Steve -

Is there a chance you could post a before and after example, so we can
see just what you are trying to do instead of talking conceptually all
around it and making us guess? If you are just doing some spot
translations of specific values in an HTML file, you can probably get
away with a simple (but readable) program using pyparsing.

-- Paul

Aug 19 '05 #2
Here's a pyparsing program that reads my personal web page, and spits
out HTML with all of the HREF's reversed.

-- Paul
(Download pyparsing at http://pyparsing.sourceforge.net.)

from pyparsing import Literal, quotedString
import urllib

LT = Literal("<")
GT = Literal(">")
EQUALS = Literal("=")
htmlAnchor = LT + "A" + "HREF" + EQUALS + \
    quotedString.setResultsName("href") + GT

def convertHREF(s, l, toks):
    # do HREF conversion here - for demonstration, we will just reverse them
    print toks.href
    return "<A HREF=%s>" % toks.href[::-1]

htmlAnchor.setParseAction(convertHREF)

inputURL = "http://www.geocities.com/ptmcg"
inputPage = urllib.urlopen(inputURL)
inputHTML = inputPage.read()
inputPage.close()

print htmlAnchor.transformString(inputHTML)

Aug 19 '05 #3
"Paul McGuire" <pt***@austin.r r.com> writes:
Here's a pyparsing program that reads my personal web page, and spits
out HTML with all of the HREF's reversed.
Parsing HTML isn't easy, which makes me wonder how good this solution
really is. Not meant as a comment on the quality of this code or
PyParsing, but as curiosity from someone who does a lot of [X}HTML
herding.
-- Paul
(Download pyparsing at http://pyparsing.sourceforge.net.)
If it were in the ports tree, I'd have grabbed it and tried it
myself. But it isn't, so I'm going to be lazy and ask. If PyParsing
really makes dealing with HTML this easy, I may package it as a port
myself.
from pyparsing import Literal, quotedString
import urllib

LT = Literal("<")
GT = Literal(">")
EQUALS = Literal("=")
htmlAnchor = LT + "A" + "HREF" + EQUALS +
quotedString.se tResultsName("h ref") + GT

def convertHREF(s,l ,toks):
# do HREF conversion here - for demonstration, we will just reverse
them
print toks.href
return "<A HREF=%s>" % toks.href[::-1]

htmlAnchor.setP arseAction( convertHREF )

inputURL = "http://www.geocities.c om/ptmcg"
inputPage = urllib.urlopen( inputURL)
inputHTML = inputPage.read( )
inputPage.close ()

print htmlAnchor.tran sformString( inputHTML )


How well does it deal with other attributes in front of the href, like
<A onClick="..." href="...">?

How about if my HTML has things that look like HTML in attributes,
like <TAG ATTRIBUTE="stuff<A HREF=stuff">?

Thanks,
<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 20 '05 #4
Mike -

Thanks for asking. Typically I hang back from these discussions of
parsing HTML or XML (*especially* XML), since there are already a
number of parsers out there that can handle the full language syntax.
But it seems that many people trying to parse HTML aren't interested in
fully parsing an HTML page, so much as they are trying to match some
tag pattern, to extract or modify the embedded data. In these cases,
fully comprehending HTML syntax is rarely required.

In this particular instance, the OP had started another thread in which
he was trying to extract some HTML content using regexps, and this
didn't seem to be converging to a workable solution. When he finally
revealed that what he was trying to do was extract and modify the URLs
in a web page's HTML source, this seemed like a tractable problem for a
quick pyparsing program. In the interests of keeping things simple, I
admittedly provided a limited solution. As you mentioned, no
additional attributes are handled by this code. But many HTML scrapers
are able to make simplifying assumptions about what HTML features can
be expected, and I did not want to spend a lot of time solving problems
that may never come up.

So you asked some good questions; let me try to give some reasonable
answers, or at least responses:

1. "If it were in the ports tree, I'd have grabbed it and tried it
myself."
By "ports tree", I assume you mean some directory of your Linux
distribution. I'm sure my Linux ignorance is showing here, most of my
work occurs on Windows systems. I've had pyparsing available on SF for
over a year and a half, and I do know that it has been incorporated (by
others) into a couple of Linux distros, including Debian, Ubuntu,
Gentoo, and Fedora. If you are interested in doing a port to another
Linux, that would be great! But I was hoping that hosting pyparsing on
SF would be easy enough for most people to be able to get at it.

2. "How well does it deal with other attributes in front of the href,
like <A onClick="..." href="...">?"
*This* version doesn't deal with other attributes at all, in the
interests of simplicity. However, pyparsing includes a helper method,
makeHTMLTags(), that *does* support arbitrary attributes within an
opening HTML tag. It is called like:

anchorStart, anchorEnd = makeHTMLTags("A")

makeHTMLTags returns a pyparsing subexpression that *does* comprehend
attributes, as well as opening tags that include their own closing '/'
(indicating an empty tag body). Tag attributes are accessible by name
in the returned results tokens, without requiring setResultsName()
calls (as in the example).
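
For illustration, here is a rough sketch of how the earlier href-reversal
demo might look when rewritten around makeHTMLTags. Treat it as an outline
rather than tested code; note that rebuilding the tag from just the href
drops any other attributes that were on the original anchor:

from pyparsing import makeHTMLTags
import urllib

anchorStart, anchorEnd = makeHTMLTags("A")

def convertHREF(s, l, toks):
    # the href attribute is available by name on the matched tokens;
    # emitting only HREF here discards any other attributes (a simplification)
    return '<A HREF="%s">' % toks.href[::-1]

anchorStart.setParseAction(convertHREF)

inputHTML = urllib.urlopen("http://www.python.org/").read()
print anchorStart.transformString(inputHTML)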

3. "How about if my HTML has things that look like HTML in attributes,
like <TAG ATTRIBUTE="stuf f<A HREF=stuff">?"
Well, again, the simple example won't be able to tell the difference,
and it would process the ATTRIBUTE string as a real tag. To address
this, we would expand our statement to process quoted strings
explicitly, and separately from the htmlAnchor, as in:

htmlPatterns = quotedString | htmlAnchor

and then use htmlPatterns for the transformString call:

htmlPatterns.transformString(inputHTML)

You didn't ask, but one feature that is easy to handle is comments.
pyparsing includes some common comment syntaxes, such as cStyleComment
and htmlComment. To ignore them, one simply calls ignore() on the root
pyparsing node. In the simple example, this would look like:

htmlPatterns.ignore(htmlComment)

By adding this single statement, all HTML comments would be ignored.
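
Pulling those pieces together with the original example, an end-to-end
sketch (again untested, still using the toy href-reversal) would look
something like this:

from pyparsing import Literal, quotedString, htmlComment
import urllib

LT = Literal("<")
GT = Literal(">")
EQUALS = Literal("=")
htmlAnchor = LT + "A" + "HREF" + EQUALS + \
    quotedString.setResultsName("href") + GT

def convertHREF(s, l, toks):
    # demonstration only - just reverse the href
    return "<A HREF=%s>" % toks.href[::-1]

htmlAnchor.setParseAction(convertHREF)

# matching quoted strings first keeps tag-like text inside attribute
# values from being treated as a real anchor
htmlPatterns = quotedString | htmlAnchor

# skip over anything inside HTML comments
htmlPatterns.ignore(htmlComment)

inputHTML = urllib.urlopen("http://www.python.org/").read()
print htmlPatterns.transformString(inputHTML)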
Writing a full HTML parser with pyparsing would be tedious, and not a
great way to spend your time, given the availability of other parsing
tools. But for simple scraping and extracting, it can be a very
efficient way to go.

-- Paul

Aug 20 '05 #5
