
Extracting text from a Webpage using BeautifulSoup

Hi,

I wish to extract all the words on a set of webpages and store them in
a large dictionary. I then wish to produce a list with the most common
words for the language under consideration. So, my code below reads
the page -

http://news.bbc.co.uk/welsh/hi/newsi...00/7420967.stm

a Welsh-language page. I hope to then establish the 1000 most commonly
used words in Welsh. The problem I'm having is that
soup.findAll(text=True) is returning the likes of -

u'doctype html public "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"'

and -

<a href="\'+url+\'?rss=\'+rssURI+\'" class="sel"

Any suggestions how I might overcome this problem?

Thanks,

Barry.
Here's my code -

import urllib
import urllib2
from BeautifulSoup import BeautifulSoup

# proxy_support = urllib2.ProxyHandler({"http": "http://999.999.999.999:8080"})
# opener = urllib2.build_opener(proxy_support)
# urllib2.install_opener(opener)

page = urllib2.urlopen('http://news.bbc.co.uk/welsh/hi/newsid_7420000/newsid_7420900/7420967.stm')
soup = BeautifulSoup(page)

pageText = soup.findAll(text=True)
print pageText
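
For the counting step described above (one big dictionary of word frequencies, then the 1000 most common words), a minimal sketch along the following lines might do once the text nodes are cleaned up. The count_words helper and the regex are purely illustrative, and the character class would need widening to keep Welsh diacritics such as ŵ and ŷ:

import re

def count_words(text_nodes, counts=None):
    # Tally lower-cased words from a list of text nodes into a dict.
    if counts is None:
        counts = {}
    for node in text_nodes:
        # Keep simple letter runs only; widen this for accented characters.
        for word in re.findall(r"[a-zA-Z']+", node.lower()):
            counts[word] = counts.get(word, 0) + 1
    return counts

counts = count_words(pageText)
most_common = sorted(counts.items(), key=lambda item: item[1], reverse=True)
print most_common[:1000]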

Jun 27 '08 #1
On Tue, 27 May 2008 03:01:30 -0700, Magnus.Moraberg wrote:

> I wish to extract all the words on a set of webpages and store them in
> a large dictionary. I then wish to produce a list with the most common
> words for the language under consideration. So, my code below reads
> the page -
>
> http://news.bbc.co.uk/welsh/hi/newsi...00/7420967.stm
>
> a Welsh-language page. I hope to then establish the 1000 most commonly
> used words in Welsh. The problem I'm having is that
> soup.findAll(text=True) is returning the likes of -
>
> u'doctype html public "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"'

Just extract the text from the body of the document.

body_texts = soup.body(text=True)

> and -
>
> <a href="\'+url+\'?rss=\'+rssURI+\'" class="sel"
>
> Any suggestions how I might overcome this problem?

Ask the BBC to produce HTML that's less buggy. ;-)

http://validator.w3.org/ reports bugs like "'body' tag not allowed here"
or closing tags without opening ones and so on.

Ciao,
Marc 'BlackJack' Rintsch
Jun 27 '08 #2
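
A minimal sketch of the filtering Marc suggests, assuming BeautifulSoup 3 and that HTML comments plus the contents of script and style tags are the main unwanted text nodes:

from BeautifulSoup import BeautifulSoup, Comment

def visible_body_text(soup):
    # Text nodes inside <body> only, as suggested above.
    texts = soup.body(text=True)
    visible = []
    for node in texts:
        if isinstance(node, Comment):                # drop <!-- comments -->
            continue
        if node.parent.name in ('script', 'style'):  # drop script/style text
            continue
        stripped = node.strip()
        if stripped:
            visible.append(stripped)
    return visible

# e.g. words = visible_body_text(BeautifulSoup(page))

Feeding that list into a word-counting routine like the sketch after the original post would then give the frequency table.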
On 27 May, 12:54, Marc 'BlackJack' Rintsch <bj_...@gmx.net> wrote:
> [quoted text snipped]

Great, thanks!
Jun 27 '08 #3
On May 27, 5:01 am, Magnus.Morab...@gmail.com wrote:
> [quoted text snipped]

As an alternative datapoint, you can try out the htmlStripper example
on the pyparsing wiki: http://pyparsing.wikispaces.com/spac...tmlStripper.py

-- Paul
Jun 27 '08 #4
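
For comparison, a rough sketch of HTML stripping with pyparsing; the anyOpenTag, anyCloseTag and htmlComment names are taken from recent pyparsing releases, so the htmlStripper example on the wiki may look different:

from pyparsing import anyOpenTag, anyCloseTag, htmlComment

# Suppress tags and comments, leaving only the text between them.
stripper = (anyOpenTag | anyCloseTag | htmlComment).suppress()

def strip_html(html):
    # Note: the contents of <script> and <style> blocks survive this pass
    # and would still need to be filtered out separately.
    return stripper.transformString(html)

# e.g. text = strip_html(open('page.html').read())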
