Bytes | Software Development & Data Engineering Community

extracting titles from web pages but sometimes getting garbled words

There are ten web pages I want to deal with.
from http://www.af.shejis.com/new_lw/html/125926.shtml
to http://www.af.shejis.com/new_lw/html/125936.shtml

Each of them uses the Chinese charset "gb2312", and Firefox
displays all of them correctly, as readable Chinese.

My job is to fetch every page, extract its HTML title, and
display the title in a Linux shell terminal.

My problem is that for some pages I get a human-readable title (in
Chinese), but for other pages I get garbled words. Since every page
uses the same charset, I don't understand why I can't get every title
the same way.

Here's my Python code, get_title.py:

#!/usr/bin/python
import urllib2
from BeautifulSoup import BeautifulSoup

min_page = 125926
max_page = 125936

def make_page_url(page_index):
    return ur"".join([ur"http://www.af.shejis.com/new_lw/html/",
                      str(page_index), ur".shtml"])

def get_page_title(page_index):
    url = make_page_url(page_index)
    print "now getting: ", url
    user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
    headers = {'User-Agent': user_agent}
    req = urllib2.Request(url, None, headers)
    response = urllib2.urlopen(req)
    #print response.info()
    page = response.read()

    # extract title with Beautiful Soup
    soup = BeautifulSoup(page)
    full_title = str(soup.html.head.title.string)

    # title is in the format of "title --title";
    # this deletes the "--" and the duplicate title
    title = full_title[full_title.rfind('-')+1:]

    return title

for i in xrange(min_page, max_page):
    print get_page_title(i)
Will somebody please help me out? Thanks in advance.

Jan 27 '07 #1
On Jan 27, 5:18 am, "Frank Potter" <could....@gmail.com> wrote:
[original post quoted here; snipped]
This pyparsing solution seems to extract what you were looking for,
but I don't know whether it will render as Chinese or not.

-- Paul

from pyparsing import makeHTMLTags, SkipTo
import urllib

titleStart, titleEnd = makeHTMLTags("title")
scanExpr = (titleStart + SkipTo("- -", include=True) +
            SkipTo(titleEnd).setResultsName("titleChars") + titleEnd)

def extractTitle(htmlSource):
    titleSource = scanExpr.searchString(htmlSource, maxMatches=1)[0]
    return titleSource.titleChars

for urlIndex in range(125926, 125936+1):
    url = "http://www.af.shejis.com/new_lw/html/%d.shtml" % urlIndex
    pg = urllib.urlopen(url)
    html = pg.read()
    pg.close()
    print url, ':', extractTitle(html)
Gives:

http://www.af.shejis.com/new_lw/html/125926.shtml : GSM±¾µØÍø×éÍø·½Ê½
http://www.af.shejis.com/new_lw/html/125927.shtml : GSM±¾µØÍø×éÍø·½Ê½³õ̽
http://www.af.shejis.com/new_lw/html/125928.shtml : GSMµÄÊý¾ÝÒµÎñ
http://www.af.shejis.com/new_lw/html/125929.shtml : GSMµÄÊý¾ÝÒµÎñºÍ³ÐÔØÄÜÁ¦
http://www.af.shejis.com/new_lw/html/125930.shtml : GSMµÄÍøÂçÑݽø-´ÓGSMµ½GPRSµ½3G£¨¸½Í¼£©
http://www.af.shejis.com/new_lw/html/125931.shtml : GSM¶ÌÏûÏ¢ÒµÎñÔÚË®Çé×Ô¶¯²â±¨ÏµÍ³ÖеÄÓ¦ÓÃ¬Ø
http://www.af.shejis.com/new_lw/html/125932.shtml : £Ç£Ó£Í½»»»ÏµÍ³µÄÍøçÓÅ»¯
http://www.af.shejis.com/new_lw/html/125933.shtml : GSMÇл»µô»°µÄ·ÖÎö¼°½â¾ö°ì·¨
http://www.af.shejis.com/new_lw/html/125934.shtml : GSMÊÖ»ú²¦½ÐÊл°Ä£¿é¾ÖÓû§¹ÊÕ쵀ÆÊÎö
http://www.af.shejis.com/new_lw/html/125935.shtml : GSMÊÖ»úµ½WCDMAÖÕ¶ËµÄÑݱä
http://www.af.shejis.com/new_lw/html/125936.shtml : GSMÊÖ»úµÄάÐÞ·½·¨
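
(The garbled titles above are GB2312 bytes being displayed as Latin-1;
decoding them first lets a UTF-8 terminal show the Chinese. A minimal
sketch reusing extractTitle, html, and url from the code above; the
'replace' error handling is an added assumption:)

raw_title = extractTitle(html)                 # byte string, GB2312-encoded
title = raw_title.decode('gb2312', 'replace')  # bytes -> unicode
print url, ':', title.encode('utf-8')          # re-encode for a UTF-8 terminal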

Jan 27 '07 #2

After looking at the pyparsing results, I think I see the problem with
your original code. You are selecting only the characters after the
rightmost "-" character, but you really want to select everything to
the right of "- -". In some of the titles, the encoded Chinese
includes a "-" character, so you are chopping off everything before
that.

Try changing your code to:
title = full_title.split("- -")[1]

I think then your original program will work.
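
(For illustration, with a made-up title of the same shape; the sample
string is hypothetical:)

full_title = "ABC-DEF - -ABC-DEF"
print full_title[full_title.rfind('-')+1:]   # prints "DEF" -- chops off too much
print full_title.split("- -")[1]             # prints "ABC-DEF" -- the full title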

-- Paul

Jan 27 '07 #3
Thank you, I tried again and figured it out.
It's something with Beautiful Soup. I worked with it a year ago, also
dealing with Chinese HTML pages, and no errors occurred then. I read the
old code and found the difference: convert the page to Unicode before
feeding it to Beautiful Soup, and then everything is OK.
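
(A minimal sketch of that fix, for illustration; the explicit 'gb2312'
codec, the 'replace' error handling, and the final UTF-8 re-encoding are
assumptions, not necessarily the poster's exact code:)

import urllib2
from BeautifulSoup import BeautifulSoup

url = "http://www.af.shejis.com/new_lw/html/125926.shtml"
headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
page = urllib2.urlopen(urllib2.Request(url, None, headers)).read()

# decode the raw GB2312 bytes to Unicode *before* parsing
soup = BeautifulSoup(page.decode('gb2312', 'replace'))
full_title = soup.html.head.title.string     # already unicode; no str() wrapper
print full_title.split(u"- -")[1].encode('utf-8')   # encode for a UTF-8 terminal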

On Jan 28, 3:26 am, "Paul McGuire" <p...@austin.rr.com> wrote:
[quote of the previous reply snipped]
Jan 28 '07 #4

This thread has been closed and replies have been disabled. Please start a new discussion.
