Hi...
I've got a short test app that I'm playing with. The goal is to get data off
the page in question.
Basically, I should be able to get a list of "tr" nodes and then
iterate/parse them. I'm missing something: I think I can get a single
node, but I can't figure out how to display the contents of the node, nor
how to get the list of "tr" nodes.
My test code is:
--------------------------------
#!/usr/bin/python
# test python script
import re
import libxml2dom
import urllib
import urllib2
import sys, string
from mechanize import Browser
import mechanize
#import tidy
import os.path
import cookielib
from libxml2dom import Node
from libxml2dom import NodeList

########################
#
# Parse pricegrabber.com
########################
# datafiles
tfile = open("price.dat", 'w+')
efile = open("price_err.dat", 'w+')

urlopen = urllib2.urlopen
##cj = cookielib.LWPCookieJar()
Request = urllib2.Request
br = Browser()

user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values1 = {'name': 'Michael Foord',
           'location': 'Northampton',
           'language': 'Python'}
headers = {'User-Agent': user_agent}
url = "http://www.pricegrabber.com/rating_summary.php/page=1"

#=========================================
if __name__ == "__main__":
    # main app
    txdata = None

    #----------------------------
    # get the kentucky test pages
    #br.set_cookiejar(cj)
    br.set_handle_redirect(True)
    br.set_handle_referer(True)
    br.set_handle_robots(False)
    br.addheaders = [('User-Agent', 'Firefox')]
    br.open(url)
    #cj.save(COOKIEFILE)  # resave cookies

    res = br.response()  # this is a copy of the response
    s = res.read()

    # s contains HTML, not XML, text
    d = libxml2dom.parseString(s, html=1)
    print "d =", d

    # get the input/text dialogs
    #tn1 = "//div[@id='main_content']/form[1]/input[position()=1]/@name"
    t1 = "/html/body/div[@id='pgSiteContainer']/div[@id='pgPageContent']/table[2]/tbody"
    tr = "/html/body/div[@id='pgSiteContainer']/div[@id='pgPageContent']/table[2]/tbody/tr[4]"

    tr_ = d.xpath(tr)
    print "len =", tr_[1].nodeValue
    print "fin"
-----------------------------------------------
My issue appears to be related to the last "tbody", or tbody/tr[4].
If I leave off the tbody, I can display data, as tr_ is an array with
data.
With the "tbody", it appears that the tr_ array is not defined, or it has no
data. However, I can use the DOM tool in Firefox to observe that
the "tbody" is there.
So, what am I missing?
Thoughts/comments are most welcome.
Also, I'm willing to send a small amount via PayPal!
-bruce
BeautifulSoup is a pretty nice Python module for screen scraping (not
necessarily well-formed) web pages.
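BeautifulSoup itself isn't shown in this thread. As a rough standard-library stand-in for the same idea (pulling "tr" rows out of imperfect HTML), here is a minimal sketch using `html.parser`; the sample HTML is invented for illustration:

```python
# Minimal sketch of the row-scraping idea using only the standard
# library (html.parser) -- BeautifulSoup offers the same and much more.
from html.parser import HTMLParser

class RowCollector(HTMLParser):
    """Collect the text content of every <tr> element."""
    def __init__(self):
        super().__init__()
        self.rows = []       # finished rows, one string per <tr>
        self.in_row = False
        self.buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.in_row = True
            self.buf = []

    def handle_endtag(self, tag):
        if tag == "tr" and self.in_row:
            self.rows.append(" ".join(self.buf))
            self.in_row = False

    def handle_data(self, data):
        if self.in_row and data.strip():
            self.buf.append(data.strip())

# Invented sample markup standing in for the real page:
sample = ("<table><tr><td>Acme</td><td>4.5</td></tr>"
          "<tr><td>Widgets</td><td>3.9</td></tr></table>")
p = RowCollector()
p.feed(sample)
print(p.rows)  # -> ['Acme 4.5', 'Widgets 3.9']
```

Unlike a DOM/XPath approach, this tolerates missing or implied container tags, since it never builds a tree at all.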
On Fri, 13 Jun 2008 11:10:09 -0700, bruce wrote:
> [bruce's original message quoted in full; snipped]
On 13 Jun, 20:10, "bruce" <bedoug...@earthlink.net> wrote:
> url = "http://www.pricegrabber.com/rating_summary.php/page=1"
[...]
> tr = "/html/body/div[@id='pgSiteContainer']/div[@id='pgPageContent']/table[2]/tbody/tr[4]"
> tr_ = d.xpath(tr)
[...]
> my issue appears to be related to the last "tbody", or tbody/tr[4]...
> if i leave off the tbody, i can display data, as the tr_ is an array with
> data...
Yes, I can confirm this.
> with the "tbody" it appears that the tr_ array is not defined, or it has no
> data... however, i can use the DOM tool with firefox to observe the fact
> that the "tbody" is there...
Yes, but the DOM tool in Firefox probably inserts virtual nodes for
its own purposes. Remember that it has to do a lot of other stuff, like
implement CSS rendering and DOM event models.
You can confirm that there really is no tbody by printing the result
of this:
d.xpath("/html/body/div[@id='pgSiteContainer']/div[@id='pgPageContent']/table[2]")[0].toString()
This should fetch the second table in a single-element list and then
obviously give you the only element of that list. You'll see that the
raw HTML doesn't have any tbody tags at all.
Paul
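Paul's point can be checked without the real page: the raw HTML source often omits `<tbody>`, and a plain parser (unlike a browser's DOM) will not invent one, so an XPath step through `/tbody/` matches nothing. A minimal sketch with the stdlib `html.parser` (the sample markup is invented here, standing in for the pricegrabber page):

```python
# Shows that a plain HTML parse sees no <tbody> when the source markup
# never contained one -- only browsers synthesize it in their DOM.
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Record every start tag the parser actually encounters."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

# Invented markup: a table with rows but no explicit <tbody>.
raw = "<html><body><table><tr><td>row data</td></tr></table></body></html>"
c = TagCollector()
c.feed(raw)

print("tbody" in c.tags)  # -> False: the source never contained it
# So an XPath like .../table[2]/tbody/tr[4] finds nothing against the
# parsed source; dropping the tbody step (.../table[2]/tr[4]) matches.
```

A browser's DOM inspector shows the tree *after* the browser has repaired it, which is why Firefox displays a tbody that the fetched bytes never had.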
Dan Stromberg wrote:
> BeautifulSoup is a pretty nice python module for screen scraping (not
> necessarily well formed) web pages.
> [rest of quoted thread snipped]
FYI: Mechanize includes all of BeautifulSoup's methods and adds additional
functionality (like forms handling).
Larry
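mechanize's form-handling convenience (`select_form()`, filling fields, `submit()`) is not reproduced here, but what it boils down to can be sketched with the standard library alone. The field names and URL below are hypothetical:

```python
# Rough stdlib sketch of what a form submission amounts to: urlencode
# the fields and POST them. (Hypothetical endpoint and field names;
# mechanize wraps this plus response/redirect handling for you.)
import urllib.parse
import urllib.request

fields = {"name": "Michael Foord", "location": "Northampton"}
data = urllib.parse.urlencode(fields).encode("ascii")

req = urllib.request.Request(
    "http://example.com/form",          # hypothetical endpoint
    data=data,                          # POST body
    headers={"User-Agent": "Firefox"},  # same header trick as in the script
)

print(req.get_method())  # -> POST (a Request with a body defaults to POST)
# urllib.request.urlopen(req) would actually send it.
```

The real convenience mechanize adds is parsing the form out of the fetched page so you never hand-assemble the field names yourself.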