I'm working on writing a program that will synchronize one database with
another. For the source database, we can just use the python sybase API;
that's nice and normal.
For the target database, unfortunately, the only interface we have access
to is HTTP+HTML based. There's some javascript involved too, but
hopefully we won't have to interact with that.
So, I've got Basic AUTH going with http, but now I'm faced with the
following questions, due to the fact that I need to pull some lists out of
HTML, and then make some changes via POST or so, again over HTTP:
1) Would I be better off just regex'ing the html I'm getting back? (I
suppose this depends on the complexity of the html received, eh?)
2) Would I be better off feeding the HTML into an HTML parser, and then
traversing that datastructure (is that really how it works?)?
3) When I retrieve stuff over http, it's clear that the web server is
sending some kind of odd gibberish, which the python urllib2 API is
passing on to me. In a packet trace, it looks like:
Date: Wed, 10 Nov 2004 01:09:47 GMT^M
Server: Apache/1.3.29 (Unix) (Red-Hat/Linux) mod_perl/1.23^M
Keep-Alive: timeout=15, max=98^M
Connection: Keep-Alive^M
Transfer-Encoding: chunked^M
Content-Type: text/html^M
^M
ef1^M
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">^M
<html>^M
<head>^M
...and so on. It seems to me that "ef1" should not be there. Is that
true? What -is- that nonsense? It's not the same string every time, and
it doesn't show up in a web browser.
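For context, here's a minimal sketch of the Basic Auth + POST plumbing using only the stdlib. Modern urllib.request (urllib2's Python 3 descendant) is shown; the URL, credentials, and form fields are made-up placeholders.

```python
# A minimal sketch of Basic Auth + POST with the stdlib. The URL,
# credentials, and form fields here are illustrative placeholders.
import urllib.parse
import urllib.request

mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
mgr.add_password(None, "http://example.com/", "user", "secret")
opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))

# urlencode the form fields; a Request with a data payload becomes a POST
data = urllib.parse.urlencode({"action": "update", "id": "42"}).encode()
req = urllib.request.Request("http://example.com/app", data=data)
# body = opener.open(req).read()  # the actual network call, not run here
```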
Thanks!
> For the target database, unfortunately, the only interface we have access to is HTTP+HTML based. There's some javascript involved too, but hopefully we won't have to interact with that.
Argl. That sounds like a royal pain in somewhere...
> So, I've got Basic AUTH going with http, but now I'm faced with the following questions, due to the fact that I need to pull some lists out of HTML, and then make some changes via POST or so, again over HTTP:
> 1) Would I be better off just regex'ing the html I'm getting back? (I suppose this depends on the complexity of the html received, eh?)
> 2) Would I be better off feeding the HTML into an HTML parser, and then traversing that datastructure (is that really how it works?)?
I personally would certainly go that way - the best thing IMHO would be to
make a DOM tree out of the HTML, which you can then work on with XPath. 4Suite
might be good for that. While this seems a bit over-engineered at first,
using XPath allows for pretty strong queries against your DOM tree, so even
larger changes in the "interface" can be coped with. And writing an htmlparser-based
class isn't hard, either.
> 3) When I retrieve stuff over http, it's clear that the web server is sending some kind of odd gibberish, which the python urllib2 API is passing on to me. In a packet trace, it looks like:
> Date: Wed, 10 Nov 2004 01:09:47 GMT^M
> Server: Apache/1.3.29 (Unix) (Red-Hat/Linux) mod_perl/1.23^M
> Keep-Alive: timeout=15, max=98^M
> Connection: Keep-Alive^M
> Transfer-Encoding: chunked^M
> Content-Type: text/html^M
> ^M
> ef1^M
> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">^M
> <html>^M
> <head>^M
> ...and so on. It seems to me that "ef1" should not be there. Is that true? What -is- that nonsense? It's not the same string every time, and it doesn't show up in a web browser.
A web server serving HTTP doesn't care what it returns - remember that HTTP
can also be used to transfer binary data.
So the problem is not Apache, but whoever wrote that web application. Is
it CGI-based?
For the ignoring part: web browsers tend to be very relaxed about HTML
document format, otherwise a lot of the web would be "unsurfable". So I'm
not too astonished that they ignore that leading crap.
--
Regards,
Diez B. Roggisch
Forget about the last part - I just read Jp Calderone's reply after writing
mine - interesting. I've never heard of "chunked" encoding before.
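For anyone else puzzled by the "ef1": with Transfer-Encoding: chunked, the body arrives as a series of chunks, each prefixed by its length in hexadecimal followed by CRLF (0xef1 is 3825 bytes). That's why the string varies per response and why a browser never shows it - the HTTP layer decodes it before handing over the body (Python's httplib normally does the same underneath urllib2, so seeing raw chunk sizes is expected in a packet trace but not in API output). A minimal decoder sketch:

```python
# Minimal decoder for HTTP/1.1 chunked transfer encoding. Each chunk is
# "<size-in-hex>\r\n<size bytes of data>\r\n"; a zero-size chunk ends the body.
def dechunk(body: bytes) -> bytes:
    out = []
    pos = 0
    while True:
        crlf = body.index(b"\r\n", pos)
        # the size line may carry ";ext=..." chunk extensions; ignore them
        size = int(body[pos:crlf].split(b";")[0], 16)
        if size == 0:
            break  # last chunk; optional trailers follow
        start = crlf + 2
        out.append(body[start:start + size])
        pos = start + size + 2  # skip the chunk data and its trailing CRLF
    return b"".join(out)
```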
--
Regards,
Diez B. Roggisch
On Wed, 10 Nov 2004 12:26:04 +0100, Diez B. Roggisch wrote:
> So, I've got Basic AUTH going with http, but now I'm faced with the following questions, due to the fact that I need to pull some lists out of HTML, and then make some changes via POST or so, again over HTTP:
> 1) Would I be better off just regex'ing the html I'm getting back? (I suppose this depends on the complexity of the html received, eh?)
> 2) Would I be better off feeding the HTML into an HTML parser, and then traversing that datastructure (is that really how it works?)?
> I personally would certainly go that way - the best thing IMHO would be to make a DOM tree out of the HTML, which you can then work on with XPath. 4Suite might be good for that. While this seems a bit over-engineered at first, using XPath allows for pretty strong queries against your DOM tree, so even larger changes in the "interface" can be coped with. And writing an htmlparser-based class isn't hard, either.
This sounds interesting.
But if I use an XML parser to parse HTML instead of a dedicated HTML
parser, will I still get smart handling of unpaired tags? I'm not sure we
can count on getting 100% properly formed HTML...
> But if I use an XML parser to parse HTML instead of a dedicated HTML parser, will I still get smart handling of unpaired tags? I'm not sure we can count on getting 100% properly formed HTML...
There should be html2dom parsers - after all, extending htmlparser to
generate DOM shouldn't be too hard.
Googling turns up tidy - so you may want to feed your HTML through it
first: http://www.xml.com/pub/a/2004/09/08/pyxml.html
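A sketch of that tidy-then-DOM-then-XPath pipeline, using only the stdlib: 4Suite is the library named above, but `xml.etree.ElementTree` is used here as a stand-in since it supports a useful XPath subset. This assumes the HTML has already been cleaned into well-formed markup (e.g. by tidy); the table below is a made-up example.

```python
# Sketch only: ElementTree stands in for a full XPath engine like 4Suite.
# Assumes the HTML was first tidied into well-formed markup.
import xml.etree.ElementTree as ET

html = """<html><body>
<table id="hosts">
  <tr><td>alpha</td><td>10.0.0.1</td></tr>
  <tr><td>beta</td><td>10.0.0.2</td></tr>
</table>
</body></html>"""

root = ET.fromstring(html)
# XPath-style query: the first cell of every row in the "hosts" table.
# A query this targeted keeps working even if surrounding markup changes.
names = [td.text for td in root.findall(".//table[@id='hosts']/tr/td[1]")]
```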
--
Regards,
Diez B. Roggisch
In article <pa***************************@dcs.nac.uci.edu>,
Dan Stromberg <st******@dcs.nac.uci.edu> wrote:
> I'm working on writing a program that will synchronize one database with another. For the source database, we can just use the python sybase API; that's nice and normal.
[...]
> 1) Would I be better off just regex'ing the html I'm getting back? (I suppose this depends on the complexity of the html received, eh?)
> 2) Would I be better off feeding the HTML into an HTML parser, and then traversing that datastructure (is that really how it works?)?
I recommend you look at BeautifulSoup: http://www.crummy.com/software/BeautifulSoup/
It is very forgiving of the typical affronts HTML writers put into their
code.
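A rough sketch of what that looks like, shown here with the modern bs4 package (`pip install beautifulsoup4`); note the 2004-era BeautifulSoup module exposed an SGMLParser-style `feed()` API instead. The HTML below deliberately leaves tags unclosed to show the parser coping.

```python
# Hypothetical sketch with the modern bs4 package; the 2004-era module's
# API differed. The markup is deliberately sloppy (unclosed <td>, <table>).
from bs4 import BeautifulSoup

html = '<table><tr><td><a href="/edit?id=1">edit</a><td>alpha</table>'
soup = BeautifulSoup(html, "html.parser")
# pull every link's href out, regardless of how mangled the table markup is
links = [a["href"] for a in soup.find_all("a")]
```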
-M
--
Michael J. Fromberger | Lecturer, Dept. of Computer Science
http://www.dartmouth.edu/~sting/ | Dartmouth College, Hanover, NH, USA
On Thu, 11 Nov 2004 15:05:27 -0500, Michael J. Fromberger wrote:
> In article <pa***************************@dcs.nac.uci.edu>, Dan Stromberg <st******@dcs.nac.uci.edu> wrote:
> > I'm working on writing a program that will synchronize one database with another. For the source database, we can just use the python sybase API; that's nice and normal.
> > [...]
> > 1) Would I be better off just regex'ing the html I'm getting back? (I suppose this depends on the complexity of the html received, eh?)
> > 2) Would I be better off feeding the HTML into an HTML parser, and then traversing that datastructure (is that really how it works?)?
> I recommend you look at BeautifulSoup:
> http://www.crummy.com/software/BeautifulSoup/
> It is very forgiving of the typical affronts HTML writers put into their code.
BeautifulSoup looks great.
Regrettably, I'm getting:
Traceback (most recent call last):
File "./netreo.py", line 130, in ?
soup.feed(html)
File "/Dcs/staff/strombrg/netreo/lib/BeautifulSoup.py", line 308, in feed
SGMLParser.feed(self, text)
File "/Web/lib/python2.3/sgmllib.py", line 94, in feed
self.rawdata = self.rawdata + data
TypeError: cannot concatenate 'str' and 'list' objects
...upon feeding some html into the "feed" method.
This is with python 2.3.4, on an RHEL 3 system.
Am I perhaps using a version of python that is too recent?
In article <pa***************************@dcs.nac.uci.edu>,
Dan Stromberg <st******@dcs.nac.uci.edu> wrote:
> On Thu, 11 Nov 2004 15:05:27 -0500, Michael J. Fromberger wrote:
> > In article <pa***************************@dcs.nac.uci.edu>, Dan Stromberg <st******@dcs.nac.uci.edu> wrote: [...]
> > > 1) Would I be better off just regex'ing the html I'm getting back? (I suppose this depends on the complexity of the html received, eh?)
> > > 2) Would I be better off feeding the HTML into an HTML parser, and then traversing that datastructure (is that really how it works?)?
> > I recommend you look at BeautifulSoup:
> > http://www.crummy.com/software/BeautifulSoup/
> > It is very forgiving of the typical affronts HTML writers put into their code.
> BeautifulSoup looks great.
> Regrettably, I'm getting:
> Traceback (most recent call last):
>   File "./netreo.py", line 130, in ?
>     soup.feed(html)
>   File "/Dcs/staff/strombrg/netreo/lib/BeautifulSoup.py", line 308, in feed
>     SGMLParser.feed(self, text)
>   File "/Web/lib/python2.3/sgmllib.py", line 94, in feed
>     self.rawdata = self.rawdata + data
> TypeError: cannot concatenate 'str' and 'list' objects
> ...upon feeding some html into the "feed" method.
> This is with python 2.3.4, on an RHEL 3 system.
> Am I perhaps using a version of python that is too recent?
Are you feeding a string to your parser, or a list? It wants a string;
it looks like maybe you're giving it a list.
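In other words, the `cannot concatenate 'str' and 'list'` in the traceback suggests something like `readlines()` output (a list of strings) was passed where `read()`'s single string was expected. Joining fixes it; the lines below are illustrative:

```python
# feed() expects one string; readlines()-style output is a list of strings.
lines = ["<html><head>\n", "<title>demo</title>\n", "</head></html>\n"]
html = "".join(lines)  # collapse the list into the single string feed() wants
```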
-M
--
Michael J. Fromberger | Lecturer, Dept. of Computer Science
http://www.dartmouth.edu/~sting/ | Dartmouth College, Hanover, NH, USA