Bytes | Software Development & Data Engineering Community

html parsing? Or just simple regex'ing?


I'm working on writing a program that will synchronize one database with
another. For the source database, we can just use the python sybase API;
that's nice and normal.

For the target database, unfortunately, the only interface we have access
to is HTTP+HTML based. There's some javascript involved too, but
hopefully we won't have to interact with that.

So, I've got Basic AUTH going with http, but now I'm faced with the
following questions, due to the fact that I need to pull some lists out of
HTML, and then make some changes via POST or so, again over HTTP:

1) Would I be better off just regex'ing the html I'm getting back? (I
suppose this depends on the complexity of the html received, eh?)

2) Would I be better off feeding the HTML into an HTML parser, and then
traversing that datastructure (is that really how it works?)?

3) When I retrieve stuff over http, it's clear that the web server is
sending some kind of odd gibberish, which the python urllib2 API is
passing on to me. In a packet trace, it looks like:

Date: Wed, 10 Nov 2004 01:09:47 GMT^M
Server: Apache/1.3.29 (Unix) (Red-Hat/Linux) mod_perl/1.23^M
Keep-Alive: timeout=15, max=98^M
Connection: Keep-Alive^M
Transfer-Encoding: chunked^M
Content-Type: text/html^M
^M
ef1^M
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">^M
<html>^M
<head>^M

...and so on. It seems to me that "ef1" should not be there. Is that
true? What -is- that nonsense? It's not the same string every time, and
it doesn't show up in a web browser.
Thanks!

Jul 18 '05 #1
> For the target database, unfortunately, the only interface we have access
> to is HTTP+HTML based. There's some javascript involved too, but
> hopefully we won't have to interact with that.

Argl. That sounds like a royal pain in somewhere...

> So, I've got Basic AUTH going with http, but now I'm faced with the
> following questions, due to the fact that I need to pull some lists out of
> HTML, and then make some changes via POST or so, again over HTTP:

> 1) Would I be better off just regex'ing the html I'm getting back? (I
> suppose this depends on the complexity of the html received, eh?)
>
> 2) Would I be better off feeding the HTML into an HTML parser, and then
> traversing that datastructure (is that really how it works?)?

I personally would certainly go that way - the best thing IMHO would be to
make a DOM tree out of the HTML, which you can then work on with XPath.
4Suite might be good for that. While this seems a bit overengineered at
first, using XPath allows for pretty strong queries against your DOM tree,
so even larger changes in the "interface" can be coped with. And writing an
HTMLParser-based class isn't hard, either.
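The tree-plus-query idea can be sketched with just the standard library. This is a hypothetical example - the table markup and its `id` are invented, and while 4Suite offers full XPath, Python's `xml.etree.ElementTree` (used here) supports only a small subset and needs well-formed markup:

```python
# Sketch of the DOM-plus-XPath idea using only the standard library.
# ElementTree supports a limited XPath subset and requires well-formed
# markup; the sample document below is made up.
import xml.etree.ElementTree as ET

html = """<html><body>
<table id="hosts">
  <tr><td>alpha</td><td>10.0.0.1</td></tr>
  <tr><td>beta</td><td>10.0.0.2</td></tr>
</table>
</body></html>"""

root = ET.fromstring(html)
# Pull every first-column cell out of the table, wherever it sits in the
# tree - the query survives cosmetic changes to the surrounding layout.
names = [row[0].text for row in root.findall(".//table[@id='hosts']/tr")]
print(names)  # ['alpha', 'beta']
```

The point of the query-based approach is exactly what Diez describes: if the page's layout shifts around the table, the XPath-style query still finds it.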

> 3) When I retrieve stuff over http, it's clear that the web server is
> sending some kind of odd gibberish, which the python urllib2 API is
> passing on to me. In a packet trace, it looks like:
>
> Date: Wed, 10 Nov 2004 01:09:47 GMT^M
> Server: Apache/1.3.29 (Unix) (Red-Hat/Linux) mod_perl/1.23^M
> Keep-Alive: timeout=15, max=98^M
> Connection: Keep-Alive^M
> Transfer-Encoding: chunked^M
> Content-Type: text/html^M
> ^M
> ef1^M
> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">^M
> <html>^M
> <head>^M
>
> ...and so on. It seems to me that "ef1" should not be there. Is that
> true? What -is- that nonsense? It's not the same string every time, and
> it doesn't show up in a web browser.


A webserver serving HTTP doesn't care what it returns - remember that HTTP
can also be used to transfer binary data.

So the problem is not Apache, but whoever wrote that web application. Is
it CGI based?

As for the ignoring part: web browsers tend to be very relaxed about the
format of HTML documents, otherwise a lot of the web would be "unsurfable".
So I'm not too astonished that they ignore that leading crap.

--
Regards,

Diez B. Roggisch
Jul 18 '05 #2
Forget about the last part - I just read Jp Calderone's reply after writing
mine - interesting. I've never heard of "chunked" encoding before.
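For the curious: in chunked transfer-coding, each chunk of the body is prefixed with its size in hexadecimal, so the `ef1` in the trace is simply a chunk length (0xef1 = 3825 bytes), and it varies per response because the chunk sizes do. A rough decoder sketch, assuming the whole raw body is already in memory:

```python
def decode_chunked(body):
    """Decode an HTTP/1.1 chunked message body (bytes in, bytes out).

    Each chunk is '<hex-size>\r\n<data>\r\n'; a size of 0 ends the body.
    """
    out = []
    pos = 0
    while True:
        eol = body.index(b"\r\n", pos)
        # The chunk-size line may carry ';extension' parameters - ignore them.
        size = int(body[pos:eol].split(b";")[0], 16)
        if size == 0:
            break
        start = eol + 2
        out.append(body[start:start + size])
        pos = start + size + 2  # skip the CRLF that trails the chunk data
    return b"".join(out)

print(decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"))  # b'Wikipedia'
```

Modern HTTP client libraries (e.g. Python's `http.client`, which `urllib` uses) dechunk responses for you, so a hand-rolled decoder like this is only needed when reading the raw stream yourself.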
--
Regards,

Diez B. Roggisch
Jul 18 '05 #3
On Wed, 10 Nov 2004 12:26:04 +0100, Diez B. Roggisch wrote:
>> So, I've got Basic AUTH going with http, but now I'm faced with the
>> following questions, due to the fact that I need to pull some lists out of
>> HTML, and then make some changes via POST or so, again over HTTP:
>>
>> 1) Would I be better off just regex'ing the html I'm getting back? (I
>> suppose this depends on the complexity of the html received, eh?)
>>
>> 2) Would I be better off feeding the HTML into an HTML parser, and then
>> traversing that datastructure (is that really how it works?)?
>
> I personally would certainly go that way - the best thing IMHO would be to
> make a dom-tree out of the html you then can work on with xpath. 4suite
> might be good for that. While this seems a bit overengineered at first,
> using xpath allows for pretty strong queries against your dom-tree so even
> larger changes in the "interface" can be coped with. And writing htmlparser
> based class isn't hard, either.


This sounds interesting.

But if I use an XML parser to parse HTML instead of a dedicated HTML
parser, will I still get smart handling of unpaired tags? I'm not sure we
can count on getting 100% properly formed HTML...

Jul 18 '05 #4
> But if I use an XML parser to parse HTML instead of a dedicated HTML
> parser, will I still get smart handling of unpaired tags? I'm not sure we
> can count on getting 100% properly formed HTML...


There should be HTML-to-DOM parsers - after all, extending HTMLParser to
generate a DOM shouldn't be too hard.

Googling turns up tidy - so you may want to feed your HTML through it
first:

http://www.xml.com/pub/a/2004/09/08/pyxml.html
--
Regards,

Diez B. Roggisch
Jul 18 '05 #5
In article <pa***************************@dcs.nac.uci.edu>,
Dan Stromberg <st******@dcs.nac.uci.edu> wrote:
> I'm working on writing a program that will synchronize one database with
> another. For the source database, we can just use the python sybase API;
> that's nice and normal.
>
> [...]
>
> 1) Would I be better off just regex'ing the html I'm getting back? (I
> suppose this depends on the complexity of the html received, eh?)
>
> 2) Would I be better off feeding the HTML into an HTML parser, and then
> traversing that datastructure (is that really how it works?)?


I recommend you look at BeautifulSoup:

http://www.crummy.com/software/BeautifulSoup/

It is very forgiving of the typical affronts HTML writers put into their
code.
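(BeautifulSoup's API has changed considerably since this thread was written, so rather than guess at the version in use here, this sketch shows the underlying parse-then-traverse idea with only the standard library. Note how the event-driven parser shrugs off the unclosed `<td>` and `<tr>` tags that would choke an XML parser; the markup sample is invented:)

```python
# Event-driven parsing tolerates sloppy markup: the sample below never
# closes its <td> or <tr> tags, yet every cell is still recovered.
from html.parser import HTMLParser

class CellCollector(HTMLParser):
    """Collect the text of every <td>, even when the tags are unclosed."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        # A new start tag implicitly ends any open <td>.
        self.in_td = (tag == "td")

    def handle_endtag(self, tag):
        self.in_td = False

    def handle_data(self, data):
        if self.in_td and data.strip():
            self.cells.append(data.strip())

p = CellCollector()
p.feed("<table><tr><td>alpha<td>10.0.0.1<tr><td>beta<td>10.0.0.2</table>")
print(p.cells)  # ['alpha', '10.0.0.1', 'beta', '10.0.0.2']
```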

-M

--
Michael J. Fromberger | Lecturer, Dept. of Computer Science
http://www.dartmouth.edu/~sting/ | Dartmouth College, Hanover, NH, USA
Jul 18 '05 #6
On Thu, 11 Nov 2004 15:05:27 -0500, Michael J. Fromberger wrote:
> In article <pa***************************@dcs.nac.uci.edu>,
> Dan Stromberg <st******@dcs.nac.uci.edu> wrote:
>> I'm working on writing a program that will synchronize one database with
>> another. For the source database, we can just use the python sybase API;
>> that's nice and normal.
>>
>> [...]
>>
>> 1) Would I be better off just regex'ing the html I'm getting back? (I
>> suppose this depends on the complexity of the html received, eh?)
>>
>> 2) Would I be better off feeding the HTML into an HTML parser, and then
>> traversing that datastructure (is that really how it works?)?
>
> I recommend you look at BeautifulSoup:
>
> http://www.crummy.com/software/BeautifulSoup/
>
> It is very forgiving of the typical affronts HTML writers put into their
> code.


BeautifulSoup looks great.

Regrettably, I'm getting:

Traceback (most recent call last):
  File "./netreo.py", line 130, in ?
    soup.feed(html)
  File "/Dcs/staff/strombrg/netreo/lib/BeautifulSoup.py", line 308, in feed
    SGMLParser.feed(self, text)
  File "/Web/lib/python2.3/sgmllib.py", line 94, in feed
    self.rawdata = self.rawdata + data
TypeError: cannot concatenate 'str' and 'list' objects

...upon feeding some html into the "feed" method.

This is with python 2.3.4, on an RHEL 3 system.

Am I perhaps using a version of python that is too recent?
Jul 18 '05 #7
In article <pa***************************@dcs.nac.uci.edu>,
Dan Stromberg <st******@dcs.nac.uci.edu> wrote:
> On Thu, 11 Nov 2004 15:05:27 -0500, Michael J. Fromberger wrote:
>> In article <pa***************************@dcs.nac.uci.edu>,
>> Dan Stromberg <st******@dcs.nac.uci.edu> wrote:
>>> [...]
>>>
>>> 1) Would I be better off just regex'ing the html I'm getting back? (I
>>> suppose this depends on the complexity of the html received, eh?)
>>>
>>> 2) Would I be better off feeding the HTML into an HTML parser, and then
>>> traversing that datastructure (is that really how it works?)?
>>
>> I recommend you look at BeautifulSoup:
>>
>> http://www.crummy.com/software/BeautifulSoup/
>>
>> It is very forgiving of the typical affronts HTML writers put into their
>> code.
>
> BeautifulSoup looks great.
>
> Regrettably, I'm getting:
>
> Traceback (most recent call last):
>   File "./netreo.py", line 130, in ?
>     soup.feed(html)
>   File "/Dcs/staff/strombrg/netreo/lib/BeautifulSoup.py", line 308, in feed
>     SGMLParser.feed(self, text)
>   File "/Web/lib/python2.3/sgmllib.py", line 94, in feed
>     self.rawdata = self.rawdata + data
> TypeError: cannot concatenate 'str' and 'list' objects
>
> ...upon feeding some html into the "feed" method.
>
> This is with python 2.3.4, on an RHEL 3 system.
>
> Am I perhaps using a version of python that is too recent?


Are you feeding a string to your parser, or a list? It wants a string;
it looks like maybe you're giving it a list.
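A common way to end up with a list here is reading the page with `readlines()`; joining it back into a single string before calling `feed()` avoids the error (a hypothetical sketch - the sample lines are invented):

```python
# readlines() returns a list of lines, but SGMLParser.feed() wants one string:
lines = ["<p>one</p>\n", "<p>two</p>\n"]   # what readlines() would give you
html = "".join(lines)                      # what feed() actually expects
print(repr(html))  # '<p>one</p>\n<p>two</p>\n'
# (Or just use read(), which returns the whole body as a single string.)
```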

-M

--
Michael J. Fromberger | Lecturer, Dept. of Computer Science
http://www.dartmouth.edu/~sting/ | Dartmouth College, Hanover, NH, USA
Jul 18 '05 #8
