Fetching a clean copy of a changing web page

I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/" This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?

John Nagle
Jul 16 '07 #1
John Nagle wrote:
I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/" This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?
Getting them to fix the obvious bug on their end would of course be the best option.

Apart from that, the only thing you could try is to apply a SAX parser
to the input stream immediately, so that if the XML is invalid because of
the way they serve it, you find out as soon as possible. But that will only
shave off a few moments.
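
For illustration, a minimal sketch (untested against the real feed; the
element name "entry" and the idea of handing the urllib response straight
to xml.sax are my assumptions):

import urllib
import xml.sax

class PhishHandler(xml.sax.ContentHandler):
    """Counts entries as they stream past; adapt to the real schema."""
    def __init__(self):
        xml.sax.ContentHandler.__init__(self)
        self.entries = 0

    def startElement(self, name, attrs):
        if name == 'entry':        # element name is a guess
            self.entries += 1

u = urllib.urlopen("http://data.phishtank.com/data/online-valid/")
try:
    # xml.sax.parse() accepts any file-like object, so the HTTP response
    # is fed to the parser as it arrives; a SAXParseException is raised
    # as soon as the document turns out to be malformed.
    xml.sax.parse(u, PhishHandler())
except xml.sax.SAXParseException, e:
    print "Malformed feed:", e
finally:
    u.close()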

Diez
Jul 16 '07 #2
On Jul 16, 1:00 am, John Nagle <na...@animats.com> wrote:
I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/" This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?

John Nagle
Sounds like that's the host's problem--they should be using atomic
writes, which is usually done by renaming the new file on top of the
old one. How "bogus" are the bad files? If it's just incomplete,
then since it's XML, it'll be missing the "</output>" and you should
get a parse error if you're using a suitable strict parser. If it's
mixed old data and new data, but still manages to be well-formed XML,
then yes, you'll probably have to read it twice.
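
For what it's worth, the rename idiom on the server side is roughly this
(a generic sketch, not PhishTank's actual publishing code; the file names
are made up):

import os

def publish(data, path="online-valid.xml"):
    # "online-valid.xml" and the ".tmp" suffix are just example names.
    # Write the new copy under a temporary name, then rename it over the
    # old one.  On POSIX filesystems rename() is atomic, so a client that
    # is mid-download keeps reading the old file and never sees a
    # half-written or mixed copy.
    tmp = path + ".tmp"
    f = open(tmp, "wb")
    try:
        f.write(data)
    finally:
        f.close()
    os.rename(tmp, path)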

-Miles

Jul 16 '07 #3
On 7/16/07, John Nagle <na***@animats.com> wrote:
I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/" This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?
If you have:
1. A ballpark estimate of the size of the XML
2. Some footer or "last tags" in the XML

maybe you can use the above to check the XML and catch the "bogus" ones!
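
A rough sketch of such a check (the 5 MB lower bound and the </output>
closing tag are guesses about the feed, adjust both):

import urllib

def fetch_checked(url,
                  min_size=5*1024*1024,      # guess: the feed is ~10 MB right now
                  last_tag="</output>"):     # guess at the document's closing tag
    """Download the feed and apply two cheap sanity checks."""
    u = urllib.urlopen(url)
    try:
        data = u.read()
    finally:
        u.close()
    if len(data) < min_size:
        raise ValueError("suspiciously small download: %d bytes" % len(data))
    if last_tag not in data[-200:]:
        raise ValueError("closing tag %r missing from end of file" % last_tag)
    return data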

cheers,

--
----
Amit Khemka
website: www.onyomo.com
wap-site: www.owap.in
Home Page: www.cse.iitd.ernet.in/~csd00377

Endless the world's turn, endless the sun's Spinning, Endless the quest;
I turn again, back to my own beginning, And here, find rest.
Jul 16 '07 #4
Miles wrote:
On Jul 16, 1:00 am, John Nagle <na...@animats.com> wrote:
> I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/" This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?

John Nagle


Sounds like that's the host's problem--they should be using atomic
writes, which is usually done by renaming the new file on top of the
old one. How "bogus" are the bad files? If it's just incomplete,
then since it's XML, it'll be missing the "</output>" and you should
get a parse error if you're using a suitable strict parser. If it's
mixed old data and new data, but still manages to be well-formed XML,
then yes, you'll probably have to read it twice.

-Miles
Yes, they're updating it non-atomically.

I'm now reading it twice and comparing, which works.
Actually, it's read up to 5 times, until the same contents
appear twice in a row. Two tries usually work, but if the
server is updating, it may require more.

Ugly, and doubles the load on the server, but necessary to
get a consistent copy of the data.
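
Roughly something like this (a sketch of the approach, not the exact code):

import urllib

def fetch_stable(url, max_tries=5):
    """Re-read the URL until two consecutive downloads are identical."""
    previous = None
    for attempt in range(max_tries):        # up to 5 reads, as described above
        u = urllib.urlopen(url)
        try:
            data = u.read()
        finally:
            u.close()
        if previous is not None and data == previous:
            return data                      # same contents twice in a row
        previous = data
    raise IOError("no stable copy after %d tries" % max_tries)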

John Nagle
Jul 16 '07 #5
Miles wrote:
On Jul 16, 1:00 am, John Nagle <na...@animats.com> wrote:
> I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/" This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?

John Nagle


Sounds like that's the host's problem--they should be using atomic
writes, which is usually done by renaming the new file on top of the
old one. How "bogus" are the bad files? If it's just incomplete,
then since it's XML, it'll be missing the "</output>" and you should
get a parse error if you're using a suitable strict parser. If it's
mixed old data and new data, but still manages to be well-formed XML,
then yes, you'll probably have to read it twice.
The files don't change much from update to update; typically they
contain about 10,000 entries, and about 5-10 change every hour. So
the odds of getting a seemingly valid XML file with incorrect data
are reasonably good.

John Nagle
Jul 17 '07 #6
John Nagle wrote:
Miles wrote:
> On Jul 16, 1:00 am, John Nagle <na...@animats.com> wrote:
>> I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/" This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?

John Nagle

Sounds like that's the host's problem--they should be using atomic
writes, which is usually done by renaming the new file on top of the
old one. How "bogus" are the bad files? If it's just incomplete,
then since it's XML, it'll be missing the "</output>" and you should
get a parse error if you're using a suitable strict parser. If it's
mixed old data and new data, but still manages to be well-formed XML,
then yes, you'll probably have to read it twice.

The files don't change much from update to update; typically they
contain about 10,000 entries, and about 5-10 change every hour. So
the odds of getting a seemingly valid XML file with incorrect data
are reasonably good.
I'm still left wondering what the hell kind of server process will start
serving one copy of a file and complete the request from another. Oh, well.

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden

Jul 17 '07 #7
On Tue, 2007-07-17 at 00:47 +0000, John Nagle wrote:
Miles wrote:
On Jul 16, 1:00 am, John Nagle <na...@animats.com> wrote:
I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/" This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?

John Nagle

Sounds like that's the host's problem--they should be using atomic
writes, which is usually done by renaming the new file on top of the
old one. How "bogus" are the bad files? If it's just incomplete,
then since it's XML, it'll be missing the "</output>" and you should
get a parse error if you're using a suitable strict parser. If it's
mixed old data and new data, but still manages to be well-formed XML,
then yes, you'll probably have to read it twice.

The files don't change much from update to update; typically they
contain about 10,000 entries, and about 5-10 change every hour. So
the odds of getting a seemingly valid XML file with incorrect data
are reasonably good.
Does the server return a reliable last-modified timestamp? If yes, you
can do something like this:

import urllib

prev_last_mod = None
while True:
    u = urllib.urlopen(theUrl)
    if prev_last_mod == u.headers['last-modified']:
        break
    prev_last_mod = u.headers['last-modified']
    contents = u.read()
    u.close()

That way, you only have to re-read the file if it actually changed
according to the time stamp, rather than having to re-read in any case
just to check whether it changed.
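
If the server also honors conditional requests (I haven't checked whether
PhishTank's does), you could take this a step further with urllib2 and an
If-Modified-Since header, so that an unchanged file isn't transferred at all:

import urllib2

# Only worthwhile if the server supports conditional GET.
req = urllib2.Request(theUrl)
if prev_last_mod is not None:
    req.add_header('If-Modified-Since', prev_last_mod)
try:
    u = urllib2.urlopen(req)
    contents = u.read()              # changed since last fetch (or first fetch)
    prev_last_mod = u.headers['last-modified']
    u.close()
except urllib2.HTTPError, e:
    if e.code != 304:                # 304 means "not modified": keep old contents
        raise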

HTH,

--
Carsten Haese
http://informixdb.sourceforge.net
Jul 17 '07 #8
