XmlHttpRequest not loading latest version of xml

The code below works, but it doesn't load the latest version of the XML after the file has just been modified, unless I close and reopen the browser.

Here's the scenario:

I have an xml doc called results.xml. It can contain lots of data.

<Results>
<Data>Stuff</Data>
<Data>More stuff</Data>
</Results>

My ASP application can add to the data during the year. At the end of
the year I want to reset the xml.

I run a script that copies results.xml to results_[yyyymmdd].xml as a backup and saves a new results.xml containing only a backup date attribute, like this:

<Results BUDate="20051124"></Results>

So far this all works fine, but here's where the problem occurs. I've set the backup date attribute on the off chance I want to restore that latest file.

I have a checkbox that will toggle between enabled and disabled
depending on whether the results.xml file has an attribute "BUDate".
If I try to restore the backup immediately, during the same session it
returns the results.xml file as it was before it was backed up. In
other words, it has all the data!

In order to get it to work correctly, I have to log off, close the
browser (IE) and re-open the browser.

Is there any way to make it retrieve that latest xml version without
closing the browser?

Many thanks,

King Wilder
Here's my JavaScript code:

var response = null;

// global flag
var isIE = false;

// global request and XML document objects
var req;

function loadXMLDoc(url, GoAsync) {
    // branch for native XMLHttpRequest object
    if (window.XMLHttpRequest) {
        req = new XMLHttpRequest();
        req.onreadystatechange = processReqChange;
        req.open("GET", url, GoAsync);
        req.send(null);
    // branch for IE/Windows ActiveX version
    } else if (window.ActiveXObject) {
        isIE = true;
        req = new ActiveXObject("Microsoft.XMLHTTP");
        if (req) {
            req.onreadystatechange = processReqChange;
            req.open("GET", url, GoAsync);
            req.send();
        }
    }
}

// handle onreadystatechange event of req object
function processReqChange() {
    // only if req shows "loaded"
    if (req.readyState == 4) {
        // only if "OK"
        if (req.status == 200) {
            response = req.responseXML;
        } else {
            alert("There was a problem retrieving the XML data:\n" +
                  req.statusText);
        }
    }
}
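
For reference, the workaround suggested repeatedly later in this thread is to append a throwaway query-string value so that every request looks unique to the browser cache and to any proxy in between. A minimal sketch built on the function above; the "ts" parameter name is arbitrary and assumed to be ignored by the server:

// Cache-busting variant: append a unique query-string value so the
// browser cache and intermediate proxies treat each request as a new
// resource. Relies on loadXMLDoc() defined above.
function loadXMLDocNoCache(url, GoAsync) {
    var separator = (url.indexOf("?") == -1) ? "?" : "&";
    loadXMLDoc(url + separator + "ts=" + new Date().getTime(), GoAsync);
}

// usage: loadXMLDocNoCache("results.xml", true);

The trade-off, discussed further down the thread, is that the response is never served from a cache, so every call costs a round trip to the server.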

Nov 26 '05
On Wed, 30 Nov 2005 16:14:27 -0500, in comp.lang.javascript, Randy Webb wrote:
Matt Silberstein said the following on 11/30/2005 2:07 PM:
On Wed, 30 Nov 2005 18:44:41 GMT, in comp.lang.javascript, Jim Ley wrote:

On Wed, 30 Nov 2005 18:16:35 GMT, Matt Silberstein wrote:
One issue that concerns me is that it is the server that knows if the
data is old, not the client. Making client side determinations is
non-optimal.

but if the client doesn't make a request, the server isn't involved...

And then we are talking about a very simple piece of code.


Yes, and we keep telling you what and how to do it. Add a unique
querystring to the URL. Then it will be retrieved from the server or not
at all.


You missed my point. Jim said that the client was not making a
request. If the client makes no request, then it can't add a
querystring, can it? I must have misunderstood his point somehow.
--
Matt Silberstein

Do something today about the Darfur Genocide

http://www.beawitness.org
http://www.darfurgenocide.org
http://www.savedarfur.org

"Darfur: A Genocide We can Stop"
Nov 30 '05 #51
On Wed, 30 Nov 2005 21:25:16 GMT, Matt Silberstein wrote:

You missed my point. Jim said that the client was not making a
request. If the client makes no request, then it can't add a
querystring, can it?


yes, the point is that if it's caching the page, then even though the client makes a request, the server will never see it because the resource is cached. So unless you can ensure the request makes it to the server, it makes no odds that the server knows the page needs updating.

Jim.
Nov 30 '05 #52
Jim Ley wrote:
On Wed, 30 Nov 2005 21:25:16 GMT, Matt Silberstein:
You missed my point. Jim said that the client was not making a
request. If the client makes no request, then it can't add a
querystring, can it?


yes, the point is if it's caching the page, then even though you the
client makes a request, because the resource is cached the server will
never see it. So unless you can ensure the request makes it to the
server it makes no odds that the server knows the page needs updating.


The argument is flawed, because if it is in the cache the resource has
been served before. If it was served with the correct headers before, no
conforming HTTP implementation along the request/response chain, including
the client, is allowed to use the cached version. Of course, if a proper
caching technique was not considered before, transition efforts have to
be made now.
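
For concreteness, one way to send such headers for this particular case would be to serve the XML through an ASP page rather than as a static file. The following is only a sketch under that assumption; the page name getresults.asp, the JScript dialect, and the file path are illustrative, not the OP's actual setup:

<%@ Language="JScript" %>
<%
// Sketch: return results.xml with cache-defeating headers (classic ASP).
// All names here are assumptions for illustration only.
Response.ContentType = "text/xml";
Response.CacheControl = "no-cache";        // for HTTP/1.1 caches
Response.AddHeader("Pragma", "no-cache");  // for HTTP/1.0 caches
Response.Expires = -1;                     // already expired

var fso = Server.CreateObject("Scripting.FileSystemObject");
var ts  = fso.OpenTextFile(Server.MapPath("results.xml"), 1); // 1 = ForReading
Response.Write(ts.ReadAll());
ts.Close();
%>

The client-side code would then request getresults.asp instead of results.xml.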
PointedEars
Dec 1 '05 #53
Thomas 'PointedEars' Lahn wrote:
If it was served with the correct headers
before, no conforming HTTP implementation along the request/response
chain, including the client, is allowed to use the cached version.


As a web author who wants to make sure my site works as intended, I would
need to ensure that:
* There are no incorrectly-configured proxies between my server and the
user
* The user has their browser cache settings configured correctly
* The user does not have some personal proxy running on their machine
which might mess with the headers

There's no way I could guarantee that for all my site visitors.

Creating a unique URL avoids the problems created by the above. As a web
author, I can have much greater confidence that my site will work correctly
for all users. And that is the most important thing to me.

As was said before - you are correct in theory, as is often the case. But in
the practical world, a more brute-force solution is required to actually
achieve the desired results consistently.

--
Matt Kruse
http://www.JavascriptToolbox.com
http://www.AjaxToolbox.com
Dec 1 '05 #54
Matt Kruse wrote:
Thomas 'PointedEars' Lahn wrote:
If it was served with the correct headers
before, no conforming HTTP implementation along the request/response
chain, including the client, is allowed to use the cached version.
As a web author who wants to make sure my site works as intended, I would
need to ensure that:
* There are no incorrectly-configured proxies between my server and
the user


There is nothing that could be configured, it is a requirement of the
protocol since version 1.0. A proxy that does not follow the protocol
specification is completely broken and any ISP is _required_ to use
working proxies (if they provide one) in order to maintain normal
business operation.
* The user has their browser cache settings configured correctly
* The user does not have some personal proxy running on their machine
which might mess with the headers

There's no way I could guarantee that for all my site visitors.
You do not have to. The ISPs and browser vendors have to. People are
paid at ISPs and browser vendors to do that job (some who are not even
paid do their job better than others), and so far I have had no problems
with correct headers. However, I did have problems with some browsers if
those headers were missing; the simple reason being that you cannot do
(do not use the cache) what you do not know (expiration or date of last
modification).
Creating a unique URL avoids the problems created by the above.
No, it does not. If ISPs use broken implementations, and
if users use broken implementations, anything can happen.
As a web author, I can have much greater confidence that my site will work
correctly for all users. And that is the most important thing to me.
But it will not work correctly. Understand that with both meanings.
[...] But in the practical world, a more brute-force solution is
required to actually achieve the desired results consistently.


It is not here. Full stop.
PointedEars
Dec 1 '05 #55
On Wed, 30 Nov 2005 21:54:29 GMT, in comp.lang.javascript, Jim Ley wrote:
On Wed, 30 Nov 2005 21:25:16 GMT, Matt Silberstein wrote:

You missed my point. Jim said that the client was not making a
request. If the client makes no request, then it can't add a
querystring, can it?


yes, the point is if it's caching the page, then even though you the
client makes a request, because the resource is cached the server will
never see it. So unless you can ensure the request makes it to the
server it makes no odds that the server knows the page needs updating.


This differs from what you said before. I don't know enough to know
what actually happens, and that seems in dispute here. From my
experience the preferable situation would be for the server, since it
knows about the data, to decide on freshness. If that is not possible,
then the client needs to go into paranoid mode, assume that cached
data is bad, and try to get fresh data. If I understand the claims and
my reading correctly, the server can ensure that the data is not cached,
but not all servers are written that way, and servers are not always
under the control of the person writing the client. The catch is that
the paranoid client-side method means potentially duplicated responses,
which would then fill the caches along the way. If I understand things
correctly there is no optimal solution (again assuming you don't
control the server and it does not set expiration properly).

--
Matt Silberstein
Dec 1 '05 #56
On Thu, 01 Dec 2005 02:25:13 +0100, Thomas 'PointedEars' Lahn wrote:
Matt Kruse wrote:
There is nothing that could be configured, it is a requirement of the
protocol since version 1.0. A proxy that does not follow the protocol
specification is completely broken and any ISP is _required_ to use
working proxies (if they provide one) in order to maintain normal
business operation.


required??? This is the industry that quite happily had proxies with
hard-coded user agent strings, and proxies that, if you make the
resource cacheable, ignore Vary headers.
There's no way I could guarantee that for all my site visitors.


You do not have to. The ISPs and browser vendors have to.


No they don't! That's the same as the validity argument, it's not up
to user agents to render invalid documents, because it's up to the
author to produce valid content. However we know in the real world
that authors do not author valid content, so user agents have to
render invalid stuff so their users still get an appropriate service.

This is exactly the same, we know there are broken UA's and proxies,
so to give the users what they want we make allowances.

Jim.
Dec 1 '05 #57
Thomas 'PointedEars' Lahn said the following on 11/30/2005 4:14 PM:
Randy Webb wrote:

Matt Silberstein said the following on 11/30/2005 2:07 PM:
Jim Ley in [...] wrote:

On Wed, 30 Nov 2005 18:16:35 GMT, Matt Silberstein wrote:

>One issue that concerns me is that it is the server that knows if the
>data is old, not the client. Making client side determinations is
>non-optimal.

but if the client doesn't make a request, the server isn't involved...

And then we are talking about a very simple piece of code.


Yes, and we keep telling you what and how to do it. [...]

_We_ tell no such thing:


OK, I will clarify my statement then:

"The people that know how to solve the problem keep telling you how to
do it, those that do not know how to solve the problem keep pointing you
to server headers."

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Dec 1 '05 #58
Randy Webb wrote:
"The people that know how to solve the problem keep telling you how to
do it, those that do not know how to solve the problem keep pointing you
to server headers."


"The people that claim to know how to solve the problem ..., those that
are claimed by the former not to know how to solve the problem ..."
PointedEars
Dec 1 '05 #59
Thomas 'PointedEars' Lahn said the following on 11/30/2005 8:25 PM:
Matt Kruse wrote:

Thomas 'PointedEars' Lahn wrote:
If it was served with the correct headers
before, no conforming HTTP implementation along the request/response
chain, including the client, is allowed to use the cached version.
As a web author who wants to make sure my site works as intended, I would
need to ensure that:
* There are no incorrectly-configured proxies between my server and
the user

There is nothing that could be configured, it is a requirement of the
protocol since version 1.0. A proxy that does not follow the protocol
specification is completely broken and any ISP is _required_ to use
working proxies (if they provide one) in order to maintain normal
business operation.


Again, you are missing the fact that not everything follows the Specs to
the letter. Nor will they ever.

<snip>
Creating a unique URL avoids the problems created by the above.

No, it does not.


Yes it does. You just don't realize it yet. Get some real world
experience and it becomes obvious to you. Until then, stick to the
specs, researching those specs, and arguing Theory and leave the Reality
aspect to those that understand it.
If ISPs use broken implementations, and if users use broken implementations,
anything can happen.


And it is up to the programmer to deal with that implementation.

Theory: All ISPs should follow the standards.
Reality: They don't, and it is up to the programmer to deal with it with
whatever means works best.

--
Randy
Dec 1 '05 #60
Thomas 'PointedEars' Lahn said the following on 12/1/2005 7:04 AM:
Randy Webb wrote:

"The people that know how to solve the problem keep telling you how to
do it, those that do not know how to solve the problem keep pointing you
to server headers."

"The people that claim to know how to solve the problem ..., those that
are claimed by the former not to know how to solve the problem ..."


"The people that claim to know how to solve the problem, solve that
problem daily, and will continue to solve the problem that way because
it works. On the other hand, you have those that have the head buried in
the sand to Reality that keep arguing to set headers that may or may not
work that think they know how to solve the problem but don't really even
understand the implications of solving the problem"

Leave the Reality to those that understand it Thomas.
--
Randy
Dec 1 '05 #61
I know this has been a long-running debate in this group, and I think I
understand the claims and theory on both sides. But to check my
understanding: as I see the question, if I have control of both sides
(and have perfect systems in between), then having an expiration string
would be the best answer. The owner of the data would know when it
expires, and the caches would give me optimal performance. If the server
does not expire the information and/or the systems in-between are not
set up right, then the cached information could be incorrect. In that
case I would have to use a unique string to force freshening of the
data. This, however, could lead to excessive load on the various caching
systems, from the proxies to the client itself.

Assuming the above is substantially correct I have three questions.

First, does anyone have a reference to some tests on proxy
configuration? I saw people say that they can/are misconfigured, but I
would like to see something that gives me an idea of how prevalent
that is.

Second, as a follow up to that, does anyone have any ideas of what is
the realistic length of time things tend to stay in various caches
(proxy, client, etc.)? I know the answer could be "forever", I am just
wondering about expectation. Are we talking about minutes, hours,
days?

Third, couldn't the client check the Expires header itself and then
decide on asking a second time with a unique string? Yes, I understand
that is probably worthwhile for any/most situations; I am just trying to
understand this fully for myself.
Thanks to all.
--
Matt Silberstein
Dec 2 '05 #62
Matt Silberstein wrote:
I know this has been a long running debate in this group and I think I
understand the claims and theory on both sides. But to check as I see
the question if I have control of both sides (and have perfect systems
in between) then having an expiration string would be the best answer.
No. If that were so, you should definitely use all HTTP/1.1 cache-control
mechanisms available.
The owner of the data would know when it expires
Would they? I say, that depends on the resource.
and the caches would give me optimal performance.
If you knew the approximate future expiration date and it is an HTTP/1.0
cache, yes. If you set the Expires header to the current or a past date
for an HTTP/1.0 cache, no.
If the server does not expire the information and/or the systems
in-between are not set up right, then the cached information could
be incorrect. In that case I would have to use a unique string to
force freshening of the data.
You may have to use that, but with broken systems there is no guarantee
that this will work either. Arguing that intermediary implementations
could be broken and cache control therefore will not work, is completely
pointless.
This, however, could lead to excessive load on the various caching
systems, from the proxies to the client itself.
Correct.
Assuming the above is substantially correct I have three questions.

First, does anyone have a reference to some tests on proxy
configuration? I saw people say that they can/are misconfigured, but I
would like to see something that gives me an idea of how prevalent
that is.
I also wonder if anybody claiming this boldly can prove it, and more
important, can prove that this particular version and/or misconfiguration
is really used now by well-known ISPs. I seriously doubt it.
Second, as a follow up to that, does anyone have any ideas of what is
the realistic length of time things tend to stay in various caches
(proxy, client, etc.)? I know the answer could be "forever", I am just
wondering about expectation. Are we talking about minutes, hours,
days?
Minutes and hours would be hardly reasonable; remember that a cache should
provide for transparent faster data access. It is probably days, maybe
some weeks, given that a proxy cache is also of finite size (thus the
oldest, biggest, or most seldom-accessed resources are inevitably removed
from it some day) and a proxy server usually has to provide service for a
larger number of users.
Third, couldn't the client check the expires itself and then decide on
asking a second time with a unique string?


It could. But you cannot use client-side JS scripting for this. Due to
security reasons including maintaining user privacy, the cache cannot be
accessed directly through the AOM.
PointedEars
Dec 3 '05 #63
On Sat, 03 Dec 2005 02:02:55 +0100, in comp.lang.javascript, Thomas 'PointedEars' Lahn wrote:
Matt Silberstein wrote:
I know this has been a long running debate in this group and I think I
understand the claims and theory on both sides. But to check as I see
the question if I have control of both sides (and have perfect systems
in between) then having an expiration string would be the best answer.


No. If that were so, you should definitely use all HTTP/1.1 cache-control
mechanisms available.


That is what I meant.
The owner of the data would know when it expires


Would they? I say, that depends on the resource.


If anyone would, the owner of the data would know if it is old. If
not, then who would? Your solution certainly assumes it.
and the caches would give me optimal performance.


If you knew the approximate future expiration date and it is an HTTP/1.0
cache, yes. If you set the Expires header to the current or a past date
for an HTTP/1.0 cache, no.

If the server does not expire the information and/or the systems
in-between are not set up right, then the cached information could
be incorrect. In that case I would have to use a unique string to
force freshening of the data.


You may have to use that, but with broken systems there is no guarantee
that this will work either. Arguing that intermediary implementations
could be broken and cache control therefore will not work, is completely
pointless.


Not really since in the real world we can figure out in what ways
things are usually broken. It may be that anything can happen in which
case we are sunk. Or it may be that they tend to be broken in ways we
can deal with.
This, however, could lead to excessive load on the various caching
systems, from the proxies to the client itself.


Correct.
Assuming the above is substantially correct I have three questions.

First, does anyone have a reference to some tests on proxy
configuration? I saw people say that they can/are misconfigured, but I
would like to see something that gives me an idea of how prevalent
that is.


I also wonder if anybody claiming this boldly can prove it, and more
important, can prove that this particular version and/or misconfiguration
is really used now by well-known ISPs. I seriously doubt it.
Second, as a follow up to that, does anyone have any ideas of what is
the realistic length of time things tend to stay in various caches
(proxy, client, etc.)? I know the answer could be "forever", I am just
wondering about expectation. Are we talking about minutes, hours,
days?


Minutes and hours would be hardly reasonable; remember that a cache should
provide for transparent faster data access. It is probably days, maybe
some weeks, given that a proxy cache is also of finite size (thus the
oldest, biggest or most seldom accessed ressources are inevitably removed
from it some day) and a proxy server usually has to provide service for a
larger number of users.


Again, there should be some real-world tests out there but I don't
know where to look.
Third, couldn't the client check the expires itself and then decide on
asking a second time with a unique string?


It could. But you cannot use client-side JS scripting for this. Due to
security reasons including maintaining user privacy, the cache cannot be
accessed directly through the AOM.


I am not suggesting looking at the cache but at the expiration in the
header. See what the server sends you and make a guess on whether or
not it really was new. For critical data this is not necessarily a bad
solution.

--
Matt Silberstein
Dec 3 '05 #64
On Sat, 03 Dec 2005 02:02:55 +0100, Thomas 'PointedEars' Lahn wrote:
I also wonder if anybody claiming this boldly can prove it, and more
important, can prove that this particular version and/or misconfiguration
is really used now by well-known ISPs. I seriously doubt it.


There is just such a broken cache in the EMEA part of one of the
largest computer systems company in the world.

There are a lot more caches in corporates than there are ISPs.
Third, couldn't the client check the expires itself and then decide on
asking a second time with a unique string?


It could. But you cannot use client-side JS scripting for this. Due to
security reasons including maintaining user privacy, the cache cannot be
accessed directly through the AOM.


Client-side JS can absolutely be used for this in the instance of
XMLHttpRequest requests, which the OP was doing.

Jim.
Dec 3 '05 #65
Jim Ley wrote:
On Sat, 03 Dec 2005 02:02:55 +0100, Thomas 'PointedEars' Lahn
I also wonder if anybody claiming this boldly can prove it, and more
important, can prove that this particular version and/or misconfiguration
is really used now by well-known ISPs. I seriously doubt it.
There is just such a broken cache in the EMEA part of one of the
largest computer systems company in the world.


That is a statement, not a proof.

Which company do you mean exactly? In case you mean IBM ("EMEA" still
has that ring to me): I worked for IBM Germany until about two months
ago and never had problems with cached resources accessed from within
the intranet. Not with IE 6 and not with Firefox 1.0.x; both on Win2k
SP-4.
There are a lot more caches in corporates than there are ISPs.


That is another statement.
Third, couldn't the client check the expires itself and then decide on
asking a second time with a unique string?

It could. But you cannot use client-side JS scripting for this. Due to
security reasons including maintaining user privacy, the cache cannot be
accessed directly through the AOM.


Client-side JS can absolutely be used for this in the instance of
xmlhttp request requests which the OP was doing.


Correct. Fortunately, you preserved the context of my statement.
I referred to accessing the (local) cache directly. Of course, the
getResponseHeader() method of the XMLHttpRequest object used allows one
to retrieve values of cache-control headers like Expires and to consider
them for an additional request (if the header was missing).
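
A rough illustration of that idea, reusing loadXMLDoc() and req from the original post; the staleness heuristic is only an assumption about how one might decide to re-request, not an established recipe:

// Sketch: inspect caching headers on a completed response and, if they
// are missing or already in the past, assume the copy may be stale and
// re-request with a unique query string.
function looksStale(request) {
    var cacheControl = request.getResponseHeader("Cache-Control");
    var expires = request.getResponseHeader("Expires");
    if (cacheControl && cacheControl.indexOf("no-cache") != -1) {
        return false; // server already forbids caching
    }
    if (!expires) {
        return true;  // no expiry information at all
    }
    return Date.parse(expires) <= new Date().getTime();
}

// inside the readyState == 4 / status == 200 branch one could then do:
//   if (looksStale(req)) {
//       loadXMLDoc("results.xml?ts=" + new Date().getTime(), true);
//   }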

BTW: My website uses cache control methods, including
<URL:http://www.pointedears.de/scripts/js-version-info> and
<URL:http://www.pointedears.de/scripts/test/mime-types/>
I update both presumably in intervals of less than two weeks.
You are welcome to test.
PointedEars
Dec 3 '05 #66
On Sat, 03 Dec 2005 19:34:40 +0100, Thomas 'PointedEars' Lahn wrote:
Jim Ley wrote:
On Sat, 03 Dec 2005 02:02:55 +0100, Thomas 'PointedEars' Lahn
I also wonder if anybody claiming this boldly can prove it, and more
important, can prove that this particular version and/or misconfiguration
is really used now by well-known ISPs. I seriously doubt it.
There is just such a broken cache in the EMEA part of one of the
largest computer systems company in the world.


That is a statement, not a proof.


well, I can't prove it to you other than by violating the security of
the company, can I?
Which company do you mean exactly? In case you mean IBM


Nope, not IBM, but a similar company.
There are a lot more caches in corporates than there are ISPs.


That is another statement.


Do you dispute it?

Jim.
Dec 3 '05 #67
Jim Ley wrote:
On Sat, 03 Dec 2005 19:34:40 +0100, Thomas 'PointedEars' Lahn
That is a statement, not a proof.


well, I can't prove it to you other than by violating the security of
the company, can I?


Well, you could describe the situation as detailed as you are permitted to.
Which company do you mean exactly? In case you mean IBM


Nope, not IBM, but a similar company.


Name it then. And I would appreciate it if you marked omissions
in quoted material.
There are a lot more caches in corporates than there are ISPs.

That is another statement.


Do you dispute it?


Not yet. But it will be even harder to prove.
PointedEars

P.S.: I repaired the broken Subject header now.
Dec 3 '05 #68
Thomas 'PointedEars' Lahn wrote:
Jim Ley wrote:
There is just such a broken cache in the EMEA part of one of the
largest computer systems company in the world.

That is a statement, not a proof.


I suspect that no amount of 'proof' would be adequate to satisfy you.

I think both sides of this argument have been laid out pretty well, and
anyone wanting to explore the topic could read through the thread and come
to their own conclusion.

--
Matt Kruse
Dec 3 '05 #69
On Sat, 3 Dec 2005 13:17:14 -0600, in comp.lang.javascript, "Matt Kruse" wrote:
Thomas 'PointedEars' Lahn wrote:
Jim Ley wrote:
There is just such a broken cache in the EMEA part of one of the
largest computer systems company in the world.

That is a statement, not a proof.


I suspect that no amount of 'proof' would be adequate to satisfy you.

I think both sides of this argument have been laid out pretty well, and
anyone wanting to explore the topic could read through the thread and come
to their own conclusion.


The only thing missing is that I wish I had some kind of numbers to
work on. I have one side asserting there is a major problem with
misconfigured systems, another telling me it is not so, but I lack any
information to make a determination. And I know I can't do a
sufficient test on my own. Again, does anyone know of any reference to
this being tested?
--
Matt Silberstein
Dec 3 '05 #70
Matt Silberstein wrote:
The only thing missing is I wish that I had some kind of numbers to
work on.
Stats which surely will never be available.
I have one side asserting there is major problem with
misconfigured systems


I don't believe that was the point being made.

From my point of view, the point was that even if there is just one user
behind a broken caching proxy, changing the url will make it work for
everyone including that one user. You might not know for certain if you will
have a user behind a broken proxy, but if you want to play it safe you
change the url to cater to the lowest common denominator.

A specific case isn't required. In theory, such a situation could exist (and
has been _known_ to exist at various times). Since the risk of such a
situation is > 0, you can completely hedge against the risk by using the
unique url strategy.

--
Matt Kruse
Dec 3 '05 #71
On Sat, 03 Dec 2005 19:59:38 +0100, Thomas 'PointedEars' Lahn wrote:
Jim Ley wrote:
On Sat, 03 Dec 2005 19:34:40 +0100, Thomas 'PointedEars' Lahn
That is a statement, not a proof.


well, I can't prove it to you other than by violating the security of
the company, can I?


Well, you could describe the situation as detailed as you are permitted to.


The situation has been described already: resources are cached despite
being asserted to be non-cacheable.
Nope, not IBM, but a similar company.


Name it then.


Unfortunately there is such a thing as client confidentiality...

Jim.
Dec 3 '05 #72
Matt Silberstein said the following on 12/3/2005 3:45 PM:
On Sat, 3 Dec 2005 13:17:14 -0600, in comp.lang.javascript, "Matt Kruse" wrote:

Thomas 'PointedEars' Lahn wrote:
Jim Ley wrote:

There is just such a broken cache in the EMEA part of one of the
largest computer systems company in the world.

That is a statement, not a proof.


I suspect that no amount of 'proof' would be adequate to satisfy you.

I think both sides of this argument have been laid out pretty well, and
anyone wanting to explore the topic could read through the thread and come
to their own conclusion.

The only thing missing is I wish that I had some kind of numbers to
work on. I have one side asserting there is major problem with
misconfigured systems, another telling me it is not so, but I lack any
information to make a determination. And I know I can't do a
sufficient test on my one. Again, does any know of any reference to
this being tested?


One hundred trillion trillion examples of something working won't prove
a theory, but one example of it not working will disprove it. The AOL
proxies routinely disregard cache headers. Now, you have to decide
whether you want to take that into account and work around it, as has
been explained many times, by appending to the URL. Or you say to heck
with ~40 million potential customers and rely on server headers.

--
Randy
Dec 4 '05 #73
On Sat, 03 Dec 2005 19:04:17 -0500, in comp.lang.javascript, Randy Webb wrote:
Matt Silberstein said the following on 12/3/2005 3:45 PM:
On Sat, 3 Dec 2005 13:17:14 -0600, in comp.lang.javascript, "Matt Kruse" wrote:

Thomas 'PointedEars' Lahn wrote:

Jim Ley wrote:

>There is just such a broken cache in the EMEA part of one of the
>largest computer systems company in the world.

That is a statement, not a proof.

I suspect that no amount of 'proof' would be adequate to satisfy you.

I think both sides of this argument have been laid out pretty well, and
anyone wanting to explore the topic could read through the thread and come
to their own conclusion.

The only thing missing is I wish that I had some kind of numbers to
work on. I have one side asserting there is major problem with
misconfigured systems, another telling me it is not so, but I lack any
information to make a determination. And I know I can't do a
sufficient test on my one. Again, does any know of any reference to
this being tested?


One hundred trillion trillion examples of something working won't prove
a theory but one example of it not working will dis-prove it. The AOL
Proxies routinely disregard cache headers. Now, you have to decide
whether you want to take that into account, and work around it as has
been explained many times by appending to the URL. Or, you say to heck
with ~40 million potential customers and you rely on server headers.


I don't understand. You seem to object to my asking for numbers, then
give me some numbers to prove your point. I agree that if AOL will
"frequently" get it wrong then that is a sufficient argument. I am
quite willing to abandon one theoretically existent user, but not 40M.
While I believe you, is there any published information on this?
--
Matt Silberstein
Dec 4 '05 #74
Matt Silberstein said the following on 12/3/2005 10:21 PM:
On Sat, 03 Dec 2005 19:04:17 -0500, in comp.lang.javascript, Randy Webb wrote:

Matt Silberstein said the following on 12/3/2005 3:45 PM:
On Sat, 3 Dec 2005 13:17:14 -0600, in comp.lang.javascript, "Matt Kruse" wrote:

Thomas 'PointedEars' Lahn wrote:
>Jim Ley wrote:
>
>
>>There is just such a broken cache in the EMEA part of one of the
>>largest computer systems company in the world.
>
>That is a statement, not a proof.

I suspect that no amount of 'proof' would be adequate to satisfy you.

I think both sides of this argument have been laid out pretty well, and
anyone wanting to explore the topic could read through the thread and come
to their own conclusion.
The only thing missing is I wish that I had some kind of numbers to
work on. I have one side asserting there is major problem with
misconfigured systems, another telling me it is not so, but I lack any
information to make a determination. And I know I can't do a
sufficient test on my one. Again, does any know of any reference to
this being tested?
One hundred trillion trillion examples of something working won't prove
a theory but one example of it not working will dis-prove it. The AOL
Proxies routinely disregard cache headers. Now, you have to decide
whether you want to take that into account, and work around it as has
been explained many times by appending to the URL. Or, you say to heck
with ~40 million potential customers and you rely on server headers.

I don't understand. You seem to object to my asking for numbers, then
give me some numbers to prove your point.


I had no objection to you asking for numbers.
I agree that if AOL will "frequently" get it wrong then that is a
sufficient argument. I am quite willing to abandon one theoretically
existent user, but not 40M. While I believe you is there any published information on this?


None that I am aware of published on the web. I do know, from
personal experience, that it can take up to 3 full days for a new page
to propagate through the AOL proxies.

--
Randy
Dec 4 '05 #75
On Sun, 04 Dec 2005 01:31:47 -0500, in comp.lang.javascript, Randy Webb wrote:
Matt Silberstein said the following on 12/3/2005 10:21 PM:
On Sat, 03 Dec 2005 19:04:17 -0500, in comp.lang.javascript, Randy Webb wrote:

Matt Silberstein said the following on 12/3/2005 3:45 PM:

On Sat, 3 Dec 2005 13:17:14 -0600, in comp.lang.javascript, "Matt Kruse" wrote:

>Thomas 'PointedEars' Lahn wrote:
>
>
>>Jim Ley wrote:
>>
>>
>>>There is just such a broken cache in the EMEA part of one of the
>>>largest computer systems company in the world.
>>
>>That is a statement, not a proof.
>
>I suspect that no amount of 'proof' would be adequate to satisfy you.
>
>I think both sides of this argument have been laid out pretty well, and
>anyone wanting to explore the topic could read through the thread and come
>to their own conclusion.
The only thing missing is I wish that I had some kind of numbers to
work on. I have one side asserting there is major problem with
misconfigured systems, another telling me it is not so, but I lack any
information to make a determination. And I know I can't do a
sufficient test on my one. Again, does any know of any reference to
this being tested?

One hundred trillion trillion examples of something working won't prove
a theory but one example of it not working will dis-prove it. The AOL
Proxies routinely disregard cache headers. Now, you have to decide
whether you want to take that into account, and work around it as has
been explained many times by appending to the URL. Or, you say to heck
with ~40 million potential customers and you rely on server headers.

I don't understand. You seem to object to my asking for numbers, then
give me some numbers to prove your point.


I had no objection to you asking for numbers.


True, so let me re-phrase: you seem to think that the numbers were not
an important decision point. That is, it was more important to ensure
that the one person was OK than to just program for the 10^X. I think
that is a judgment call, not an absolute. Paranoia is the appropriate
approach, but it will only take you so far.
I agree that if AOL will "frequently" get it wrong then that is a
sufficient argument. I am quite willing to abandon one theoretically
existent user, but not 40M.

While I believe you is there any published information on this?


Not that I am aware of that is published on the web. I do know, from
personal experience, that it can take up to 3 full days for a new page
to propogate through the AOL proxies.


Even with appropriate header? Wow!

(As a real aside, but telling about misconfiguration, I once had some
Internet email show up 6 months after sending it. It had some tech
advice that was almost disastrous since it was so displaced in time.)

--
Matt Silberstein
Dec 4 '05 #76
Matt Silberstein said the following on 12/4/2005 11:58 AM:
On Sun, 04 Dec 2005 01:31:47 -0500, in comp.lang.javascript, Randy Webb wrote:

<snip>
I had no objection to you asking for numbers.

True so let me re-phrase: you seem to think that the numbers were not
an important decision point. That is, it was more important to ensure
that the one person was ok than just program for the 10^X. I think
that is a judgment call, not an absolute. Paranoia is the appropriate
approach, but it will only take you so far.


To me, no, the numbers are not that important, because most statistics on
the web are generally useless simply because of the nature of the web.
There is no way to come up with reliable numbers on which ISPs honor
which headers without testing every one of them, almost daily, to cover
any changes that are made at any given time.

My numbers were more a point that while you may not be able to prove
something, it is quite easy to disprove something.
I agree that if AOL will "frequently" get it wrong then that is a
sufficient argument. I am quite willing to abandon one theoretically
existent user, but not 40M.
While I believe you is there any published information on this?


Not that I am aware of that is published on the web. I do know, from
personal experience, that it can take up to 3 full days for a new page
to propogate through the AOL proxies.

Even with appropriate header? Wow!


Yes, even with appropriate headers. The AOL combo didn't even honor a
Control-F5 Refresh to get it from the server after clearing the cache.
It got it from an AOL proxy that was outdated.
(As a real aside, but telling about misconfiguration, I once had some
Internet email show up 6 months after sending it. It had some tech
advice that was almost disastrous since it was so displaced in time.)


That sounds like AOL Tech support :)

--
Randy
Dec 4 '05 #77
