
XmlHttpRequest not loading latest version of xml

This works, but it doesn't load the latest version of the xml if it was
just modified without closing and reopening the browser.

Here's the scenario:

I have an xml doc called results.xml. It can contain lots of data.

<Results>
<Data>Stuff</Data>
<Data>More stuff</Data>
</Results>

My ASP application can add to the data during the year. At the end of
the year I want to reset the xml.

I run a script that can make a copy of the file from results.xml to
results_[yyyymmdd].xml as a backup and save a new results.xml file as
the following with a backup date attribute:

<Results BUDate="20051124"></Results>

So far this all works fine, but here's where the problem occurs. I've
set a Backup date attribute on the off chance I want to restore that
latest file.

I have a checkbox that will toggle between enabled and disabled
depending on whether the results.xml file has an attribute "BUDate".
If I try to restore the backup immediately, during the same session it
returns the results.xml file as it was before it was backed up. In
other words, it has all the data!

In order to get it to work correctly, I have to log off, close the
browser (IE) and re-open the browser.

Is there any way to make it retrieve that latest xml version without
closing the browser?

Many thanks,

King Wilder
Here's my javascript code:

var response = null;

// global flag
var isIE = false;

// global request and XML document objects
var req;

function loadXMLDoc(url, GoAsync) {

    // branch for native XMLHttpRequest object
    if (window.XMLHttpRequest) {
        req = new XMLHttpRequest();
        req.onreadystatechange = processReqChange;
        req.open("GET", url, GoAsync);
        req.send(null);
    // branch for IE/Windows ActiveX version
    } else if (window.ActiveXObject) {
        isIE = true;
        req = new ActiveXObject("Microsoft.XMLHTTP");
        if (req) {
            req.onreadystatechange = processReqChange;
            req.open("GET", url, GoAsync);
            req.send();
        }
    }
}

// handle onreadystatechange event of req object
function processReqChange() {
    // only if req shows "loaded"
    if (req.readyState == 4) {
        // only if "OK"
        if (req.status == 200) {
            response = req.responseXML;
        } else {
            alert("There was a problem retrieving the XML data:\n" +
                  req.statusText);
        }
    }
}

Nov 26 '05 #1


kwilder wrote:
This works, but it doesn't load the latest version of the xml if it
was just modified without closing and reopening the browser.


GET requests can be cached by the browser, just as they could by typing the
url into the browser.
Attach a unique string to the url with each request so the browser doesn't
cache it.
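
For example, something like this (a minimal sketch; the parameter name
"nocache" is an arbitrary choice, the server simply ignores it):

var url = "http://[hostname]/xml/results.xml?nocache=" + new Date().getTime();

Any value that is different on every request (a timestamp, a counter)
will do.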

You may want to consider using a wrapper around XMLHttpRequest to make it
easier to deal with and to automate handling of issues like this. For
example, my library at http://www.AjaxToolbox.com/ will automatically add a
unique ID to GET requests.

--
Matt Kruse
http://www.JavascriptToolbox.com
http://www.AjaxToolbox.com
Nov 26 '05 #2

Matt Kruse wrote:
kwilder wrote:
This works, but it doesn't load the latest version of the xml if it
was just modified without closing and reopening the browser.
GET requests can be cached by the browser, just as they could by typing
the url into the browser.


It is not the request that is cached but the resource retrieved, hence the
_response_ previously sent. Which is why GET and POST request make no
difference here.
Attach a unique string to the url with each request so the browser doesn't
cache it.


No, that Dirty Hack will only fill the user's cache with obsolete
data over time. <URL:http://www.mnot.net/cache_docs/>
PointedEars
Nov 26 '05 #3

Thomas 'PointedEars' Lahn wrote:
It is not the request that is cached but the resource retrieved,
hence the _response_ previously sent. Which is why GET and POST
request make no difference here.


Semantics.

GET and POST differ because servers typically expire POST requests
immediately, so the user doesn't need to worry about anything.
Attach a unique string to the url with each request so the browser
doesn't cache it.

No, that Dirty Hack will only fill the user's cache with obsolete
data over time. <URL:http://www.mnot.net/cache_docs/>


It's not a dirty hack - it's often the best option. Users can't always
control server headers, and often they would have no idea how to do it.
Creating unique URLs won't "fill the user's cache" any more than they allow
for, and no more than browsing around the web.

--
Matt Kruse
http://www.JavascriptToolbox.com
http://www.AjaxToolbox.com
Nov 26 '05 #4

Matt Kruse wrote:
Thomas 'PointedEars' Lahn wrote:
It is not the request that is cached but the resource retrieved,
hence the _response_ previously sent. Which is why GET and POST
request make no difference here.
Semantics.


Certainly not. What may be cached locally here is the server's response
to the client's request and the URI assigned with that request so that
the response can be identified without submitting the request again. It
is definitely not the request itself that is cached here across sessions
(as that would pose a security risk).
GET and POST differ because servers typically expire POST requests
immediately,
First of all, (in contrast to FTP, for example) HTTP is not a stateful
protocol by default. There is only one HTTP request and one HTTP response
to it. The very notion of "typically expire POST requests" regarding HTTP
servers is absurd; if one would follow this logic, one would be forced to
state that HTTP servers "expire" _every_ HTTP request which is obviously
nonsense. That said, second, when a _connection_ expires (since several
requests can be issued and responded to using the same connection) depends
on whether the client is sending Keep-Alive or not, specified since
HTTP/1.1. (You probably want to read RFC2616, especially section 1.4.)
However, that has nothing to do with the matter discussed, since the
latter is not a server-side problem (although it may seem so), but a
client-side one that can be helped best with proper server-side
configuration.

True is that HTTP servers can sustain a cache for paths of requested
resources and responses to these requests. However, in Apache HTTPD,
the most advanced and most widely distributed of HTTP servers, such caching
is still experimental and only present since version 2.0 (where 1.x
is the version most used in production environments).[1]
Another widely distributed HTTP server is Microsoft Internet Information
Server. "By default InetInfo, the process responsible for WWW, FTP and
Gopher uses a 3MB of cache for all of the services."[2] That has perhaps
changed. See also [3] and please CMIIW.

True is that HTTP _proxy_ servers cache requests and responses and they
are less likely to cache POST requests and responses to them. However,
that is a _different_ matter. Using a proper caching technique on the
target HTTP server helps intermediary proxy servers to identify and hold
only the latest version of the resource, though, where in contrast a
different URI fills their cache with a new one and inevitably forces
other cached resources to be removed. Since a proxy server is used by
many people, it is not a very social thinking that initially caused this,
if you ask me.

True is that responses to POST requests are not likely to be kept by
caches as described in /Caching Tutorial for Web Authors and Webmasters/.
However, some clients do cache POST requests for the current session.
That is why e.g. in Firefox you can use the Back button/feature from a
response to a POST request and return to the resource that issued sending
it. You then can use the Forward button/feature and are asked if you
really want to re-submit the POST request. This works as long as the
history is sustained. However, either is _different_ from the matter
discussed.
so the user doesn't need to worry about anything.
Attach a unique string to the url with each request so the browser
doesn't cache it.

No, that Dirty Hack will only fill the user's cache with obsolete
data over time. <URL:http://www.mnot.net/cache_docs/>


It's not a dirty hack - it's often the best option.


It is _never_ the best option. A proper caching technique is always the
best option.
Users can't always control server headers, and often they would have
no idea how to do it.
Then they are incompetent enough not to be Web authors and programmers!
Creating unique URLs won't "fill the user's cache" any more than they
allow for, and no more than browsing around the web.


Without proper caching technique and especially with abusing the query-part
of URIs, content that is obsolete the moment it is retrieved is cached,
filling the caches with data that is useless (since unlikely to be accessed
again) for a long time, unless removed later by caching requests that would
exceed the assigned cache memory and disk space -- iff that really worked
locally -- or removed manually by the user (who wonders where his free disk
space is since the former does _not_ really work in one of the widest
distributed Web user agents).
PointedEars
___________
[1] <http://httpd.apache.org/docs/2.0/mod/mod_cache.html>
[2] <URL:http://www.windowsitpro.com/Article/ArticleID/13974/13974.html>
[3]
<URL:http://msdn.microsoft.com/library/en-us/iissdk/html/4f0f204a-eb78-4e15-b52b-89c6dbb343ef.asp>
Nov 26 '05 #5

On Sat, 26 Nov 2005 12:02:25 +0100, Thomas 'PointedEars' Lahn
<Po*********@web.de> wrote:
Matt Kruse wrote:
Attach a unique string to the url with each request so the browser doesn't
cache it.


No, that Dirty Hack will only fill the user's cache with obsolete
data over time. <URL:http://www.mnot.net/cache_docs/>


Except it will be reliable in all situations, whereas suggesting that
a resource should not be cached won't.

You should certainly include headers suggesting it's not cached too,
to avoid the "fill the user's cache" problem you raise, but changing
the URI is the best idea.

Jim.
Nov 26 '05 #6

Jim Ley wrote:
[...] Thomas 'PointedEars' Lahn [...]
Matt Kruse wrote:
Attach a unique string to the url with each request so the browser
doesn't cache it.
No, that Dirty Hack will only fill the user's cache with obsolete
data over time. <URL:http://www.mnot.net/cache_docs/>


Except it will be reliable in all situations,


Reliable -- perhaps. Reasonable and social -- no.
whereas suggesting that a resource should not be cached won't.

You should certainly include headers suggesting it's not cached too,
I am not suggesting such a thing.
to avoid the "fill the user's cache" problem you raise,
That will avoid that problem but cause more server
and network load, shifting the burden onto others. No.

The "trick" is to specify resources to expire and to make
different versions of them more identifiable for the client
when _served_.
but changing the URI is the best idea.


No, it is only the easiest one.
PointedEars
Nov 26 '05 #7

"Thomas 'PointedEars' Lahn" <Po*********@web.de> wrote in message
news:30****************@PointedEars.de...
but changing the URI is the best idea.


No, it is only the easiest one.


So it's a server-side problem. The headers the server serves should have some
form of "timeout".

Aaron
Nov 26 '05 #8

Aaron Gray wrote:
"Thomas 'PointedEars' Lahn" <Po*********@web.de> wrote [...]
but changing the URI is the best idea.

No, it is only the easiest one.


So it's a server-side problem.


No, it is a client-side problem best to be solved server-side :)
The headers the server serves should have some form of "timeout".


The response should also include such a header, yes. The specified
header name is Expires. Other identifying headers include ETag and
Content-Length, both specified in HTTP/1.1.
PointedEars
Nov 26 '05 #9

Thomas 'PointedEars' Lahn wrote:
Semantics.

Certainly not. What may be cached locally here is the server's
response to the client's request and the URI assigned with that
request...


You're being argumentative as usual.
Clearly, it's the response that is cached by the browser. However, the
generic term 'request' is often used to refer to the whole request/response
cycle. Perhaps it is not the ideal term, but you have to be pretty obtuse to
argue over it.
It's not a dirty hack - it's often the best option.

It is _never_ the best option. A proper caching technique is always
the best option.


You're wrong.
Users can't always control server headers, and often they would have
no idea how to do it.

Then they are incompetent enough not to be Web authors and
programmers!


Your elitist bullshit gets tiring.

--
Matt Kruse
http://www.JavascriptToolbox.com
http://www.AjaxToolbox.com
Nov 26 '05 #10

Ok, I read everyone's posts and I apologize, but I don't feel that much
closer to my answer. I consider myself about an 8 on a scale of 1-10 in
Javascript, and I guess it's those 2 points that are depriving me of this
understanding.

Per Matt Kruse's response about adding something to the URL, I don't know
how this would work. I simply request an XML file saved on the
web server.

http://[hostname]/xml/results.xml

How would I randomize this path and then be able to retrieve it later?
I will look at your AjaxToolbox.

Per Thomas Lahn's responses, Tom, you seem very knowledgeable, but was
there an answer to my question in there? You don't like the idea of
generating a unique id and attaching it to the URL. You say that it's
the easy way to solve this problem. I really like your explanation of
how things work, but I didn't seem to find a suggestion to my problem.
Can you tell me what I should do if not use a unique id?

I didn't know this would spark such a debate, but I think it's actually
good because I can see there are varied opinions on this subject.

I guess the bottom line I'm looking for is a reliable, proper way to
retrieve a file from the server and not the cache.

Thanks again for everyone's responses.

King Wilder


*** Sent via Developersdex http://www.developersdex.com ***
Nov 26 '05 #11

On 26/11/2005 17:15, Thomas 'PointedEars' Lahn wrote:

[Freshness information]
The response should also include such a header, yes. The specified
header name is Expires.
Or Cache-Control using the max-age directive (which takes precedence
where supported).
Other identifying headers include ETag and Content-Length, both
specified in HTTP/1.1.


The Content-Length header isn't used for entity validation. ETag is, and
so is Last-Modified. That said, the Content-Length header can otherwise
improve network performance as it's required for persistent connections
to be utilised.
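
To illustrate validation (an exchange invented here for illustration,
not taken from the thread): a client holding a cached copy can
revalidate it with

GET /xml/results.xml HTTP/1.1
If-None-Match: "results-20051124"

and the server answers 304 Not Modified if the entity is unchanged, or
200 with the new entity and a new ETag otherwise.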

Mike

--
Michael Winter
Prefix subject with [News] before replying by e-mail.
Nov 26 '05 #12

Matt Kruse wrote:
Thomas 'PointedEars' Lahn wrote:
Semantics.

Certainly not. What may be cached locally here is the server's
response to the client's request and the URI assigned with that
request...


You're being argumentative as usual.


I do hope so since you provided no convincing arguments.
Clearly, it's the response that is cached by the browser. However,
the generic term 'request' is often used to refer to the whole
request/response cycle.
It is not, only in your parallel universe.
Perhaps it is not the ideal term, but you have to
be pretty obtuse to argue over it.


You have to be pretty ignorant to ignore RFC2616.
It's not a dirty hack - it's often the best option.

It is _never_ the best option. A proper caching technique is always
the best option.


You're wrong.


I am not.
Users can't always control server headers, and often they would have
no idea how to do it.

Then they are incompetent enough not to be Web authors and
programmers!


Your elitist bullshit gets tiring.


So you _do_ prefer people who do not know what they are doing, so
that you can tell them what to do. "Who's more foolish? ..."
PointedEars
Nov 26 '05 #13

King Wilder wrote:
Ok, I read everyone's posts and I apologize, but I don't feel that much
closer to my answer. I consider myself about an 8 on a scale of 1-10 in
Javascript, and I guess it's those 2 points that are depriving me of this
understanding.
No, this is not at all about JavaScript, it is about how the Web works --
via HTTP (HyperText Transfer Protocol). That protocol already provides
means to identify resources, means that are used by HTML user agents.
It is foolish not to make use of them.
Per Thomas Lahn's responses, Tom you seem very knowledgable, but was
there an answer to my question in there?
Yes, there was.
You don't like the idea of generating a unique id and attaching it to
the URL. You say that it's the easy way to solve this problem. I really
like your explanation of how things work, but I didn't seem to find a
suggestion to my problem.
Yes, it only seems so. Read again.
Can you tell me what I should do if not use a unique id?


Tell the server to identify its responses through headers. This way,
any standards compliant non-broken non-misconfigured user agent (client)
will not use its local cache and any intermediary proxy server will not
use its cache if the cached resource says so. This way, caches are not
filled with garbage and cache mechanisms both locally and on proxy servers
are not required to remove other cached content for it, and you will still
get the latest data. Worked for me ever since.
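
For example (header values here are purely illustrative):

Last-Modified: Thu, 24 Nov 2005 08:00:00 GMT
ETag: "results-20051124"
Expires: Sun, 27 Nov 2005 12:00:00 GMT

A client that already holds a copy can then revalidate it instead of
either reusing it blindly or refetching it unconditionally.
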
HTH

PointedEars
Nov 26 '05 #14

On Sat, 26 Nov 2005 20:18:09 +0100, Thomas 'PointedEars' Lahn
<Po*********@web.de> wrote:

Tell the server to identify its responses through headers. This way,
any standards compliant non-broken non-misconfigured user agent (client)
will not use its local cache and any intermediary proxy server will not
use its cache if the cached resource says so.


No, there is no MUST requirement on the client that proxies or the
client obey them, so it's not simply a broken issue. Even if it was,
then changing the URL _as well_ would do nothing to change the
behaviour in correct clients, and would also support broken ones.

Changing the URL is very good advice, stop advising against it.

Jim.
Nov 26 '05 #15

On Sat, 26 Nov 2005 18:30:02 GMT, King Wilder <ki**@gizmobeach.com>
wrote:
Ok, I read everyone's posts and I apologize, but I don't feel that much
closer to my answer. I consider myself about an 8 on a scale of 1-10 in
Javascript, and I guess it's those 2 points that are depriving me of this
understanding.

Er, if you don't know how to go

url="http://[hostname]/xml/results.xml?"+new Date().valueOf();

Then I can't see how you can claim 8 out of 10 javascript ability??

As Thomas says, though, you should also ensure you send appropriate
no-cache headers; that is not a javascript issue, that is a
server configuration issue.

Jim.
Nov 26 '05 #16

> Aaron Gray wrote:
"Thomas 'PointedEars' Lahn" <Po*********@web.de> wrote [...]
but changing the URI is the best idea.
No, it is only the easiest one.


So it's a server-side problem.

No, it is a client-side problem best to be solved server-side :)


Ah... I see :)
The headers the server serves should have some form of "timeout".


The response should also include such a header, yes. The specified
header name is Expires. Other identifying headers include ETag and
Content-Length, both specified in HTTP/1.1.


Thanks for the advice.

Seems like the best and only truly correct solution to the problem.

Aaron
Nov 26 '05 #17

Michael Winter wrote:
On 26/11/2005 17:15, Thomas 'PointedEars' Lahn wrote:
The response should also include such a header, yes. The
specified header name is Expires.


Or Cache-Control using the max-age directive (which takes
precedence where supported).


True (RFC2616, 14.9.3.)
PointedEars
Nov 26 '05 #18

Jim Ley wrote:
[...] Thomas 'PointedEars' Lahn [...] wrote:
Tell the server to identify its responses through headers. This way,
any standards compliant non-broken non-misconfigured user agent (client)
will not use its local cache and any intermediary proxy server will not
use its cache if the cached resource says so.
No, there is no MUST requirement on the client that proxies
or the client obey them, so it's not simply a broken issue.


,-[RFC2616]
|
| 14.9 Cache-Control
|
| The Cache-Control general-header field is used to specify
| directives that MUST be obeyed by all caching mechanisms
| along the request/response chain. [...]

And there is more (e.g. Last-Modified, see RFC2616 section 13.3.4) but
that does not really belong here.
Even if it was, then changing the URL _as well_ would do nothing to
change the behaviour in correct clients, and would also support broken
ones.
Non sequitur.
Changing the URL is very good advice,
It is not.
stop advising against it.


No, I will not.
PointedEars
Nov 26 '05 #19

On Sat, 26 Nov 2005 21:14:59 +0100, Thomas 'PointedEars' Lahn
<Po*********@web.de> wrote:
Jim Ley wrote:
[...] Thomas 'PointedEars' Lahn [...] wrote:
Tell the server to identify its responses through headers. This way,
any standards compliant non-broken non-misconfigured user agent (client)
will not use its local cache and any intermediary proxy server will not
use its cache if the cached resource says so.


No, there is no MUST requirement on the client that proxies
or the client obey them, so it's not simply a broken issue.


,-[RFC2616]
|
| 14.9 Cache-Control
|
| The Cache-Control general-header field is used to specify
| directives that MUST be obeyed by all caching mechanisms
| along the request/response chain. [...]

And there is more (e.g. Last-Modified, see RFC2616 section 13.3.4) but
that does not really belong here.


And the requirement that clients use HTTP 1.1 is where? IE for
example is generally configured to only use HTTP 1.0 through
proxies...

Jim.
Nov 26 '05 #20

Thomas 'PointedEars' Lahn said the following on 11/26/2005 2:07 PM:
Matt Kruse wrote:

Thomas 'PointedEars' Lahn wrote:
Semantics.

Certainly not. What may be cached locally here is the server's
response to the client's request and the URI assigned with that
request...
You're being argumentative as usual.

I do hope so since you provided no convincing arguments.


You are always argumentative, it takes no convincing arguments (or
proof) to know it. It only takes reading a few of your posts.
Clearly, it's the response that is cached by the browser. However,
the generic term 'request' is often used to refer to the whole
request/response cycle.

It is not, only in your parallel universe.


His "parallel universe" is called "Reality", your universe is called
"Theory". I prefer the former to the latter.
Perhaps it is not the ideal term, but you have to
be pretty obtuse to argue over it.

You have to be pretty ignorant to ignore RFC2616.


Nah, it only takes a decent amount of common sense and a dose of reality
to ignore it. To blatantly think it is the end-all solution is the
ignorant part.

Theory: An RFC says to do something.
Reality: The browsers do something different.

That is something that you have failed to realize to date.
It's not a dirty hack - it's often the best option.

It is _never_ the best option. A proper caching technique is always
the best option.
You're wrong.

I am not.


Yes you are, you just don't realize it yet.
Users can't always control server headers, and often they would have
no idea how to do it.

Then they are incompetent enough not to be Web authors and
programmers!


Your elitist bullshit gets tiring.

So you _do_ prefer people who do not know what they are doing, so
that you can tell them what to do. "Who's more foolish? ..."


Nah, he is like me. He prefers reality to elitist bullshit Theorists
like you.

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Nov 26 '05 #21


Jim Ley wrote:
[...] Thomas 'PointedEars' Lahn [...]:
Jim Ley wrote:
[...] Thomas 'PointedEars' Lahn [...] wrote:
> Tell the server to identify its responses through headers. This way,
> any standards compliant non-broken non-misconfigured user agent
> (client) will not use its local cache and any intermediary proxy
> server will not use its cache if the cached resource says so.
No, there is no MUST requirement on the client that proxies
or the client obey them, so it's not simply a broken issue.

,-[RFC2616]
|
| 14.9 Cache-Control
|
| The Cache-Control general-header field is used to specify
| directives that MUST be obeyed by all caching mechanisms
| along the request/response chain. [...]

And there is more (e.g. Last-Modified, see RFC2616 section 13.3.4)
but that does not really belong here.


And the requirement that clients use HTTP 1.1 is where? IE for
example is generally configured to only use HTTP 1.0 through
proxies...


That does not matter. "Cache directives MUST be passed through by a
proxy or gateway application, regardless of their significance to that
application, since the directives might be applicable to all recipients
along the request/response chain." (HTTP/1.1 [RFC2616], June 1999,
section 14.9.)

Furthermore, the Expires header was first defined in HTTP/1.0 (RFC1945,
May 1996), due to absence of a definition of the Cache-Control header
being more restrictive than in HTTP/1.1:

,-[RFC1945]
|
| 10.7 Expires
|
| The Expires entity-header field gives the date/time after which the
| entity should be considered stale. This allows information providers
| to suggest the volatility of the resource, or a date after which the
| information may no longer be valid. Applications must not cache this
| entity beyond the date given. [...]

Microsoft makes it very clear that it is a Good Thing to send
both Expires and Cache-Control to Internet Explorer:

<URL:http://support.microsoft.com/kb/234067/EN-US/>
And the Last-Modified header was first defined in HTTP/1.0, too:

,-[RFC1945]
|
| 10.10 Last-Modified
|
| The Last-Modified entity-header field indicates the date and time at
| which the sender believes the resource was last modified. The exact
| semantics of this field are defined in terms of how the recipient
| should interpret it: if the recipient has a copy of this resource
| which is older than the date given by the Last-Modified field, that
| copy should be considered stale. [...]
PointedEars
Nov 26 '05 #23

Thomas,

Reading yours and everyone else's responses tells me that your answers
lie in manipulating server-side resources. If I am implementing my
Javascript HTTPRequest call from an HTML page, it doesn't seem like
I'll have control over this.

With that said, is the only recourse to accomplish what I want, to do
as Matt suggested and include a Unique ID? I'm looking for a complete
client-side solution. If there isn't one, then I would like to know it
isn't possible through Javascript alone. :^) Thanks.
Re: Jim Ley's response:
Er, if you don't know how to go

url="http://[hostname]/xml/results.xml?"+new Date().valueOf();

Then I can't see how you can claim 8 out of 10 javascript ability??


Touché. I claim to be 8 out of 10 in Javascript, sans the knowledge of
HTTPRequest objects. I've only used this a few times and I like it and
it works, but until now I've never had to post a change to the XML and
then immediately try to re-load the change in the same session. This
is where I found the problem of caching.

Adding the query string to the URL IS something I know how to do and I
will give it a try.

King

Nov 27 '05 #24

kwilder said the following on 11/27/2005 12:26 PM:
Thomas,

Reading yours and everyone else's responses tells me that your answers
lie in manipulating server-side resources.
And then hoping the browser honors it.
If I am implementing my Javascript HTTPRequest call from an HTML page,
it doesn't seem like I'll have control over this.
You won't from the client-side.
With that said, is the only recourse to accomplish what I want, to do
as Matt suggested and include a Unique ID? I'm looking for a complete
client-side solution. If there isn't one, then I would like to know it
isn't possible through Javascript alone. :^) Thanks.
Yes, Matt's advice was the best advice if you are looking for a solution
to the problem instead of a Theory about how it might be done.
Re: Jim Ley's response:
Er, if you don't know how to go
url="http://[hostname]/xml/results.xml?"+new Date().valueOf();

Then I can't see how you can claim 8 out of 10 javascript ability??

Touche. I claim to be 8 out of 10 in Javascript, sans the knowledge of
HTTPRequest objects.


His reply had nothing to do with the HTTPRequest Object, it had to do
with knowing how to append a unique string to the URL. It doesn't matter
if that URL is to an .xml, .html, .php, .asp, .jpg or any other extension.
I've only used this a few times and I like it and it works, but until now
I've never had to post a change to the XML and then immediately try to re-load
the change in the same session. This is where I found the problem of caching.
Welcome to web scripting.
Adding the query string to the URL IS something I know how to do and I
will give it a try.


And you will find that it works best.

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Nov 27 '05 #25

kwilder wrote:
Reading yours and everyone else's responses tells me that your answers
lie in manipulating server-side resources.
It is necessary to either change the server configuration or to modify
server-side resources so that either generates the respective HTTP headers.
If I am implementing my Javascript HTTPRequest call from an HTML page,
it doesn't seem like I'll have control over this.
Why not?
With that said, is the only recourse to accomplish what I want, to do
as Matt suggested and include a Unique ID?
If you _cannot_ change headers, which I seriously doubt, this is the only
way I know.
I'm looking for a complete client-side solution. If there isn't one,
then I would like to know it isn't possible through Javascript alone. :^)


J(ava)Script/ECMAScript is not restricted to the client side. For
example, JScript can be used in ASP to generate the discussed headers.
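
A minimal sketch in ASP/JScript (the header values and the xml variable
are illustrative assumptions, not code from this thread):

<%@ Language="JScript" %>
<%
// identify the response so caches can revalidate it instead of guessing
Response.ContentType = "text/xml";
Response.Expires = 1; // freshness lifetime: one minute from now
Response.AddHeader("Last-Modified", "Thu, 24 Nov 2005 08:00:00 GMT");
Response.Write(xml); // xml: the document text, produced elsewhere
%>
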
Please read <URL:http://jibbering.com/faq/faq_notes/pots1.html> on how
to post here.
PointedEars
Nov 28 '05 #26

Thomas 'PointedEars' Lahn wrote:
Reading yours and everyone else's responses tells me that your answers
lie in manipulating server-side resources.

It is necessary to either change the server configuration or to modify
server-side resources so that either generates the respective HTTP headers.
If I am implementing my Javascript HTTPRequest call from an HTML
page,
it doesn't seem like I'll have control over this.

Why not?


Why would you assume the OP does?
Maybe this is for a webapp which sits on a managed, shared server.
Maybe it's a small component being built for a bigger site, which he doesn't
have any influence on.
Maybe it's a reusable tool and the OP doesn't even know what servers it will
eventually be used on.

It's counter-productive to pigeon-hole a question into the situation and
environment that you prefer, rather than answering the actual question.
With that said, is the only recourse to accomplish what I want, to do
as Matt suggested and include a Unique ID?

If you _cannot_ change headers, which I seriously doubt, this is the
only way I know.


Huh, and earlier you said:
It is _never_ the best option.


But now in this situation it is?

Manipulating the URL works consistently. Sending proper HTTP headers is good
also, but the browser may choose to ignore them and the user may not have
any ability to make sure they are correct.

Except in very specific situations, sending a unique url is still the best
and most consistent option.

--
Matt Kruse
http://www.JavascriptToolbox.com
http://www.AjaxToolbox.com
Nov 28 '05 #27

Matt Kruse wrote:
Thomas 'PointedEars' Lahn wrote:
Reading yours and everyone else's responses tells me that your answers
lie in manipulating server-side resources.

It is necessary to either change the server configuration or to modify
server-side resources so that either generates the respective HTTP headers.
If I am implementing my Javascript HTTPRequest call from an HTML
page,
it doesn't seem like I'll have control over this.

Why not?


Why would you assume the OP does? [irrelevance snipped]


He wrote "it doesn't seem". Anything else is your wild assumption.
PointedEars
Nov 28 '05 #28

Matt Kruse wrote:
Thomas 'PointedEars' Lahn wrote:
Reading yours and everyone else's responses tells me that your answers
lie in manipulating server-side resources.

It is necessary to either change the server configuration or to modify
server-side resources so that either generates the respective HTTP headers.
If I am implementing my Javascript HTTPRequest call from an HTML
page,
it doesn't seem like I'll have control over this.

Why not?

[irrelevance snipped]
With that said, is the only recourse to accomplish what I want, to do
as Matt suggested and include a Unique ID?

If you _cannot_ change headers, which I seriously doubt, this is the
       ^^^^^^^^                 ^^^^^^^^^^^^^^^^^^^^^^^
only way I know.


Huh, and earlier you said:
It is _never_ the best option.


But now in this situation it is?


No, it is _not_. It /would/ be the _only_ option left, still being a Bad
Thing. Which is why I asked why the OP thinks that "it doesn't seem like"
he'll have control over this. Obviously he does have control over the data
produced and access to a server-side scripting platform (ASP), so there is
no reason why it would not be possible for him to generate the headers
discussed (through ASP). In fact, he just needs to follow the instructions
in

<URL:http://support.microsoft.com/kb/234067/EN-US/>

which I posted earlier (news:15****************@PointedEars.de).
Manipulating the URL works consistently.
Perhaps.
Sending proper HTTP headers is good also, but the browser may choose to
ignore them and the user may not have any ability to make sure they are
correct.
Your logic is flawed. Sending identifying HTTP headers _in that case_
would make _no_ difference.
Except in very specific situations, sending a unique url is still the best
and most consistent option.


No, it is not. In all situations the best option is to identify the age
of the resource. If one is willing to read, understand and learn, like the
OP apparently is, while you are apparently not, that becomes very clear.
PointedEars
Nov 28 '05 #29

Thomas 'PointedEars' Lahn said the following on 11/28/2005 9:50 AM:
Matt Kruse wrote:
<snip>
Sending proper HTTP headers is good also, but the browser may choose to
ignore them and the user may not have any ability to make sure they are
correct.

Your logic is flawed. Sending identifying HTTP headers _in that case_
would make _no_ difference.


NO, his logic is not flawed. You can specify and send any header you
want. That doesn't mean the browser is going to follow that advice.

Theory: You set a header, the browser honors it.
Reality: You set a header, you hope the browser honors it and some don't.
Except in very specific situations, sending a unique url is still the best
and most consistent option.

No, it is not.


Yes it is. It is reliable and dependable.
In all situations the best option is to identify the age of the resource.
That is not always reliable nor dependable. If it were, this discussion
wouldn't even be being had. It is very well documented in the archives
of this group. You should search, read, and learn from it.
If one is willing to read, understand and learn, like the OP apparently is,
while you are apparently not, that becomes very clear.


You should practice what you preach. And add to it "Open your eyes to
reality Thomas, it's a totally different world out there". Then you
will, hopefully, realize that the Web does not do what the specs say, it
does whatever the h**l it wants to and you, as the Web Author, have to
deal with that.

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/

Nov 28 '05 #30

Randy Webb wrote:
Thomas 'PointedEars' Lahn said the following on 11/28/2005 9:50 AM:
Matt Kruse wrote:
Sending proper HTTP headers is good also, but the browser may choose to
ignore them and the user may not have any ability to make sure they are
correct.
Your logic is flawed. Sending identifying HTTP headers _in that case_
would make _no_ difference.


NO, his logic is not flawed. You can specify and send any header you
want.


However, sending identifying HTTP headers _and_ changing the URI would not
make any difference, hence it would not be any better than changing only
the URI. In fact, that would be a complete waste of bandwidth.
That doesn't mean the browser is going to follow that advice.
That was not was he argued, read again.
Theory: You set a header, the browser honors it.
Reality: You set a header, you hope the browser honors it and some don't.


Browsers that do not honor protocol headers that are specified as "MUST be
handled" even in the very first version of that protocol are completely
broken and should not be supported anyway. Fortunately, there is no such
widely supported user agent, including IE.
Except in very specific situations, sending a unique url is still the
best and most consistent option.


No, it is not.


Yes it is. It is reliable and dependable.


Perhaps. Which does not make it a Good Thing since a
far better alternative, which is also reliable, is available.
In all situations the best option is to identify the age of the resource.


That is not always reliable nor dependable. If it were, this discussion
wouldn't even be being had.


No, the OP and many participants in this discussion, including you, did
not know, or refused to acknowledge, that there are "MUST handle" HTTP
headers that achieve the very same in a far better way than URL
modification could.
PointedEars
Nov 29 '05 #31

Thomas 'PointedEars' Lahn wrote:
Randy Webb wrote:
That doesn't mean the browser is going to follow that advice.


That was not was he argued, read again.


read: "That was not what he argued, read again."
Theory: You set a header, the browser honors it.
Reality: You set a header, you hope the browser honors it and some don't.


Browsers that do not honor protocol headers that are specified as "MUST be
handled" even in the very first version of that protocol are completely
broken and should not be supported anyway. Fortunately, there is no such
widely supported user agent, including IE.


read: "Fortunately, there is no such widely distributed user agent,
including IE."
Nov 29 '05 #32

Thomas 'PointedEars' Lahn said the following on 11/28/2005 7:01 PM:
Randy Webb wrote:

Thomas 'PointedEars' Lahn said the following on 11/28/2005 9:50 AM:
Matt Kruse wrote:
<snip>
Except in very specific situations, sending a unique url is still the
best and most consistent option.

No, it is not.


Yes it is. It is reliable and dependable.

Perhaps. Which does not make it a Good Thing since a
far better alternative, which is also reliable, is available.


Only if that alternative is available to the author and it is not always
available. If a person is using a free web server (many people do) then
they may not (and most times don't) have access to set headers. That
leaves them with the only alternative of the queryString to get the
updated resource.

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Nov 29 '05 #33

kwilder wrote:

In order to get it to work correctly, I have to log off, close the
browser (IE) and re-open the browser.

Is there any way to make it retrieve that latest xml version without
closing the browser?


It's simple enough to set the headers in server-side scripts, but doing
it for static files usually needs server access. I'd suggest wrapping
the request for the file in a script? I know nothing about ASP but I
know it's easy enough in PHP with something like

<?php
header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1
header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); // Date in the past
?>

and then get the script to output the file (either hardcoded or specify
somehow in GET).

Nick
Nov 29 '05 #34

Nick wrote:
kwilder wrote:
In order to get it to work correctly, I have to log off, close the
browser (IE) and re-open the browser.

Is there any way to make it retrieve that latest xml version without
closing the browser?
It's simple enough to set the headers in server-side scripts, but doing
it for static files usually needs server access. I'd suggest wrapping
the request for the file in a script? I know nothing about ASP but I
know it's easy enough in PHP with something like

<?php
header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1
header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); // Date in the past
?>


Note that this will create unnecessary traffic along the request/response
chain and slow down refreshes. For all practical purposes, Last-Modified
(which HTTP/1.1 servers SHOULD send and HTTP/1.1 caching proxies MUST NOT
handle as a cache[able] request) and a reasonable expiration date in the
not-too-far future (after which HTTP/1.0 implementations MUST NOT cache
the resource) is better. See also the ETag header, which 'provides for
an "opaque" cache validator' as it "MUST be unique across all versions
of all entities associated with a particular resource" (RFC2616, 13.3.2
and 3.11).
and then get the script to output the file (either hardcoded or specify
somehow in GET).


Either this way or the simpler one: Neither in ASP nor in PHP is it
necessary to use an additional file. Both ASP and PHP code to generate the
required HTTP headers can be included right into the resource requested, at
the very beginning of the file [in PHP, header() only works if there was no
previous output.]
PointedEars
Nov 29 '05 #35

Thomas 'PointedEars' Lahn wrote:
Nick wrote:
<?php
header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1
header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); // Date in the past
?>


Note that this will create unnecessary traffic along the request/response
chain and slow down refreshes.


I'm not a big expert on HTTP, I just copied that from the php.net
documentation on the header() function. You'd think they'd know better ;)
and then get the script to output the file (either hardcoded or specify
somehow in GET).


Either this way or the simpler one: Neither in ASP nor in PHP it is
necessary to use an additional file. Both ASP and PHP code to generate the
required HTTP headers can be included right into the resource requested


If the resource requested is a dynamic page, yes. I believe the OP
wishes to request a static XML file though.
Nov 29 '05 #36

Pointed Ears may have a good amount of knowledge, and maybe a good
amount of experience with development with the XmlHTTPRequest Object,
but what he is pushing is his knowledge, not practical experience.

Others have been expressing that, in practical experience with this
object, responses do get cached in the browser. This is for a variety
of reasons:

1. Server is not configured to expire content
2. Client Browser is just holding on to the cached version of the
response.

To name two reasons.

The truth is that no matter how much knowledge you have, if you do not
expect the worst case with your applications, they are doomed to fail.
Take this example: I build a web application that runs in IE and I use
XmlHTTPRequest to get me data so that I can update my web interface,
but I am using a service that I did not build and have no control
over. They are not going to change their server settings for just my
one application, and I need to use this so that I can get my application
running. So I set my headers and test out hitting the service, but end
up always getting back the same information; I then add the Unique ID
and get fresh data each time I make the request. This is a real-world
example of using all the knowledge being thrown around, and it exists in
many applications today. Now, if you were to create your own data
channel for the XmlHTTPRequest Object to use, then you make the
header changes on the server and test the application calling it
without the Unique ID. If this works by giving you fresh data every time,
then you have your solution, but if you find it buggy and it isn't 100%
reliable, you must add the extra overhead of a Unique ID to make it
that way.

The application is what takes priority, not the theory of what should
work and how it should work. The web was built by people with all
levels of knowledge and personalities. Some do not even want to change
their data services; others are more than willing to make a change as
long as they see it as beneficial.

Nov 29 '05 #37

1.**********@gmail.com wrote:
Pointed Ears may have a good amount of knowledge, and maybe a good
amount of experience with development with the XmlHTTPRequest Object,
but what he is pushing is his knowledge, but not practical experience.
Wrong.
[...]


Your arguments have already been refuted, I suggest that you read more
carefully.
PointedEars
Nov 29 '05 #38

I had read them, and stated these as what is in place today and what
is currently being developed with. You have not given any practical
examples that you created, which is what the topic was asking for; again,
all you stated was theory. Give me an example of your theory and I
will take back my statement that you are just pushing knowledge.

I am not doubting your understanding of the subject, but I do doubt
that you have built many applications that use this and consume
many services, ones that you control and others that you do not
control.

I can be refuted as much as anyone wants; it doesn't stop the fact
that what is in place today is what works and what has been proven
to work. It may not be perfect, but it is what it is: software is
not perfect because it has been made by practical developers, not
theologians.

Nov 29 '05 #39

function loadXMLDoc(url, GoAsync)
{
    // append a unique timestamp so the browser cannot serve a cached copy
    var t = new Date();
    var dummy = t.getTime();
    var newUrl;
    if (url.indexOf("?") > -1) newUrl = url + "&dummy=" + dummy;
    else newUrl = url + "?dummy=" + dummy;

    ...

}
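
A sketch of how it might be called, assuming the elided part issues the
request with newUrl just as the original loadXMLDoc did:

loadXMLDoc("http://[hostname]/xml/results.xml", true);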

Please visit http://www.logicwebsolutions.com

James

Nov 29 '05 #40

Thomas 'PointedEars' Lahn said the following on 11/29/2005 9:54 AM:
Nick wrote:

[in PHP, header() only works if there was no previous output.]


Not entirely true.

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Nov 29 '05 #41

On Sat, 26 Nov 2005 12:02:25 +0100, in comp.lang.javascript , Thomas
'PointedEars' Lahn <Po*********@web.de> in
<27****************@PointedEars.de> wrote:
Matt Kruse wrote:
kwilder wrote:
This works, but it doesn't load the latest version of the xml if it
was just modified without closing and reopening the browser.


GET requests can be cached by the browser, just as they could by typing
the url into the browser.


It is not the request that is cached but the resource retrieved, hence the
_response_ previously sent. Which is why GET and POST request make no
difference here.
Attach a unique string to the url with each request so the browser doesn't
cache it.


No, that Dirty Hack will only fill the user's cache with obsolete
data over time. <URL:http://www.mnot.net/cache_docs/>


That is a rather nicely written document (and nice looking as well, easy
to read for a technical document). I have a quick question regarding some
of the debate here: have you tested (or read about tests of) this in
various browsers? I see discussion of specifications, but not of
various implementations except for Apache. That does not answer
enough.

--
Matt Silberstein

Do something today about the Darfur Genocide

http://www.beawitness.org
http://www.darfurgenocide.org
http://www.savedarfur.org

"Darfur: A Genocide We can Stop"
Nov 30 '05 #42

On Sat, 26 Nov 2005 19:27:51 GMT, in comp.lang.javascript ,
ji*@jibbering.com (Jim Ley) in
<43****************@news.individual.net> wrote:
On Sat, 26 Nov 2005 20:18:09 +0100, Thomas 'PointedEars' Lahn
<Po*********@web.de> wrote:

Tell the server to identify its responses through headers. This way,
any standards compliant non-broken non-misconfigured user agent (client)
will not use its local cache and any intermediary proxy server will not
use its cache if the cached resource says so.


No, there is no MUST requirement on the client that proxies or the
client obey them, so it's not simply a broken issue. Even if it was,
then changing the URL _as well_ would do nothing to change the
behaviour in correct clients, and would also support broken ones.

Changing the URL is very good advice, stop advising against it.


One issue that concerns me is that it is the server that knows if the
data is old, not the client. Making client side determinations is
non-optimal.

--
Matt Silberstein

Do something today about the Darfur Genocide

http://www.beawitness.org
http://www.darfurgenocide.org
http://www.savedarfur.org

"Darfur: A Genocide We can Stop"
Nov 30 '05 #43

On Mon, 28 Nov 2005 13:05:03 -0500, in comp.lang.javascript , Randy
Webb <Hi************@aol.com> in <r6********************@comcast.com>
wrote:
Thomas 'PointedEars' Lahn said the following on 11/28/2005 9:50 AM:
Matt Kruse wrote:
<snip>
Sending proper HTTP headers is good also, but the browser may choose to
ignore them and the user may not have any ability to make sure they are
correct.

Your logic is flawed. Sending identifying HTTP headers _in that case_
would make _no_ difference.


NO, his logic is not flawed. You can specify and send any header you
want. That doesn't mean the browser is going to follow that advice.

Theory: You set a header, the browser honors it.
Reality: You set a header, you hope the browser honors it and some don't.


Do you know of some references on this? I googled and did not find
anything useful.

I did find lots of suggestions to use headers, however. And this makes
sense to me since the server, almost by definition, knows if the data
is old or not. Caching is a very valuable mechanism; it is used over
and over on many levels in computers; we should make use of it, not try
to bypass it.

[snip]

That is not always reliable nor dependable. If it were, this discussion
wouldn't even be being had. It is very well documented in the archives
of this group. You should search, read, and learn from it.


I did a quick search of the group and mostly found suggestions to
change the header. Could you give me a better pointer:

http://groups.google.com/groups?q=gr...ache%20headers
[snip]

--
Matt Silberstein

Do something today about the Darfur Genocide

http://www.beawitness.org
http://www.darfurgenocide.org
http://www.savedarfur.org

"Darfur: A Genocide We can Stop"
Nov 30 '05 #44

Randy Webb wrote:
Thomas 'PointedEars' Lahn said the following on 11/28/2005 7:01 PM:
Randy Webb wrote:
Thomas 'PointedEars' Lahn said the following on 11/28/2005 9:50 AM:
Matt Kruse wrote:
> Except in very specific situations, sending a unique url is still the
> best and most consistent option.
No, it is not.
Yes it is. It is reliable and dependable.

Perhaps. Which does not make it a Good Thing since there is a
far better alternative, which is also reliable, is available.


Only if that alternative is available to the author and it is not always
available. If a person is using a free web server (many people do) then
they may not (and most times don't) have access to set headers. That
leaves them with the only alternative of the queryString to get the
updated resource.


As was said before, service providers that do not allow their customers
to set HTTP headers should be forced by them to do so, and if they do not
comply they should be abandoned and advised against to other people in favor
of the many other (free) providers that do. That follows alone from being
able to correctly declare the character encoding of the served resource in
the Content-Type header, which is required since no character encoding can
be assumed by HTML UAs (see the Spec). On the other hand, people who serve
resources with the wrong or no character encoding declaration in the HTTP
headers and let the UA make an educated guess instead [which fails e.g.
for IE which assumes UTF-7(!) as default] do not deserve to be called
Web authors or even Web programmers or webmasters.
PointedEars
Nov 30 '05 #45

On Wed, 30 Nov 2005 18:16:35 GMT, Matt Silberstein
<Re**************************@ix.netcom.com> wrote:
One issue that concerns me is that it is the server that knows if the
data is old, not the client. Making client side determinations is
non-optimal.


but if the client doesn't make a request, the server isn't involved...

Jim.
Nov 30 '05 #46

On Wed, 30 Nov 2005 18:44:41 GMT, in comp.lang.javascript ,
ji*@jibbering.com (Jim Ley) in
<43****************@news.individual.net> wrote:
On Wed, 30 Nov 2005 18:16:35 GMT, Matt Silberstein
<Re**************************@ix.netcom.com> wrote:
One issue that concerns me is that it is the server that knows if the
data is old, not the client. Making client side determinations is
non-optimal.


but if the client doesn't make a request, the server isn't involved...


And then we are talking about a very simple piece of code.

--
Matt Silberstein

Do something today about the Darfur Genocide

http://www.beawitness.org
http://www.darfurgenocide.org
http://www.savedarfur.org

"Darfur: A Genocide We can Stop"
Nov 30 '05 #47

Randy Webb wrote:
Thomas 'PointedEars' Lahn said the following on 11/29/2005 9:54 AM:
[in PHP, header() only works if there was no previous output.]


Not entirely true.


,-<URL:http://php.net/manual/en/function.header.php>
|
| Remember that header() must be called before any actual output is sent,
| either by normal HTML tags, blank lines in a file, or from PHP. It is a
| very common error to read code with include(), or require(), functions,
| or another file access function, and have spaces or empty lines that are
| output before header() is called. The same problem exists when using a
| single PHP/HTML file.

Maybe you are referring to this:

| <?php
| header("HTTP/1.0 404 Not Found");
| ?>
|
| Note: The HTTP status header line will always be the first sent to the
| client, regardless of the actual header() call being the first or not.
| [...]

However, additional header() calls, including those issuing HTTP status
headers, do not qualify as "(actual) output" here.
X-Post & F'up2 comp.lang.php

PointedEars
Nov 30 '05 #48

Matt Silberstein said the following on 11/30/2005 2:07 PM:
On Wed, 30 Nov 2005 18:44:41 GMT, in comp.lang.javascript ,
ji*@jibbering.com (Jim Ley) in
<43****************@news.individual.net> wrote:

On Wed, 30 Nov 2005 18:16:35 GMT, Matt Silberstein
<Re**************************@ix.netcom.com> wrote:

One issue that concerns me is that it is the server that knows if the
data is old, not the client. Making client side determinations is
non-optimal.


but if the client doesn't make a request, the server isn't involved...

And then we are talking about a very simple piece of code.


Yes, and we keep telling you what and how to do it. Add a unique
querystring to the URL. Then it will be retrieved from the server or not
at all.

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Nov 30 '05 #49

Randy Webb wrote:
Matt Silberstein said the following on 11/30/2005 2:07 PM:
ji*@jibbering.com (Jim Ley) in [...] wrote:
On Wed, 30 Nov 2005 18:16:35 GMT, Matt Silberstein
<Re**************************@ix.netcom.com> wrote:
One issue that concerns me is that it is the server that knows if the
data is old, not the client. Making client side determinations is
non-optimal.
but if the client doesn't make a request, the server isn't involved...

And then we are talking about a very simple piece of code.


Yes, and we keep telling you what and how to do it. [...]


_We_ do not tell any such thing:
Add a unique querystring to the URL. Then it will
be retrieved from the server or not at all.

PointedEars
Nov 30 '05 #50
