
FAQ 5.40, Ajax and GET

| 5.40 Why is my Ajax page not updated properly when using an HTTP GET
| request in Internet Explorer?
|
| Microsoft Internet Explorer caches the results of HTTP GET requests.
| To ensure that the document is retrieved from the server, you will
| need to use the POST Method.
|
| * http://msdn2.microsoft.com/en-us/library/ms536648.aspx

The MSDN page on that topic reads:
| Internet Explorer caches the results of HTTP GET requests in the
| Temporary Internet Files (TIF) folder. In most cases, caching improves
| performance for data that will not change frequently. To guarantee
| that the results are not cached, use POST.

Microsoft's recommendation notwithstanding, I still think that the
proper request method for idempotent requests is GET. There are other
ways to get around caching, like

(a) appending a random value in a query parameter, or
(b) setting the necessary HTTP headers on the server side

Not all locations can handle POST requests, so (a) would also be a more
compatible and passive solution.
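Option (a) can be sketched in a few lines; the helper name and the "_" parameter name below are arbitrary choices, and any unique value (a timestamp, a random string) works equally well:

```javascript
// Append a throwaway query parameter so that every request URL is
// unique and a cached copy can never match it. The "_" name is an
// arbitrary convention, not part of any specification.
function bustCache(url) {
  var sep = (url.indexOf("?") === -1) ? "?" : "&";
  return url + sep + "_=" + new Date().getTime();
}

// Browser usage (illustrative):
// var req = new XMLHttpRequest();
// req.open("GET", bustCache("/data.xml"), true);
// req.send(null);
```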
- Conrad
Oct 20 '08 #1
19 Replies


Conrad Lender wrote on 20 Oct 2008 in comp.lang.javascript:
> Not all locations can handle POST requests, so (a) would also be a more
> compatible and passive solution.
Conrad,

What do you mean by "locations"?

--
Evertjan.
The Netherlands.
(Please change the x'es to dots in my emailaddress)
Oct 20 '08 #2

On 2008-10-20 23:09, Evertjan. wrote:
>> Not all locations can handle POST requests, so (a) would also be a more
>> compatible and passive solution.
> What do you mean by "locations"?
Service locations. URLs. For example, I can fetch a page from Slashdot
with "GET http://slashdot.org/index.pl?issue=20081006". But I couldn't
POST to http://slashdot.org/index.pl, put "issue=20081020" in the
request body, and expect the same result.
- Conrad
Oct 20 '08 #3

Conrad Lender wrote on 20 Oct 2008 in comp.lang.javascript:
> On 2008-10-20 23:09, Evertjan. wrote:
>>> Not all locations can handle POST requests, so (a) would also be a more
>>> compatible and passive solution.
>> What do you mean by "locations"?
> Service locations. URLs.

"Service locations." ?

> For example, I can fetch a page from Slashdot

Are they any different from other URL's?

> with "GET http://slashdot.org/index.pl?issue=20081006". But I couldn't
> POST to http://slashdot.org/index.pl, put "issue=20081020" in the
> request body, and expect the same result.

That is just because that page is designed that way,
it could also be designed as a post request.

I do not see why that would be an advantage for the GET request in the FAQ.

You should use the one that accommodates the page's html-code in question.

I personally consider server-side programming that permits both options,
like ASP request("issue")
instead of ASP request.querystring("issue") for the GET
and ASP request.form("issue") for the POST,
to be a bad habit,
but that is not the topic at hand.
--
Evertjan.
The Netherlands.
(Please change the x'es to dots in my emailaddress)
Oct 20 '08 #4

On 2008-10-20 23:26, Evertjan. wrote:
>>>> Not all locations can handle POST requests, so (a) would also be a more
>>>> compatible and passive solution.
>>> What do you mean by "locations"?
>> Service locations. URLs.
> "Service locations." ?
>> For example, I can fetch a page from Slashdot
> Are they any different from other URL's?

No, same thing. Let's just call them "addresses" and "pages", if that's
less confusing.

>> with "GET http://slashdot.org/index.pl?issue=20081006". But I couldn't
>> POST to http://slashdot.org/index.pl, put "issue=20081020" in the
>> request body, and expect the same result.
>
> That is just because that page is designed that way,
> it could also be designed as a post request.

It could have been, but wasn't. When you're communicating with a web
service, you have no choice but to use what it's offering. Sometimes you
just don't have access to the server-side code, and/or no experience or
authorization to modify it.

If the server will only accept GET requests (this is not a rare case!),
the recommendation from the FAQ (and from Microsoft) will fail. Using a
random query parameter, on the other hand, is a minimal adjustment and
works just as well.

> I do not see why that would be an advantage for the GET request in the FAQ.
>
> You should use the one that accommodates the page's html-code in question.
I don't know what you mean by "accommodating the page's html code", but
the HTTP specs are pretty clear on the subject of when to use GET and POST.

By the way, unless IE is ignoring the caching-related HTTP headers, I
think that it's correct to use a cached response. If the server didn't
consider the contents to be cacheable, it should have sent the
appropriate headers. The real problem seems to be that the readystate
handlers don't work as expected when a cached copy is used.
- Conrad
Oct 20 '08 #5

Conrad Lender wrote on 20 Oct 2008 in comp.lang.javascript:
> On 2008-10-20 23:26, Evertjan. wrote:
>> That is just because that page is designed that way,
>> it could also be designed as a post request.
>
> It could have been, but wasn't. When you're communicating with a web
> service,

you just conceded there is no such thing, only URL's

> you have no choice but to use what it's offering. Sometimes
> you just don't have access to the server-side code, and/or no
> experience or authorization to modify it.

So what? That does not mean you should promote one of them in the FAQ.

> If the server will only accept GET requests (this is not a rare
> case!), the recommendation from the FAQ (and from Microsoft) will
> fail. Using a random query parameter, on the other hand, is a minimal
> adjustment and works just as well.

"random query parameter" what is that again?

>> I do not see why that would be an advantage for the GET request in
>> the FAQ.
>>
>> You should use the one that accommodates the page's html-code in
>> question.
>
> I don't know what you mean by "accommodating the page's html code",

I am sorry you don't; the html code of the page that normally requests
the page.

> but the HTTP specs are pretty clear on the subject of when to use GET
> and POST.

so you do know after all?

> By the way, unless IE is ignoring the caching-related HTTP headers, I
> think that it's correct to use a cached response. If the server didn't
> consider the contents to be cacheable, it should have sent the
> appropriate headers.

The cacheability is not related to your advice to prefer GET in the FAQ,
methinks.

> The real problem seems to be that the readystate
> handlers don't work as expected when a cached copy is used.

Interesting, how so? And is this different for GET vs POST xmlHTTP?

--
Evertjan.
The Netherlands.
(Please change the x'es to dots in my emailaddress)
Oct 20 '08 #6

On 2008-10-21 00:08, Evertjan. wrote:
>> When you're communicating with a web service,
>
> you just conceded there is no such thing, only URL's

I said no such thing. I'm sorry, I just assumed you were familiar with
the concept of web services:
http://en.wikipedia.org/wiki/Web_service

>> you have no choice but to use what it's offering. Sometimes
>> you just don't have access to the server-side code, and/or no
>> experience or authorization to modify it.
>
> So what? That does not mean you should promote one of them in the FAQ.
I don't know how to make this any clearer...
Here are two quotes from RFC 2616:

| In particular, the convention has been established that the GET and
| HEAD methods SHOULD NOT have the significance of taking an action
| other than retrieval. These methods ought to be considered "safe".
| This allows user agents to represent other methods, such as POST, PUT
| and DELETE, in a special way, so that the user is made aware of the
| fact that a possibly unsafe action is being requested.

| The POST method is used to request that the origin server accept the
| entity enclosed in the request as a new subordinate of the resource
| identified by the Request-URI in the Request-Line. POST is designed
| to allow a uniform method to cover the following functions:
|
| - Annotation of existing resources;
|
| - Posting a message to a bulletin board, newsgroup, mailing list,
| or similar group of articles;
|
| - Providing a block of data, such as the result of submitting a
| form, to a data-handling process;
|
| - Extending a database through an append operation.

This makes GET the method of choice for information retrieval, and POST
the method of choice for data changes. Using POST instead of GET just to
avoid caching is a hack.
> "random query parameter" what is that again?

instead of
http://example.com/page
use
http://example.com/page?t=1224540900 (append a timestamp)
or
http://example.com/page?r=8b511ce (append a random string)

> The cacheability is not related to your advice to prefer GET in the FAQ,
> methinks.
It's the whole point. GET requests can (usually) be cached, POST
requests (usually) can't (but sometimes can!); that's why the FAQ
recommends using POST to prevent caching. IMO this is not the best
solution to the problem.
- Conrad
Oct 20 '08 #7

Conrad Lender wrote:
> On 2008-10-21 00:08, Evertjan. wrote:
>
> This makes GET the method of choice for information retrieval, and POST
> the method of choice for data changes. Using POST instead of GET just to
> avoid caching is a hack.

Using POST to avoid caching seems to miss the point of what GET is for.

>> "random query parameter" what is that again?
>
> instead of
> http://example.com/page
> use
> http://example.com/page?t=1224540900 (append a timestamp)
> or
> http://example.com/page?r=8b511ce (append a random string)
Using example.com would not work on any other server except example.com
(because of security policy). I can imagine someone actually trying that
and wondering why it doesn't work.

How about:
Use a unique request parameter, such as a timestamp.
req.open("GET", "/example.php?date=" + (+new Date));

>> The cacheability is not related to your advice to prefer GET in the FAQ,
>> methinks.
>
> It's the whole point. GET requests can (usually) be cached, POST
> requests (usually) can't (but sometimes can!); that's why the FAQ
> recommends using POST to prevent caching. IMO this is not the best
> solution to the problem.
Agreed. What is not yet final is the wording of that entry.

Garrett

--
comp.lang.javascript FAQ <URL: http://jibbering.com/faq/ >
Oct 21 '08 #8

On Oct 21, 12:37 am, Conrad Lender <crlen...@yahoo.com> wrote:
> (...) GET requests can (usually) be cached, POST
> requests (usually) can't (but sometimes can!);
Given that a POST is a "send-data-to-the-server" rather than a
"request-data-from-the-server" operation, caching it would be a very
nasty thing... when or where have you seen them being cached ?

--
Jorge.
Oct 21 '08 #9

On 2008-10-21 09:07, Jorge wrote:
>> (...) GET requests can (usually) be cached, POST
>> requests (usually) can't (but sometimes can!);
>
> Given that a POST is a "send-data-to-the-server" rather than a
> "request-data-from-the-server" operation, caching it would be a very
> nasty thing... when or where have you seen them being cached ?
I've never seen it, but RFC 2616 suggests that it's possible:

9.5 POST
...
Responses to this method are not cacheable, *unless the response
includes appropriate Cache-Control or Expires header fields*.

(emphasis mine)
I don't know if any UA has actually implemented it that way.
- Conrad
Oct 21 '08 #10

On 2008-10-21 08:09, dhtml wrote:
>>> "random query parameter" what is that again?
>>
>> instead of
>> http://example.com/page
>> use
>> http://example.com/page?t=1224540900 (append a timestamp)
>> or
>> http://example.com/page?r=8b511ce (append a random string)
>
> Using example.com would not work on any other server except example.com
> (because of security policy). I can imagine someone actually trying that
> and wondering why it doesn't work.

I used example.com because it was an example :-)
http://en.wikipedia.org/wiki/Example.com

> How about:
> Use a unique request parameter, such as a timestamp.
> req.open("GET", "/example.php?date=" + (+new Date));

Yes, that would be better in the FAQ (unless the "date" argument
actually has a meaning to example.php).
- Conrad
Oct 21 '08 #11

On Oct 21, 9:56 am, Conrad Lender wrote:
> On 2008-10-21 09:07, Jorge wrote:
>>> (...) GET requests can (usually) be cached, POST
>>> requests (usually) can't (but sometimes can!);
>> Given that a POST is a "send-data-to-the-server" rather than
>> a "request-data-from-the-server" operation, caching it would
>> be a very nasty thing... when or where have you seen them
>> being cached ?
>
> I've never seen it, but RFC 2616 suggests that it's possible:
>
> 9.5 POST
> ...
> Responses to this method are not cacheable, *unless the response
> includes appropriate Cache-Control or Expires header fields*.
>
> (emphasis mine)
> I don't know if any UA has actually implemented it that way.

So what sorts of Expires and Cache-Control headers did you employ in
making that determination (and on how many browsers)? Or is it the
case that "I don't know if any UA" actually means that you have never
experimented with the proposition and so are ignorant of the actual
behaviour of implementations in this regard? The latter may make your
statement literally true, but renders it rather worthless.

Richard.
Oct 21 '08 #12

On 2008-10-21 11:48, Richard Cornford wrote:
>> I've never seen it, but RFC 2616 suggests that it's possible:
>>
>> 9.5 POST
>> ...
>> Responses to this method are not cacheable, *unless the response
>> includes appropriate Cache-Control or Expires header fields*.
>>
>> (emphasis mine)
>> I don't know if any UA has actually implemented it that way.
>
> So what sorts of Expires and Cache-Control headers did you employ in
> making that determination (and on how many browsers)? Or is it the
> case that "I don't know if any UA" actually means that you have never
> experimented with the proposition and so are ignorant of the actual
> behaviour of implementations in this regard? The latter may make your
> statement literally true, but renders it rather worthless.
No, I haven't tested this in any way, and the "I don't know" statement
was intended as a disclaimer to that effect. The important thing is that
the RFC permits UAs to cache POST requests under certain circumstances.
If POST were used to prevent caching, that could fail in any (current or
future) standards-compliant browser. Testing a random selection of
current browsers wouldn't change anything about that.
- Conrad
Oct 21 '08 #13

On Oct 20, 4:56 pm, Conrad Lender <crlen...@yahoo.com> wrote:
> Microsoft's recommendation notwithstanding, I still think that the
> proper request method for idempotent requests is GET. There are other
> ways to get around caching, like
>
> (a) appending a random value in a query parameter, or
> (b) setting the necessary HTTP headers on the server side

This sounds reasonable to me. Might want to mention something about
IE's ~2,083-character URL limit on GET requests, although that should
be changing soon (if it hasn't already changed in IE 7).

Bob
Oct 21 '08 #14

On Oct 21, 12:00 pm, Conrad Lender wrote:
> No, I haven't tested this in any way, and the "I don't know"
> statement was intended as a disclaimer to that effect.

OK.

> The important thing is that the RFC permits UAs to cache
> POST requests under certain circumstances. If POST were
> used to prevent caching, that could fail in any (current
> or future) standards-compliant browser. Testing a random
> selection of current browsers wouldn't change anything
> about that.
The response from a GET request is also sent with response headers,
and so could also assert Expires, Max-Age, Cache-Control, etc. In
context the real interesting subject for a test would be whether, when
the response from a GET request was sent with these headers such that
they strongly discouraged caching, would subsequent requests for the
same URL still be retrieved from the cache (wherever they currently
are in the absence of such headers). In the event that the headers
that influence caching could be shown to prevent the caching of XML
HTTP GET responses (at least in currently problematic UAs) then the
real non-hack solution would be to take appropriate action on the
server to ensure that such headers were sent.
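The server-side fix described here amounts to sending the right response headers. A sketch of one commonly used combination follows; the exact values are a judgment call rather than anything prescribed in this thread, and how they get attached to a response depends entirely on the server framework in use:

```javascript
// One common set of response headers that discourages caching in both
// HTTP/1.0 and HTTP/1.1 clients. "no-store" alone satisfies HTTP/1.1;
// Pragma and a past Expires date cover older clients and proxies.
function noCacheHeaders() {
  return {
    "Cache-Control": "no-cache, no-store, must-revalidate", // HTTP/1.1
    "Pragma": "no-cache",                                   // HTTP/1.0
    "Expires": "Thu, 01 Jan 1970 00:00:00 GMT"              // a past date
  };
}
```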

Richard.
Oct 21 '08 #15

On 2008-10-21 16:00, Richard Cornford wrote:
> The response from a GET request is also sent with response headers,
> and so could also assert Expires, Max-Age, Cache-Control, etc.

That's why I added the (b) alternative in my initial suggestion.

> In context the real interesting subject for a test would be whether, when
> the response from a GET request was sent with these headers such that
> they strongly discouraged caching, would subsequent requests for the
> same URL still be retrieved from the cache

Not in Firefox or other browsers that take standards seriously. I have
no information about how IE handles this. Maybe I'll find the time for a
few tests later.

> In the event that the headers
> that influence caching could be shown to prevent the caching of XML
> HTTP GET responses (at least in currently problematic UAs) then the
> real non-hack solution would be to take appropriate action on the
> server to ensure that such headers were sent.
Agreed. That would be the ideal solution. It's only the cases where the
response headers can't be influenced, or where the browser ignores them,
or where a proxy modifies them, that we actively have to prevent caching
from the JavaScript side. And in those cases, I think that a random part
in the query string would be a cleaner workaround than sending the
request with POST.
- Conrad
Oct 21 '08 #16

Conrad Lender wrote:
> No, I haven't tested this in any way, and the "I don't know" statement
> was intended as a disclaimer to that effect. The important thing is that
> the RFC permits UAs to cache POST requests under certain circumstances.
> If POST were used to prevent caching, that could fail in any (current or
> future) standards-compliant browser. Testing a random selection of
> current browsers wouldn't change anything about that.
Do any browsers cache POST? That would seem to break a lot of sites.

The fact that RFC 2616 allows for the possibility means that POST is not
a way to "prevent caching". Setting the appropriate headers (Expires) is.
Providing a unique query string would also achieve the desired result.

The document Thomas linked is relevant there as well:
http://www.mnot.net/cache_docs/#EXPIRES

Though this one seems at least as relevant:
http://www.mnot.net/javascript/xmlht...est/cache.html

I also like the CSS for that page. Clean and easy to read.

Garrett

--
comp.lang.javascript FAQ <URL: http://jibbering.com/faq/ >
Oct 24 '08 #17

On Oct 24, 5:27 am, dhtml wrote:
> Conrad Lender wrote:
>> No, I haven't tested this in any way, and the "I don't know"
>> statement was intended as a disclaimer to that effect. The
>> important thing is that the RFC permits UAs to cache POST
>> requests under certain circumstances. If POST were used to
>> prevent caching, that could fail in any (current or future)
>> standards-compliant browser. Testing a random selection of
>> current browsers wouldn't change anything about that.

> Do any browsers cache POST?

In the presence of a positive assertion that the response was
cacheable then you would hope so, and certainly could not fault one
for doing so.

> That would seem to break a lot of sites.

No, it would only risk breaking sites where caching of POST responses
was inappropriate but those sites were sending HTTP headers that
encouraged the caching of those responses; such a site would then be
broken by design (and as the result of positive action, as such
headers would not have been the default on any HTTP/application server
I have ever encountered).

> The fact that RFC 2616 allows for the possibility means that
> POST is not a way to "prevent caching".

No, but it was only Microsoft who suggested it may be.

> Setting the appropriate headers is
> (Expires).

Expires would not be the only pertinent header.

> Providing a unique query string would achieve the desired result.
<snip>

Doesn't that depend on what the desired result is? Caching is
generally a good thing as it reduces network traffic (and GET requests
are inappropriate for anything but information retrieval). A blanket
inhibiting of caching for all XML HTTP requests may not be such a good
thing.

Richard.
Oct 24 '08 #18

Richard Cornford wrote:
>> Do any browsers cache POST?
>
> In the presence of a positive assertion that the response was
> cacheable then you would hope so, and certainly could not fault one
> for doing so.

But we still don't have an example of a browser that will cache a POST.

> Expires would not be the only pertinent header.

If preventing a browser from caching a request is the goal, where would
setting Expires be insufficient?

Setting Cache-Control on the client may very well fail in MSIE.

>> Providing a unique query string would achieve the desired result.
> <snip>
>
> Doesn't that depend on what the desired result is? Caching is
> generally a good thing as it reduces network traffic (and GET requests
> are inappropriate for anything but information retrieval). A blanket
> inhibiting of caching for all XML HTTP requests may not be such a good
> thing.
That is true, but that is an answer to another question. Maybe: "Why is
my XHR not cached in Firefox?" So, for the answer to this question, I
propose:

*Why is my Ajax page not updated properly when using an HTTP GET request
in Internet Explorer*?

Browsers cache the results of HTTP requests to reduce network traffic.
To ensure that the document is retrieved from the server, either set
the Expires response header to a past date or use a unique query string.

Expires: Fri, 23 Oct 2008 14:19:41 GMT

or

req.open("GET", "/example.php?date=" + (+new Date), true);

Always use the appropriate HTTP method. See RFC 2616:
http://www.faqs.org/rfcs/rfc2616.html

More Info:
http://www.mnot.net/cache_docs/#EXPIRES
http://www.mnot.net/javascript/xmlht...est/cache.html
Garrett

--
comp.lang.javascript FAQ <URL: http://jibbering.com/faq/ >
Oct 25 '08 #19

On Oct 24, 7:52 am, Richard Cornford <Richard.Cornf...@googlemail.com>
wrote:
> On Oct 24, 5:27 am, dhtml wrote:
>> Do any browsers cache POST?
>
> In the presence of a positive assertion that the response was
> cacheable then you would hope so, and certainly could not fault one
> for doing so.
>> That would seem to break a lot of sites.
>
> No, it would only risk breaking sites where caching of POST responses
> was inappropriate but those sites were sending HTTP headers that
> encouraged the caching of those responses; such a site would then be
> broken by design (and as the result of positive action, as such
> headers would not have been the default on any HTTP/application server
> I have ever encountered).

>> The fact that RFC 2616 allows for the possibility means that
>> POST is not a way to "prevent caching".
>
> No, but it was only Microsoft who suggested it may be.

That sounds like them. I am trying to remember which major site uses
POST to search. Hitting the back button to get back to the search
form has predictably irritating results. I think it is PayPal.

>> Setting the appropriate headers is
>> (Expires).
>
> Expires would not be the only pertinent header.

I think If-Modified-Since and Cache-Control would be the pertinent
request headers.

>> Providing a unique query string would achieve the desired result.
>
> <snip>
>
> Doesn't that depend on what the desired result is? Caching is
> generally a good thing as it reduces network traffic (and GET requests
> are inappropriate for anything but information retrieval). A blanket
> inhibiting of caching for all XML HTTP requests may not be such a good
> thing.
It seems to me that unless you want to disable caching of XHR GET
requests for all browsers, you have to include the If-Modified-Since
header. If you don't, unlike quasi-standards-based browsers, IE will
use the cached data without checking its freshness. I have observed
this in IE6. IIRC, IE7 does it as well. Personally, I prevent
caching of such requests in virtually every case (the data wouldn't
keep anyway.) Apps that use Ajax to download entire pages would be
another story.
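The If-Modified-Since workaround alluded to here is usually written as a request header set between open() and send(); a sketch follows, where the helper name is an arbitrary choice and the date value just needs to be safely in the past:

```javascript
// Force revalidation by claiming we hold a very old copy; a browser
// honouring conditional requests then issues a conditional GET instead
// of serving silently from cache. The date is arbitrary -- any date
// clearly in the past works.
function staleIfModifiedSince() {
  return ["If-Modified-Since", "Sat, 01 Jan 2000 00:00:00 GMT"];
}

// Browser usage (illustrative):
// var req = new XMLHttpRequest();
// req.open("GET", "/data.xml", true);
// var h = staleIfModifiedSince();
// req.setRequestHeader(h[0], h[1]);
// req.send(null);
```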
Oct 27 '08 #20
