Lots of noise about user agent strings

Seems developers of mobile applications are pretty much devoted to UA
sniffing:

<URL: http://wurfl.sourceforge.net/vodafonerant/index.htm >
--
Rob
Jun 27 '08 #1
RobG wrote:
Seems developers of mobile applications are pretty much devoted
to UA sniffing:

<URL: http://wurfl.sourceforge.net/vodafonerant/index.htm >
Yes, it appears to be someone who has made a serious mistake complaining
about the reality that made it a mistake. Quoting a bit of that page
shows that quite a few false beliefs were behind the mistake:-

| All the way since the inception of the web, HTTP clients have
| had unique User-Agent (UA) strings which let the server know
| who they were. This mechanism was taken over as-is by mobile
| browser manufacturers. While there have been a few exceptions
| due to device manufacturer's sloppiness, it is accurate to say
| that 99.99% of the devices out there have unique UA strings
| which can be associated to brand, model and a bunch of other
| info about the device properties.

HTTP clients have not had unique UA strings for well over a decade now,
and even UA strings that were distinct from others were explicitly
designed to prevent the server from knowing which client was being used
(hence the "Mozilla" bit at the front of IE's UA header, it was put
there only to mislead servers).

And it is absolutely not the case that 99.99% of devices have unique UA
strings. I may not get to look at mobile device browsers that often but
to date the vast majority of the scriptable browsers I have examined
have had UA strings that were more or less indistinguishable from those
of IE 6.

The bottom line remains that the HTTP specification defines the
User-Agent header as an arbitrary sequence of characters and does not
even require that the sequence of characters be the same for two
consecutive requests, let alone being in any sense unique. Any attempt
to treat something that is specified in that way as a source of
information is going to be a very obvious mistake.
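
The robust alternative on the client side is to test for the facility you
intend to use before using it, rather than guessing at it from the UA
string. A minimal feature detection sketch (the element and method here
are merely illustrative):

var el = document.createElement('div');
if(el && el.getAttribute){
  // This environment provides a - getAttribute - to call; use it
  // without caring which browser claims to be which.
  var title = el.getAttribute('title');
}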

Richard.

Jun 27 '08 #2
RobG wrote:
Seems developers of mobile applications are pretty much devoted to UA
sniffing:

<URL: http://wurfl.sourceforge.net/vodafonerant/index.htm >
A real expert:

"All the way since the inception of the web, HTTP clients have had
unique User-Agent (UA) strings which let the server know who they were."

He should probably try to understand web technologies and not write 43kB
of rant.

Gregor
--
http://photo.gregorkofler.at ::: landscape and travel photography
http://web.gregorkofler.com ::: my JS playground
http://www.image2d.com ::: picture agency for the alpine region
Jun 27 '08 #3
VK
On May 27, 2:23 pm, Gregor Kofler <use...@gregorkofler.at> wrote:
RobG wrote:
Seems developers of mobile applications are pretty much devoted to UA
sniffing:
<URL:http://wurfl.sourceforge.net/vodafonerant/index.htm>

A real expert:

"All the way since the inception of the web, HTTP clients have had
unique User-Agent (UA) strings which let the server know who they were."
What objection do you have to that?
The User-Agent string is intended to let the server know who is requesting
the page; this is why it was created and how it has been used since
Mosaic. Is your version that it is an NCSA-invented beatifying
ornament of the HTTP request? Sorry to say, that version is utterly
wrong, though semi-poetic.

User-Agent strings are unique to each browser and each browser version unless
the browser code is manually reverse-engineered and altered by the end
user in violation of the EULA. Such a situation is definitely possible, but
serious solutions do not normally account for users surfing the Web with
hacked software, unless it is a special statistics collector.

The Vodafone problem is a problem causing money loss to the involved
MMS / media push providers. The problem is not fatal because the
agent info is not removed but only reformatted - yet such things should
not be allowed anyway; the intermediary servers must not be rewriting
HTTP requests. Otherwise one day, by typing example.com, one will end up
at example.net because some intermediary server's admin decides
that example.net is better as the target address. It is easy to start
and much harder to stop, so yes, such things should be squeezed out
right at the beginning.
Jun 27 '08 #4
VK wrote on 27.05.2008 15:50:
On May 27, 2:23 pm, Gregor Kofler <use...@gregorkofler.at> wrote:
>RobG wrote:
>>Seems developers of mobile applications are pretty much devoted to UA
sniffing:
<URL:http://wurfl.sourceforge.net/vodafonerant/index.htm>
A real expert:

"All the way since the inception of the web, HTTP clients have had
unique User-Agent (UA) strings which let the server know who they were."

What objection do you have to that?
The User-Agent string is intended to let the server know who is requesting
the page; this is why it was created and how it has been used since
Mosaic. Is your version that it is an NCSA-invented beatifying
ornament of the HTTP request? Sorry to say, that version is utterly
wrong, though semi-poetic.

User-Agent strings are unique to each browser and each browser version unless
the browser code is manually reverse-engineered and altered by the end
user in violation of the EULA. Such a situation is definitely possible, but
For a while, Opera shipped in a default config cloning IE6 to avoid
user problems.
serious solutions do not normally account for users surfing the Web with
hacked software, unless it is a special statistics collector.
I don't think all those Opera users hacked their software ;-)

--
Kind regards
Holger Jeromin
Jun 27 '08 #5
VK
On May 27, 6:48 pm, Holger Jeromin <news03_2...@katur.de> wrote:
For a while, Opera shipped in a default config cloning IE6 to avoid
user problems.
Opera never shipped with a User-Agent string _cloning_ any existing IE
User-Agent string. In some older releases Opera's User-Agent string was
partially altered to contain some keywords most often used for
server-side or client-side IE detection, so it had "MSIE" and
"Microsoft Internet Explorer" in it. It was still vendor-specific
enough to detect Opera if one was aiming to detect exactly Opera and
not IE.

Opera is neither alone nor the champion in this producer-humiliating
activity. The all-time best in my collection remains some
releases of Safari with the User-Agent string - and I'm not kidding -
"Mozilla/5.0 MSIE Microsoft Internet Explorer like Gecko; KHTML..."
with the part after KHTML giving the actual Safari-specific info.
Sometimes when I am depressed this "Microsoft Internet Explorer like
Gecko" cheers me up. :-)

In any case, the User-Agent spoofing business faded out a long time ago,
as the brand-damaging impact proved to be much higher than any
possible immediate benefits.

On the client side there is also the navigator.vendor string, which is
easier to parse for the producer name. It is not relevant to the
original Vodafone topic.
Jun 27 '08 #6
VK wrote:
What objection do you have to that?
The User-Agent string is intended to let the server know who is requesting
the page; this is why it was created and how it has been used since
Mosaic. Is your version that it is an NCSA-invented beatifying
ornament of the HTTP request? Sorry to say, that version is utterly
wrong, though semi-poetic.
It isn't even "beautifying".
User-Agent strings are unique to each browser and each browser version unless
the browser code is manually reverse-engineered and altered by the end
user in violation of the EULA.
A-ha. How come Firefox explicitly allows me to set my UA
identification string to whatever I want?

[fantasizing what is allowed and what not snipped]

Gregor
--
http://photo.gregorkofler.at ::: landscape and travel photography
http://web.gregorkofler.com ::: my JS playground
http://www.image2d.com ::: picture agency for the alpine region
Jun 27 '08 #7
VK
On May 27, 8:55 pm, Gregor Kofler <use...@gregorkofler.at> wrote:
A-ha. How come Firefox explicitly allows me to set my UA
identification string to whatever I want?
You are sounding like a child, really: "Look ma', I have just pulled
a wheel off my new toy car! Am I cool or what?" :-)

I have no intention of stopping you from your hacking exercises. After
Firefox is done
http://www.beatnikpad.com/archives/2...ent-in-firefox
please feel free to start pulling off another wheel :-)
http://www.pctools.com/guides/registry/detail/799/

As I said before, it is perfectly possible in one way or another for
absolutely any browser on the market. It is another question that it has
no relation to real Web development. The number of users going
through a User-Agent string change doesn't exceed the number of
other atypically acting groups of Web surfers - Lynx users, self-made
browser users, officially registered psycho cases with Internet
access - and other inevitable small shrinkage one has to expect in any
business involving a large number of people.

There is one puzzling point that bothers me while reading discussions
like this one. It is a bit OT to the OP but overall fits well into
User-Agent string questions.
Let us imagine that cell phone radio pollution indeed had some
unexpected brain-cell damage effect - so every single user in the
world, first thing of all, finds the appropriate hack for her
preferred browser and changes the current User-Agent string to "There
is no Web, there is only Xul". Thereby server-side detection and
User-Agent detection as such become totally unreliable. The only hope
is client-side feature detection (and a few remaining sane doctors
searching for a remedy for the brain disease). So the dream of some came
true. Cool. I just have one small question:
Look at the User-Agent hacks for Firefox or for IE. Now look at this
chunk of code:
document.getFoobar = new Function;
window.alert(typeof document.getFoobar);
or
document.getFoobar = new Object;
window.alert(typeof document.getFoobar);
So now the question: who decided, and why, that the much more labor-
and skill-intensive procedure will most probably be used - but an
easy-as-a-moo-cow runtime feature spoofing never will be? Was it some
common "internal feeling", or was there at least once a reasonable
discussion on the subject? What were the arguments? Thank you in
advance.
Jun 27 '08 #8
VK wrote:
On May 27, 8:55 pm, Gregor Kofler <use...@gregorkofler.at> wrote:
>A-ha. How come Firefox explicitly allows me to set my UA
identification string to whatever I want?

You are sounding like a child, really: "Look ma', I have just pulled
a wheel off my new toy car! Am I cool or what?" :-)

I have no intention of stopping you from your hacking exercises.
As I said before, it is perfectly possible in one way or another for
absolutely any browser on the market.
In case you forgot, 4 hours 21 minutes earlier you stated:

"User-Agent are unique to each browser and each browser version unless
the browser code is manually reverse-engineered and altered by the end
user in violence of EULA."

--
http://photo.gregorkofler.at ::: landscape and travel photography
http://web.gregorkofler.com ::: my JS playground
http://www.image2d.com ::: picture agency for the alpine region
Jun 27 '08 #9
On May 27, 8:23 pm, Gregor Kofler <use...@gregorkofler.at> wrote:
RobG wrote:
Seems developers of mobile applications are pretty much devoted to UA
sniffing:
<URL:http://wurfl.sourceforge.net/vodafonerant/index.htm>

A real expert:

"All the way since the inception of the web, HTTP clients have had
unique User-Agent (UA) strings which let the server know who they were."

He should probably try to understand web technologies and not write 43kB
of rant.
The complaint is based on the belief that the UA string is the only
viable way to reliably deliver web content and downloads to mobile
devices. The owners of these sites want users to be able to download
games, software, ringtones, etc. to their mobile devices and have
confidence that they should work.

As a result, they have a database of thousands of UA strings that are
used to identify the browser and device and attempt to deliver
appropriate content.

It seems to me that they've ignored the simplest of solutions, which
might include delivering small test downloads so the user can discover
what works on their device (or not), or to just ask the user what
device they are using.

I don't have any experience in developing or using such sites, I was
wondering if anyone here has and can comment on the situation.

--
Rob
Jun 27 '08 #10
On May 28, 3:37 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:

[snip]
That illustrates why UA strings are not a viable means of identifying
web browsers
If you were a system administrator and you wanted to send gzipped
JavaScript files to save bandwidth, how would you determine which
browsers could accept gzipped files and which could not? I have only
read explanations how to do this with the user agent string. I have
some ideas but have never tried any of them. For example, send a
gzipped file and then a non-gzipped file to see if the first file
worked. I'm curious what you would do or if you have any experience
with this area where user agent string is used.

[snip]

Peter
Jun 27 '08 #11
On May 29, 12:22 pm, Peter Michaux <petermich...@gmail.com> wrote:
On May 28, 3:37 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:

[snip]
That illustrates why UA strings are not a viable means of identifying
web browsers

If you were a system administrator and you wanted to send gzipped
JavaScript files to save bandwidth,
Many people consider bandwidth to be bytes sent by the server, whereas
it should be measured as bits transmitted by the modem after the
addition of network stuff and compression.

I asked about this on the Apache news group (figuring they'd be a
suitably open-minded and knowledgeable lot) and got two responses; the
one I listened to suggested it is pointless zipping files because:

1. modems are optimised to compress text for transmission and so
likely compress files better (or at least no worse) than general
purpose compression programs

2. letting the modem do the work saves CPU effort at both ends

3. if the file is zipped before transmission, subsequent modem
compression may actually result in more data to transmit (though
likely not much more)

Anyhow, here's a link:

<URL:
http://groups.google.com.au/group/al...55117c72c4b5e0
>
how would you determine which
browsers could accept gzipped files and which could not? I have only
read explanations how to do this with the user agent string. I have
some ideas but have never tried any of them. For example, send a
gzipped file and then a non-gzipped file to see if the first file
worked. I'm curious what you would do or if you have any experience
with this area where user agent string is used.
There is an Open Mobile Alliance (OMA) user agent profile
specification that UAs can use to transmit capability and preference
information:

<URL: http://www.openmobilealliance.org/te...20011020-a.pdf
>
It is supposed to be used by developers of WAP and other mobile-
specific sites to identify device characteristics and then serve
appropriate content. The fact that projects like WURFL seem to be
more popular for such things shows that developers don't trust the
profile information to do the job.

Whether UA strings or agent profiles are used, it seems that feature
detection has been abandoned (perhaps it was never seen as a serious
contender) as a strategy for determining mobile UA capability.

Maybe feature detection is employed as a second strategy for minor
irregularities, after device capability has already been defined
(or assumed) fairly precisely (where "precisely" is used in its
mathematical sense, which is quite different from "accurately").
--
Rob
Jun 27 '08 #12
On May 27, 7:11 pm, VK <schools_r...@yahoo.com> wrote:
On May 27, 8:55 pm, Gregor Kofler <use...@gregorkofler.at> wrote:

[Lots of rantage demonstrating a total lack of understanding of how browsers work snipped]
You know, it's better to remain silent and be thought a fool than to
open your mouth and remove all doubt.
Jun 27 '08 #13
On May 29, 9:40 am, Gordon wrote:
On May 27, 7:11 pm, VK wrote:
>[Lots of rantage demonstrating a total lack of understanding
of how browsers work snipped]

You know, it's better to remain silent and be thought a fool
than to open your mouth and remove all doubt.
But would a fool recognise that even when told?
Jun 27 '08 #14
On May 28, 9:27 pm, RobG <rg...@iinet.net.au> wrote:
On May 29, 12:22 pm, Peter Michaux <petermich...@gmail.com> wrote:
On May 28, 3:37 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
[snip]
That illustrates why UA strings are not a viable means of identifying
web browsers
If you were a system administrator and you wanted to send gzipped
JavaScript files to save bandwidth,

Many people consider bandwidth to be bytes sent by the server, whereas
it should be measured as bits transmitted by the modem after the
addition of network stuff and compression.

I asked about this on the Apache news group (figuring they'd be a
suitably open-minded and knowledgeable lot) and got two responses; the
one I listened to suggested it is pointless zipping files because:

1. modems are optimised to compress text for transmission and so
likely compress files better (or at least no worse) than general
purpose compression programs

2. letting the modem do the work saves CPU effort at both ends
I didn't know modems do this. There must be a standard compression
algorithm to ensure the receiver knows how to decompress.

3. if the file is zipped before transmission, subsequent modem
compression may actually result in more data to transmit (though
likely not much more)

Anyhow, here's a link:

<URL:http://groups.google.com.au/group/al...ion/browse_frm...
Hmm. This is quite contrary to the current popular thought about
gzipping JavaScript before sending it over the wire.

Steve Souders works for Yahoo!'s performance team and has run many
experiments. I believe he watches total page load time in Firebug, and
so that would include modem decompression time. You can see in the
Editorial Review section of the following page that there are 14 rules to
speed up a page.

<URL: http://www.amazon.com/dp/0596529309>

One of the rules is to gzip components.

More confusion added to the pile.

Another issue is that files do not need to be compressed "on the fly"
by the server. They can be pre-compressed, and if the client can handle
the compressed version then that is the one sent.

[snip]

Peter
Jun 27 '08 #15
Peter Michaux wrote:
On May 28, 3:37 pm, Richard Cornford wrote:
<snip>
>That illustrates why UA strings are not a viable means of
identifying web browsers

If you were a system administrator and you wanted to send
gzipped JavaScript files to save bandwidth, how would you
determine which browsers could accept gzipped files and
which could not?
The HTTP Accept-Encoding header sent with the request would seem
like the obvious place to start (as that is precisely what it is
for).
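
In outline, and only as a sketch (written as javascript for a Node.js
style server purely for illustration; a real implementation would also
parse q values rather than just test for a bare "gzip" token):

var http = require('http');
var zlib = require('zlib');

http.createServer(function(req, res){
  var accepted = req.headers['accept-encoding'] || '';
  var body = 'window.alert("hello");';
  if(/\bgzip\b/.test(accepted)){
    // The client (or a proxy acting for it) says it can handle gzip.
    res.writeHead(200, {
      'Content-Type': 'text/javascript',
      'Content-Encoding': 'gzip',
      'Vary': 'Accept-Encoding'
    });
    res.end(zlib.gzipSync(body));
  }else{
    // Fall back to the identity encoding.
    res.writeHead(200, {'Content-Type': 'text/javascript'});
    res.end(body);
  }
}).listen(8080);
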
I have only read explanations how to do this with the user
agent string.
Incredible, and incredibly foolish, as HTTP very explicitly allows
proxies to change the encoding. That is, if a client cannot handle
gzip but the proxy can, the proxy can ask the server for gzip, decompress
it and send the identity-encoded result to the client. It could
also do this the other way around, but it would be unlikely
that doing so would be seen as a good idea. And it could also
disregard any client preference for a compressed encoding and only
make identity requests to servers itself.

So a proxy may or may not send on the client's UA string or
substitute an alternative (which does not matter as the UA string
is arbitrary) and it may or may not impose the same encoding
limitations as the client. That would make looking at the UA
string at all in this context extremely foolish. Indeed, more
foolish than ignoring q values in the Accept header when content
negotiating HTML/XHTML.
I have some ideas but have never tried any of them. For
example, send a gzipped file and then a non-gzipped file
to see if the first file worked. I'm curious what you
would do or if you have any experience with this area where
user agent string is used.
I have probably just answered both of those questions(?).

Richard.
Jun 27 '08 #16
Peter Michaux wrote:
On May 28, 3:37 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
>That illustrates why UA strings are not a viable means of identifying
web browsers

If you were a system administrator and you wanted to send gzipped
JavaScript files to save bandwidth, how would you determine which
browsers could accept gzipped files and which could not?
One would scan the Accept-Encoding request header value for "gzip", and then
use gzip(1) or a gzip implementation to compress the message body. There
are libraries like cgi_buffer which are capable of that. Apache 2.0+ has
mod_deflate.
I have only read explanations how to do this with the user agent string.
That is a pity. Had you read RFCs 1945 and 2616 more thoroughly as I think
I recommended to you before, you would also have found the specification of
this header.
PointedEars
--
realism: HTML 4.01 Strict
evangelism: XHTML 1.0 Strict
madness: XHTML 1.1 as application/xhtml+xml
-- Bjoern Hoehrmann
Jun 27 '08 #17
On May 29, 4:19 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
Peter Michaux wrote:
On May 28, 3:37 pm, Richard Cornford wrote:
<snip>
That illustrates why UA strings are not a viable means of
identifying web browsers
If you were a system administrator and you wanted to send
gzipped JavaScript files to save bandwidth, how would you
determine which browsers could accept gzipped files and
which could not?

The HTTP Accept-Encoding header sent with the request would seem
like the obvious place to start (as that is precisely what it is
for).
I believe that the issue is that IE6 claims it can accept gzip but in
actual fact it cannot due to a decompression bug. This bug may only
apply to files over a certain size. This leads to the use of the user
agent string.
I have only read explanations how to do this with the user
agent string.

Incredible, and incredibly foolish, as HTTP very explicitly allows
proxies to change the encoding. That is, if a client cannot handle
gzip but the proxy can, the proxy can ask the server for gzip, decompress
it and send the identity-encoded result to the client. It could
also do this the other way around, but it would be unlikely
that doing so would be seen as a good idea. And it could also
disregard any client preference for a compressed encoding and only
make identity requests to servers itself.

So a proxy may or may not send on the client's UA string or
substitute an alternative (which does not matter as the UA string
is arbitrary) and it may or may not impose the same encoding
limitations as the client. That would make looking at the UA
string at all in this context extremely foolish. Indeed, more
foolish than ignoring q values in the Accept header when content
negotiating HTML/XHTML.
Interesting. I do need to read these documents more.

[snip]

Thanks,
Peter
Jun 27 '08 #18
In comp.lang.javascript message
<4f28e817-0351-4268-928e-73f7590011ee@j33g2000pri.googlegroups.com>,
Wed, 28 May 2008 21:27:09, RobG <rg***@iinet.net.au> posted:
>
Many people consider bandwidth to be bytes sent by the server, whereas
it should be measured as bits transmitted by the modem after the
addition of network stuff and compression.
Not necessarily. My pages are (almost all) small enough that, with any
reasonably recent modem, the download time itself will not upset any
possibly-significant readers. I would support efficiency of transfer,
on general grounds. But what is most important to me is the transfer
count maintained, erratically, by the server system, because there is a
monthly limit.

For any in a similar situation: I put in a ROBOTS.TXT file, denying
access to all but the Home Page INDEX.HTM. Over the course of a month
(during which I re-enabled robot access to some categories, and restored
some hidden topics), access was down by three quarters and still
dropping. I disabled ROBOTS.TXT a fortnight ago, and access had doubled
(to half the original) and is still rising.

--
(c) John Stockton, nr London UK. ?@merlyn.demon.co.uk IE7 FF2 Op9 Sf3
news:comp.lang.javascript FAQ <URL:http://www.jibbering.com/faq/index.html>.
<URL:http://www.merlyn.demon.co.uk/js-index.htm> jscr maths, dates, sources.
<URL:http://www.merlyn.demon.co.uk/TP/BP/Delphi/jscr/&c, FAQ items, links.
Jun 27 '08 #19
Peter Michaux wrote:
On May 28, 9:27 pm, RobG <rg...@iinet.net.au> wrote:
>On May 29, 12:22 pm, Peter Michaux <petermich...@gmail.com> wrote:
>>On May 28, 3:37 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:

1. modems are optimised to compress text for transmission and so
likely compress files better (or at least no worse) than general
purpose compression programs
That's dubious. Modem compression has to be real-time, whereas
general-purpose compression is often run out-of-band. When you gzip,
you usually don't do it while sending the data.

Consequently, modem compression (LAP-M with BTLZ, V.44, MNP-5, or
whatever) has to make different trade-offs than general-purpose
compression. The modem compression standards use smaller dictionaries
and windows than the most aggressive general-purpose LZ compressors
(eg level-9 gzip).

For that matter, the modem compression standards I'm familiar with are
not optimized for text; they're general-purpose adaptive entropy encoders.
>2. letting the modem do the work saves CPU effort at both ends
True, unless you're using a software modem (like the so-called
"Winmodems").
I didn't know modems do this. There must be a standard compression
algorithm to ensure the receiver knows how to decompress.
Yes, since the late 1980s or early 1990s. (I don't recall the exact
history and a trivial Google search didn't turn one up in the first
few hits.) Look up MNP-5 (an early, widely-supported proprietary
protocol), V.42bis (the first ITU standard that included compression),
BTLZ (the version of Lempel-Ziv used in V.42bis), and V.44 (a later
and more aggressive compressor).

Compression for modems came along shortly after decent synchronous
protocols (most notably LAPM, an asymmetric HDLC protocol) were
introduced, getting rid of the async framing overhead and allowing for
decent blocking of data.
>3. if the file is zipped before transmission, subsequent modem
compression may actually result in more data to transmit (though
likely not much more)
Any growth should be negligible. Modem compression protocols have
uncompressed modes.
Hmm. This is quite contrary to the current popular thought about
gzipping JavaScript before sending it over the wire.
Actually, it isn't, if you study the subject in a bit more depth.

First, as I explained above, out-of-band compression with
general-purpose compressors typically will yield better compression
than what a POTS modem will achieve.

Second, many people are not using POTS modems for their connections.
Sometimes they're on uncompressed LANs. Sometimes they're using
high-speed (so-called "broadband", though that's a misnomer)
connections, like cable or DSL or FiOS. I'll admit that I haven't
looked into what kinds of compression are typically done on those
networks, but simply taking a bunch of received wisdom about POTS
modems and assuming it applies everywhere would be foolish.

Finally, precompressing the payload may have other performance
effects, because you produce smaller TCP segments. Besides saving
somewhat on TCP and IP overhead (probably negligible), you may improve
pacing (particularly if the client or server has poorly-written code
that is vulnerable to things like Nagle/Delayed-ACK Interaction),
reduce stack overhead on both ends, etc.
Steve Souders works for Yahoo!'s performance team and has made many
experiments.
There's a huge body of literature on TCP/IP performance. A handful of
experiments by "Yahoo!'s performance team" might give some decent
general guidelines, but they're not much better ground for
generalization than the "modems compress" folklore is.

The real rule is that there is no set of rules that adequately covers
all situations. If you find there's a performance problem for a
particular case, you can investigate that and often improve it; and
your improvements may result in better performance for most or all of
your users. But blanket recommendations like "compress Javascript" (or
don't) are the litanies of the cargo cults.

--
Michael Wojcik
Micro Focus
Rhetoric & Writing, Michigan State University
Jun 27 '08 #20
On May 29, 4:22 am, Peter Michaux <petermich...@gmail.com> wrote:
>
If you were a system administrator and you wanted to send gzipped
JavaScript files to save bandwidth, how would you determine which
browsers could accept gzipped files and which could not?
Looking at the Accept-Encoding header.
Jun 27 '08 #21
VK
On Jun 1, 12:20 pm, Jorge <jo...@jorgechamorro.com> wrote:
On May 29, 4:22 am, Peter Michaux <petermich...@gmail.com> wrote:
If you were a system administrator and you wanted to send gzipped
JavaScript files to save bandwidth, how would you determine which
browsers could accept gzipped files and which could not?

Looking at the Accept-Encoding header.
In the context of the discussion I dare to question what principal
difference one sees between altering, say, the (Gecko) about:config
User-Agent string and altering the network.http.accept-encoding string. If
one doesn't trust one chunk of info sent by the agent, why so
much trust in another chunk sent in the same request? ;-)
With reliable stats proving the point of view, please ;-)
Jun 27 '08 #22
Peter Michaux wrote:
On May 29, 4:19 pm, Richard Cornford wrote:
>Peter Michaux wrote:
<snip>
>>If you were a system administrator and you wanted to send
gzipped JavaScript files to save bandwidth, how would you
determine which browsers could accept gzipped files and
which could not?

The HTTP Accept-Encoding header sent with the request would
seem like the obvious place to start (as that is precisely
what it is for).

I believe that the issue is that IE6 claims it can accept
gzip but in actual fact it cannot due to a decompression bug.
IE 6 absolutely can accept gzip encoding, else that would have been
spotted long ago and be very well known by now.
This bug may only apply to files over a certain size.
Are we in the realm of rumour and folklore or are there demonstrable
facts behind this assertion? Such as the precise size of the (compressed
or uncompressed) files that are supposed to be a problem, a Microsoft KB
article about it, a test case created by someone whose analytical skills
run to real cause and effect identification?

Beyond my normal cynicism, one of the reasons that I suspect this is BS
is that at work we have a QA department that delights in trying to break
our web applications (which is, after all, their job) and one of the
ways they try to do that is by overwhelming the browser with huge
downloads. The HTTPS test servers are set up to serve gzipped content
when they think they can, and IE 6 certainly is in the test set of
browsers used, so not having seen any evidence of this being a problem
suggests that it is not (or the problematic file size is so very large
that there is no real issue).
This leads to the use of the user agent string.
<snip>

But you only have to see that other browsers send default UA headers
that are indistinguishable from that of IE 6 to know that would be a
poor approach. If you had to make an assumption based on a request
header I would probably pick on IE's unusual Accept header. How many
other browsers would be willing to accept the set of Microsoft specific
formats that IE says it would prefer (say Word and Access documents)?
And even if some other browser said it could handle that content would
those types come out with the same relative q values as in IE 6's Accept
header? That doesn't entirely solve the problem because IE's Accept
headers can be modified, but it is better than looking at something that
is known to be deliberately spoofed by other browsers.
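
Something along these lines, as a sketch only (the two media types tested
for are assumptions about what a default IE 6 install advertises, to be
verified against real request logs before being relied upon):

function looksLikeDefaultIe6Accept(accept){
  accept = accept || '';
  // Few, if any, other browsers volunteer Microsoft Office types.
  return (
    accept.indexOf('application/vnd.ms-excel') != -1 &&
    accept.indexOf('application/msword') != -1
  );
}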

Richard.

Jun 27 '08 #23
VK wrote:
On Jun 1, 12:20 pm, Jorge wrote:
>On May 29, 4:22 am, Peter Michaux wrote:
>>If you were a system administrator and you wanted to send
gzipped JavaScript files to save bandwidth, how would you
determine which browsers could accept gzipped files and
which could not?

Looking at the Accept-Encoding header.

In the context of the discussion I dare to question what
principal difference one sees between altering, say, the
(Gecko) about:config User-Agent string and altering the
network.http.accept-encoding string. If one doesn't
trust one chunk of info sent by the agent, why so
much trust in another chunk sent in the same request? ;-)
Who is proposing not trusting the User-Agent header? The HTTP
specification defines it as an arbitrary sequence of characters that
does not even need to be consistent over time, and so as being something
that should not be treated as a source of information. And the proposal
being made here is to trust it to be precisely what it is defined as
being; not a source of information.
With reliable stats proving the point of view, please ;-)
"Stats" are not capable of "proving" anything.

Richard.

Jun 27 '08 #24
VK
On Jun 1, 10:19 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
Peter Michaux wrote:
On May 29, 4:19 pm, Richard Cornford wrote:
Peter Michaux wrote:
<snip>
>If you were a system administrator and you wanted to send
gzipped JavaScript files to save bandwidth, how would you
determine which browsers could accept gzipped files and
which could not?
The HTTP Accept-Encoding header sent with the request would
seem like the obvious place to start (as that is precisely
what it is for).
I believe that the issue is that IE6 claims it can accept
gzip but in actual fact it cannot due to a decompression bug.

IE 6 absolutely can accept gzip encoding, else that would have been
spotted long ago and be very well known by now.
This bug may only apply to files over a certain size.

Are we in the realm of rumour and folklore or are there demonstrable
facts behind this assertion? Such as the precise size of the (compressed
or uncompressed) files that are supposed to be a problem, a Microsoft KB
article about it, a test case created by someone whose analytical skills
run to real cause and effect identification?

Beyond my normal cynicism, one of the reasons that I suspect this is BS
is that at work we have a QA department that delights in trying to break
our web applications (which is, after all, their job) and one of the
ways they try to do that is by overwhelming the browser with huge
downloads. The HTTPS test servers are set up to serve gzipped content
when they think they can, and IE 6 certainly is in the test set of
browsers used, so not having seen any evidence of this being a problem
suggests that it is not (or the problematic file size is so very large
that there is no real issue).
As a devil's advocate I would suggest your QA department test IE6
SP1 without the Q837251 patch installed ;-)
That is in reference to http://support.microsoft.com/kb/837251
But if they come back victorious you may point out that users not
updating their IE or Windows for a year and a half deserve every bit
of the trouble they are getting as the result.

Jun 27 '08 #25
VK wrote:
On Jun 1, 2:16 pm, Lasse Reichstein Nielsen <l...@hotpop.com> wrote:
>VK <schools_r...@yahoo.com> writes:
>>In the context of the discussion I dare to question what principal
difference one sees between altering, say, the (Gecko) about:config
User-Agent string and altering the network.http.accept-encoding string. If
one doesn't trust one chunk of info sent by the agent, why so
much trust in another chunk sent in the same request? ;-)
The premises of your argument are false. The problem with user-agent
feature detection has never been that the user-agent string is
"untrustworthy"; there is no trust relationship between the user agent
and the server, so that attribute does not apply. User-agent feature
detection is a broken mechanism because it makes incorrect inferences,
especially false negatives that restrict UAs from receiving content
they're perfectly capable of handling.
>Because users have, or have had, reason to fake the User-Agent string

Correction: not users, but some browser producers.
A bogus dichotomy. The user agent is an agent of the user.

User-agent values often are set by the user; it doesn't matter what
tool enables them to do so.
The only other
cases I am aware of are experienced, programming-involved users
removing some data from the distributor section of the User-Agent string
in IE.
It is remotely possible that your experience does not cover the entire
set of applicable cases.
And do we have a site where User-Agent spoofing would help? Like: with
this string, welcome; with that one, go away? I mean, being reasonable,
on a browser no more than 6-7 years old?
GIYF. A trivial search turned up complaints about [1], for example.
Look at the Javascript used by that page, particularly the computation
of the variables is_ie and is_nav, and how they're used in functions
like displayAll().
The Browser Wars were a fight between two; nobody cared about some 3rd
or 4th.
Except the people who did, of course. And the people who cared about
standards.
It is not fair to blame developers for not accounting for
some possible neutral 3rd parties that would come someday from
somewhere.
Oh yes it is. That's the whole point of standards and
interoperability, and those have always been explicit goals for the
web, just like most other Internet applications.

[1] http://www.trader.ca/search/default....goryid=1&CAT=1

--
Michael Wojcik
Micro Focus
Rhetoric & Writing, Michigan State University
Jun 27 '08 #26
On Jun 1, 9:47 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
My Spanish is not good enough, so I have to use an online translator.
Trying Google Translate, this leads to:

| Mislead the server to receive a file. Gz with which you can not
| do anything?
| Why Ibas to want to do that?

Which IMHO would beg the question if you confused "receive" and "send".
Unless, of course, the translation is incorrect. However, since English and
Spanish are both Indo-European languages, I would assume the common root of
"recibir" and "receive" to be of meaning.
Yes, recibir and receive mean the same thing.

You fool the server:
you send the fake Accept-Encoding: gzip request header,
you receive the answer gzipped,
yet you don't know how to deal with gzips,
you are the browser.

HTH, Thomas.

Regards,
--Jorge.
Jun 27 '08 #27
"Richard Cornford" <Ri*****@litotes.demon.co.ukwrites:
Peter Michaux wrote:
>On May 29, 4:19 pm, Richard Cornford wrote:
>>Peter Michaux wrote:
<snip>
>>>If you were a system administrator and you wanted to send
gzipped JavaScript files to save bandwidth, how would you
determine which browsers could accept gzipped files and
which could not?

The HTTP Accept-Encoding header sent with the request would
seem like the obvious place to start (as that is precisely
what it is for).

I believe that the issue is that IE6 claims it can accept
gzip but in actual fact it cannot due to a decompression bug.

IE 6 absolutely can accept gzip encoding, else that would have been
spotted long ago and be very well known by now.
>This bug may only apply to files over a certain size.

Are we in the realm of rumour and folklore or are there demonstrable
facts behind this assertion? Such as the precise size of the
(compressed or uncompressed) files that are supposed to be a problem,
a Microsoft KB article about it, a test case created by someone whose
analytical skills run to real cause and effect identification?
Now this is hearsay, since I haven't dealt with the problem myself (a
coworker of mine did), but there appears to be some issue with some
versions of IE6 combined with some (Microsoft, IIRC) HTTP proxy server
that does indeed send out accept-encoding headers for gzip while
messing up the download (possibly the proxy doesn't send on the
encoding headers, or maybe it adds accept-encoding headers when
they're not reliable; I'm not sure). AFAIK IE6 by itself works fine,
though, and Win XP SP2 or installing IE7 also seems to fix the issue,
even with a proxy server.

--
Joost Diepenmaat | blog: http://joost.zeekat.nl/ | work: http://zeekat.nl/
Jun 27 '08 #28
On Jun 1, 11:48 am, VK <schools_r...@yahoo.com> wrote:
On Jun 1, 10:19 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
Peter Michaux wrote:
On May 29, 4:19 pm, Richard Cornford wrote:
>Peter Michaux wrote:
<snip>
>>If you were a system administrator and you wanted to send
>>gzipped JavaScript files to save bandwidth, how would you
>>determine which browsers could accept gzipped files and
>>which could not?
>The HTTP Accept-Encoding header sent with the request would
>seem like the obvious place to start (as that is precisely
>what it is for).
I believe that the issue is that IE6 claims it can accept
gzip but in actual fact it cannot due to a decompression bug.
IE 6 absolutely can accept gzip encoding, else that would have been
spotted long ago and be very well known by now.
This bug may only apply to files over a certain size.
Are we in the realm of rumour and folklore or are there demonstrable
facts behind this assertion? Such as the precise size of the (compressed
or uncompressed) files that are supposed to be a problem, a Microsoft KB
article about it, a test case created by someone whose analytical skills
run to real cause and effect identification?
Beyond my normal cynicism, one of the reasons that I suspect this is BS
is that at work we have a QA department that delights in trying to break
our web applications (which is, after all, their job) and one of the
ways they try to do that is by overwhelming the browser with huge
downloads. The HTTPS test servers are set up to serve gzipped content
when they think they can, and IE 6 certainly is in the test set of
browsers used, so not having seen any evidence of this being a problem
suggests that it is not (or the problematic file size is so very large
that there is no real issue).

As a devil's advocate I would suggest your QA department test IE6
SP1 without the Q837251 patch installed ;-)
That is in reference to http://support.microsoft.com/kb/837251
But if they come back victorious you may point out that users not
updating their IE or Windows for a year and a half deserve every bit
of the trouble they are getting as the result.
This must have been it. It is good to know the issue is gone in new or
updated browsers but the general problem still exists. The server
cannot feature test the client directly (at least not easily) and does
need to rely on the strings it is sent.

Peter
Jun 27 '08 #29
Peter Michaux wrote:
On Jun 1, 11:48 am, VK wrote:
>On Jun 1, 10:19 pm, Richard Cornford wrote:
>>Peter Michaux wrote:
On May 29, 4:19 pm, Richard Cornford wrote:
Peter Michaux wrote:
<snip>
>>>I believe that the issue is that IE6 claims it can accept
gzip but in actual fact it cannot due to a decompression bug.
<snip>
>>>This bug may only apply to files over a certain size.
>>Are we in the realm of rumour and folklore or are there
demonstrable facts behind this assertion? ...
<snip>
>That is in reference to http://support.microsoft.com/kb/837251
<snip>
This must have been it. It is good to know the issue is gone
in new or updated browsers but the general problem still exists.
The Microsoft KB article asserts that the issue was introduced in a
security update for IE, and then fixed in a patch, so the issue is with
IE installations that have had some updates but are not up to date, or
non-updated installations of versions released between the introduction
of the security update and the issuing of the patch. Microsoft don't
seem very keen to let the reader know which security update introduced
the issue (so we can know the length of the interval between its release
and the patch that fixed its bugs) or the size of the downloads in
question.
The server cannot feature test the client directly (at least not
easily) and does need to rely on the strings it is sent.
But the Accept-Encoding string, not the User-Agent string.

Richard.

Jun 27 '08 #30
VK
On Jun 15, 8:11 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
Peter Michaux wrote:
On Jun 1, 11:48 am, VK wrote:
On Jun 1, 10:19 pm, Richard Cornford wrote:
Peter Michaux wrote:
On May 29, 4:19 pm, Richard Cornford wrote:
Peter Michaux wrote:
<snip>
>>I believe that the issue is that IE6 claims it can accept
gzip but in actual fact it cannot due to a decompression bug.
<snip>
>>This bug may only apply to files over a certain size.
>Are we in the realm of rumour and folklore or are there
demonstrable facts behind this assertion? ...
<snip>
That is in reference to http://support.microsoft.com/kb/837251
<snip>
This must have been it. It is good to know the issue is gone
in new or updated browsers but the general problem still exists.

The Microsoft KB article asserts that the issue was introduced in a
security update for IE, and then fixed in a patch, so the issue is with
IE installations that have had some updates but are not up to date, or
non-updated installations of versions released between the introduction
of the security update and the issuing of the patch. Microsoft don't
seem very keen to let the reader know which security update introduced
the issue (so we can know the length of the interval between its release
and the patch that fixed its bugs) or the size of the downloads in
question.
The server cannot feature test the client directly (at least not
easily) and does need to rely on the strings it is sent.

But the Accept Encoding string not the User Agent string.
and I keep asking within this thread why one request header has to be
particularly mistrusted while another request header has to be
particularly trusted - given the same amount of work involved to
alter or to spoof either one client-side?

the second question everyone has failed to answer so far is why User-Agent
spoofing has to be considered a decisive reason not to use the User-
Agent, while client caps spoofing is considered not a big deal. In
the realm of practical programming, the situation is exactly the opposite.
compare for instance the listed procedures to alter the User-Agent for,
say, Gecko or IE with a code like:
window.ActiveXObject = new Function;
or
window.opera = new Object;
(shudder + surprised look on my face)
Jun 27 '08 #31
On Jun 15, 9:11 am, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
Peter Michaux wrote:
On Jun 1, 11:48 am, VK wrote:
On Jun 1, 10:19 pm, Richard Cornford wrote:
Peter Michaux wrote:
On May 29, 4:19 pm, Richard Cornford wrote:
Peter Michaux wrote:
<snip>
>>I believe that the issue is that IE6 claims it can accept
gzip but in actual fact it cannot due to a decompression bug.
<snip>
>>This bug may only apply to files over a certain size.
>Are we in the realm of rumour and folklore or are there
demonstrable facts behind this assertion? ...
<snip>
That is in reference to http://support.microsoft.com/kb/837251
<snip>
This must have been it. It is good to know the issue is gone
in new or updated browsers but the general problem still exists.

The Microsoft KB article asserts that the issue was introduced in a
security update for IE, and then fixed in a patch, so the issue is with
IE installations that have had some updates but are not up to date, or
non-updated installations of versions released between the introduction
of the security update and the issuing of the patch. Microsoft don't
seem very keen to let the reader know which security update introduced
the issue (so we can know the length of the interval between its release
and the patch that fixed its bugs) or the size of the downloads in
question.
So during IE's Accept Encoding lying period (or perhaps even now since
there may still be browsers out there partly updated), would you
simply not send gzipped content at all because the Accept Encoding is
not reliable? Or would you use the User Agent string to save the
servers potentially quite a lot of their load? Or is there something
better to be done?

Peter
Jun 27 '08 #32
Peter Michaux wrote:
"Richard Cornford" wrote:
>Peter Michaux wrote:
>>On Jun 1, 11:48 am, VK wrote:
[http://support.microsoft.com/kb/837251]
This must have been it. It is good to know the issue is gone
in new or updated browsers but the general problem still exists.
The Microsoft KB article asserts that the issue was introduced in a
security update for IE, and then fixed in a patch, so the issue is with
IE installations that have had some updates but are not up to date, or
non-updated installations of versions released between the introduction
of the security update and the issuing of the patch. [...]

So during IE's Accept Encoding lying period (or perhaps even now since
there may still be browsers out there partly updated), would you
simply not send gzipped content at all because the Accept Encoding is
not reliable?
Date Published: 5/5/2004
Or would you use the User Agent string to save the servers potentially
quite a lot of their load?
The User-Agent header value does not need to show the UA's patch level.
Or is there something better to be done?
Not to use IEeek.
PointedEars
Jun 27 '08 #33
On Jun 18, 12:41 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
Peter Michaux wrote:
"Richard Cornford" wrote:
Peter Michaux wrote:
On Jun 1, 11:48 am, VK wrote:
[http://support.microsoft.com/kb/837251]
This must have been it. It is good to know the issue is gone
in new or updated browsers but the general problem still exists.
The Microsoft KB article asserts that the issue was introduced in a
security update for IE, and then fixed in a patch, so the issue is with
IE installations that have had some updates but are not up to date, or
non-updated installations of versions released between the introduction
of the security update and the issuing of the patch. [...]
So during IE's Accept Encoding lying period (or perhaps even now since
there may still be browsers out there partly updated), would you
simply not send gzipped content at all because the Accept Encoding is
not reliable?

Date Published: 5/5/2004
Or would you use the User Agent string to save the servers potentially
quite a lot of their load?

The User-Agent header value does not need to show the UA's patch level.
I believe that the general technique just sends non-gzipped content to all
user agents claiming to be IE less than version seven. Given that
other browsers now have a large share of the market, the technique
could still lead to big savings.
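
Roughly like this, as a sketch of the technique as described (not an
endorsement - the MSIE version test is exactly the kind of UA parsing
this thread argues against):

function shouldSendGzipped(userAgent, acceptEncoding){
  if(!/\bgzip\b/.test(acceptEncoding || '')){
    return false; // the client did not offer gzip at all
  }
  var m = /MSIE (\d+)/.exec(userAgent || '');
  if(m && parseInt(m[1], 10) < 7){
    return false; // claims to be IE 6 or older: send identity to be safe
  }
  return true;
}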

Peter
Jun 27 '08 #34
Peter Michaux wrote:
Thomas 'PointedEars' Lahn wrote:
>Peter Michaux wrote:
>>"Richard Cornford" wrote:
Peter Michaux wrote:
On Jun 1, 11:48 am, VK wrote:
>[http://support.microsoft.com/kb/837251]
This must have been it. It is good to know the issue is gone
in new or updated browsers but the general problem still exists.
The Microsoft KB article asserts that the issue was introduced in a
security update for IE, and then fixed in a patch, so the issue is with
IE installations that have had some updates but are not up to date, or
non-updated installations of versions released between the introduction
of the security update and the issuing of the patch. [...]
So during IE's Accept Encoding lying period (or perhaps even now since
there may still be browsers out there partly updated), would you
simply not send gzipped content at all because the Accept Encoding is
not reliable?
Date Published: 5/5/2004
>>Or would you use the User Agent string to save the servers potentially
quite a lot of their load?
The User-Agent header value does not need to show the UA's patch level.

I believe that the general technique just sends non-gzipped content to all
user agents claiming to be IE less than version seven. Given that
other browsers now have a large share of the market, the technique
could still lead to big savings.
Given that this patch was released more than four years ago, you would
be supporting faulty software. I would consider this a Bad Idea. I don't
see any savings here.
PointedEars
--
var bugRiddenCrashPronePieceOfJunk = (
navigator.userAgent.indexOf('MSIE 5') != -1
&& navigator.userAgent.indexOf('Mac') != -1
) // Plone, register_function.js:16
Jun 27 '08 #35
Peter Michaux wrote:
On Jun 15, 9:11 am, Richard Cornford wrote:
>Peter Michaux wrote:
>>On Jun 1, 11:48 am, VK wrote:
On Jun 1, 10:19 pm, Richard Cornford wrote:
Peter Michaux wrote:
>On May 29, 4:19 pm, Richard Cornford wrote:
>>Peter Michaux wrote:
<snip>
>>>>>I believe that the issue is that IE6 claims it can accept
>gzip but in actual fact it cannot due to a decompression
>bug.
<snip>
>>>>>This bug may only apply to files over a certain size.
>>>>Are we in the realm of rumour and folklore or are there
demonstrable facts behind this assertion? ...
<snip>
>>>That is in reference to http://support.microsoft.com/kb/837251
<snip>
>>This must have been it. It is good to know the issue is gone
in new or updated browsers but the general problem still exists.

The Microsoft KB article asserts that the issue was introduced in
a security update for IE, and then fixed in a patch, so the issue
is with IE installations that have had some updates but are not
up to date, or non-updated installations of versions released
between the introduction of the security update and the issuing
of the patch. Microsoft don't seem very keen to let the reader
know which security update introduced the issue (so we can know
the length of the interval between its release and the patch
that fixed its bugs) or the size of the downloads in question.

So during IE's Accept Encoding lying period (or perhaps even
now since there may still be browsers out there partly updated),
would you simply not send gzipped content at all because the
Accept Encoding is not reliable?
As with anything else, the issue needs to be pinned down before any
reaction makes sense. We still do not know the nature of the cut-off
point. It is asserted that the affected IE browsers will not properly
handle zipped material above a certain size, but is that size hundreds
of kilobytes, megabytes, tens of megabytes, hundreds of megabytes or
gigabytes?

To start with, once the size is known it is possible to disregard the
issue entirely whenever the server is sending anything smaller than that
limit. And then it is possible for the context to rule that the 'issue'
is moot; for example, in the event that you have a web site where you can
know that all the content that can be served is within the known limit.
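
That is, if the limit were ever published (the constant below is a
placeholder, not a real figure), the server-side decision could be as
simple as:-

var SUSPECT_GZIP_LIMIT = 1024 * 1024; // placeholder; the real cut-off is unknown
function gzipIsSafe(uncompressedByteLength){
  return uncompressedByteLength < SUSPECT_GZIP_LIMIT;
}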

This is why I am so cynical about such reports (or at least one of the
reasons[1]): the true nature of the issue is so important to the
handling of the issue that any report of an issue without the relevant
technical details suggests more a failure of analysis than any real
issue.
Or would you use the User Agent string to save the servers
potentially quite a lot of their load?
If we assume that this size limit is not small, indeed is considerably
larger than anything that would normally be included in/imported by a
web page (which seems reasonable, as Microsoft's QA is unlikely to have
missed this if the limit was that small), then the bulk of the load for
the vast majority of servers is not relevant for this issue at all. So
for the small minority of situations where the thing being delivered is
sufficiently large for this issue to be relevant there are lots of
possibilities. Looking at the UA string would be a long way down the
list of possibilities.
Or is there something
better to be done?
The best thing to do is to understand the true nature of the issue, and
the real context in which you are trying to address that issue.

[1] The reasons for being so cynical about such reports include the
amount of bullshit put about by people who should know better, and the
degree to which fundamentally faulty analysis is used to draw
conclusions and then goes unnoticed by people who swallow those
conclusions.

To illustrate the former:-

<quote
cite="http://ejohn.org/blog/future-proofing-javascript-libraries/">

Additionally, in Internet Explorer, doing object detection checks can,
sometimes, cause actual function executions to occur. For example:

if ( elem.getAttribute ) {
// will die in Internet Explorer
}
That line will cause problems as Internet Explorer attempts to execute
the getAttribute function with no arguments (which is invalid). ... "
</quote>

- is just an obviously false assertion. It is easy to demonstrate as
being false with simple examples such as:-

<html>
<head>
<title></title>
</head>
<body>
<script type="text/javascript">
window.onload = function(){
if(document.body.getAttribute){
alert('Code is fine!');
}
};
</script>
</body>
</html>

- load that into IE and if the above statement were true you would not
be seeing the alert. And it is not even a statement that may have been
true at some time in the past and later fixed in an IE update, because
feature detection has been in regular use for a decade or more and has
included feature testing of getAttribute methods on elements for all of
that time, and yet this supposed issue has never been raised on this
group (which has approximately 30,000 posts/year over that period).

(There was an issue that this may be a "Chinese whispers" representation
of, where, for a short period (until fixed by an update), attempting to
read Element node interface properties from an attribute node would
cause IE 6 to crash, and so reading - getAttribute - on an attribute node
would be an example of that (but that is crashing the browser, not
erroneously executing the method with no argument). But that was never
an issue in cross browser scripting because it simply makes no sense to
attempt to use - get/setAttribute - on an attribute node, and so nobody
attempted to feature detect for the existence of those methods on that
object.)

An illustration of faulty analysis can be found on:-

<URL: http://ejohn.org/blog/most-bizarre-ie-quirk/ >

- where the code:-

setInterval(function(){
alert("hello!");
}, -1);

is used to examine some unique IE behaviour when - setInterval - is given
a millisecond argument of -1. The conclusion drawn is that "The callback
function will be executed every time the user left clicks the mouse,
anywhere in the document.". And then the subsequent discussion goes on
to consider the relationship between this and onclick events (which
would come first, etc.)

That conclusion is utterly false and stems from a significant but
obvious fault in the testing method used to perform the analysis. The
fault is in using - alert - to signal the calling of the callback
function and then using that in conjunction with conclusions about mouse
interactions. The problem being that - alert - blocks javascript
execution and transfers focus to the button on the alert box, both of
which have a huge impact on interactive behaviour in web browsers.

Change the test code to:-

<script type="text/javascript">
var count = 0;
setInterval(function(){
window.status = (++count);
}, -1);
</script>

-, so that the execution of the call-back function is signalled by the
incrementing of a number displayed in the window's status bar, and a
totally different impression of what is happening is revealed. That is,
the call-back function appears to start firing as soon as the mouse
button is depressed (so onmousedown, not onclick) and the function is
called repeatedly until the button is released (or, apparently, a drag
event starts).

For me (so dependent on the rate at which my finger moved, the OS/OS
settings and probably the CPU speed on this computer) the counter has
reached 5 by the time I have completed a normal mouse click. Thus if the
call-back function is executed five times during a click the call-back
function is _not_ "executed every time the user left clicks the mouse".

It does not take much experience (or even much thinking about it) to see
that alert is of limited usefulness when testing mouse interactions, but
still there it is, along with resulting conclusions that are divergent
from reality.

Richard.

Jun 27 '08 #36

This thread has been closed and replies have been disabled. Please start a new discussion.
