
Cross Browser JavaScript Debugging (Moz/IE)

To all Mozilla JS Guru's (IE dudes welcome),

I have spent the last three years developing complex DHTML
applications that must work in IE 5.5sp2+, but I use Mozilla 1.3+** to
do all my development. I have built some cross-browser debuggers so my
users can send me verbose debug dumps. I have had some success but have
come to a roadblock with the basic underlying JavaScript models.

The only way to get a complete stack in IE is to use the
window.onerror handler. At that point, by surfing the
arguments.callee.caller chain, you can navigate the call stack to the
originating point of the error. This works, but it goes against the
first basic rule of debugging, which is to handle the error as close to
the source as possible using try{...}catch(e){...} blocks.
Unfortunately, in Mozilla you cannot surf the stack via
arguments.callee.caller from the window.onerror handler :|

In Mozilla, errors have an error.stack property that contains the full
stack, and it works great, but it is only available if you use try/catch
blocks. However, in IE there is no stack property on the error object,
so if you handle the error with a try/catch you will only know the
stack from the current position (where it is handled) up to the first
level. All calls below the current function/method are lost :[ ...
Example: for the stack a(), b(), c(), d(), e(), f(), with the error at
e() and the try/catch at c(), in IE you can only get a()-b()-c(), and
you will not know whether the error happened at c(), d(), e(), or f().
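The Mozilla-side technique reads like this sketch (hypothetical function names; note that Gecko of that era formatted each frame as name@file:line, while other engines use their own formats, so any parsing has to be engine-specific):

```javascript
// Throw several levels deep, catch near the top, and read the stack
// if the engine provides one (Mozilla did; IE of that era did not).
function f() { throw new Error("boom"); }
function g() { f(); }

function describeError() {
  try {
    g();
  } catch (e) {
    if (typeof e.stack === "string") {
      // Gecko-style engines: the full stack was captured at throw time
      return e.stack;
    }
    // IE of this era: no stack property, only the message survives
    return "no stack: " + e.message;
  }
}
```

In an engine with e.stack, describeError() returns a string mentioning both f and g; in old IE it would fall through to the message-only branch.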

Here are some possible IE / Moz debugging solutions:
1) My current production version - use try/catch blocks, but put
them in as many error-prone places, and as close to the error source
points, as possible. When an error occurs, store the error state in a
global data structure. Throw the error up and ultimately handle it in
window.onerror. At window.onerror, check whether the global debug state
exists, use its data, and possibly traverse the call stack to the
originating call. (In IE, you still lose the stack below the point where
it is handled :[. But in Mozilla I get great debugging :])
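A sketch of this catch-near-the-source approach (all names are hypothetical, and a plain report() function stands in for window.onerror, which cannot be exercised outside a browser):

```javascript
var debugInfo = null;  // global error-state store, populated near the source

function recordAndRethrow(e, where) {
  if (!debugInfo) {                     // keep the record closest to the source
    debugInfo = { message: e.message, where: where, stack: e.stack || null };
  }
  throw e;                              // let the top-level handler see it too
}

function report() {                     // what window.onerror would read
  return debugInfo ? debugInfo.where + ": " + debugInfo.message : "no info";
}

function risky() { throw new Error("boom"); }

function run() {
  try { risky(); } catch (e) { recordAndRethrow(e, "run"); }
}

try { run(); } catch (e) { /* top level: window.onerror territory */ }
```

After the rethrow, report() gives back the state recorded at the nearest handler, which is exactly what the IE side loses without a stack property.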

2) Possibility 1 - Structure the code such that try/catches only
operate in Mozilla. This would need simple server-side code to serve
up the JavaScript with try/catch blocks for Mozilla and comment them
out for IE.

3) Possibility 2 - Find a way to get the error object in Mozilla
when handling the error in window.onerror. I have not found any way to
get this yet, and I am tempted to download the Mozilla source and write
the patch myself.

4) Possibility 3 - Find a way to get the stack from the source
error in IE when using try/catch. IE currently has no stack property
on the error object (a huge design flaw). I would never even dream of
suggesting this as a bug to Microsoft, as I am morally against having
to pay to report bugs. Also, I would never in any event want to see
their source code :P

Any ideas would be greatly appreciated.

TQ

** (See bug http://bugzilla.mozilla.org/show_bug.cgi?id=158592.) This
bug caused problems when traversing the call stack in Mozilla. I
reported it, made a reasonably good case to the Moz team, and it was
eventually fixed (try that with Microsoft). Mozilla 1.3+ (Netscape
7+?) now includes the patch.
Jul 23 '05 #1
16 Replies


On 18 Apr 2004 16:58:56 -0700, de********@yahoo.ca (Java script
Dude) wrote:
I have build some cross browser debuggers so my
users can send me verbose debug dumps. I have some success but have
come to a roadblock with the basic underlying JavaScript models.
I've never found the need for these sorts of call stacks in JS, and
have certainly never found the need for verbose debugging code in
shipped content. It strikes me as suggesting that you're not putting
enough thought into where you're likely to error, and aren't being
defensive enough.
1) My current Production version- use try/catch blocks but put
them in as many error prone places and as close to error source
points.
Surely you should always be catching known errors, and preferably
should be preventing all that are expected. Are you really getting
value out of this debug code?
2) Possibility 1 - Structure code such that try/catches only
operate in Mozilla. This would need simple server side code to serve
Oops, this line has already disqualified you from having a clue:
you cannot identify IE/Mozilla on the server, and by trying you're
just introducing other problems for yourself.
I would never even dream of
suggesting this as a bug to Microsoft as I am morally against having
to pay to report bugs.
You don't need to pay to report bugs to MS.
Any ideas would be greatly appreciated.


Write defensive code, use more try/catches in known areas, use the
onerror approach as a final hope catch.

Jim.
--
comp.lang.javascript FAQ - http://jibbering.com/faq/

Jul 23 '05 #2

> >I have build some cross browser debuggers so my
users can send me verbose debug dumps. I have some success but have
come to a roadblock with the basic underlying JavaScript models.
I've never found the need for these sort of call stacks in JS, and
have certainly never found the need for verbose debugging code in
shipped content, it strikes me as suggesting that you're not putting
enough thought into where you're likely to error, and aren't being
defensive enough.


I am talking about DHTML applications, not pages with JavaScript (DHTML
page/document). Applications imply code that uses libraries and will
need to go below 3 levels in a call stack. Without reading a stack,
anything more than two levels deep is very difficult (and sometimes
impossible) to debug. I have some JavaScript (that's been in production
for years) that goes more than 7 levels deep in the stack. This code may
be complex, but at least it is debuggable.

1) My current Production version- use try/catch blocks but put
them in as many error prone places and as close to error source
points.


Surely you should always be catching known errors, and preferably
should be preventing all that are expected, are you really getting
value out of this debug code?


I agree, more handlers are better for debugging in IE. In Moz, it's
only important to handle the error; how far up the stack does not
matter.

2) Possibility 1 - Structure code such that try/catches only
operate in Mozilla. This would need simple server side code to serve


oops, this line has already disqualified yourself from having a clue,
you cannot identify IE/mozilla on the server, by doing so you're just
introducing yourself other problems.


Clue numero uno: try this page:
http://cyscape.com/products/bhawk/javabean.asp. You will see that it
can detect Mozilla. (JavaBean.asp? What sacrilege!)

I would never even dream of
suggesting this as a bug to Microsoft as I am morally against having
to pay to report bugs.


You don't need to pay to report bugs to MS.


I have tried before, and they always asked me for $ to report the bug;
I refused on moral grounds. Bugzilla just takes the bug and fixes it,
and even includes me in the process :] Me be happy. If I detect a bug
in IE I just work around it; in Moz I report it, because it is not in vain.

Any ideas would be greatly appreciated.


Write defensive code, use more try/catches in known areas, use the
onerror approach as a final hope catch.


I agree completely. Until IE gives us a stack property and Mozilla
gives us the error object at window.onerror, that is the best
technique.

Tim
Jul 23 '05 #3

Java script Dude wrote:
<snip>
2) Possibility 1 - Structure code such that try/catches
only operate in Mozilla. This would need simple server side
code to serve


oops, this line has already disqualified yourself from having
a clue, you cannot identify IE/mozilla on the server, by doing
so you're just introducing yourself other problems.


Clue numero uno: Try page:
http://cyscape.com/products/bhawk/javabean.asp. You will see
that it can detect Mozilla. (JavaBean.asp? What Sacralige)

<snip>

The statement "you cannot identify IE/mozilla on the server" is not
modified by an instance of someone attempting to sell software that
claims to identify them on the server. The only information that the
server gets about what browser is on the other end of the connection (at
least initially) is the User Agent header, and that is routinely
faked/spoofed (or replaced/modified by intervening proxies).

But why not try their page and see how well it does. IE 6 first (as
getting that wrong would be a very bad sign). The returned page,
entitled BrowserHawk, features the line "Your browser: Netscape 4.0
(Unknown)"; not a good start. Following the link "features and benefits
here" goes to the page with the "demo", and the demo link opens a page
with the words "Performing an extensive browser test. Please wait..",
shortly followed by two JScript error reports reading -
"navigator.language" is null or not an object - (which is of course true
on IE 6) and that is it.

Having failed totally at identifying IE 6, because mine is not a default
configuration of IE 6 (though that isn't a good excuse, as web browsers
are designed to be configurable), I thought I would try it with default
configurations of a couple of less common browsers. IceBrowser 5.4 to
start with. This time BrowserHawk is convinced it is dealing with IE 5,
and again errors on the "Performing an extensive browser test. Please
wait.." page, because IceBrowser doesn't implement Microsoft's
ScriptEngine methods.

Next, Web Browser 2.0, this time reported as IE 6 by BrowserHawk, and
again errors on the "extensive browser test" page, prematurely
terminating the testing process.

So that is three browsers (including the most common one used) and 3
failures to identify, followed by 3 errors generated on the test page,
resulting in a failure to complete the test process. It looks like
whoever implemented BrowserHawk didn't have a clue either, but then
looking at the source of the test page turned up this gem:-

<noscript><body BGCOLOR=#000066
TEXT=#FFFFFF onLoad="bhawkTest();"></noscript>
<script>document.write(
'<body BGCOLOR=#000066 TEXT=#FFFFFF onLoad="bhawkTest();">'
);</script>

- which rather confirms that impression.

Richard.
Jul 23 '05 #4

> > Clue numero uno: Try page:
http://cyscape.com/products/bhawk/javabean.asp. You will see
that it can detect Mozilla. (JavaBean.asp? What Sacralige)


The statement "you cannot identify IE/mozilla on the server" is not
modified by an instance of someone attempting to sell software that
claims to identify them on the server. The only information that the
server gets about what browser is on the other end of the connection (at
least initially) is the User Agent header, and then is routinely
faked/spoofed (or replaced/modified by intervening proxies).


True, if proxies are spoofing the User Agent then we are all toast
anyway. It does not matter what browser you use, really. The original
argument was that Mozilla cannot be detected.

I wrote a simple JSP that dumps the User Agent:

<%
out.println(request.getHeader("USER-AGENT"));
%>
Outputs:
Mozilla - Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.6)
Gecko/20040206 Firefox/0.8
IE - Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)

Clearly, as long as the User Agent is not 'spoofed' any (decent)
programmer can parse this string and figure out what browser is at the
other end.
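The kind of parse meant here can be sketched as follows (a naive classifier, shown only to illustrate the claim; as the rest of the thread argues, spoofed headers defeat it):

```javascript
// Naive User-Agent classification: IE of this era embeds "MSIE",
// while real Gecko builds carry a "Gecko/" token after the platform.
function classify(ua) {
  if (ua.indexOf("MSIE") !== -1) return "IE";
  if (ua.indexOf("Gecko/") !== -1) return "Mozilla";
  return "unknown";
}
```

With the two headers quoted above, classify() returns "Mozilla" for the Firefox string and "IE" for the MSIE 6.0 string; any browser sending either string verbatim is indistinguishable from the real thing.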

Such a programmer could also remove the need to detect the browser
version on the server if they write good DHTML and they could even
write a simple client side gateway for detecting the browser (IF JS is
enabled of course).

I do plead a bit of ignorance here, though, as I build multi-user
intranet-based PDM systems that only require IE 5.5sp2 as a base. I
develop in Mozilla not because of a customer's need but because of a
need to develop open-standard, stable and extensible code. As such,
server-side browser detection is not necessary. Also, there is no
'spoofing' happening within our intranet, of course.

I guess I'm a little spoiled :]

Tim
Jul 23 '05 #5

de********@yahoo.ca (Java script Dude) writes:
True, if proxies are spoofing the User Agent then we are all toast
anyway.
Yes, and they are.
Clearly, as long as the User Agent is not 'spoofed' any (decent)
programmer can parse this string and figure out what browser is at the
other end.
No. That would require him to have knowledge of all browsers in
existence, and their variants of user agent strings.

If the script is to be future-proof, he must also be able to
anticipate coming browsers and their user agent strings. At worst,
they could be identical to an existing browser's.

This is of course absurd. However, if one's pages depend on browser
sniffing, these are real problems.
Such a programmer could also remove the need to detect the browser
version on the server if they write good DHTML and they could even
write a simple client side gateway for detecting the browser (IF JS is
enabled of course).
And if Javascript is not enabled, will his page fail? :)
I do plead a bit of ignorance here though as I build multi-user
intranet based PDM systems that only require IE 5.5sp2 as a base.
I develop in Mozilla not because of a customer's need but because of
a need to develop open standard, stable and extensible code.


That is a recommendable goal. I would do the same if I were doing
Javascript for a living :)

I do think you underestimate the number and variety of browsers that
are available, and how prevalent user agent faking is (which is
of course impossible to tell, since good faking doesn't stand out in
the server logs).

/L
--
Lasse Reichstein Nielsen - lr*@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'
Jul 23 '05 #6

Java script Dude wrote:
> Clue numero uno: Try page:
> http://cyscape.com/products/bhawk/javabean.asp. You will see
> that it can detect Mozilla. (JavaBean.asp? What Sacralige)
The statement "you cannot identify IE/mozilla on the server" is not
modified by an instance of someone attempting to sell software that
claims to identify them on the server. The only information that the
server gets about what browser is on the other end of the connection
(at least initially) is the User Agent header, and then is routinely
faked/spoofed (or replaced/modified by intervening proxies).


True, if proxies are spoofing the User Agent then we are
all toast anyway.


No we are not. It is well known that User Agent headers are not a
discriminating indicator of the type or version of client-side software,
and that has been true for some considerable time. That makes no
practical difference to client-side scripting because browser detecting
is not necessary for client-side scripting, and there is no really good
reason for server-side scripts to be interested either as they are
mostly in the business of generating HTML (which browsers are designed
to understand).
It does not matter what browser you use really. The original
argument was that Mozilla cannot be detected.
And it cannot be detected. Mozilla's User Agent header is just as
amenable to user modification as any other browser's, and any other
browser is at liberty to send a User Agent header that is
indistinguishable from one that is a default in a Mozilla version (a
number directly provide the user with an option to spoof Mozilla in
their preferences).
I wrote a simple JSP that dump's the User Agent:

<%
out.println(request.getHeader("USER-AGENT"));
%>
Outputs:
Mozilla - Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.6)
Gecko/20040206 Firefox/0.8
IE - Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)

Clearly, as long as the User Agent is not 'spoofed' any (decent)
programmer can parse this string and figure out what browser is at the
other end.
But that caveat instantly becomes the fatal flaw in the plan, as User
Agent headers are routinely spoofed by the majority of browsers. It is
the norm, not the exception. Many use spoofed headers that still leave a
clue as to which browser it really is, but there are still a number that
are just indistinguishable from some more common browser by default.
Such a programmer could also remove the need to detect the browser
version on the server if they write good DHTML and they could even
write a simple client side gateway for detecting the browser (IF JS is
enabled of course).
No, you can't detect the browser on the client either. You either
immediately fall back into the user agent string trap, or you would need
to employ a discriminating object inference technique, which
pre-supposes an omniscient knowledge of all web browser DOMs (past,
present and future).

But the combination of the facts that this group has a particular
interest in browser DOMs, and that there is nobody contributing to this
group who will even claim to be able to name all web browsers, let alone
to know enough to infer a browser type and version from any given DOM,
suggests that there is nobody with enough knowledge to write a truly
discriminating object inference script even for browsers that currently
exist.

Fortunately identifying browsers is unnecessary, so the impossibility of
the task is not a problem.
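The technique this alludes to is feature detection: test for the capability itself instead of guessing the browser. A minimal sketch (textContent and innerText are real DOM element properties; the stub objects in the usage note are just for illustration):

```javascript
// Feature-detect which text property an element supports, rather than
// branching on a browser name inferred from the User Agent header.
function getText(el) {
  if (typeof el.textContent === "string") return el.textContent; // W3C DOM
  if (typeof el.innerText === "string") return el.innerText;     // IE's DOM
  return "";                                                     // neither: degrade
}
```

Called with { textContent: "hi" } it takes the W3C branch, with { innerText: "yo" } the IE branch, and with a bare object it degrades to "" instead of erroring - no knowledge of the browser's identity required.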
I do plead a bit of ignorance here though as I build multi-user
intranet based PDM systems that only require IE 5.5sp2 as a base. I
develop in Mozilla not becuase of a customers need but because of a
need to develop open standard, stable and extensible code.
Didn't you start a thread expounding your notions of how to go about
browser scripting? Was that a good idea given that you only author for
two browsers (and appear to be optimistic about aspects of their
behaviour) and with the expectation that the results will only be used
in a known environment?
As such
server side browser detection is not necessary. Also there is no
'spoofing' happening within our intranet of course.


Yes, take away the Internet and much of the cautious, defensive
scripting and strategic script design becomes unnecessary. But then
those are the aspects of browser scripting that make it interesting and
challenging.

Richard.
Jul 23 '05 #7

"Richard Cornford" <Ri*****@litotes.demon.co.uk> wrote:
It is well known that User Agent headers are not a
discriminating indicator of the type or version of client-side software,
and that has been true for some considerable time.
I would be interested in some real statistics on this question.
My hunch is that the vast majority of user agent headers are accurate and
not misrepresentations of the client being used.
because browser detecting
is not necessary for client-side scripting
Well, I wouldn't say that is completely true.
There are times when browser-sniffing is certainly necessary. For example,
when a particular version of a particular browser has a bug or quirk which
must be specially accommodated. There are some features/bugs which are not
detectable by inspecting the DOM or testing for properties/methods, and must
be accounted for by looking at the exact browser being used.
But that caveat instantly becomes the fatal flaw in the plan, as User
Agent headers are routinely spoofed by the majority of browsers. It is
the norm not the exception.
Why do you say this? I look at my logs and see very common user agent
strings.
Fortunately identifying browsers is unnecessary, so the impossibility of
the task is not a problem.


It's not completely unnecessary, but for most situations it is.
Further, although the task may be impossible (I agree), there may be value
in solving the problem 99.9% of the way.

--
Matt Kruse
Javascript Toolbox: http://www.mattkruse.com/javascript/
Jul 23 '05 #8

Matt Kruse wrote:
"Richard Cornford" wrote:
It is well known that User Agent headers are not a
discriminating indicator of the type or version of client-side
software, and that has been true for some considerable time.
I would be interested in some real statistics on this question.


You haven't really thought that through. How would it be possible to
gather statistics on the extent to which things were indistinguishable?
My hunch is that the vast majority of user agent headers are accurate
and not misrepresentations of the client being used.
It wouldn't matter if they were, as it remains impossible to tell, and a
minority that tells lies still invalidates the proposition that
browser type or version can be detected.
because browser detecting
is not necessary for client-side scripting


Well, I wouldn't say that is completely true. There are
times when browser-sniffing is certainly necessary. For
example, when a particular version of a particular browser
has a bug or quirk which must be specially accomodated.


Any proposed action that necessitates an impossibility is misguided. But
there are not that many examples of browser bugs that cannot be exposed
by appropriate feature detecting, or avoided entirely by taking a more
reliable approach (if there are actually any, none have been
specifically mentioned as examples).
There are some
features/bugs which are not detectable by inspecting the dom or
testing for properties/methods, and must be accounted for by looking
at the exact browser being used.
There are no techniques available to identify the exact browser being
used.
But that caveat instantly becomes the fatal flaw in the plan, as User
Agent headers are routinely spoofed by the majority of browsers. It
is the norm not the exception.


Why do you say this?


Because it is true.
I look at my logs and see very common user agent
strings.


When the majority of less common browsers actively spoof the User Agent
headers of more common browsers, or make a facility for doing so easily
available to their users, and when the insistence of web site designers
on reading UA headers and serving unhelpful comments up to the users of
browsers they have never heard of encourages the users of those less
well known browsers to use the spoofing facilities they have, then the
expected result would be the logging of only a limited number of user
agent strings: only those normally produced by the most common browser.

But then your own logs will not be representative anyway as your
unwillingness to accommodate browsers that do not conform to your
expectations will not leave the users of those browsers with the
impression that visiting your site is a productive way of spending their
time.
Fortunately identifying browsers is unnecessary, so the
impossibility of the task is not a problem.


It's not completely unnecessary, but for most situations it is.
Further, although the task may be impossible (I agree), there may be
value in solving the problem 99.9% of the way.


You love wheeling out arbitrary statistics. The only logical way of
dealing with the impossibility of accurately discriminating between web
browsers (and/or browser versions) is to seek to avoid the need to do so
at all.

As techniques exist that remove the need to identify browsers, it makes
more sense to learn and refine those than to spend time and effort
failing to achieve the impossible (no matter how close your
head-in-the-sand attitude may leave you believing you could get).

Richard.
Jul 23 '05 #9

"Richard Cornford" <Ri*****@litotes.demon.co.uk> wrote:
You haven't really thought that through. How would it be possible to
gather statistics on the extent to which things were indistinguishable?
A survey of Opera/Mozilla users, for example, to find out how many
manipulate their UA strings. Etc., etc.
It's not an impossible task, although it may not be particularly useful.
My hunch is that the vast majority of user agent headers are accurate
and not misrepresentations of the client being used.

It wouldn't matter if they were as it remains impossible to tell, and a
minority that tells lies still invalidates the proposition that it
browser type or version can be detected.


It doesn't invalidate the proposition that in the vast majority of cases,
browser type and version can be detected accurately.
AFAIK, no one was proposing that it was a fool-proof, 100% solution.
Any proposed action that necessitates an impossibility is misguided.
Laughable.
Many things are viewed as "impossible" until someone does them to a
satisfactory degree.
There are no techniques available to identify the exact browser being
used.
There are techniques available to identify the browser that the user says is
being used.
If the user is lying, then they learn to live with the results.
Agent headers are routinely spoofed by the majority of browsers. It
is the norm not the exception.

Why do you say this?

Because it is true.


Any evidence of this? I think you are mistaken.
When the majority of less common browsers actively spoof the User Agent
headers of more common browsers, or make a facility for doing so easily
available to their users and an insistence on the part of web site
designers in reading UA headers and serving unhelpful comments up to the
users of browsers they have never heard of encourages the users of those
less well known browsers to use the facilities they have for spoofing,
then the expected result would be the logging of only a limited number
of user agent strings, only those normally produced by the most common
browser.
The answer would seem to be that users of Opera and Mozilla or any other
browser which allows for manipulating the UA header should stop using that
feature. Then those people who _do_ decide to use browser-sniffing will
incorporate support for their browsers.
Further, although the task may be impossible (I agree), there may be
value in solving the problem 99.9% of the way.

You love wheeling out arbitrary statistics.


Well, 94% of the time, they're right. HA!
The only logical way of
dealing with the impossibility of accurately discriminating between web
browsers (and/or browser versions) is to seek to avoid the need to do so
at all.
Again, you need to learn that the world is not black-and-white. Solutions
which solve the problem in almost all cases, for almost all users, can
provide a lot of value. A problem does not have to be solved 100% in order
for it to provide value. Progress and learning _depends_ on people solving
problems partially, so that others can piggy-back on their learning and
improve the solutions.
As techniques exist that remove the need to identify browsers it makes
more sense to learn and refine those then to spend time an effort
failing to achieve the impossible (no matter how close your
head-in-the-sand attitude may leave you believing you could get).


Let me be clear about what I'm saying:

1) Browser-sniffing, in most cases, is not necessary. There are better ways
to do it. Usually.

2) In the event that sniffing is necessary, it's often reliable. No modern
browser is going to spoof being IE4. If you're in a corporate environment
where some older machines may still have IE4 installed, and you want to
direct those users to an upgrade or contact page before allowing them to
enter your intranet portal, server-side browser sniffing is a very good
idea. Or if you are writing some client-side code which will break in
Opera 5 (since it has some missing JS functionality which cannot be
tested for directly), then sniffing for that particular version on the
client side might be a good idea.

3) For people who choose to use browsers that enable spoofing of the UA
header, if they choose to use that feature, then they should be willing to
accept the results from a server who trusts that they aren't lying to it. If
they want to encourage the use of these new browsers, they should not be
hiding their real identity, which will only result in people seeing no need
to correctly identify them.

4) Identifying 100% of the browsers out there is an impossible goal.
Identifying specific browsers and versions which you want to take action on
usually is very possible, and might provide value to the developer.

--
Matt Kruse
Javascript Toolbox: http://www.mattkruse.com/javascript/


Jul 23 '05 #10

On Mon, 26 Apr 2004 11:57:40 -0500, "Matt Kruse"
<ne********@mattkruse.com> wrote:
It doesn't invalidate the proposition that in the vast majority of cases,
browser type and version can be detected accurately.
Yet no-one is very good at pointing to a URL which does that. We will
all agree that with sufficient work you could detect today's and
tomorrow's browsers reasonably well, but no-one has yet seemed to put
in that effort.
Many things are viewed as "impossible" until someone does them to a
satisfactory degree.
This would seem like the perfect opportunity for you to go for it
then!
If the user is lying, then they learn to live with the results.
Except of course even if not lying you're assuming you know the
capabilities of the browser - what are the capabilities of IE6sp2 ?
>> Agent headers are routinely spoofed by the majority of browsers. It
>> is the norm not the exception.
> Why do you say this?

Because it is true.


Any evidence of this? I think you are mistaken.


IE 4, 5.5 and 6.0 all spoof themselves as Mozilla/4.0

of course you could then argue that they identify themselves later as
IE, but...
The answer would seem to be that users of Opera and Mozilla or any other
browser which allows for manipulating the UA header should stop using that
feature. Then those people who _do_ decide to use browser-sniffing will
incorporate support for their browsers.
That would not be in their self-interest (due to the lag in getting it
implemented)
Again, you need to learn that the world is not black-and-white. Solutions
which solve the problem in almost all cases, for almost all users, can
provide a lot of value.
Sure they can, but no-one has yet come up with a solution involving
browser scripting, that I've seen, that wouldn't be better solved in
another way. Often people say it's to exclude certain old NN4 bugs,
but they're a tiny minority anyway; you might as well just stuff them,
as a different subset of people that carries a significant development
overhead.

You've also ignored the significant extra load on the server (if you
ensure that caches are not used), or the problem of people being served
the wrong version from a proxy cache: if the first user is on Lynx and
the proxy cache caches it, you're really stuffed; all your users end up
getting that one!
2) In the event that sniffing is necessary, it's often reliable. No modern
browser is going to spoof being IE4. If you're in a corporate environment
where some older machines may still have IE4 installed, and you want to
direct those users to an upgrade or contact page before allowing them to
enter your intranet portal, server-side browser sniffing is a very good
idea.


Not really, as IE4/IE5 can be 100% detected on the client without
failure, yet doing it on the server is likely to break in a number of
conditions if there are any proxies in the way; and yes, in the
corporate environment I live in there are caching proxies on the
intranet.

Jim.
--
comp.lang.javascript FAQ - http://jibbering.com/faq/

Jul 23 '05 #11

"Matt Kruse" <ne********@mattkruse.com> writes:

[User-Agent spoofing]
I would be interested in some real statistics on this question.
My hunch is that the vast majority of user agent headers are accurate and
not misrepresentations of the client being used.
My hunch is that the vast majority of User Agent headers are claiming
to be some version of IE. Most of them are likely to be correct,
although definitely not all, and there are IEs disguising themselves as
something else too. How many is impossible to tell, if they do it well.

The real problem comes with browsers apart from IE. Pretty much any
modern non-IE browser allows the user to disguise it as IE to some
extent. Opera's *default* installation masquerades as IE, although it is
distinguishable by someone who knows where to look.

My hunch is that a significant portion of these spoof their user
agent string at times. I only do it for certain sites (like MSDN),
or for pages that refuse me access based on my browser.

It is of course impossible to gather statistics, because that would
require one to be able to distinguish browsers, to detect the ones
that try to be undetectable. Whatever statistics someone comes up
with, there is no way to verify them. No way to know which disguises
slipped through.
Why do you say this? I look at my logs and see very common user agent
strings.


Common user agent strings are exactly what a spoofer would use. Most
likely a recent version of IE.

/L
--
Lasse Reichstein Nielsen - lr*@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'
Jul 23 '05 #12

"Matt Kruse" <ne********@mattkruse.com> writes:
Let me be clear about what I'm saying:

1) Browser-sniffing, in most cases, is not necessary. There are better ways
to do it. Usually.

2) In the event that sniffing is necessary, it's often reliable. No modern
browser is going to spoof being IE4.
[...]
3) For people who choose to use browsers that enable spoofing of the UA
header, if they choose to use that feature, then they should be willing to
accept the results from a server who trusts that they aren't lying to it.
[...]
4) Identifying 100% of the browsers out there is an impossible goal.
Identifying specific browsers and versions which you want to take action on
usually is very possible, and might provide value to the developer.


I have to agree with all of this.

The point being that browser detection is used to build a black-list
of browsers that need special attention, and that will *usually* be
browsers that people are unlikely to spoof as (although I find that IE
6 requires a significant amount of special attention :).

If everybody identifies correctly, then the page will work.

The bad use of browser detection is to use it with a white-list, and
fail on unrecognizable browsers. That is, sadly, often the way it is
used in scripts found on the internet.
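The two patterns Lasse contrasts can be sketched roughly as follows; the regular expressions are illustrative only, not a recommended detection method:

```javascript
// Black-list: everyone gets the default code path; a workaround is
// applied only to browsers known (by their own admission) to need it.
function needsIe6Workaround(userAgent) {
  return /MSIE 6\./.test(userAgent);
}

// White-list (the bad pattern): any browser the author did not
// anticipate is refused outright, however capable it may be.
function isAllowedByWhitelist(userAgent) {
  return /MSIE [4-6]\./.test(userAgent) || /Gecko/.test(userAgent);
}
```

A spoofed header makes the black-list harmlessly apply (or skip) a workaround, but makes the white-list lock out a perfectly capable browser.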

/L
--
Lasse Reichstein Nielsen - lr*@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'
Jul 23 '05 #13

Matt Kruse wrote:
"Richard Cornford" wrote: <snip>
> My hunch is that the vast majority of user agent headers are
> accurate and not misrepresentations of the client being used.

It wouldn't matter if they were, as it remains impossible to tell,
and a minority that tells lies still invalidates the proposition
that browser type or version can be detected.


It doesn't invalidate the proposition that in the vast majority of
cases, browser type and version can be detected accurately.


Without any additional information, if you were asked to guess what web
browser an individual used you would be a fool to guess anything but IE
6, and in the vast majority of cases you would be correct. That correct
result could be no more than a coincidence, resulting from nothing more
than the distribution of browsers in use.

If the criterion for browser detection is no more than a probability of
a correct result, then blind guesswork satisfies it; no need to write
any code for that at all.
AFAIK, no one was proposing that it was a fool-proof, 100% solution.
The code at the posted URL makes a claim very much approaching that.
There certainly is no caveat stating that it fails uncontrollably in the
face of anything out of the ordinary.
Any proposed action that necessitates an impossibility is misguided.


Laughable.
Many things are viewed as "impossible" until someone does them to a
satisfactory degree.


Some things are logically impossible, and determining the nature of the
client side software from the User Agent header is one of those. That
just follows from the nature of string data, two browsers sending the
same UA header cannot be distinguished using that header. If you think
they can you would be in a position to tell me which three browsers on
the computer I am currently sitting at send the header:-

Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)

- by default. Guesswork will get you one, research may reveal a second,
but nothing will tell you which one of the three I actually copied the
string from to paste it into this article; that information is not part
of the string and is impossible to determine from the string.
There are no techniques available to identify the exact browser being
used.


There are techniques available to identify the browser that the user
says is being used.
If the user is lying, then they learn to live with the results.


It is always the same with you when you don't want to recognise an
issue: it is the browser's fault, or it is the user's fault. It isn't
either's fault that UA headers are not discriminating; the HTTP 1.1
specification doesn't require that they should be, because spoofing was
already the norm when the specification was written, so it was already
too late to make any strong requirements on what the UA header should
contain. The fault is in expecting a discriminating UA header to be
used, and lies with the software author for trying to deduce anything
from a header that is not specified to be a source of information.
>> Agent headers are routinely spoofed by the majority of browsers.
>> It is the norm not the exception.
> Why do you say this?

Because it is true.


Any evidence of this?


The User Agent headers sent by web browsers.
I think you are mistaken.
Clearly.

<snip>
The answer would seem to be that users of Opera and Mozilla or any
other browser which allows for manipulating the UA header should stop
using that feature.
The feature is provided because it is needed, and the browsers that use
spoofed headers by default do not always provide any options in the
preferences to change that, so most users of those browsers will not
even be aware that it is happening.
Then those people who _do_ decide to use
browser-sniffing will incorporate support for their browsers.
If the people who use browser-sniffing could be relied upon to support
all browsers that provided a discriminating UA header then there would
have been no need for Microsoft to start the spoofing ball rolling in
the late nineties. In reality server-side browser detection is used to
exclude unrecognised browsers, and the manufacturers of those browsers
are not motivated to make it easy for their products to be needlessly
excluded.
> Further, although the task may be impossible (I agree), there may
> be value in solving the problem 99.9% of the way.

You love wheeling out arbitrary statistics.


Well, 94% of the time, they're right. HA!


More guesswork?
The only logical way of
dealing with the impossibility of accurately discriminating between
web browsers (and/or browser versions) is to seek to avoid the need
to do so at all.


Again, you need to learn that the world is not black-and-white.
Solutions which solve the problem in almost all cases, for almost all
users, can provide a lot of value.


Feature detecting already satisfies (exceeds) those criteria.
A problem does not have to be
solved 100% in order for it to provide value. Progress and learning
_depends_ on people solving problems partially, so that others can
piggy-back on their learning and improve the solutions.
Feature detecting is already significantly better at addressing the
problem than browser detecting. It avoids any need to be interested in
the type of browser so inherent limitations in browser detection are
rendered insignificant and it is already capable of 100% discrimination
in most practical applications. If people want to piggyback their
learning on improving a solution it makes more sense to start with the
best of what is available.
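The feature-detection style described above tests for the capability itself before using it; a minimal sketch (the compatibility function name is hypothetical, not from this thread):

```javascript
// Test for the capability, not the browser's identity: if the method
// exists, use it; otherwise fall back or degrade gracefully.
function getElementByIdCompat(doc, id) {
  if (doc.getElementById) {       // W3C DOM (IE5+, NN6+, Opera, ...)
    return doc.getElementById(id);
  } else if (doc.all) {           // IE4 document.all collection
    return doc.all[id];
  } else if (doc.layers) {        // NN4 layers
    return doc.layers[id];
  }
  return null;                    // unknown browser: degrade, don't fail
}
```

Two browsers sending identical UA headers are indistinguishable by sniffing, but a test like this asks each of them directly what it can do.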
As techniques exist that remove the need to identify browsers it
makes more sense to learn and refine those than to spend time and
effort failing to achieve the impossible (no matter how close your
head-in-the-sand attitude may leave you believing you could get).


Let me be clear about what I'm saying:

1) Browser-sniffing, in most cases, is not necessary. There are
better ways to do it. Usually.

2) In the event that sniffing is necessary,


We don't see any examples of where sniffing is actually necessary, just
assertions that they exist.
it's often reliable.
Guessing the browser is also often reliable, that doesn't make it a
strategy that is likely to produce reliable browser scripts.

<snip>
... Or if you are writing some
client-side code which will break in Opera 5 (since it has some
missing JS functionality which cannot be tested for directly)
And does Opera 5 have any missing JS functionality that cannot be
detected? And if so, what exactly?
then sniffing for that particular version on
the client side might be a good idea.
And if the proposed missing functionality can be detected browser
sniffing becomes a bad idea in comparison (as any other browser may also
lack that functionality).
3) For people who choose to use browsers that enable spoofing of the
UA header, if they choose to use that feature, then they should be
willing to accept the results from a server who trusts that they
aren't lying to it. If they want to encourage the use of these new
browsers, they should not be hiding their real identity, which will
only result in people seeing no need to correctly identify them.
That does not correspond with what has happened historically. Browsers
didn't start spoofing because they could, they started it to avoid
arbitrary exclusion by web sites using browser detecting based on the UA
strings.
4) Identifying 100% of the browsers out there is an impossible goal.
Identifying specific browsers and versions which you want to take
action on usually is very possible,
No, receiving a User Agent header that corresponds 100% with the UA
header normally sent by a particular common browser cannot in itself be
used to deduce the nature of the originating software. To say that
assuming it originated with the common web browser will usually be
correct is to say no more than that common browsers are more common than
other software impersonating common browsers.
and might provide value to the
developer.


Or it may distract them from pursuing a more valuable alternative.

Richard.
Jul 23 '05 #14

Java script Dude wrote:
> Clue numero uno: Try page:
> http://cyscape.com/products/bhawk/javabean.asp. You will see
> that it can detect Mozilla. (JavaBean.asp? What Sacrilege)
The statement "you cannot identify IE/mozilla on the server" is not
modified by an instance of someone attempting to sell software that
claims to identify them on the server. The only information that the
server gets about what browser is on the other end of the connection (at
least initially) is the User Agent header, and that is routinely
faked/spoofed (or replaced/modified by intervening proxies).


True, if proxies are spoofing the User Agent then we are all toast
anyway.


*You* are toast anyway. Good programmers make use of the UA header only
when necessary (when feature detection is not enough to work around
known bugs), but they do not rely on it.
It does not matter what browser you use really. The original
argument was that Mozilla cannot be detected.

I wrote a simple JSP that dumps the User Agent:

<%
out.println(request.getHeader("USER-AGENT"));
%>

Outputs:
Mozilla - Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.6)
Gecko/20040206 Firefox/0.8
IE - Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)

Clearly, as long as the User Agent is not 'spoofed' any (decent)
programmer can parse this string and figure out what browser is at the
other end. [...]


Read <http://pointedears.de.vu/scripts/test/whatami> and get baked ;-)
PointedEars
Jul 23 '05 #15
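The string parsing the previous post describes might be sketched as below; note that it returns whatever the header *claims*, which is exactly why it cannot prove what browser is really on the other end (Opera's default configuration, for instance, would land in the IE branch):

```javascript
// Naive parsing of the two example User-Agent strings quoted above.
// The result is only as trustworthy as the header itself.
function guessBrowser(ua) {
  var msie = ua.match(/MSIE (\d+\.\d+)/);
  if (msie) {
    return "IE " + msie[1];       // or anything masquerading as IE
  }
  if (ua.indexOf("Gecko/") !== -1) {
    return "Gecko";               // Mozilla/Firefox family
  }
  return "unknown";
}
```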

Richard Cornford wrote:
You love wheeling out arbitrary statistics. The only logical way of
dealing with the impossibility of accurately discriminating between web
browsers (and/or browser versions) is to seek to avoid the need to do so
at all.

As techniques exist that remove the need to identify browsers it makes
more sense to learn and refine those then to spend time an effort
failing to achieve the impossible (no matter how close your
head-in-the-sand attitude may leave you believing you could get).

Richard.


It is possible to find out the statistical distribution of the kinds of
browsers a group of internet users have. One could pick a random sample
of the group, and verbally _ask_ each member of the sample.

One could compare the procedure to that applied when estimating the
shares of different parties in a parliamentary election (in a country
with many parties). The researcher randomly picks about 1000 persons to
interview, and the results are to be read in newspapers, with estimated
error margins.

Is there such research results about browsers made public somewhere?

Cross browser coding is a Good Thing. Even better would be if there
were tools for any JavaScript beginner to start coding without first
becoming a member of the High Priesthood Guru Team And Possessor Of
Almost Secret Information. (If the information is scattered in a
zillion places, it is the equivalent of secret info in practice.)

The priests might be a bit reluctant to make themselves unnecessary: why
should newcomers be able to learn in some hours everything that the
priests had to learn during many years? The magic art would no more be
magic, fascinating, challenging.
Jul 23 '05 #16

optimistx wrote:
<snip>
It is possible to find out the statistical distribution , which kind
of browsers a group of internet users have. One could pick a random
sample of the group, and verbally _ask_ each member of the sample
A requirement when deriving statistics from a sample is that the sample
be representative. So beyond the logistic problems of sampling a
globally distributed population there is also the problem of determining
that the sample taken is representative. Which itself would require the
availability of general data about Internet users, and the Internet does
not lend itself to the gathering of that sort of information.

Asking people which browsers they use will not tell you which UA headers
those browsers send, and most users would not be aware of that
particular detail. The kind of browsers used would tell you no more than
their default UA strings and their potential to spoof other browsers.

But in a world where some people speak of "having the Internet on their
computer", where some browsers are a customised UIs layered over another
browser and browsers embedded on small devices may not give the user any
indication of what browser is being used (just a way of starting it),
will asking a representative sample of internet users which browsers
they use reveal even that information?
One could compare the procedure applied when estimating the shares of
different parties in Parlament election (a country with many parties).
The researcher picks randomly about 1000 persons
I think you will find that the selection is not actually random.
to interview and the results are to be read in
newspapers, with estimated error margins.
Thinking in terms of estimated error margins does not lend itself to
reliable software creation. Better to look for criteria that produce
boolean results for decision making in computer code.
Is there such research results about browsers made public somewhere?

<snip>

No.

Richard.
Jul 23 '05 #17
