Should UA string spoofing be treated as a trademark violation?

VK
I am wondering about the common practice of some UA producers to spoof
the UA string to pretend to be another browser (most often IE).

Shouldn't it be considered a trademark violation against the relevant
name owner? If I make a whisky and call it "Jack Daniels", I will most
probably have some serious legal problems. "Mozilla" partly came about
because NCSA stopped them from using "Mosaic" in the UA string.

Is the situation any different with the current spoofing?
P.S. And no, I am not craving browser sniffing. But the impact on the
stats is obvious.

Apr 13 '06 #1

VK wrote:
I am wondering about the common practice of some UA producers to spoof
the UA string to pretend to be another browser (most often IE).

Shouldn't it be considered a trademark violation against the relevant
name owner? If I make a whisky and call it "Jack Daniels", I will most
probably have some serious legal problems. "Mozilla" partly came about
because NCSA stopped them from using "Mosaic" in the UA string.

Is the situation any different with the current spoofing?
P.S. And no, I am not craving browser sniffing. But the impact on the
stats is obvious.


It seems that this problem would be of concern mainly to Microsoft, and
Microsoft has plenty of good lawyers and usually will use them for the
least little copyright violation they see. I would thus guess that
either there is not a copyright violation or Microsoft considers this
problem unimportant. Of course copyright laws can vary somewhat around
the world, and they are not enforced very well in some countries. I am
quite willing to leave the copyright aspects of this "problem" to
Microsoft. However, I think that every browser variation should have a
unique ID number assigned by an international agency, and that any
browser should be blocked from the web that does not have such an ID.
This would allow meaningful browser detection on the rare occasions
that it is really needed - for example the browser has a bug that other
browsers do not have. However, I am sure that this is wishful thinking
on my part, just as is a requirement that all new pages be blocked from
the web unless they completely validate at the W3C html and css
validators.

Apr 13 '06 #2

cwdjrxyz wrote:
VK wrote:
I am wondering about the common practice of some UA producers to spoof
the UA string to pretend to be another browser (most often IE).
<snip>
It seems that this problem would be of concern mainly to Microsoft, and
Microsoft has plenty of good lawyers and usually will use them for the
least little copyright violation they see. I would thus guess that
either there is not a copyright violation or Microsoft considers this
problem unimportant.
Don't be silly, Microsoft couldn't take action against anyone as they
virtually invented UA string spoofing, and every browser they have
released since IE 4 has spoofed Netscape 4 (hence 'Mozilla/4.0' at the
start of their UA string).

It was Microsoft's action in spoofing Netscape that resulted in the
change between HTTP 1.0 and 1.1 where the latter no longer specifies
the UA header as a source of information, only suggests that it could
be used as such. By the time HTTP 1.1 was written the horse had long
since bolted.
... , and that any browser should be blocked from the web
that does not have such an ID.
At which point the people writing the browsers you have never heard
of, and would so assume are incapable of anything, start spoofing
browser IDs. We just end up back where we are now, with lots of people
wasting their time thinking about browser IDs in the same way people
have been wasting their time assuming that user agent strings could be
a source of information.
This would allow meaningful browser detection on the rare occasions
that it is really needed
Many more people declare a need for browser detection than are actually
capable of coming up with some example where feature detection could
not answer the question if asked.
- for example the browser has a bug that other
browsers do not have.
Don't all browsers have a bug that other browsers do not have? But most
significant bugs can be tested for without browser detection. If you
think otherwise you are welcome to suggest a concrete example and see
if it can't be feature detected.
However, I am sure that this is wishful thinking

<snip>

Yes it is.

Richard.

Apr 13 '06 #3
VK wrote:
I am wondering about the common practice of some UA producers to spoof
the UA string to pretend to be another browser (most often IE).

Shouldn't it be considered a trademark violation against the relevant
name owner? If I make a whisky and call it "Jack Daniels", I will most
probably have some serious legal problems. "Mozilla" partly came about
because NCSA stopped them from using "Mosaic" in the UA string.

Is the situation any different with the current spoofing?


From a technical (and probably a realistic) standpoint, it probably
isn't really relevant. Especially considering that the greatest impact
UA spoofing has is to raise the statistical % usage of IE - and I'm sure
Microsoft isn't too concerned about that.

From a theoretical legal standpoint, you're probably right.
Apr 13 '06 #4
VK

Richard Cornford wrote:
Don't all browsers have a bug that other browsers do not have? But most
significant bugs can be tested for without browser detection. If you
think otherwise you are welcome to suggest a concrete example and see
if it can't be feature detected.


Not to be nasty - but purely as a "burden of proof":

The current SVG Cairo engine used in Firefox 1.5.x cannot render
textPaths under Windows 98. Even nastier: it just stops SVG
rendering at the first occurrence of a textPath. It is mentioned on
Mozilla's site, but that doesn't help much in practice.

While the SVG Cairo support itself can be detected by
if (document.implementation.hasFeature('org.w3c.dom.svg', '1.0')),
for the Windows 98 adjustments I have to sniff for "Win98" in the UA string.

Though this sample is not perfectly "clear", as I'm sniffing for the OS,
not the UA.
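
For illustration only, a minimal sketch of the combined check described
above (the flag name is hypothetical, and the "Win98" test is exactly the
OS sniff admitted to above):

// Feature-detect SVG support via the DOM feature test, then apply the
// Windows 98 textPath workaround only when the OS token shows up in
// the UA string.
var hasSVG = !!(document.implementation &&
    document.implementation.hasFeature &&
    document.implementation.hasFeature('org.w3c.dom.svg', '1.0'));

// Hypothetical flag: skip textPath rendering on Windows 98 builds.
var skipTextPath = hasSVG && navigator.userAgent.indexOf('Win98') != -1;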

Apr 13 '06 #5

Richard Cornford wrote:

At which point the people writing the browsers you have never heard
of, and would so assume are incapable of anything, start spoofing
browser IDs. We just end up back where we are now, with lots of people
wasting their time thinking about browser IDs in the same way people
have been wasting their time assuming that user agent strings could be
a source of information.


Of course you are right. So the international agency that assigns
browser IDs would have to have the ability to enforce the standards and
heavily fine or otherwise penalize browser writers who violate them. On
a more general level, we have to have international and national
standards for broadcasting radio and TV to avoid chaos. However the
situation on the web has approached anarchy in many respects, resulting
in unnecessary problems for both the writers of web pages and users.
The technical control of radio and TV broadcasting on an international
and national basis is not perfect, and a few rogue countries have
jammed broadcasts from time to time, for example. However, in my
opinion, the situation is far better in the broadcast field than it now
is on the web. I am only talking about enforcement of technical
standards. I do not think that regulation of content standards is a
good thing in most cases, although there could be rare exceptions. The
problem here is that what is acceptable in one society may not be so in
another. A good example is China. Both Google and Yahoo have recently
attracted the attention of the US Congress, and others, concerning
giving out personal information about users that Chinese officials
demand. In some cases such information has apparently been used to jail
people who do not agree with some official Chinese policy - in other
words, a political "crime". That is apparently the price of their
doing business in China, but there are many who highly object on a
moral basis and claim that if giving personal information is the cost
of doing business in China, then the company should not operate there.

Apr 13 '06 #6
cwdjrxyz wrote:
Richard Cornford wrote:
At which point the people writing the browsers you have never
heard of, and would so assume are incapable of anything,
start spoofing browser IDs. We just end up back where we are
now, with lots of people wasting their time thinking about
browser IDs in the same way people have been wasting their
time assuming that user agent strings could be a source of
information.
Of course you are right. So the international agency that
assigns browser IDs would have to have the ability to enforce
the standards and heavily fine or otherwise penalize browser
writers who violate them.


That is fine so long as it cuts both ways and any web author who is
caught excluding a browser because it identifies itself is subject to
equivalent fines and penalties. Anything short of that and you are
inviting a browser monopoly that would not be in the public interest.

<snip> .... I am only talking about enforcement of technical
standards.

<snip>

Aren't you the 'cwdjrxyz' who blew his credibility in alt.html by
championing a content negotiation script that disregarded the mechanism
laid out in the HTTP 1.1 specification and actually failed so badly that
it would send XHTML to browsers that explicitly declared their rejection
of it:-

<news:11*********************@f14g2000cwb.googlegroups.com>

I don't think I will have much regard for any assertions you may make in
favour of technical standards until after I have seen some evidence that
you follow them yourself.

Richard.
Apr 13 '06 #7
VK wrote:
Richard Cornford wrote:
... suggest a concrete example
and see if it can't be feature detected.


Not to be nasty - but purely as a "burden of proof":

The current SVG Cairo ...

<snip>

That is not a concrete example, it is a hearsay report from the most
unreliable source available.

Richard.
Apr 13 '06 #8

Richard Cornford wrote:
cwdjrxyz wrote:
Richard Cornford wrote:
At which point the people writing the browsers you have never
heard of, and would so assume are incapable of anything,
start spoofing browser IDs. We just end up back where we are
now, with lots of people wasting their time thinking about
browser IDs in the same way people have been wasting their
time assuming that user agent strings could be a source of
information.


Of course you are right. So the international agency that
assigns browser IDs would have to have the ability to enforce
the standards and heavily fine or otherwise penalize browser
writers who violate them.


That is fine so long as it cuts both ways and any web author who is
caught excluding a browser because it identifies itself is subject to
equivalent fines and penalties. Anything short of that and you are
inviting a browser monopoly that would not be in the public interest.

<snip>
.... I am only talking about enforcement of technical
standards.

<snip>

Aren't you the 'cwdjrxyz' who blew his credibility in alt.html by
championing a content negotiation script that disregarded the mechanism
laid out in the HTTP 1.1 specification and actually failed so badly that
it would send XHTML to browsers that explicitly declared their rejection
of it:-

<news:11*********************@f14g2000cwb.googlegroups.com>

I don't think I will have much regard for any assertions you may make in
favour of technical standards until after I have seen some evidence that
you follow them yourself.


I do not see what bringing up an unrelated reference to another group
has to do with this. You quote only one post in a very long thread. In
summary I use a php include to force a browser to accept true xhtml 1.1
if it reports it will accept it at all in the header exchange. It is up
to the browser maker to decide if they want to allow true xhtml using
the mime type for xhtml+xml or not. If they do not allow it then my php
include reverts to html 4.01 strict. If I did not do that, my pages
would not work on IE6! Thus I do not send xhtml to browsers that do not
indicate that they will accept it! In some cases the browser says it
will accept either the mime type for true xhtml or the mime type for
html. In some of these cases it says it prefers html. In that case I
have found that the common browsers that will accept both html and true
xhtml, but "prefer" html, work just fine if you force the xhtml path in
the header exchange. My guess is that some browser makers specify that
they prefer html just to be on the safe side. One should not confuse a
"preference" for the browser with the code that can be used to indicate
that preference in the header exchange, if a browser writer so wishes.
In addition a few lesser used browsers do not indicate what they will
accept in the header exchange, although they sometimes really will
accept true xhtml just as well as html. Apple's Safari comes to
mind here. In that case, I err on the safe side and use html 4.01
strict, because browser detection of some of these browsers is not safe
because they can spoof another browser.

I now have dozens of pages served as described above, and they all
validate perfectly as xhtml 1.1 or html 4.01 strict at the W3C
depending on what path is selected by the header exchange. Furthermore,
the pages work properly for the xhtml 1.1 or html 4.01 strict path
selected by the header exchange I use. You can see several such pages
by going to http://www.cwdjr.info/media/playersRoot.php .

Apr 14 '06 #9
VK

Richard Cornford wrote:
The current SVG Cairo ...

<snip>

That is not a concrete example, it is a hearsay report


You imply that I do not use SVG but am just making up a problem? It is not
clear how you came to this conclusion - unless you think
yourself telepathic.

Here is the feature detection block I'm currently using, and believe me
I did not make it just to post it here:

....
/**
* Feature detection block.
*/
/*@cc_on @*/
/*@if (@_jscript_version >= 5.5)
  if (document.namespaces['v'] == null) {
    document.namespaces.add('v', 'urn:schemas-microsoft-com:vml', '#default#VML');
  }
  SVL.UA = 'IE';
  SVL.VL = 'VML';
@elif (@_jscript)
  SVL.UA = 'IE';
@else @*/
if (document.implementation) {
  if (window.opera) {
    SVL.UA = 'Opera';
    SVL.VL = 'SVG';
  }
  else if (document.implementation.hasFeature('org.w3c.dom.svg', '1.0')) {
    SVL.UA = 'Gecko';
    SVL.VL = 'SVG';
  }
  else if ((window.netscape) && (window.netscape.security)) {
    SVL.UA = 'Gecko';
  }
  else {
    /*NOP*/
  }
}
/*@end @*/
....

Apr 14 '06 #10
VK

cwdjrxyz wrote:
Of course you are right. So the international agency that assigns
browser IDs would have to have the ability to enforce the standards and
heavily fine or otherwise penalize browser writers who violate them.
Wow! Like 10 years in jail for not supporting XHTML? :-) Form a W3C
International Police Corps exempt from national legislatures? "Your
browser doesn't support RegExp properly! Up against the wall, bastards!" :-)

If you mean "technical" standards (like cryptography strength, proper
UA string reporting, etc.) then maybe... But I still tend to believe
that this can be better handled at the individual country level.
That is apparently the price of their
doing business in China, but there are many who highly object on a
moral basis and claim that if giving personal information is the cost
of doing business in China, then the company should not operate there.


I'm aware of the China - search engines story and find it very sad. But
it is off-topic for UA features. It was done by IP tracking - which is a
core feature of the WWW used by governments in many countries even against
their own citizens. Think of Carnivore (USA), SORM-2 (Russia), or some
very interesting organizations acting under the EUCD in the EU.

Apr 14 '06 #11
On 14/04/2006 04:41, cwdjrxyz wrote:
Richard Cornford wrote:
[snipped quote]

Seems that you still don't know how to trim quoted posts.
Aren't you the 'cwdjrxyz' who blew his credibility in alt.html by
championing a content negotiation script that disregarded the
mechanism laid out in the HTTP 1.1 specification and actually
failed so badly that it would send XHTML to browsers that
explicitly declared their rejection of it:-

<news:11*********************@f14g2000cwb.googlegroups.com>

I don't think I will have much regard for any assertions you may
make in favour of technical standards until after I have seen some
evidence that you follow them yourself.


I do not see what bringing up an unrelated reference to another group
has to do with this.


I think Richard made his point rather well: one cannot pontificate about
enforcing standards unless one is willing to implement them oneself.
You quote only one post in a very long thread.
Readers are welcome to view the rest of the thread, I'm sure; Google's
archived it all. However, what other insight do you expect them to gain?
It wasn't just me that told you that you were wrong, but that others
did, too?
In summary I use a php include to force a browser to accept true
xhtml 1.1
Which you shouldn't.
if it reports it will accept it at all in the header exchange.
That isn't the test you use at all. It's quite obvious from inspecting
its behaviour that all you do is look for the mere mention of the
string, application/xhtml+xml, as a substring of the Accept header value
(and a substring match is patently wrong). You make no effort to parse
the header whatsoever, and this is plain for all to see:

<http://www.cwdjr.info/media/phpPage.txt>
It is up to the browser maker to decide if they want to allow true
xhtml using the mime type for xhtml+xml or not.
Indeed, which is why quality values can be used to indicate not only
preference for a certain media type, but an explicit rejection of it as
well. If you had read and understood sections 14.1 and 3.9 in RFC 2616,
you would know this.
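
To make the point concrete, here is a minimal sketch (in this group's
language rather than PHP, and not the script under discussion) of reading
the quality value a UA assigns to a media type, instead of doing a
substring match. It deliberately ignores wildcards such as text/* and the
finer precedence rules of RFC 2616:

// Returns the q-value the Accept header assigns to mediaType,
// or 0 if the type is not listed or is explicitly rejected (q=0).
function acceptQuality(acceptHeader, mediaType) {
  var entries = acceptHeader.split(',');
  for (var i = 0; i < entries.length; i++) {
    var fields = entries[i].split(';');
    var type = fields[0].replace(/^\s+|\s+$/g, '');
    if (type != mediaType) continue;
    var q = 1; // default quality, RFC 2616 section 3.9
    for (var j = 1; j < fields.length; j++) {
      var m = fields[j].match(/q\s*=\s*([0-9.]+)/);
      if (m) q = parseFloat(m[1]);
    }
    return q;
  }
  return 0;
}

// acceptQuality('text/html,application/xhtml+xml;q=0', 'application/xhtml+xml') --> 0
// acceptQuality('application/xhtml+xml;q=0.9,text/html', 'text/html')           --> 1
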
If they do not allow it then my php include reverts to html 4.01
strict.
But it doesn't. If a browser explicitly states that it cannot accept
XHTML, you'll serve it anyway.
If I did not do that, my pages would not work on IE6!
You only send HTML to IE 6 because it doesn't include
application/xhtml+xml in its Accept header value, not because your
negotiation mechanism is written correctly.
Thus I do not send xhtml to browsers that do not indicate that they
will accept it!
Repeating something false does not make it true.
In some cases the browser says it will accept either the mime type
for true xhtml or the mime type for html. In some of these cases it
says it prefers html. In that case I have found that the common
browsers that will accept both html and true xhtml, but "prefer"
html, work just fine if you force the xhtml path in the header
exchange.
There would be little point in advertising the ability to process XHTML
if in fact the user agent can't. However, that doesn't mean there isn't
a reason to prefer another media type.
My guess is [...]
Your guess is irrelevant. One does not incorrectly implement HTTP based
on a guess.

Though a rigid interpretation of quality values is not a requirement, it
is strongly recommended and you are clearly in no position to override
that recommendation.

[snip]
You can see several such pages by going to
http://www.cwdjr.info/media/playersRoot.php .


There isn't much there of which one should be proud. I see a useless
document title, a broken and superfluous meta element, poor use of class
attributes, badly chosen structural elements, and a very fetching uneven
white border.

Mike

--
Michael Winter
Prefix subject with [News] before replying by e-mail.
Apr 14 '06 #12
cwdjrxyz wrote:
Richard Cornford wrote:
cwdjrxyz wrote:
<snip>
> .... I am only talking about enforcement of technical
> standards. <snip>

Aren't you the 'cwdjrxyz' who blew his credibility in alt.html
by championing a content negotiation script that disregarded
the mechanism laid out in the HTTP 1.1 specification and
actually failed so badly that it would send XHTML to browsers
that explicitly declared their rejection of it:-

<news:11*********************@f14g2000cwb.googlegroups.com>

I don't think I will have much regard for any assertions you
may make in favour of technical standards until after I have
seen some evidence that you follow them yourself.


I do not see what bringing up an unrelated reference to
another group has to do with this.


It is a thread that demonstrates someone who is apparently keen to
stress their championing of technical standards disregarding RFC 2616
(Hypertext Transfer Protocol -- HTTP/1.1), which just happens to be one
of the most pivotal technical standards that exists for the Internet.

I particularly enjoyed the point in the thread where Michael Winter
proposed you actually read RFC 2616 and you declared (at considerable
length) that you didn't take technical advice from people posting to
Usenet and instead would get your advice through technical and computing
journals. While everyone observing the conversation knew full well that
if going through technical journals was a worthwhile practice at all it
must inevitably lead you all the way back to RFC 2616, as that is the
applicable technical standard for content negotiation. You had spent
your residual credibility in the group by the end of that post.
You quote only one post in a very long thread.
Anyone who cares will be able to reference the thread from any single
message ID within it.
In summary I use a php include to force a browser to accept
true xhtml 1.1 if it reports it will accept it at all in the
header exchange.
It was precisely the fact that you were not doing that, but instead
serving XHTML to any UA that included the character sequence
"application/xhtml+xml" in its Accept header, that was the reason for
the criticism you received in that thread. Because, as anyone familiar
with the technical standards for content negotiation, as laid out in RFC
2616, already knows, a UA may include the character sequence
"application/xhtml+xml" in its Accept header in order to express its
absolute rejection of the MIME type (and that is without even
considering that it may include the sequence in a way that expresses a
strong preference for text/html or some other type).

Content negotiation is the subject of formal technical specification and
your simplistic efforts do completely disregard that specification, to
the extent that your system is capable of doing the opposite of what it
should do and serving XHTML to a UA that reports that it cannot accept
it.
It is up to the browser maker to decide if they want to
allow true xhtml using the mime type for xhtml+xml or not.
Yes, and their mechanism for doing that is providing an HTTP Accept
header that conforms with RFC 2616's specification for an Accept header.
And it is the responsibility of the person writing software to do
content negotiation to interpret that Accept header in accordance with
the technical specification, rather than making up their own rules based
on superficial observations of a few actual Accept headers.
If they do not allow it then my php include reverts to
html 4.01 strict.
It can only do that if you are aware of what constitutes 'not allow' as
laid down in RFC 2616.
If I did not do that, my pages would not work on IE6! Thus
I do not send xhtml to browsers that do not indicate that
they will accept it!
But the consequence of your implementation was that you will also send
XHTML to browsers that do assert that they do 'not allow' it. That is
poor, and it is in disregard for the applicable technical specification.
In some cases the browser says it will accept either the
mime type for true xhtml or the mime type for html.
And it may express a preference for one or the other. For a while Opera
expressed a preference for text/html, which was fair enough as their
XHTML could not sensibly be scripted at the time, only rendered, so HTML
was the better content type to accept. Your script would have pushed
XHTML at it regardless because it had no conception of the specified
mechanism for content negotiation. And for other browsers in the same
situation your script is still delivering the inferior choice when it
could send the superior.
In some of these cases it says it prefers html. In that case
I have found that the common browsers that will accept both
html and true xhtml, but "prefer" html, work just fine if you
force the xhtml path in the header exchange.
Interworking specifications are not about what works in 'common
browsers', they are about creating systems that deliver acceptable
outcomes for everyone. After all, you are the one proposing that UAs
identify themselves so you can know the uncommon browsers when they show
up, yet you are only acting to accommodate the 'common browsers'
regardless of how well the uncommon browsers conform to the applicable
technical specification that you would rather disregard.
My guess is that some browser makers specify that they
prefer html just to be on the safe side.
I think that Opera made their choice thinking that the user may prefer
the option of functional scripts to broken ones. Browser manufacturers
may also think that the user may prefer progressive rendering to only
having access to a page's contents once the page had fully loaded, or
that a user may prefer the output of an old and well tested/debugged
HTML parser to a brand new, hardly tested and experimental XHTML parser.
The browser manufacturers are in a good position to judge the relative
acceptability of various content types in their browsers, and have a
specified mechanism to express it in their Accept headers. It doesn't
make sense to put that aside because superficial testing does not expose
any manifestations.
One should not confuse a "preference" for the browser with
the code that can be used to indicate that preference in
the header exchange, if a browser writer so wishes.
That doesn't make sense.
In addition a few
lesser used browsers do not indicate what they will accept in
the header exchange, although they sometimes really will accept
true xhtml just as well as html. Apple's Safari comes
to mind here. In that case, I err on the safe side and use html
4.01 strict, because browser detection of some of these browsers
is not safe because they can spoof another browser.

I now have dozens of pages served as described above,
And because you serve them without any regard for the RFC 2616 specified
content negotiation mechanism any statement you may make about the
'enforcement of technical standards' will be hypocrisy.
and they all validate perfectly as xhtml 1.1 or html 4.01 strict
at the W3C depending on what path is selected by the header
exchange. Furthermore, the pages work properly for the xhtml 1.1
or html 4.01 strict path selected by the header exchange I use. ...


There is little point talking of a "header exchange" if the UAs are
sending headers in accordance with RFC 2616 and you are interpreting
them in accordance with superficial rules derived from a few
observations and a lot of blanket assumptions. It is not an exchange,
let alone negotiation, if you are not even talking the same language.

Richard.
Apr 14 '06 #13
VK wrote:
Richard Cornford wrote:
> The current SVG Cairo ... <snip>

That is not a concrete example, it is a hearsay report


You imply that I do not use SVG but am just making up a problem?


I do not imply that you don't "use" SVG; I stated that your
assertion was a hearsay report from the most unreliable source
available.
It is not clear how you came to this conclusion -
unless you think yourself telepathic.
There is no need for telepathy, all I have to do is observe that:-

1. You don't understand javascript sufficiently well to understand
the code that you write yourself.
2. You write bug-filled, convoluted, difficult to maintain, even
dangerous code, disregarding conventions and still failing to
fully address the applicable situation.
3. You are unwilling to take the advice of others on how to better
understand javascript, even when those individuals are the ones
capable of explaining the hows and whys of javascript.
4. You are incapable of comprehending technical explanations of
javascript, or engaging in the process of formulating questions
asking for clarification of what you don't understand.
5. You resist evidence and demonstrations that you are wrong far
beyond the point where any rational observer would accept reality.
6. When you find yourself in a minority of one in a technical
discussion involving many genuine experts on a subject you prefer
to conclude that everyone else is wrong and you alone are the
only person who really understands the technology.
7. You author code on a basis of mystical incantation, including
things 'because they work' but without any understanding of what
they actually do or ability to explain why you are using them.
8. You declare things to be 'bugs' because you don't like/understand
them when they are actually completely normal and expected (even
technically specified).
9. You don't understand computers (even to the extent of seeing why
the bit widths of data and address registers have nothing to do
with the precision of number representations in computer systems).
10. You bury your head in the sand whenever you are faced with the
possibility that things could be done better, or how they might
be done better.
11. You spend time testing browsers and end up knowing less about
them than when you started.
12. You don't understand logic.
13. You follow irrational thought processes to false conclusions and
then maintain that you are correct in the face of any arguments.
14. You see relevance in the irrelevant and unrelated, but can never
justify it, preferring to characterise those who see the
irrelevant as irrelevant as beyond understanding.
15. The majority of your statements are too incoherent to convey
meaning.
16. When your statements are clear enough to convey meaning the
majority are irrational, technically false or made up off the
top of your head.
17. You use English terms outside of their accepted meanings and
apply those incorrect meanings to your interpretation of English.
18. You use technical language outside of its specified meaning.
19. You are incapable of consistently creating well-formed Usenet
posts.
20. You tend to regard people pointing out your inadequacies as
personally motivated rather than the reasoned responses to your
own misguided actions/behaviour that they actually are.

And given the above, when you make a statement that something is so it
would not be sensible for anyone to conclude that it is so, and even if
it were so it would be more reasonable to attribute it to shortcomings
in the programmer than anything else. I.E. if you are not capable of
rendering something sufficiently concrete that it can be reproduced by
others then it makes more sense to disregard it as just more irrational
ravings from your deranged mind.
Here is the feature detection block ...

<snip>

That is not a feature detection block, and is still irrelevant to the
'issue' you mentioned.

Richard.
Apr 14 '06 #14

Michael Winter wrote:
On 14/04/2006 04:41, cwdjrxyz wrote:

a lot.

I find most of your discussion without merit, and consider it just
another troll post. I am not going to waste my time on you again. Bye.

Apr 14 '06 #15

Richard Cornford wrote:

a lot.

I find most of your discussion without merit, and consider it just
another troll post. I am not going to waste my time on you again. Bye.

Apr 14 '06 #16
Richard Cornford said the following on 4/14/2006 9:59 AM:
cwdjrxyz wrote:


<snip>
You quote only one post in a very long thread.


Anyone who cares will be able to reference the thread from any single
message ID within it.


I read the entire thread based on that one reference and it was because
I needed the laugh I got from reading it.

I can now, safely, add cwdjrxyz to the people in the VK File. He was
half way there but that thread finished it.

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Apr 14 '06 #17
cwdjrxyz said the following on 4/14/2006 11:48 AM:
Bye.


Promise?
--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Apr 14 '06 #18
VK

Randy Webb wrote:
I can now, safely, add cwdjrxyz to the people in the VK File.


The VK File? It sounds really scary. Could you explain?

Apr 14 '06 #19
VK

Richard Cornford wrote:
1. You don't understand javascript sufficiently well to understand
the code that you write yourself.

<snip>

Rather strong statement from a person who just recently learned how to
add <script> elements to the page (see the relevant thread)

;-)

Apr 15 '06 #20
cwdjrxyz wrote:
[almost the same as in
news:11**********************@u72g2000cwu.googlegroups.com]


Please do your exercises in self-reflection elsewhere.
Score adjusted

PointedEars
--
If one person calls you a donkey, laugh at them; if several people do, who
write quite reasonable articles most of the time, you better find yourself
some green pastures to graze. -- saying in de.ALL (translated)
Apr 15 '06 #21
VK wrote:
Richard Cornford wrote:
1. You don't understand javascript sufficiently well to understand
the code that you write yourself.

<snip>

Rather strong statement from a person who just recently learned how to
add <script> elements to the page (see the relevant thread)

;-)


Is there a troll convention going on in the UK this weekend :-). Just
look at the post history of some of the trolls responding to this
thread, and you will see that they treat everyone in the same rude,
troll-like way. The best tactic likely is just to ignore them. Most see
them for what they are.

Since some of the trolls appear to have far too much free time, as
indicated by their many and often extremely long and rude posts to this
group and others, perhaps they need something more productive to keep
them busy and to avoid boredom. Since several appear to be in the UK, I
would like to suggest a useful project for them. The Queen has a web
site at http://www.royal.gov.uk/output/Page1.asp . Of course the Queen
would hire someone to write her pages. If you take the noted url to the
W3C validator, you find many errors, including some javascript ones. It
would seem the Queen deserves a site that validates perfectly.
Perhaps some of the UK trolls could correct the code in the site and
contact a member of the royal household staff to explain the problem
and how it could be corrected. The Queen appears to be a very nice and
polite lady in her public speeches. Usenet trolls could benefit greatly
by studying the Queen's speeches in detail.

Apr 15 '06 #22
VK wrote:
I am wondering about the common practice of some UA producers to spoof
the UA string to pretend to be another browser (most often IE).
Is it common? How many browsers, by default, spoof others?

Shouldn't it be considered a trademark violation against the relevant
name owner?
On the face of it, yes. However, whether it could be held to be so in a
court is quite another matter.

I am not a lawyer, but like anyone with a business that sells products it
designs and makes itself, I have an interest in knowing about trademark
and copyright law as applicable in my own jurisdiction. In regard to
trademark, the primary question is whether its use will confuse
consumers into thinking a product is from one company when in fact it is
from another. A case in point is Apple Corps and Apple Computer, though
there are aspects of that case related to the logo also.

The second issue is the damages that might arise - loss of sales because
consumers bought the 'wrong' product, leveraging another company's good
will, loss of reputation because the second company's faulty products
reflected on the first, and so on.

In regard to user agent spoofing, I don't think any of the above can be
shown. Consumers don't identify browsers by looking at the UA string,
so it can't be a factor in their decision of which browser to use. If
you can't prove that, end of case.

A more tenuous link could be shown if certain user agent strings were
required to make sites work properly (the original reason for doing it).
It might then be shown that this has some effect on consumer choice, but
the obvious conclusion here is inappropriate discrimination by the site.
The UA has the defence of acting as it did to overcome that
discrimination. To go further and try to link it back to trademark
violation is a very long bow to draw.

Is the situation any different with the current spoofing?


Yes, because the UA string is not a factor in consumer choice of which
browser to use.
--
Rob
Apr 15 '06 #23
Richard Cornford wrote:
Don't all browsers have a bug that other browsers do not have? But
most significant bugs can be tested for without browser detection. If
you think otherwise you are welcome to suggest a concrete example and
see if it can't be feature detected.


I'm curious to know if there is a way to feature-detect the need for an
"iframe shim" behind a DIV.

In IE on windows, as I'm sure you know, select lists and other controls
always render on top of other elements, regardless of z-index values. This
is commonly solved by placing an empty iframe behind the element, which
effectively blocks the control from showing through.
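
For readers who have not seen the technique, a rough sketch of such a shim
(sizing and positioning details are illustrative only, not a drop-in
implementation):

// Place an empty iframe directly behind an absolutely positioned DIV so
// that windowed controls (select lists in IE/Win) cannot show through.
function addShim(div) {
  var shim = document.createElement('iframe');
  shim.src = 'javascript:false';   // avoids the https mixed-content prompt in IE
  shim.frameBorder = '0';
  shim.style.position = 'absolute';
  shim.style.left = div.offsetLeft + 'px';
  shim.style.top = div.offsetTop + 'px';
  shim.style.width = div.offsetWidth + 'px';
  shim.style.height = div.offsetHeight + 'px';
  shim.style.zIndex = (parseInt(div.style.zIndex, 10) || 2) - 1;
  div.parentNode.insertBefore(shim, div);
  return shim;
}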

This problem only applies to IE on Windows. While there doesn't seem to be
any problem with adding the iframe for other browsers, it would ideally not
be done if not necessary.

What feature-detection could be used to determine the need for the iframe
shim?
It would be possible to feature-detect other things that are known to exist
only in IE on Windows, but it's generally not a good practice to infer
feature X based on the existence of feature Y.

Thoughts?

--
Matt Kruse
http://www.JavascriptToolbox.com
http://www.AjaxToolbox.com
Apr 15 '06 #24
VK

Matt Kruse wrote:
What feature-detection could be used to determine the need for the iframe
shim?


IE-specific "specifics" usually solved by using JScript pre-processor:
....
/*@cc_on @*/
/*@if (@_jscript)
var fixNeeded = true;
@else @*/
var fixNeeded = false;
/*@end @*/

It is possible to imagine that some UA producer will implement exactly
the same pre-processor and instruct it to tell that it is Internet
Explorer running JScript.
It is possible - but this already goes beyond the power of any
developer - and beyond any acceptable behavior.

And "semantically" :-) it is not a "yaky UA sniffing". I do not sniff
anything: I just place a piece of code and let it to be executed by any
UA capable to handle it (ignored otherwise).

With SVG programming I still need more UA sniffing because the Opera and
Gecko SVG implementations are too different in some important aspects
to treat them equally. Here the biggest challenge was from Opera: this
browser became really determined to be indistinguishable from IE. As soon
as some feature check was found, they covered it with a bogus "cork" in the
next upgrade. I really was running out of ideas until I found the
window.opera object (where they keep their a la GreaseMonkey
functions). The only thing that bothers me now in my nightmares :-) is that
some new wannabe bastard may decide to pretend to be Opera rather than IE
or Gecko. I tranquilize myself by thinking that it's highly doubtful -
though still possible.

Apr 15 '06 #25
VK wrote:
Matt Kruse wrote:
What feature-detection could be used to determine the need for the
iframe shim? IE-specific "specifics" usually solved by using JScript pre-processor:


That is not feature-detection.
/*@cc_on @*/
/*@if (@_jscript)
var fixNeeded = true;
@else @*/
var fixNeeded = false;
/*@end @*/


Further, this doesn't check for Windows vs. Mac, since the problem doesn't
occur on Mac.

--
Matt Kruse
http://www.JavascriptToolbox.com
http://www.AjaxToolbox.com
Apr 15 '06 #26
VK

Matt Kruse wrote:
VK wrote:
Matt Kruse wrote:
What feature-detection could be used to determine the need for the
iframe shim?

IE-specific "specifics" usually solved by using JScript pre-processor:


That is not feature-detection.
/*@cc_on @*/
/*@if (@_jscript)
var fixNeeded = true;
@else @*/
var fixNeeded = false;
/*@end @*/


Further, this doesn't check for Windows vs. Mac, since the problem doesn't
occur on Mac.


Oh, you need a Mac check too? Here we are:

/*@cc_on @*/
/*@if (!@_mac)
var fixNeeded = true;
@else @*/
var fixNeeded = false;
/*@end @*/

Alternatively:

/*@cc_on @*/
/*@if (@_win32)
var fixNeeded = true;
@else @*/
var fixNeeded = false;
/*@end @*/
(I guess it is safe to disregard the possibility of Windows 3.x ;-)

Maybe you need a JScript.Net check?

/*@cc_on @*/
/*@if (@_jscript_version >= 7)
var lang = 'JScript.Net';
@elif (@_jscript_version < 7)
var lang = 'JScript';
@else @*/
var lang = 'JavaScript';
/*@end @*/

To avoid doing this over and over :-) here are all the conditions you can check:

@_win32
True if running on a Win32 system.

@_win16
True if running on a Win16 system.

@_mac
True if running on an Apple Macintosh system.

@_alpha
True if running on a DEC Alpha processor.

@_x86
True if running on an Intel processor.

@_mc680x0
True if running on a Motorola 680x0 processor.

@_PowerPC
True if running on a Motorola PowerPC processor.

@_jscript
Always true.

@_jscript_build
Contains the build number of the JScript scripting engine.

@_jscript_version
Contains the JScript version number in major.minor format.

Enjoy!

Apr 15 '06 #27
RobG <rg***@iinet.net.au> writes:
VK wrote:
I am wondering about the common practice of some UA producers to spoof
the UA string to pretend to be another browser (most often IE).


Is it common? How many browsers, by default, spoof others?


Almost all other than Netscape 2-4.

IE6's user-agent string (on my computer) is:
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50215)

The initial Mozilla/4.0 is a spoof of Netscape 4. The remaining data can
be used to discover that it actually isn't Netscape 4 by servers that
know what to look for, while those that don't know will be spoofed.
Most people have forgotten that this is how, and where, spoofing started :)
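
(For what it's worth, a sketch of the kind of check a server or script
could use to tell the spoof apart - though, as the rest of this thread
shows, browsers that copy IE's full string will match it too, which is the
whole problem:)

// The leading "Mozilla/4.0" alone looks like Netscape 4; the "MSIE"
// token inside the parenthesised comment is what gives IE away.
function claimsToBeIE(ua) {
  return /^Mozilla\/\d\.\d \(compatible; MSIE \d/.test(ua);
}
// claimsToBeIE('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)') --> true
// claimsToBeIE('Mozilla/4.0 (Windows NT 5.1;US) Opera 3.62 [en]')    --> false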

FireFox's is:
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.1) Gecko/20060111 Firefox/1.5.0.1

It claims to be Mozilla/5.0, which it isn't. It's perhaps acceptable, since
it's in the general family of browsers reporting that name.

I don't remember what Opera's default is any more. It used to be an
IE-spoof. Mine is set to report as Opera, and I only very rarely have
problems with that (I still spoof for MSDN)

(And, also IANAL, I agree on the arguments against it being trademark
violation)

/L
--
Lasse Reichstein Nielsen - lr*@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'
Apr 15 '06 #28
"VK" <sc**********@yahoo.com> writes:
I really was running out of ideas until I found
the window.opera object (where they keep their a la GreaseMonkey
functions).
The user.js-functions are new, but I believe window.opera has been
in Opera since the earliest Javascript capable Opera browsers.
The only thing that bothers me now in my nightmares :-) is that
some new wannabe bastard may decide to pretend to be Opera rather than IE
or Gecko. I tranquilize myself by thinking that it's highly doubtful -
though still possible.


The reason browsers try to look like other browsers is to thwart
annoying programmers that refuse to let pages work on them, even
though they implement all the features that are needed. As long as
programmers use browser detection as a white-list of browsers that are
allowed to work on their page, browser makers will try to prevent their
browser from being excluded (usually for no good reason).

If the programmers used feature detection instead, then they wouldn't
need to know what browser it is, only what features are available. A
completely new but feature complete browser would work with old pages
then. Using browser detection as a white-list, it would be
unnecessarily excluded.
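
In code the contrast is roughly this (a sketch, not anyone's production
script):

// Browser detection as a white-list: a new but fully capable browser fails.
if (navigator.userAgent.indexOf('MSIE') != -1) {
  // ... proceed ...
}

// Feature detection: any browser that provides what the script actually
// needs is allowed to proceed, whatever it calls itself.
if (document.getElementById && document.createElement) {
  // ... proceed, using only the features just tested for ...
}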

/L
--
Lasse Reichstein Nielsen - lr*@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'
Apr 15 '06 #29
VK

Lasse Reichstein Nielsen wrote:
I don't remember what Opera's default is any more. It used to be an
IE-spoof.
The latest Opera 8.54 default is
Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; en) Opera 8.54
(with different OS and language of course for different users)

I presume it currently holds the record of spoofing :-) by claiming
three browsers at once. At least they mention the real one now (Opera)
too but AFAIK it is a rather recent improvement.
(And, also IANAL, I agree on the arguments against it being trademark
violation)


It claims to be able to handle content which it cannot handle: say,
it cannot draw VML graphics, initialize ActiveX objects or use
behaviors. I understand that it is always possible to serve some
generic all-in-one script to sniff the real situation client-side and
act accordingly. But since when did server-side content preparation
become an illegal technique? Very often the requested page doesn't
exist at all - it is prepared out of raw data per call.

Apr 15 '06 #30

VK wrote:
Lasse Reichstein Nielsen wrote:
I don't remember what Opera's default is any more. It used to be an
IE-spoof.


The latest Opera 8.54 default is
Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; en) Opera 8.54
(with different OS and language of course for different users)

I presume it currently holds the record of spoofing :-) by claiming
three browsers at once. At least they mention the real one now (Opera)
too but AFAIK it is a rather recent improvement.
(And, also IANAL, I agree on the arguments against it being trademark
violation)


It claims to be able to handle content which it cannot handle: say,
it cannot draw VML graphics, initialize ActiveX objects or use
behaviors. I understand that it is always possible to serve some
generic all-in-one script to sniff the real situation client-side and
act accordingly. But since when did server-side content preparation
become an illegal technique? Very often the requested page doesn't
exist at all - it is prepared out of raw data per call.


Opera is quite unique in many ways in addition to spoofing user agents.
So far as I know, it does not use actual ActiveX (unless you add
unofficial plugins that float around from time to time), and Opera has
been very anti-ActiveX, at least in the past. Yet, for the last few
upgrades, Opera will run the WMP9 and 10 media player if you use only a
Microsoft ActiveX object to code for the media playing. I have no idea
what Opera is doing to get this to work - hopefully someone knows. Of
course there have been ActiveX plugins for the WMP only for Netscape,
Mozilla and Firefox in the past, but you had to download them, and the
mentioned browser writers tended to discourage this. The reason for an
ActiveX plugin for the WMP is that some only write for IE using an
ActiveX object and do not bother to write a path for most other
browsers that do not come with ActiveX.

Since I work with quite a bit of media, I have noted something else
interesting at the server of my host. You of course get records of the
user agent for visiting browsers. However these days you often get a
record of the visit of a player, such as the WMP, Real, etc. Actually
some of the modern media players are now running about 3 times the byte
size of a small browser such as Opera or Firefox. Part of this
complexity is due to inclusion of new features for selling and
protecting media.

Apr 15 '06 #31
"VK" <sc**********@yahoo.com> writes:
The latest Opera 8.54 default is
Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; en) Opera 8.54
(with different OS and language of course for different users)
I presume it currently holds the record of spoofing :-) by claiming
three browsers at once. At least they mention the real one now (Opera)
too but AFAIK it is a rather recent improvement.
I don't think so ... but let me check.
Yep, Opera 3.62 reports:
Mozilla/4.0 (Windows NT 5.1;US) Opera 3.62 [en]
i.e., spoofing Netscape 4 to the uninitiated, but revealing itself to be
Opera to those who know that it exists.
It claims to be able to handle content which it cannot handle: say,
it cannot draw VML graphics, initialize ActiveX objects or use
behaviors.
My IE can't initialize most ActiveX objects either.
Still, that should be handled by content negotiation, not feature
inference from the user-agent string. It wouldn't have been spoofed
to begin with, if people hadn't abused it.
I understand that it is always possible to serve some generic
all-in-one script to sniff the real situation client-side and act
accordingly. But since when did server-side content preparation become
an illegal technique?


It never was ... but people refusing to serve content to a browser
that can understand it, just because they don't recognize its name,
have made the bed they now lie in (if that's an idiom in English).

/L
--
Lasse Reichstein Nielsen - lr*@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'
Apr 15 '06 #32
on**********@netscape.net wrote:
VK wrote:
Richard Cornford wrote:
> 1. You don't understand javascript sufficiently well to understand
> the code that you write yourself. <snip>

Rather strong statement from a person who just recently learned how to
add <script> elements to the page (see the relevant thread)

;-)


Is there a troll convention going on in the UK this weekend :-).


If there was, most certainly you would be there, would you not?
Just look at the post history of some of the trolls responding to this
thread,
I can see only one, maybe two trolls here. Certainly they are not called
Richard, Michael, or Lasse, as those people are in fact invaluable
contributors to this newsgroup, who have made their points well.
While "cwdjrxyz" and the like are not at all, and they have not.
[...]
Since some [...] appear to have far too much free time, as
indicated by their many and often extremely long and rude posts
to this group and others [...]


This is a technical Usenet discussion group, its more serious contributors
trying to come up with (for some people hard) technical facts; not some
cuddly script-kiddie Web forum, its members telling you what you want to
hear, that you may be used to. If you can't stand the heat, stay out of
the kitchen.

<URL:http://jibbering.com/faq/>
Score adjusted

PointedEars
Apr 15 '06 #33
VK

Lasse Reichstein Nielsen wrote:
It never was ... but people refusing to serve content to a browser
that can understand it, just because they don't recognize its name,
have made the bed they now lie in (if that's an idiom in English).


You've made the bed, you sleep in it. :-) Point taken.

Apr 15 '06 #34
Lasse Reichstein Nielsen wrote:
RobG <rg***@iinet.net.au> writes:
VK wrote:
I am wondering about the common practice of some UA producers to spoof
the UA string to pretend to be another browser (most often IE).

Is it common? How many browsers, by default, spoof others?


Almost all other than Netscape 2-4.

IE6's user-agent string (on my computer) is:
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50215)

The initial Mozilla/4.0 is a spoof of Netscape 4. The remaining data can
be used to discover that it actually isn't Netscape 4 by servers that
know what to look for, while those that don't know will be spoofed.
Most people have forgotten that this is how, and where, spoofing started :)

FireFox's is:
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.1) Gecko/20060111 Firefox/1.5.0.1

It claims to be Mozilla/5.0, which it isn't. It's perhaps acceptable, since
it's in the general family of browsers reporting that name.


Safari on PowerPC is:

Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/XX (KHTML, like
Gecko) Safari/YY
And on Intel:

Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en) AppleWebKit/XX (KHTML,
like Gecko) Safari/YY

<URL:http://developer.apple.com/internet/safari/faq.html#anchor2>
where XX and YY are appropriate version numbers. With the debug menu
enabled, it takes a second to change it to any of a number of strings,
including Netscape 4, 6, 7, IE 5 Mac, IE 6 Windows, even Konqueror 3.
There is still one site that, when I log in, reports:

"This site is not optimised for Netscape 6. We recommend
you use Microsoft Internet Explorer Version 6.0 or
Netscape 4.77 or 4.78."
I ignore the warning and everything works as expected.

Stumbled across this site that appears to have a fairly exhaustive list
of UA strings:

<URL:http://www.pgts.com.au/pgtsj/pgtsj0208c.html>

--
Rob
Apr 16 '06 #35

Lasse Reichstein Nielsen wrote:
"VK" <sc**********@yahoo.com> writes:
The latest Opera 8.54 default is
Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; en) Opera 8.54
(with different OS and language of course for different users)

I presume it currently holds the record of spoofing :-) by claiming
three browsers at once. At least they mention the real one now (Opera)
too but AFAIK it is a rather recent improvement.


I don't think so ... but let me check.
Yep, Opera 3.62 reports:
Mozilla/4.0 (Windows NT 5.1;US) Opera 3.62 [en]
i.e., spoofing Netscape 4 to the uninitated, but revealing itself to be
Opera to those who knows that it exists.


I dug up several properties for Opera 7.21 from 2003 from some of my
backups. In case anyone needs the information for several other current
browsers in 2003, I can provide that also. The information was obtained
on a Windows XP OS.

______________________________________________________________________

Opera 7.21

appCodeName=Mozilla
appMinorVersion=
appName=Microsoft Internet Explorer
appVersion=4.0 (compatible; MSIE 6.0; Windows NT 5.1)
userAgent=Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1) Opera 7.21
[en]
vendor - NN6 up only=undefined
vendorSub - NN6 up only=undefined
document.all object support=true
getElementById object support=true
java support=true
Height=768
Width=1024
Available Height=738
Available Width=1024
Color Depth=32 bit
innerWidth=1022
innerHeight=584
clientHeight=584
clientWidth=1022
language for NN & Opera=en
language for IE & relatives=en
IE4 browser language or IE5 up op. sys.lang=en
Persistent cookies enabled?=true
CPU Class(IE4+)=undefined
On Line(IE4+)?=undefined
Operating System(NN6+)=undefined
Platform=Win32
Product Name(NN6+)=undefined
Product Version(NN6+)=undefined
Operating System Language(IE4+)=undefined
User Profile(IE3+)=undefined
document.body.clientWidth object support=true
document.body.clientHeight object support=true
document.body object support=true
window.innerHeight object support=true

______________________________________________________________________

Apr 16 '06 #36
VK

RobG wrote:
Safari on PowerPC is:

Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/XX (KHTML, like
Gecko) Safari/YY


Wow! Opera has moved to second place in the spoofing contest.

Mozilla
KDE
Gecko
Safari

I just love this part: "like Gecko" - what the hell is it supposed to
mean? Almost, but no cigar? That sustains my old idea that some UA
strings are being prepared under severe toxic influence.

Now it would be cool to see the Konqueror string. To become the winner
it should now be something like "(KHTML, like Safari) close to Gecko
almost MSIE".

:-)

Apr 16 '06 #37
"VK" <sc**********@yahoo.com> writes:
RobG wrote:
Safari on PowerPC is:

Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/XX (KHTML, like
Gecko) Safari/YY
I just love this part: "like Gecko" - what the hell is it supposed to
mean? Almost, but no cigar?
Most likely that some induhviduals out there are checking for the
occurrence of the string "Gecko" in the user-agent string before
allowing a browser to use their site. And again they got what they asked
for.
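
Presumably something along these lines - a sketch of the kind of test
being complained about, not a recommendation:

// The naive gate: any UA whose string merely contains "Gecko" gets in,
// which is exactly what "(KHTML, like Gecko)" is built to satisfy.
if (navigator.userAgent.indexOf('Gecko') == -1) {
  alert('Please use a supported browser.'); // the behaviour that invites spoofing
}
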
That sustains my old idea that some UA strings are being prepared
under severe toxic influence.


Hardly. There's a logic to it, although a quite twisted one. It's
cops and robbers - every time someone tries to use the user-agent
string inappropriately, the browser makers change it so that their
browser still matches.

/L
--
Lasse Reichstein Nielsen - lr*@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'
Apr 16 '06 #38
VK
> "VK" <sc**********@yahoo.com> writes:
RobG wrote:
Safari on PowerPC is:

Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/XX (KHTML, like
Gecko) Safari/YY
I just love this part: "like Gecko" - what the hell is it supposed to
mean? Close, but no cigar?


After rethinking, I agree with you that it is not stupid: I also think
it is exactly on the topic of this thread.
This is the same as, say, <http://www.microsoft.com> - try to use it for
anything but Microsoft, Inc. But just add "my" and it becomes your
private business right away: <http://www.mymicrosoft.com>.
<http://www.coca-cola.com> - no way; <http://www.my-coca-cola.com> - my
way.

AppleWebKit/XX (KHTML, Gecko) - too dangerous
AppleWebKit/XX (KHTML, like Gecko) - so sue me

"like Gecko", "almost Firefox", even "not MSIE". Stincky- but so far
legally secure I guess.
Hardly. There's a logic to it, although a quite twisted one. It's
cops and robbers - every time someone tries to use the user-agent
string inappropriately, the browser makers change it so that their
browser still matches.


I would go for this logic if new browsers were *exact* functional
equivalents of the UAs they are spoofing. I mean they can have better
usability and as many extra features as they want: but the
functionality of the spoofed browser must be implemented in full and in
all details. But it is not this way: so far these are mostly narrowed
implementations with their own sets of bugs and rendering quirks. Yet
they want to be served the same server-side content prepared for a much
more capable UA: and if they choke on it (no surprise) they propose to
"Report broken site". I may be biased, but something is wrong with this
picture.

Apr 16 '06 #39
VK said the following on 4/16/2006 4:27 PM:
"VK" <sc**********@yahoo.com> writes:

Your quoting is incorrect; neither RobG nor you wrote the text below,
Lasse did.
Hardly. There's a logic to it, although a quite twisted one. It's
cops and robbers - every time someone tries to use the user-agent
string inappropriately, the browser makers change it so that their
browser still matches.


I would go for this logic if new browsers were *exact* functional
equivalents of the UAs they are spoofing.


Stop trying to determine the UA and start testing for features, and
that becomes a moot point - as do any and all arguments about the
validity of UA strings.
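
A minimal sketch of the difference (the function name is only
illustrative):

// Object detection: test for the object you are about to use,
// instead of guessing at it from navigator.userAgent.
function getElementWithId(id) {
  if (document.getElementById) {      // W3C DOM browsers
    return document.getElementById(id);
  } else if (document.all) {          // IE 4 style
    return document.all[id];
  } else if (document.layers) {       // Netscape 4 style
    return document.layers[id];
  }
  return null;                        // unknown DOM - degrade gracefully
}

Whatever the UA string claims, the branch taken is the one the browser
can actually execute.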

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Apr 16 '06 #40
"VK" <sc**********@yahoo.com> writes:
I would go for this logic if new browsers were *exact* functional
equivalents of the UAs they are spoofing. I mean they can have better
usability and as many extra features as they want: but the
functionality of the spoofed browser must be implemented in full and in
all details.
If most pages required that, I could accept it as a requirement. But
pages that exclude browsers by browser detection are usually created
by people with a less than perfect grasp of browser scripting. They
are likely to just use the same few features that everybody else on
the web is using, and that new browsers do support.

Think as a browser creator: If 100 pages exclude your browser, you
would rather spoof a browser that they include and have 90 of them
work perfectly and 10 of them break while running. It's better service
for your users (90 more pages that work), and users of non-IE browsers
are likely to, rightly, blame the page writer for errors anyway.
But it is not this way: so far these are mostly narrowed
implementations with their own sets of bugs and rendering
quirks.
Why should they be any different from the existing browsers (except
that they actually support the core W3C standards, unlike IE)?

Everything on top of that is bonus features, which should only be
used after detecting the feature in question, not just the browser.
Yet they want to be served the same server-side content
prepared for a much more capable UA: and if they choke on it (no
surprise) they propose to "Report broken site". I may be biased, but
something is wrong with this picture.


If you take Opera as an example, they always identify themselves as
Opera. The authors of any page that still fails must be unaware of, or
deliberately ignoring, the fact that their page doesn't work. Reporting
the page as broken allows Opera Software to test whether the page fails
due to a bug in the browser or due to being badly written. If the
latter, they can contact the site owner and report it (along with the
message that at least one Opera user wanted to use their site and
couldn't).

/L
--
Lasse Reichstein Nielsen - lr*@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'
Apr 17 '06 #41
VK
Ref. Should UA string spoofing be treated as a trademark violation?

In support of the point of view I expressed in this thread, where Opera
was mentioned among the most "nasty offenders". Either they read this
and agreed, or they came to the same conclusion independently, but:

<http://www.opera.com/docs/changelogs/windows/900b1/> (freshly listed
on opera.com):

Changelog : HTTP
....
Changed default UserAgent string to identify as Opera.
....

P.S. Now it's time to take care of Safari and Konqueror ;-)

Apr 22 '06 #42
VK said the following on 4/22/2006 8:52 AM:
Ref. Should UA string spoofing be treated as a trademark violation?

In support of the point of view I expressed in this thread, where Opera
was mentioned among the most "nasty offenders". Either they read this
and agreed, or they came to the same conclusion independently, but:
You can rest assured they didn't change it based on this thread.
<http://www.opera.com/docs/changelogs/windows/900b1/> (freshly listed
on opera.com):

Changelog : HTTP
....
Changed default UserAgent string to identify as Opera.
....

P.S. Now it's time to take care of Safari and Konqueror ;-)


No, it's time to stop browser detecting and start object detecting
instead; then it is a moot issue *what* the UA string says.

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Apr 23 '06 #43
Randy Webb wrote:
No, it's time to stop browser detecting and start object detecting
instead; then it is a moot issue *what* the UA string says.


I raised a point in this thread to answer Richard's request for examples of
cases where browser sniffing would ever be needed. No one replied. I think
there are cases where browser sniffing is justified, even if not completely
accurate. I'd like to be proven wrong ;)

--
Matt Kruse
http://www.JavascriptToolbox.com
http://www.AjaxToolbox.com
Apr 23 '06 #44
VK

Matt Kruse wrote:
Randy Webb wrote:
No, it's time to stop browser detecting and start object detecting
instead; then it is a moot issue *what* the UA string says.


I raised a point in this thread to answer Richard's request for examples of
cases where browser sniffing would ever be needed. No one replied. I think
there are cases where browser sniffing is justified, even if not completely
accurate. I'd like to be proven wrong ;)


I gave you some already. And for the IFRAME case too, if you want to
cover that AOL Browser without conditional comment support.

But that is just a side issue. What about XSLT documents? For browsers
(Firefox, Camino, Opera 9.0a or higher, Safari 2.1 or higher, IE 6 or
higher) I want to serve normal .xml data for client-side
transformation. For sub-standard crap one needs to offer at least some
condolences - in HTML.

Or say someone has shifted to an XHTML basis: wouldn't it be nice to
serve "text/html" instead of "application/xhtml+xml" to browsers
happily unaware of XHTML?

There are other cases (how many do you need? :-)

Apr 23 '06 #45
Matt Kruse wrote:
Randy Webb wrote:
No, it's time to stop browser detecting and start object detecting
instead; then it is a moot issue *what* the UA string
says.
I raised a point in this thread to answer Richard's request
for examples of cases where browser sniffing would ever be
needed. No one replied.


Probably in part because we have discussed the subject before and your
re-raising the same question implies that you were not interested in the
responses you got last time:-

1. As the issue applies to all Windows browsers that employ the native
Windows select GUI component to represent HTML select elements, the
problem has nothing to do with IE as such but instead with determining
whether the particular UA is using that component. Which may be
detectable in some environments, as that component is notorious for not
accepting particular CSS assignments and its refusal to accept them may
have manifestations in the DOM.

2. It is only worth considering not using the IFRAME even when it is not
necessary if its use is harmful, and branching in the creation of the
IFRAME also means having two sets of, for example, positioning code. It
may be simpler to just employ the IFRAME anyway. And having done that
you will have a generally available hidden IFRAME on hand for doing
asynchronous background loading; see it as an opportunity rather than a
problem.

3. The IFRAME shim is not the only technique for handling the
burn-through issue (a bare-bones sketch of the shim itself follows).
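
As an illustration only, here is a bare-bones version of the shim; it
assumes the DIV is absolutely positioned and shares its offset parent
with the inserted IFRAME, and it glosses over borders, scrolling and the
reuse mentioned in point 2:

// Bare-bones IFRAME shim sketch: an empty IFRAME is placed directly
// beneath the positioned DIV so that windowed controls (SELECT lists)
// do not show through it. Sizes and z-index values are illustrative.
function shimUnder(div) {
  var shim = document.createElement("iframe");
  shim.src = "javascript:false";          // no extra HTTP request
  shim.frameBorder = "0";
  shim.style.position = "absolute";
  shim.style.left = div.offsetLeft + "px";
  shim.style.top = div.offsetTop + "px";
  shim.style.width = div.offsetWidth + "px";
  shim.style.height = div.offsetHeight + "px";
  shim.style.zIndex = 99;                 // just below the DIV ...
  div.style.zIndex = 100;                 // ... which stays on top
  div.parentNode.insertBefore(shim, div);
  return shim;
}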
I think there are cases where browser sniffing is justified,
even if not completely accurate. I'd like to be proven wrong ;)


Trying to justify something that doesn't work (to the extent of being
catastrophically wrong in worst cases) is hardly a reasonable response
to finding that the more reliable alternative cannot cover 100% of
cases.

Richard.
Apr 23 '06 #46
"Randy Webb" <Hi************@aol.com> wrote in message
news:47********************@comcast.com...
...
No, it's time to stop browser detecting, object detecting instead, and
then it is a moot issue *what* the UA string says.


How would you use object detecting to:

1) find out whether onunload fires when you hit the back, forward, or reload
buttons, or close the browser window? (Onunload does not work in those
cases in Opera and, I have been told, in Safari, but it works in many other
browsers.)

2) find out what colors an inset, outset, ridge, or groove border actually
uses when you specify a border color of #000000? (The actual colors vary
from black to light gray in various browsers.)
Apr 24 '06 #47
Richard Cornford wrote:
Probably in part because we have discussed the subject before and your
re-raising the same question implies that you were not interested in
the responses you got last time:-
I've not seen previous replies on the topic.
1. As the issue applies to all Windows browsers that employ the native
Windows select GUI component to represent HTML select elements
I'm not sure that is true. A browser could easily be written in Windows
using native controls which do not overlap other objects in the page. It is
merely a side-effect of bad programming in IE that is at fault.
the
problem has nothing to do with IE as such but instead determining
whether the particular UA is using that component.
I believe that is a false statement.
Which may be
detectable in some environments as that component is notorious for not
accepting particular CSS assignments and its refusal to accept them
may have manifestations in the DOM.
But that is a case where you are testing one feature and then inferring
another.
2. It is only worth considering not using the IFRAME even when it is
not necessary if its use is harmful
It cannot be assumed that there is no browser where using the iframe would
be a problem, but it would probably be a safe assumption AFAIK.
3. The IFRAME shim is not the only technique for handling the burn
through issue.


True, but it seems to be the best, IMO.
I think there are cases where browser sniffing is justified,
even if not completely accurate. I'd like to be proven wrong ;)

Trying to justify something that doesn't work (to the extent of being
catastrophically wrong in worst cases) is hardly a reasonable response
to finding that the more reliable alternative cannot cover 100% of
cases.


I'm talking only about the cases which cannot be covered by the more
reliable alternative.

--
Matt Kruse
http://www.JavascriptToolbox.com
http://www.AjaxToolbox.com
Apr 24 '06 #48
VK

Matt Kruse wrote:
I'm not sure that is true. A browser could easily be written in Windows
using native controls which do not overlap other objects in the page. It is
merely a side-effect of bad programming in IE that is at fault.


"Super Z of form elements" (now called "firing through" I guess) is an
age old problem first introduced with DHTML itself. It was common for
both NN 4.x and IE 4.x. Later fixed in IE 5.2, reintroduced in IE 5.5,
fixed, reintroduced - so the same version line might have it or not
depending on the minor version.

The core issue here is that, system-wise, form elements (in Windows at
least) are not part of the page graphics context. They are graphics
peers painted separately atop the page context and anchored to the
indicated page points. The closest analogy would be a transparent touch
screen with form elements painted and placed atop your monitor
screen. One can reveal this "hidden misery" by making a big,
element-rich form and scrolling it on a slow machine. You'll see the
form elements lagging behind the page content itself (labels, text,
graphics etc.) and then "jumping" all together to the new position.

Thus, in order to let page content sit atop a form element, the
only way is to "place another touch screen atop the existing one".
Until DirectX matured this was simply impossible - and even now it is
a tricky task.

Personally I hate the Super Z issue; it has been a headache for
everyone since 1998. And I really think that after so many years IE
should long ago have switched to built-in DHTML widgets instead of
torturing the legacy peers with DirectX. But I may not know all the
issues.
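
For completeness, the crude old workaround - rather than layering
anything at all - was simply to hide the windowed controls while the
overlaying layer is visible; a minimal sketch:

// Crude alternative to the shim: hide SELECT elements while an
// overlaying layer is shown, and restore them when it is hidden.
function setSelectVisibility(visible) {
  var selects = document.getElementsByTagName("select");
  for (var i = 0; i < selects.length; i++) {
    selects[i].style.visibility = visible ? "visible" : "hidden";
  }
}
// Call setSelectVisibility(false) before showing the menu/overlay,
// and setSelectVisibility(true) after hiding it again.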

Apr 24 '06 #49
On 24/04/2006 02:16, Warren Sarle wrote:

[snip]
How would you use object detecting to:

1) find out whether onunload fires when you hit the back, forward, or
reload buttons, or close the browser window?
One wouldn't, nor should one care. The unload event shouldn't be used for
anything important, anyway, so it's not a big deal if it fails to fire.

[snip]
2) find out what colors an inset, outset, ridge, or groove border
actually uses when you specify a border color of #000000?


One wouldn't as it's a minor presentational issue, and unrelated to
browser scripting.

If black looks good in one browser, but not another, try a compromise. A
dark grey will probably do just as well for the former, and be enough to
adjust the secondary colour chosen by the latter.

[snip]

Mike

--
Michael Winter
Prefix subject with [News] before replying by e-mail.
Apr 24 '06 #50
