Bytes IT Community

Help Jquery: unable to register a ready function

P: n/a

Hello All

I am trying to activate a link using jQuery. Here is my code:

<html>
<head>
<script type="text/javascript" src="../../resources/js/jquery-1.2.6.js"</script>

<script language="javascript" type="text/javascript">

$(document).ready(function(){ $('.mylink').click(function()
{ $.jPrintArea('#tabularData') }); });

jQuery.jPrintArea=function(el)
{
alert("hello");

var iframe=document.createElement('IFRAME');

var doc=null;

$(iframe).attr('style','position:absolute;width:0px;height:0px;left:-500px;top:-500px;');

document.body.appendChild(iframe);

doc=iframe.contentWindow.document;

var links=window.document.getElementsByTagName('link');

for(var i=0;i<links.length;i++)
if(links[i].rel.toLowerCase()=='stylesheet')

doc.write('<link type="text/css" rel="stylesheet" href="'+links[i].href+'"></link>');

doc.write('<div class="'+$(el).attr("class")+'">'+$(el).html()+'</div>');

doc.close();

iframe.contentWindow.focus();

iframe.contentWindow.print();

alert('Printing...');

//wait(1);

document.body.removeChild(iframe);

}
</script>
</head>
<body>
<div id="tabularData">
<a href="#" class="mylink" name="mylink">Print this Table</a>
</div>
</body>
</html>

When I click on the link I see nothing. I am expecting to see an
alert. Could someone please tell me where
I am going wrong?

Thanks for your help. Sorry I am unable to post the code on a web
site.

Oct 28 '08 #1
53 Replies


On Oct 28, 7:50 am, "souporpo...@gmail.com" <soup_or_po...@yahoo.com>
wrote:
Hello All

I am trying to activate a link using Jquery. Here is my code;
Trying to do what? And you can't do anything useful with jQuery.

You forgot the doctype.
>
<html>
<head>
<script type="text/javascript" src="../../resources/js/jquery-1.2.6.js"</script>
Remove this.
>
<script language="javascript" type="text/javascript">

$(document).ready(function(){   $('.mylink').click(function()
{ $.jPrintArea('#tabularData') });      });
This is the sort of mangled syntax that the jQuery crowd thinks "makes
JavaScript bearable?" This is complete nonsense and about as
inefficient as it gets. And BTW, their "ready" method is known to be
unreliable.

>
jQuery.jPrintArea=function(el)
{
        alert("hello");

var iframe=document.createElement('IFRAME');

var doc=null;

$(iframe).attr('style','position:absolute;width:0px;height:0px;left:-500px;top:-500px;');

document.body.appendChild(iframe);

doc=iframe.contentWindow.document;

var links=window.document.getElementsByTagName('link');

for(var i=0;i<links.length;i++)
if(links[i].rel.toLowerCase()=='stylesheet')

doc.write('<link type="text/css" rel="stylesheet" href="'+links[i].href+'"></link>');

doc.write('<div class="'+$(el).attr("class")+'">'+$(el).html()+'</div>');

doc.close();

iframe.contentWindow.focus();

iframe.contentWindow.print();

alert('Printing...');

//wait(1);

document.body.removeChild(iframe);

}

</script>
</head>
<body>
<div id="tabularData">
<a href="#" class="mylink" name="mylink">Print this Table</a>
</div>
</body>
</html>

When I click on the link I see nothing. I am expecting to see an
alert. Could someone please tell me where
I am going wrong?
Yes. You clearly want to add scripting to your site, but do not know
how. You heard that jQuery would make it really easy and you won't
have to learn anything about ECMAScript, cross-browser issues, etc.
Just wind it up and watch it go, right? It won't work. Yes, lots of
pundits on blogs say it works, but they don't know what they are
talking about.

See the FAQ for some basic examples. That is the best place to start.
Oct 28 '08 #2

On Oct 28, 2:35 pm, David Mark <dmark.cins...@gmail.com> wrote:
On Oct 28, 7:50 am, "souporpo...@gmail.com" <soup_or_po...@yahoo.com>
wrote:
Hello All
I am trying to activate a link using Jquery. Here is my code;

Trying to do what? And you can't do anything useful with jQuery.

You forgot the doctype.
<html>
<head>
<script type="text/javascript" src="../../resources/js/jquery-1.2.6.js"</script>

Remove this.
<script language="javascript" type="text/javascript">
$(document).ready(function(){   $('.mylink').click(function()
{ $.jPrintArea('#tabularData') });      });

This is the sort of mangled syntax that the jQuery crowd thinks "makes
JavaScript bearable?" This is complete nonsense and about as
inefficient as it gets. And BTW, their "ready" method is known to be
unreliable.


[...]
When I click on the link I see nothing. I am expecting to see an
alert. Could someone please tell me where
I am going wrong?

Yes. You clearly want to add scripting to your site, but do not know
how. You heard that jQuery would make it really easy and you won't
have to learn anything about ECMAScript, cross-browser issues, etc.
Just wind it up and watch it go, right? It won't work. Yes, lots of
pundits on blogs say it works, but they don't know what they are
talking about.

See the FAQ for some basic examples. That is the best place to start.
You are right. I couldn't agree with you more. I found the following
vanilla JavaScript to be working
perfectly. For the benefit of others, I am posting it here. I am
convinced jQuery is not for me. Thanks

var iframe; //global
function printArea (el) {
iframe=document.createElement('IFRAME');
var doc=null;
iframe.style.height='0px';
iframe.style.width='0px';
document.body.appendChild(iframe);
doc=iframe.contentWindow.document;
var innerhtml = document.getElementById(el).innerHTML;
doc.write('<html><body><div>'+innerhtml +'</div></body></html>');
doc.close();
iframe.contentWindow.focus();
iframe.contentWindow.print();

setTimeout("document.body.removeChild(iframe)",5000);

}
And it is called as printArea('mydiv') where mydiv is the div tag
wrapping around some html.
Oct 29 '08 #3

so*********@gmail.com wrote:
On Oct 28, 2:35 pm, David Mark <dmark.cins...@gmail.com> wrote:
[...]
>See the FAQ for some basic examples. That is the best place to start.

You are right. I can't agree with you more. I found the following
vanilla javascript to be working
perfectly. For the benefit of some, I am posting it here. I am
convinced Jquery is not for me. Thanks

var iframe; //global
You seem to be making iframe global so you can call it from setTimeout.
You don't have to do that, you can use a closure instead.

function printArea (el) {
iframe=document.createElement('IFRAME');
var doc=null;
There is rarely any need to initialise a variable with null if you are
going to set its value later. Just declare it, its value will be
undefined, which is more-or-less equivalent to null.

iframe.style.height='0px';
iframe.style.width='0px';
document.body.appendChild(iframe);
doc=iframe.contentWindow.document;
Now you've assigned a value to doc, the previous assignment did nothing
useful. It's not really an issue, just that you didn't need to assign
it a value of null.

var innerhtml = document.getElementById(el).innerHTML;
doc.write('<html><body><div>'+innerhtml +'</div></body></html>');
doc.close();
iframe.contentWindow.focus();
iframe.contentWindow.print();

setTimeout("document.body.removeChild(iframe)",5000);
To use a closure so you don't need the global variable, try:

setTimeout(function(){document.body.removeChild(iframe)}, 5000);
Untested, but should be OK.
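To make the idea concrete, here is a stand-alone sketch of the closure pattern. Mock objects stand in for document.body and the iframe (an assumption so it runs outside a browser), but the principle is identical in the real code:

```javascript
// Sketch of the closure pattern. `makeRemover` returns a function that
// closes over `parent` and `child`, so no global variable is needed to
// reach them later (e.g. from a setTimeout callback).
function makeRemover(parent, child) {
  return function () {
    parent.removeChild(child);
  };
}

// Mock objects stand in for document.body and the iframe element.
var removed = [];
var mockBody = { removeChild: function (node) { removed.push(node); } };
var mockIframe = { nodeName: 'IFRAME' };

// In the real code, this returned function is what you hand to setTimeout:
var remover = makeRemover(mockBody, mockIframe);
remover();
```

After `remover()` runs, `removed` holds the mock iframe; the local variable stayed reachable through the closure alone.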
>
}
And it is called as printArea('mydiv') where mydiv is the div tag
div element. :-)
--
Rob
Oct 29 '08 #4

On Oct 28, 1:35 pm, David Mark <dmark.cins...@gmail.com> wrote:
And you can't do anything useful with jQuery.
You forgot the wink: ;)
You forgot the doctype.
Perhaps intentionally. (For example, when inserting code into an
existing product or service with no doctype, it's best to test without
a doctype).
$(document).ready(function(){   $('.mylink').click(function()
{ $.jPrintArea('#tabularData') });      });
This is the sort of mangled syntax that the jQuery crowd thinks "makes
JavaScript bearable?"
No. His formatting was terrible. I would write it like:

$(function(){
  $('.mylink').click(function(){
    $.jPrintArea('#tabularData');
  });
});

ah, yes. Much better.
And BTW, their "ready" method is known to be unreliable.
Really? Under what conditions? Where would someone locate a more
reliable method of determining when the DOM is ready?

Hope that helps!

Matt Kruse
Oct 29 '08 #5

On Oct 29, 2:39 pm, Matt Kruse <m...@thekrusefamily.com> wrote:
On Oct 28, 1:35 pm, David Mark <dmark.cins...@gmail.com> wrote:
And you can't do anything useful with jQuery.

You forgot the wink: ;)
Why would I wink about that? It is a proven fact. You just have a
(very) short memory.
>
You forgot the doctype.

Perhaps intentionally. (For example, when inserting code into an
existing product or service with no doctype, it's best to test without
a doctype).
Oh, I am sure that was the intention.
>
$(document).ready(function(){   $('.mylink').click(function()
{ $.jPrintArea('#tabularData') });      });
This is the sort of mangled syntax that the jQuery crowd thinks "makes
JavaScript bearable?"

No. His formatting was terrible. I would write it like:
Formatting?! What does that have to do with anything?
>
$(function(){
  $('.mylink').click(function(){
    $.jPrintArea('#tabularData');
  });
});

ah, yes. Much better.
Much better than what? Look at that BS. Create a big jQuery object,
which is known to have problems with its own arguments (typeof a ==
'array', isFunction, etc.) due to an incredibly ignorant design, then
discard a big jQuery object. Create a big jQuery object, discard a
big jQuery object, create a big jQuery object, discard a big jQuery
object, etc., etc. That's how all of the pathetic examples for this
library read. It encourages people to be as inefficient as possible.
And how many functions are called by the above "better" example?
3000?

And you already know about the browser sniffing and the all-around
incompetence of the author.

http://ejohn.org/blog/future-proofin...ipt-libraries/

That one is my personal favorite, next to the time he asked me for a
"magic flag" to feature test get/setAttribute. Yes, I gave it to him,
but I think he had fled back to gumdrop-land by then.

Though for comic relief, you can't beat this one:

http://ejohn.org/blog/most-bizarre-ie-quirk/

What kind of idiot would delegate the most critical browser scripting
tasks to people like that? In 2008 no less?! And we just had this
discussion a year ago. Did you fall on your head since?

I take it you like to write "plug-ins" for jQuery. How
irresponsible. Are we to congratulate you for volunteering to help
the helpless? Of course not. You are serving up poison in a soup
kitchen.

I gather you like being a Big Fish in a very shallow pond.
>
And BTW, their "ready" method is known to be unreliable.

Really? Under what conditions? Where would someone locate a more
reliable method of determining when the DOM is ready?
Google? This group? I have personally published such methods. Peter
has a Blog entry on the subject. Perhaps you were too busy with the
special needs class to notice.
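A stripped-down sketch of the idea (not one of the published versions, which handle far more edge cases) looks like this; the document and window are passed as parameters here, an assumption made so the logic can be exercised outside a browser:

```javascript
// Stripped-down DOM-ready sketch. Real published versions handle many
// more cases (old IE, frames, already-loaded documents, etc.); this only
// shows the shape: listen for DOMContentLoaded, fall back to the load
// event, and guarantee the callback fires exactly once.
function onReady(doc, win, fn) {
  var done = false;
  function run() {
    if (!done) {
      done = true;
      fn();
    }
  }
  if (doc.addEventListener) {
    doc.addEventListener('DOMContentLoaded', run, false);
    win.addEventListener('load', run, false); // fallback
  } else if (win.attachEvent) {
    win.attachEvent('onload', run); // last resort
  }
}

// Minimal mock host objects that record listeners and can fire them.
function makeMock() {
  var listeners = {};
  return {
    addEventListener: function (type, fn) {
      (listeners[type] = listeners[type] || []).push(fn);
    },
    fire: function (type) {
      (listeners[type] || []).forEach(function (fn) { fn(); });
    }
  };
}

var mockDoc = makeMock();
var mockWin = makeMock();
var readyCount = 0;
onReady(mockDoc, mockWin, function () { readyCount++; });
mockDoc.fire('DOMContentLoaded');
mockWin.fire('load'); // the `done` guard keeps this from firing the callback again
```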
>
Hope that helps!
In a way, I think it did. Care to help dispel any more myths about
jQuery? You do that inadvertently every time you open your mouth in
here.
Oct 29 '08 #6

On Oct 29, 2:43 pm, David Mark <dmark.cins...@gmail.com> wrote:
And you can't do anything useful with jQuery.
You forgot the wink: ;)
Why would I wink about that? It is a proven fact. You just have a
(very) short memory.
I do useful things with it all the time. That seems to falsify your
theory. *WINK*
$(function(){
  $('.mylink').click(function(){
    $.jPrintArea('#tabularData');
  });
});
ah, yes. Much better.
Much better than what? Look at that BS. Create a big jQuery object,
which is known to have problems with its own arguments (typeof a ==
'array', isFunction, etc.) due to an incredibly ignorant design
Ignorant design = true
Problems = Only if you hit the cases where problems might exist, and I
never have except in theory.
>, then
discard a big jQuery object. Create a big jQuery object, discard a
big jQuery object, create a big jQuery object, discard a big jQuery
object, etc., etc.
Computers are fast. Don't optimize just because you feel bad for them
having to work so hard.
There are certainly cases where using jQuery hinders performance.
Don't use jQuery in those cases.
For many problems, jQuery is an inadequate solution.
Don't use jQuery to solve those problems.
That's some revolutionary thinking right there.
And how many functions are called by the above "better" example?
3000?
Yes. Exactly 3000.
Though for comic relief, you can't beat this one:
http://ejohn.org/blog/most-bizarre-ie-quirk/
What kind of idiot would delegate the most critical browser scripting
tasks to people like that?
Delegating the most critical scripting tasks to jQuery may be a
mistake. I use it mainly in controlled environments. Mostly to add
some UI enhancements and to show pictures of bunnies.
I take it you like to write "plug-ins" for jQuery.
I have. My motivation for doing so is probably quite different than
what you imagine.
> How irresponsible.
It's surely right up there with smoking while pregnant and eating live
puppies, both of which I have also done. I like to live on the edge.
Are we to congratulate you for volunteering to help the helpless?
Yes?
> Of course not.
Damn.
> You are serving up poison in a soup kitchen.
[Insert one-up kitchen analogy here]
I gather you like being a Big Fish in a very shallow pond.
[Insert one-up pond analogy here]
And BTW, their "ready" method is known to be unreliable.
Really? Under what conditions? Where would someone locate a more
reliable method of determining when the DOM is ready?
Google? This group? I have personally published such methods. Peter
has a Blog entry on the subject. Perhaps you were too busy with the
special needs class to notice.
I haven't read this group regularly for a while.
In a way, I think it did. Care to help dispel any more myths about
jQuery? You do that inadvertently every time you open your mouth in
here.
Consider using more descriptive words, like "open your big fat mouth".
It will help you to connect with your audience. HTH!

Matt Kruse
Oct 29 '08 #7

On Oct 29, 4:26 pm, Matt Kruse <m...@thekrusefamily.com> wrote:
On Oct 29, 2:43 pm, David Mark <dmark.cins...@gmail.com> wrote:
And you can't do anything useful with jQuery.
You forgot the wink: ;)
Why would I wink about that? It is a proven fact. You just have a
(very) short memory.

I do useful things with it all the time. That seems to falsify your
theory. *WINK*
Useful according to whom?
>
$(function(){
  $('.mylink').click(function(){
    $.jPrintArea('#tabularData');
  });
});
ah, yes. Much better.
Much better than what? Look at that BS. Create a big jQuery object,
which is known to have problems with its own arguments (typeof a ==
'array', isFunction, etc.) due to an incredibly ignorant design

Ignorant design = true
Problems = Only if you hit the cases where problems might exist, and I
never have except in theory.
Christ on a crutch. You don't have to have empirical evidence. Look
at the code. You know it won't work on anything but a handful of
modern browsers in their default configurations. You know this, yet
you persist with these sorts of "arguments." You want to be
different.
>
, then
discard a big jQuery object. Create a big jQuery object, discard a
big jQuery object, create a big jQuery object, discard a big jQuery
object, etc., etc.

Computers are fast. Don't optimize just because you feel bad for them
having to work so hard.
Computers are fast?! That is your argument for using such inane and
inefficient patterns? EVERYTHING IS RELATIVE. Tattoo that on your
forehead.
There are certainly cases where using jQuery hinders performance.
And there are certainly cases where incompetence hinders productivity.
Don't use jQuery in those cases.
You had something there, but you stopped three words too late.
For many problems, jQuery is an inadequate solution.
^^^^

You misspelled "any and all."
Don't use jQuery to solve those problems.
Right.
That's some revolutionary thinking right there.
It is yours after all. Your assumption that people who use jQuery to
shield themselves from the intricacies of cross-browser scripting
would be in a position to judge what would be a problem for jQuery is
patently absurd. These people don't know what they are doing. That
is why they chose jQuery in the first place.
>
And how many functions are called by the above "better" example?
3000?

Yes. Exactly 3000.
At least 2998 too many.
>
Though for comic relief, you can't beat this one:
http://ejohn.org/blog/most-bizarre-ie-quirk/
What kind of idiot would delegate the most critical browser scripting
tasks to people like that?

Delegating the most critical scripting tasks to jQuery may be a
mistake. I use it mainly in controlled environments. Mostly to add
some UI enhancements and to show pictures of bunnies.
I take it you like to write "plug-ins" for jQuery.

I have. My motivation for doing so is probably quite different than
what you imagine.
Oh, I see you wrote a "context menu" plug-in for jQuery. A popup menu
hard-wired to work only with the alternate mouse button? And it
requires jQuery to work? Whatever your motivation, you are not
helping.
>
How irresponsible.

It's surely right up there with smoking while pregnant and eating live
puppies, both of which I have also done. I like to live on the edge.
Advocating a blob of incompetently written, poorly designed, *browser
sniffing* script in 2008 is completely off-base. Your reputation can
join Resig's in the toilet. No amount of stupid asides can change
that.
>
Are we to congratulate you for volunteering to help the helpless?

Yes?
See next line.
>
Of course not.
[snip]
>
I haven't read this group regularly for a while.
And you apparently forgot everything you learned previously. We just
had this exact discussion a year ago. What, have you been in
self-imposed exile ever since?
In a way, I think it did. Care to help dispel any more myths about
jQuery? You do that inadvertently every time you open your mouth in
here.

Consider using more descriptive words, like "open your big fat mouth".
It will help you to connect with your audience. HTH!
Consider what a foolish post this was.
Oct 29 '08 #8

On Oct 29, 3:58 pm, David Mark <dmark.cins...@gmail.com> wrote:
I do useful things with it all the time. That seems to falsify your
theory. *WINK*
Useful according to whom?
Me. Why would I care if it's useful to you or anyone else?
Problems = Only if you hit the cases where problems might exist, and I
never have except in theory.
Christ on a crutch. You don't have to have empirical evidence. Look
at the code.
Your argument is that it will fail under some cases. My argument is
that those cases don't matter to me. Your argument is theory, mine is
practical.

I'm doing X, and you're arguing that it can't do Y. Odd.
You know it won't work on anything but a handful of
modern browsers in their default configurations.
And I'm okay with that. You seem to think that if a person finds
jQuery useful, then they must advocate its use on all web sites for
all needs. That's not the case. jQuery is a tool. I use it for what it
is useful for. I do not use or advocate it for situations where it is
not useful.
Computers are fast. Don't optimize just because you feel bad for them
having to work so hard.
Computers are fast?! That is your argument for using such inane and
inefficient patterns?
It is actually not as ridiculous of a strategy as you make it sound.
A data warehouse, for example, is a very inefficient pattern as far as
data storage is concerned. Keys duplicated, many rows, etc. If all you
care about is data integrity and storage, you might say it is a
terrible implementation. But if you use different criteria, it may
become the best solution.

A pattern that is less efficient to run, but more efficient to write
and maintain, yet runs with an imperceptible loss in performance, can
be considered a success.
EVERYTHING IS RELATIVE. Tattoo that on your forehead.
I'm quite familiar with Einstein. We can discuss that if you'd like.
One of my favorite topics. Tattoos, not so much.
Don't use jQuery in those cases.
You had something there, but you stopped three words too late.
For many problems, jQuery is an inadequate solution.
    ^^^^
You misspelled "any and all."
Your extreme bias against jQuery makes your argument less compelling.
I find that most intelligent people are less extreme and more
reasoned.
It is yours after all. Your assumption that people who use jQuery to
shield themselves from the intricacies of cross-browser scripting
would be in a position to judge what would be a problem for jQuery is
patently absurd. These people don't know what they are doing. That
is why they chose jQuery in the first place.
That is your absurd assumption.
I take it you like to write "plug-ins" for jQuery.
I have. My motivation for doing so is probably quite different than
what you imagine.
Oh, I see you wrote a "context menu" plug-in for jQuery. A popup menu
hard-wired to work only with the alternate mouse button? And it
requires jQuery to work? Whatever your motivation, you are not
helping.
Your inability to think outside your box is amusing.
Your reputation can join Resig's in the toilet.
If you haven't noticed by now, I couldn't possibly care less what you
or anyone else thinks of my opinions on jQuery.
> No amount of stupid asides can change that.
I'll keep trying.
And you apparently forgot everything you learned previously. We just
had this exact discussion a year ago. What are you in self-imposed
exile ever since?
Ummmmm, not really.
Consider what a foolish post this was.
Indeed it was, but I forgive you. You can try again.

Matt Kruse
Oct 29 '08 #9

On 2008-10-29 20:43, David Mark wrote:
http://ejohn.org/blog/future-proofin...ipt-libraries/

That one is my personal favorite [..]
Actually, I thought it was quite interesting. He wrote that scripting
libraries like jQuery or Dojo are used, among other things, to "pave
over" browser bugs and incompatibilities, so that the users can
concentrate on the real task at hand instead of hand-rolling yet another
cross-browser event abstraction layer, for example. He also demonstrated
that sometimes object detection is not possible, or not enough to
provide a solid abstraction. In those cases, feature tests can help; for
the remaining cases they have no option but to go by browser version.
This is obviously not ideal, hence the idea of taking the test suites of
popular scripting libraries and adding them to the existing Mozilla test
suite. This would have two major benefits: it would provide the QA at
Mozilla with better tests based on real-life applications, and it would
provide an early warning system to detect incompatibilities between the
libraries and the JS engine updates. Win-win. What didn't you like about
that article?
Though for comic relief, you can't beat this one:

http://ejohn.org/blog/most-bizarre-ie-quirk/

What kind of idiot would delegate the most critical browser scripting
tasks to people like that?
I thought it was funny. He found a weird behavior in IE, and joked about
finding a use for it. You didn't take that seriously, did you?
- Conrad
Oct 29 '08 #10

On Oct 29, 5:26 pm, Matt Kruse <m...@thekrusefamily.com> wrote:
On Oct 29, 3:58 pm, David Mark <dmark.cins...@gmail.com> wrote:
I do useful things with it all the time. That seems to falsify your
theory. *WINK*
Useful according to whom?

Me. Why would I care if it's useful to you or anyone else?
Problems = Only if you hit the cases where problems might exist, and I
never have except in theory.
Christ on a crutch. You don't have to have empirical evidence. Look
at the code.

Your argument is that it will fail under some cases. My argument is
that those cases don't matter to me. Your argument is theory, mine is
practical.
No. You advocate the use of jQuery on the Web. That is pure folly.
>
I'm doing X, and you're arguing that it can't do Y. Odd.
Who knows what you are doing? Doesn't really matter. If it is
browser scripting and involves jQuery, then you are doing something
foolish. Intranet or not.
>
You know it won't work on anything but a handful of
modern browsers in their default configurations.

And I'm okay with that. You seem to think that if a person finds
jQuery useful, then they must advocate its use on all web sites for
But that is where you see jQuery (all over the public Internet.) This
is largely because jackasses like you are handing out bad advice (like
these last few posts.)
all needs. That's not the case. jQuery is a tool. I use it for what it
is useful for. I do not use or advocate it for situations where it is
not useful.
It is not useful for anything but deluding yourself (and your clients)
into thinking you are competent to do cross-browser scripting.
>
Computers are fast. Don't optimize just because you feel bad for them
having to work so hard.
Computers are fast?! That is your argument for using such inane and
inefficient patterns?

It is actually not as ridiculous of a strategy as you make it sound.
Completely ridiculous. "Computers are fast" so it is okay to waste
all of their resources. Who are you now, Bill Gates? Try using any
of your jQuery apps on an older PC. You can literally hear the
incompetence in the whirring fans.
A data warehouse, for example, is a very inefficient pattern as far as
data storage is concerned. Keys duplicated, many rows, etc. If all you
Data normalization and de-normalization is OT (and irrelevant) here.
care about is data integrity and storage, you might say it is a
terrible implementation. But if you use different criteria, it may
become the best solution.
You are off in the weeds.
>
A pattern that is less efficient to run, but more efficient to write
and maintain, yet runs with an imperceptible loss in performance, can
Imperceptible?!
be considered a success.
You have completely lost it. There is nothing about using another
person's browser sniffing monstrosity of a script that will make for
easy maintenance. And who cares if it is easier to write if the end
result is a maintenance nightmare?
>
EVERYTHING IS RELATIVE. Tattoo that on your forehead.

I'm quite familiar with Einstein. We can discuss that if you'd like.
One of my favorite topics. Tattoos, not so much.
Okay, write it in black magic marker on your hand, then smack yourself
in the forehead.
>
Don't use jQuery in those cases.
You had something there, but you stopped three words too late.
For many problems, jQuery is an inadequate solution.
    ^^^^
You misspelled "any and all."

Your extreme bias against jQuery makes your argument less compelling.
Prototype, jQuery, MooTools, etc. are all fruits of the same poison
tree. I am hardly alone in this determination.
I find that most intelligent people are less extreme and more
reasoned.
Most intelligent people would find your "arguments" to be non-
arguments. It is like talking to a wall.
>
It is yours after all. Your assumption that people who use jQuery to
shield themselves from the intricacies of cross-browser scripting
would be in a position to judge what would be a problem for jQuery is
patently absurd. These people don't know what they are doing. That
is why they chose jQuery in the first place.

That is your absurd assumption.
Of course not. It's the library that "makes JavaScript (sic)
bearable." Articles about it always start out: "I used to *hate*
JavaScript, then I discovered..." To finish their thought, they
discovered that they could create really lousy, inefficient scripts if
they let somebody else's large, lousy and inefficient script do all of
the work. Nobody with experience would consider this a worthwhile
discovery.
>
I take it you like to write "plug-ins" for jQuery.
I have. My motivation for doing so is probably quite different than
what you imagine.
Oh, I see you wrote a "context menu" plug-in for jQuery. A popup menu
hard-wired to work only with the alternate mouse button? And it
requires jQuery to work? Whatever your motivation, you are not
helping.

Your inability to think outside your box is amusing.
Outside my box? What you were after was a popup menu. I published
one here a full year ago. And yes, it had an optional context "plug-
in" (optional script) and "themes" (alternate style sheets.) It even
works with XHTML (try *that* with jQuery.) And last, but not least,
it does not require 50K of jQuery code to work. Certainly it couldn't
be easier to implement and maintenance will likely be non-existent for
a long time (unlike your browser sniffing "plug-in.")
>
Your reputation can join Resig's in the toilet.

If you haven't noticed by now, I couldn't possibly care less what you
or anyone else thinks of my opinions on jQuery.
And I don't care what you think about my opinions, but I will not let
you spread half-truths and outright falsehoods here. There is enough
blithering on the blogs.
>
No amount of stupid asides can change that.

I'll keep trying.
And you will keep failing. The pattern is not hard to spot.
Oct 29 '08 #11

Conrad Lender wrote:
On 2008-10-29 20:43, David Mark wrote:
>http://ejohn.org/blog/future-proofin...ipt-libraries/

That one is my personal favorite [..]

Actually, I thought it was quite interesting. He wrote that scripting
libraries like JQuery or Dojo are used, among other things, to "pave
over" browser bugs and incompatibilities,
That is what those libraries are advertised as being. That, and being a
way to solve many problems. I view the libraries as Matt does: a tool.
That tool might be useful for many common, simple tasks. It won't be the
most suitable tool for specific tasks, and in some cases, might have
more than needed for a given application.
so that the users can
concentrate on the real task at hand instead of hand-rolling yet another
cross-browser event abstraction layer, for example.
It depends what the abstraction layer does. Have you looked at jQuery's
bind?

A simple "addEvent" function can work for most situations.

function addCallback(o, type, cb) {
    if (o.addEventListener)
        o.addEventListener(type, cb, false);
    else if (o.attachEvent)
        o.attachEvent("on" + type, function(ev){ cb.call(o, ev); });
    return o;
}
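Exercised against a mock target object in place of a real DOM element (an assumption made so the dispatch logic can be followed without a browser), the standards branch behaves like this:

```javascript
// The addCallback idea from above, exercised against a mock target
// object instead of a real DOM element (no browser required).
function addCallback(o, type, cb) {
  if (o.addEventListener)
    o.addEventListener(type, cb, false);
  else if (o.attachEvent)
    o.attachEvent("on" + type, function (ev) { cb.call(o, ev); });
  return o;
}

var log = [];
var mockEl = {
  // Standards branch: record the registration, then fire the handler
  // immediately with a minimal event object.
  addEventListener: function (type, cb, capture) {
    log.push('registered ' + type + ' (capture=' + capture + ')');
    cb({ type: type });
  }
};

addCallback(mockEl, 'click', function (ev) {
  log.push('handled ' + ev.type);
});
```

On a host without addEventListener, the attachEvent branch would run instead; the wrapper there exists so the callback still receives the target as `this`.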
A separate registry can be used for custom events or legacy DOM events,
e.g. "onclick". The average web developers I've worked with don't use
custom events.

He also demonstrated
that sometimes object detection is not possible, or not enough to
provide a solid abstraction.
He stated two cases where object detection fails and even provided a
workaround for one. That doesn't make a case that object detection is
sometimes not possible.
In those cases, feature tests can help; for
the remaining cases they have no option but to go by browser version.
You mean:-

| Unfortunately, in real-world JavaScript development, object detection
| can only get you so far. For example, there's no object that you can
|'detect' to determine if browsers return inaccurate attribute values
| from getAttribute

?

Concluding that there is no remaining option but to check the browser
version is a hasty conclusion. The topic of browser detection/capability
detection has been debated a lot[1]. Concluding that would mean that all other viable
possibilities have been explored. Take a look at the mouseenter function
jquery recently added.
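The core of a mouseenter emulation (which jQuery builds on top of mouseover) comes down to a small predicate. The names below are illustrative, not jQuery's internals, and the containment check is passed in so browser differences stay out of the core logic:

```javascript
// Illustrative sketch (names are ours): a mouseover counts as an "enter"
// only when the pointer arrives from outside the bound element, i.e. the
// event's relatedTarget is neither the element itself nor a descendant.
function isMouseEnter(boundEl, relatedTarget, contains) {
    if (!relatedTarget) {
        return true; // pointer entered from outside the document
    }
    return relatedTarget !== boundEl && !contains(boundEl, relatedTarget);
}
```

Moving between two children of the bound element therefore never fires an "enter", which is exactly the difference between mouseenter and a plain mouseover handler.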

Feature detection doesn't get interesting until a browser supports a
feature but does so in a different way. Detecting how the feature is
implemented (or ruling out how it is not implemented) can often be done
by creating a situation where an expected result of using that feature
can be tested against how the feature actually behaves.

http://jibbering.com/faq/#getWindowSize

(for example)
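A common building block for such tests is a type check that tolerates host-object quirks. This helper name is ours, though functions of this shape have appeared in this group before:

```javascript
// Sketch of a tolerant host-method check (helper name is ours). In some
// browsers host methods report typeof "object" (e.g. older IE DOM methods)
// or "unknown" (IE ActiveX methods), so testing for "function" alone is
// too strict.
function isHostMethod(o, m) {
    var t = typeof o[m];
    return t === 'function' ||
           (t === 'object' && o[m] !== null) ||
           t === 'unknown';
}
```

Only after a check like this confirms the method exists does it make sense to go on and probe how the feature actually behaves.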
>
>Though for comic relief, you can't beat this one:

http://ejohn.org/blog/most-bizarre-ie-quirk/

What kind of idiot would delegate the most critical browser scripting
tasks to people like that?

I thought it was funny. He found a weird behavior in IE, and joked about
finding a use for it. You didn't take that seriously, did you?
Yeah, that's weird. The comments by Lasse were interesting.

[1]http://dev.opera.com/articles/view/using-capability-detection/
--
comp.lang.javascript FAQ <URL: http://jibbering.com/faq/ >
Nov 4 '08 #12

On 2008-11-04 08:53, dhtml wrote:
[Snipping my quotes about how object detection and feature tests
sometimes can't be used to detect browser behavior]
Concluding that there is no remaining option but to check the browser
version is a hasty conclusion. The topic of browser detection/capability
detection has been debated a lot[1]. Concluding that would mean that all other viable
possibilities have been explored.
I know it's been discussed at length, and there are many good reasons to
avoid browser sniffing wherever possible. I still believe there are
situations where it can't be easily avoided (see below).

Here's an example of where object detection fails and feature testing
can help: dynamic creation of radio buttons.

var radio = document.createElement("input");
radio.type = "radio";
radio.name = "xy";
out(radio.name); // a debugging function
target.appendChild(radio);

IE will follow all the steps, and the debug content will correctly read
"xy". As soon as we add the new element to the page, however, the "name"
attribute will be missing. As it turns out, IE can't set the "name"
attribute of radio buttons at runtime. To work around that, we could do
a feature test along the lines of

if (radio.outerHTML && radio.outerHTML.indexOf("name=") < 0) {
// use IE's proprietary createElement() version
}

There's no need for browser sniffing in this example. If/when Microsoft
decide to fix that bug, the check won't misbehave in future IE versions.
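Put together, the workaround might look like the sketch below. The helper name is ours, and the document is taken as a parameter only so the branch logic can be exercised outside a browser:

```javascript
// Sketch of the workaround (helper name and the doc parameter are ours).
// Old IE silently drops a name assigned to a dynamically created input;
// its outerHTML serialization reveals the loss, and IE accepts the
// proprietary createElement('<input ...>') markup form as a fallback.
function createNamedInput(doc, type, name) {
    var el = doc.createElement('input');
    el.type = type;
    el.name = name;
    if (el.outerHTML && el.outerHTML.indexOf('name=') === -1) {
        el = doc.createElement('<input type="' + type + '" name="' + name + '">');
    }
    return el;
}
```

In standards-based browsers the outerHTML test never fires (the property is absent or the serialization includes the name), so the proprietary call is never reached.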

Examples where feature tests aren't enough usually include browser
capabilities that aren't directly related to javascript:

- can the browser correctly render alpha transparency for PNGs?
- will calling window.print() pause execution of the script?
- will an element's opacity have a big visual jump between 99% and 100%?

I know, it's frowned upon here, but in cases like these I personally
have no problem with sacrificing purity for functionality. I *know* that
IE6 doesn't support PNG transparency, and I *know* that IE7 does. If for
some reason there is a requirement to make that distinction, I will do
the practical thing and check the browser version.
- Conrad
Nov 4 '08 #13

On Nov 4, 2:53 am, dhtml <dhtmlkitc...@gmail.com> wrote:
Conrad Lender wrote:
On 2008-10-29 20:43, David Mark wrote:
>http://ejohn.org/blog/future-proofin...ipt-libraries/
That one is my personal favorite [..]
Actually, I thought it was quite interesting. He wrote that scripting
libraries like JQuery or Dojo are used, among other things, to "pave
over" browser bugs and incompatibilities,

That is what those libraries are advertised as being. That, and being a
way to solve many problems. I view the libraries as Matt does: a tool.
A tool for what?
That tool might be useful for many common, simple tasks. It won't be the
Like what? A mockup? I wouldn't even recommend them for that.
most suitable tool for specific tasks, and in some cases, might have
more than needed for a given application.
Or in the case of many, the code may be incompetently written and
therefore a waste of time.

And yes, the designs are absurd for browser scripting.
>
so that the users can
concentrate on the real task at hand instead of hand-rolling yet another
cross-browser event abstraction layer, for example.

It depends what the abstraction layer does. Have you looked at jQuery's
bind?
Yes, it is a crock.
>
A simple "addEvent" function can work for most situations.

function addCallback(o, type, cb) {
    if(o.addEventListener)
        o.addEventListener(type, cb, false);
    else if(o.attachEvent)
        o.attachEvent("on"+type, function(ev){ cb.call(o, ev); });
    return o;
}

A separate registry can be used for custom events or legacy DOM events
e.g. "onclick". The average web developers I've worked with don't use
custom events

He also demonstrated
that sometimes object detection is not possible, or not enough to
provide a solid abstraction.

He stated two cases where object detection fails and even provided a
And at least one of the two is demonstrably false. The other looks
like nonsense too.
workaround for one. That doesn't make a case that object detection is
sometimes not possible.
It isn't a case for anything.
>
In those cases, feature tests can help; for
the remaining cases they have no option but to go by browser version.

You mean:-

| Unfortunately, in real-world JavaScript development, object detection
| can only get you so far. For example, there's no object that you can
|'detect' to determine if browsers return inaccurate attribute values
| from getAttribute
LOL. And there you have it.
>
?

Concluding that there is no remaining option but to check the browser
version is a hasty conclusion. The topic of browser detection/capability
It is far worse than a hasty conclusion. The idea that you cannot
feature test IE's broken getAttribute has been proven false by me in
this newsgroup a long time ago. And that pinhead was a participant in
the thread. (!) So why hasn't he retracted this rubbish?
detection has been debated a lot[1]. Concluding that would mean that all other viable
possibilities have been explored. Take a look at the mouseenter function
jquery recently added.
Why?

[snip]
>
I thought it was funny. He found a weird behavior in IE, and joked about
finding a use for it. You didn't take that seriously, did you?

Yeah, that's weird. The comments by Lasse were interesting.

[1]http://dev.opera.com/articles/view/using-capability-detection/
Neither of you got it. Re-read that "quirk report" again.

The moral of the story is that even if people like John Resig had the
slightest idea what they were talking about, browser detection would
still be useless. If you read between the lines, you will see why
they think they need it. Needless to say, their fundamental mistake
has been discussed to death here.
Nov 4 '08 #14

On Nov 4, 8:24 am, Conrad Lender <crlen...@yahoo.com> wrote:
On 2008-11-04 08:53, dhtml wrote:
[Snipping my quotes about how object detection and feature tests
sometimes can't be used to detect browser behavior]
Concluding that there is no remaining option but to check the browser
version is a hasty conclusion. The topic of browser detection/capability
detection has been debated a lot[1]. Concluding that would mean that all other viable
possibilities have been explored.

I know it's been discussed at length, and there are many good reasons to
avoid browser sniffing wherever possible. I still believe there are
situations where it can't be easily avoided (see below).
That is inexperience talking.
>
Here's an example of where object detection fails and feature testing
can help: dynamic creation of radio buttons.

var radio = document.createElement("input");
radio.type = "radio";
radio.name = "xy";
out(radio.name); // a debugging function
target.appendChild(radio);

IE will follow all the steps, and the debug content will correctly read
"xy". As soon as we add the new element to the page, however, the "name"
attribute will be missing. As it turns out, IE can't set the "name"
attribute of radio buttons at runtime. To work around that, we could do
a feature test along the lines of
Yes, you can feature test that.
>
if (radio.outerHTML && radio.outerHTML.indexOf("name=") < 0) {
  // use IE's proprietary createElement() version
}
Not like that!
>
There's no need for browser sniffing in this example. If/when Microsoft
decide to fix that bug, the check won't misbehave in future IE versions.
You have the basic idea.
>
Examples where feature tests aren't enough usually include browser
capabilities that aren't directly related to javascript:

- can the browser correctly render alpha transparency for PNGs?
LOL. No need for browser sniffing there.
- will calling window.print() pause execution of the script?
That can (and should) be designed out of any system.
- will an element's opacity have a big visual jump between 99% and 100%?
Same.
>
I know, it's frowned upon here, but in cases like these I personally
have no problem with sacrificing purity for functionality. I *know* that
IE6 doesn't support PNG transparency, and I *know* that IE7 does. If for
some reason there is a requirement to make that distinction, I will do
the practical thing and check the browser version.
Nope. Not the user agent string. Try thinking about these issues a
little harder.
Nov 4 '08 #15

On 2008-11-04 19:04, David Mark wrote:
>As it turns out, IE can't set
the "name" attribute of radio buttons at runtime. To work around
that, we could do a feature test along the lines of

Yes, you can feature test that.
>if (radio.outerHTML && radio.outerHTML.indexOf("name=") < 0) {
>  // use IE's proprietary createElement() version
>}

Not like that!
If you've got a better solution, how about telling us?
>Examples where feature tests aren't enough usually include browser
capabilities that aren't directly related to javascript:

- can the browser correctly render alpha transparency for PNGs?

LOL. No need for browser sniffing there.
Again, what's _your_ solution?
>- will calling window.print() pause execution of the script?

That can (and should) be designed out of any system.
I disagree. Anyway, the point is that it can't be feature tested.
>I know, it's frowned upon here, but in cases like these I
personally have no problem with sacrificing purity for
functionality. I *know* that IE6 doesn't support PNG transparency,
and I *know* that IE7 does. If for some reason there is a
requirement to make that distinction, I will do the practical thing
and check the browser version.

Nope. Not the user agent string.
Where did I say anything about the user agent string?
Try thinking about these issues a little harder.
Try providing solutions (if you've got them) instead of lording it over
people who don't share your fierce hatred of anything connected to John
Resig. At least he put his code out there where other people can use it
and improve it. You're sitting on a huge (and possibly superior)
library, but it's not open source, and thus unavailable to most of us.
- Conrad
Nov 4 '08 #16

On Nov 4, 1:22 pm, Conrad Lender <crlen...@yahoo.com> wrote:
On 2008-11-04 19:04, David Mark wrote:
As it turns out, IE can't set
the "name" attribute of radio buttons at runtime. To work around
that, we could do a feature test along the lines of
Yes, you can feature test that.
if (radio.outerHTML && radio.outerHTML.indexOf("name=") < 0) { //
use IE's proprietary createElement() version
}
Not like that!

If you've got a better solution, how about telling us?
A better solution to using outerHTML? How about anything?
>
Examples where feature tests aren't enough usually include browser
capabilities that aren't directly related to javascript:
- can the browser correctly render alpha transparency for PNGs?
LOL. No need for browser sniffing there.

Again, what's _your_ solution?
Depends on the context. As I have worked on numerous projects that
use transparent PNG's as widget decorations, I have had to work around
the same IE6 nonsense that you have. Never once have I had to stoop
to browser sniffing. The simplest solution is to provide GIF
equivalents and override the PNG's in the inevitable IE6-specific
style sheets. And those are typically hidden with IE conditional
comments.
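Such an override can be sketched as follows (the file names and class are illustrative, not from this thread):

```html
<!-- Main stylesheet references the alpha-transparent PNG for everyone. -->
<link rel="stylesheet" type="text/css" href="main.css">

<!-- Hidden from every browser except IE 6 and older, which get the GIF. -->
<!--[if lte IE 6]>
<style type="text/css">
.fancy-corner { background-image: url(corner.gif); /* replaces corner.png */ }
</style>
<![endif]-->
```

Any agent that does not process the conditional comment simply sees an HTML comment and keeps the PNG, so the fallback can never make things worse.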
>
- will calling window.print() pause execution of the script?
That can (and should) be designed out of any system.

I disagree. Anyway, the point is that it can't be feature tested.
It can't be inferred from the user agent string either. Best to
design it out of the system (whether you agree or not is another
story.)
>
I know, it's frowned upon here, but in cases like these I
personally have no problem with sacrificing purity for
functionality. I *know* that IE6 doesn't support PNG transparency,
and I *know* that IE7 does. If for some reason there is a
requirement to make that distinction, I will do the practical thing
and check the browser version.
Nope. Not the user agent string.

Where did I say anything about the user agent string?
That's what browser sniffing uses to "check the browser version."
>
Try thinking about these issues a little harder.

Try providing solutions (if you've got them) instead of lording it over
Thin ice, Conrad. Very thin ice. You are clearly new and very
impetuous. Try reading more and writing less.
people who don't share your fierce hatred of anything connected to John
Resig. At least he put his code out there where other people can use it
Fierce hatred of who?! I don't know him at all. His code is awful
though.
and improve it. You're sitting on a huge (and possibly superior)
library, but it's not open source, and thus unavailable to most of us.
The ice just broke. Try a little research before opening your mouth.
It will save a lot time and embarrassment. You are currently
following in the footsteps of Matt Kruse, circa a year or so back.
Nov 4 '08 #17

On Nov 4, 12:35 pm, David Mark <dmark.cins...@gmail.com> wrote:
Where did I say anything about the user agent string?
That's what browser sniffing uses to "check the browser version."
Not necessarily. Depends on your definition of "sniffing".
Using IE conditional comments is one alternate approach.

Matt Kruse
Nov 4 '08 #18

On Nov 4, 2:33 pm, Matt Kruse <m...@thekrusefamily.com> wrote:
On Nov 4, 12:35 pm, David Mark <dmark.cins...@gmail.com> wrote:
Where did I say anything about the user agent string?
That's what browser sniffing uses to "check the browser version."

Not necessarily. Depends on your definition of "sniffing".
Using IE conditional comments is one alternate approach.
IE conditional comments are not sniffing per se. And they are
absolutely the best way to include the handful of CSS corrections that
are virtually always required for older versions of IE (e.g.
proprietary rules like "zoom:1".)

They can be abused though. Do not use conditional comments to load
script that makes assumptions about IE's host objects based on the
browser version.
Nov 4 '08 #19

On 2008-11-04 19:35, David Mark wrote:
>If you've got a better solution, how about telling us?

A better solution to using outerHTML? How about anything?
That's not an answer. Until you actually show me a better solution, I'm
just going to assume that you don't have one.

In the example I've posted, I'm checking whether I should use the
IE-proprietary way of calling createElement, so it's perfectly
acceptable to use a proprietary property in the check.
>Again, what's _your_ solution?

[...] The simplest solution is to provide GIF
equivalents and override the PNG's in the inevitable IE6-specific
style sheets. And those are typically hidden with IE conditional
comments.
That's not an answer either. My point was that it couldn't be feature
tested, and you say use GIFs and stylesheets. That's not a test, it's a
workaround, and an ugly one at that, given that GIFs don't support alpha
transparency.
>Where did I say anything about the user agent string?

That's what browser sniffing uses to "check the browser version."
There are better and more reliable ways to test for IE versions. And you
know it - you're using conditional evaluation in your own library...
>people who don't share your fierce hatred of anything connected to John
Resig. At least he put his code out there where other people can use it

Fierce of hatred of who?! I don't know him at all. His code is awful
though.
Then you've just called a person you don't know at all "all-around
incompetent". You also said "as for him *as a person*, he is an idiot".
>and improve it. You're sitting on a huge (and possibly superior)
library, but it's not open source, and thus unavailable to most of us.

The ice just broke.
I'm glad we were able to break the ice between us, maybe we can have a
more constructive discussion next time. I'm done with this.
- Conrad
Nov 4 '08 #20

On Nov 4, 2:02 pm, David Mark <dmark.cins...@gmail.com> wrote:
IE conditional comments are not sniffing per se.
Semantics. You can define it however you wish.
And they are
absolutely the best way to include the handful of CSS corrections that
are virtually always required for older versions of IE (e.g.
proprietary rules like "zoom:1".)
In theory, it is no more reliable than a user-agent string. Any
browser could choose to process code within IE's conditional comments.
Just as any browser can choose to fake a user-agent string.

Can you really assume CSS behavior based on a browser's evaluation of
conditional comments? Isn't that quite a leap?

You rely on one potentially-broken approach, but not the other. Why?
They can be abused though. Do not use conditional comments to load
script that makes assumptions about IE's host objects based on the
browser version.
But you can make assumptions on PNG support from conditional comments?
What if a user has a custom extension in their version of IE6 that
allows PNG files to work correctly? Oh, SNAP!

Matt Kruse
Nov 4 '08 #21

On Nov 4, 3:44 pm, Matt Kruse <m...@thekrusefamily.com> wrote:
On Nov 4, 2:02 pm, David Mark <dmark.cins...@gmail.com> wrote:
IE conditional comments are not sniffing per se.

Semantics. You can define it however you wish.
Nonsense. It is clearly not browser sniffing.
>
And they are
absolutely the best way to include the handful of CSS corrections that
are virtually always required for older versions of IE (e.g.
proprietary rules like "zoom:1".)

In theory, it is no more reliable than a user-agent string. Any
That is where you are wrong.
browser could choose to process code within IE's conditional comments.
But since there is no benefit for them to do so (unlike spoofing UA
strings), they don't.
Just as any browser can choose to fake a user-agent string.

Can you really assume CSS behavior based on a browser's evaluation of
conditional comments? Isn't that quite a leap?
Certainly not. It is the best practice for fixing IE6's silly quirks
(primarily by using rules that are proprietary to IE.)
>
You rely on one potentially-broken approach, but not the other. Why?
One is easily demonstrated as baseless, the other is not.
>
They can be abused though. Do not use conditional comments to load
script that makes assumptions about IE's host objects based on the
browser version.

But you can make assumptions on PNG support from conditional comments?
What if a user has a custom extension in their version of IE6 that
allows PNG files to work correctly? Oh, SNAP!
Oh what? They would still get the GIF's. Wouldn't hurt a thing. See
how that works?
Nov 4 '08 #22

On Nov 4, 3:28 pm, David Mark <dmark.cins...@gmail.com> wrote:
Semantics. You can define it however you wish.
Nonsense. It is clearly not browser sniffing.
If you restrict sniffing to only looking at the user-agent, then I
suppose you are right.
In theory, it is no more reliable than a user-agent string. Any
That is where you are wrong.
I agree that it is more reliable in practice, but not in theory.
browser could choose to process code within IE's conditional comments.
But since there is no benefit for them to do so (unlike spoofing UA
strings), they don't.
So you're going to assume it's a safe tactic because, according to
your knowledge, no browser is currently doing it?
What if I wrote a browser that used IE's parsing/js engine but my own
CSS logic? It might evaluate conditional comments, but not follow the
CSS quirks that you are inferring. The point is, you are making an
assumption about some behavior (css quirks) based on something
entirely different (support for conditional comments evaluation).

In my experience, I've never come across a browser that faked its UA
string and caused a problem for the user. So by your logic, can't I
assume it is a safe practice?

[snip the rest due to boredom]

Matt Kruse
Nov 4 '08 #23

On Nov 4, 3:14 pm, Conrad Lender <crlen...@yahoo.com> wrote:
On 2008-11-04 19:35, David Mark wrote:
If you've got a better solution, how about telling us?
A better solution to using outerHTML? How about anything?

That's not an answer. Until you actually show me a better solution, I'm
just going to assume that you don't have one.
Why do you feel the need to tell me that? Assume whatever you want.
>
In the example I've posted, I'm checking whether I should use the
IE-proprietary way of calling createElement, so it's perfectly
acceptable to use a proprietary property in the check.
I didn't care for any of it. Thank you.
>
Again, what's _your_ solution?
[...] The simplest solution is to provide GIF
equivalents and override the the PNG's in the inevitable IE6-specific
style sheets. And those are typically hidden with IE conditional
comments.

That's not an answer either. My point was that it couldn't be feature
tested, and you say use GIFs and stylesheets. That's not a test, it's a
You have no point at all. You can't feature test the color shirt the
user is wearing either. Sheesh.
workaround, and an ugly one at that, given that GIFs don't support alpha
transparency.
Nothing ugly about it. Supporting alpha transparency (or not) does
not affect a single design of mine. Yours?
>
Where did I say anything about the user agent string?
That's what browser sniffing uses to "check the browser version."

There are better and more reliable ways to test for IE versions. And you
You don't have to "test for IE versions" in your script at all.
That's the point.
know it - you're using conditional evaluation in your own library...
Conditional evaluation? I think not. If you mean conditional
compilation, wrong again. I would never put that into the library as
(for one) it screws up the YUI minifier. Perhaps you are thinking of
an add-in? Sure as hell doesn't "check the IE version" either. Could
you get any more off-base than to lecture me on my own code?
>
people who don't share your fierce hatred of anything connected to John
Resig. At least he put his code out there where other people can use it
Fierce hatred of who?! I don't know him at all. His code is awful
though.

Then you've just called a person you don't know at all "all-around
incompetent". You also said "as for him *as a person*, he is an idiot".
His incompetence is easily discerned from his numerous books, blog
posts, scripts, etc. Opinion seems to be divided here. Everybody
else says it is so, you and Matt Kruse say it isn't (though Matt seems
to have finally realized that Resig is not worth listening to.)

And yeah, I have talked to him. It is a matter of public record. He
is an idiot. Read more, write less.
>
and improve it. You're sitting on a huge (and possibly superior)
library, but it's not open source, and thus unavailable to most of us.
The ice just broke.

I'm glad we were able to break the ice between us, maybe we can have a
more constructive discussion next time. I'm done with this.
The ice under your feet, genius. As in, you had no argument at all.
All of the blithering that followed was the usual waste of time, which
was clearly followed by the inevitable epiphany.

If I really cared to help you, perhaps I would have suggested some
search terms (e.g. "browser scripting" library.) Looks like #1, #2
and #4 respectively on Yahoo, Google and Answers. Looks like you
figured it out on your own. There's a good fellow.
Nov 4 '08 #24

Matt Kruse wrote:
On Nov 4, 3:28 pm, David Mark <dmark.cins...@gmail.com> wrote:
>>Semantics. You can define it however you wish.
Nonsense. It is clearly not browser sniffing.

If you restrict sniffing to only looking at the user-agent, then I
suppose you are right.
>>In theory, it is no more reliable than a user-agent string. Any
That is where you are wrong.

I agree that it is more reliable in practice, but not in theory.
The point that you are still missing after all these months I already tried
to explain it to you is that your theory is flawed.

Using Conditional Comments does not break anything because when it is not
regarded as a special processing instruction (as by MSHTML, or a
deliberately broken UA if it can't handle it properly) it is regarded as an
SGML/HTML/XML comment and goes *ignored*, plain and simple. IOW, when a CC
fails you do *not* do anything.

Much in contrast, using browser sniffing and sniffing to the wrong effect is
inevitably harmful because you end up *doing* things that you did not
intend to do in the UA that you did not know to spoof you successfully
yet. And you will have to adapt your code each time it happens to handle
that case, *after* some kind soul *might* have told you that your code
messed with their UA. I hope you can agree that being aware of those issues
and continue sniffing anyway (unless it is *absolutely required*) is just
plain stupid.
PointedEars
--
Prototype.js was written by people who don't know javascript for people
who don't know javascript. People who don't know javascript are not
the best source of advice on designing systems that use javascript.
-- Richard Cornford, cljs, <f8*******************@news.demon.co.uk>
Nov 4 '08 #25

On Nov 4, 5:20 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
Matt Kruse wrote:
On Nov 4, 3:28 pm, David Mark <dmark.cins...@gmail.com> wrote:
>Semantics. You can define it however you wish.
Nonsense. It is clearly not browser sniffing.
If you restrict sniffing to only looking at the user-agent, then I
suppose you are right.
>In theory, it is no more reliable than a user-agent string. Any
That is where you are wrong.
I agree that it is more reliable in practice, but not in theory.

The point that you are still missing after all these months I already tried
to explain it to you is that your theory is flawed.

Using Conditional Comments does not break anything because when it is not
regarded as a special processing instruction (as by MSHTML, or a
deliberately broken UA if it can't handle it properly) it is regarded as an
SGML/HTML/XML comment and goes *ignored*, plain and simple. IOW, when a CC
fails you do *not* do anything.

Much in contrast, using browser sniffing and sniffing to the wrong effect is
inevitably harmful because you end up *doing* things that you did not
intend to do in the UA that you did not know to spoof you successfully
yet. And you will have to adapt your code each time it happens to handle
that case, *after* some kind soul *might* have told you that your code
messed with their UA. I hope you can agree that being aware of those issues
and continue sniffing anyway (unless it is *absolutely required*) is just
plain stupid.
As mentioned, he knows this. Seems he just wants to be different.
Nov 4 '08 #26

On Nov 4, 4:43 pm, Matt Kruse <m...@thekrusefamily.com> wrote:
On Nov 4, 3:28 pm, David Mark <dmark.cins...@gmail.com> wrote:
Semantics. You can define it however you wish.
Nonsense. It is clearly not browser sniffing.

If you restrict sniffing to only looking at the user-agent, then I
suppose you are right.
In theory, it is no more reliable than a user-agent string. Any
That is where you are wrong.

I agree that it is more reliable in practice, but not in theory.
browser could choose to process code within IE's conditional comments.
But since there is no benefit for them to do so (unlike spoofing UA
strings), they don't.

So you're going to assume it's a safe tactic because, according to
your knowledge, no browser is currently doing it?
Your question has already been answered in this very thread. I won't
bother repeating it.
What if I wrote a browser that used IE's parsing/js engine but my own
Using IE's parsing engine? Do tell.
CSS logic? It might evaluate conditional comments, but not follow the
CSS quirks that you are inferring. The point is, you are making an
1. Nobody, but nobody, would use your browser, even if you could
somehow create it (and I am convinced you cannot.)

2. Wouldn't hurt a thing anyway as using a transparent GIF equivalent
to whatever fancy translucent PNG's you used with your widget(s) can
never be harmful. Not in this lifetime.
assumption about some behavior (css quirks) based on something
entirely different (support for conditional comments evaluation).
You are fantasizing.
>
In my experience, I've never come across a browser that faked its UA
string and caused a problem for the user. So by your logic, can't I
Then you are extremely inexperienced (or lying.)
assume it is a safe practice?
You can take it from me (and many others) that it is not. It wasn't
safe a decade ago. It's just plain pitiful to design a browser script
that is hinged on sniffing the UA string (and therefore sure to blow
up when trying to apply a DirectX filter in any agent that is not IE.)

What are your other examples of scripts that absolutely *must* sniff
the browser? Opacity, getAttribute, etc.? We've heard all of this
before (from clearly dubious sources) and it is just as much nonsense
today as it has been in the past. You know all of this as you have
been involved in near identical threads in the past. Same with IE
conditional comments. Why do we have to keep revisiting these
topics? It only serves to confuse new users.

[snip]
Nov 4 '08 #27

On Nov 4, 4:20 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
Using Conditional Comments does not break anything because when it is not
regarded as a special processing instruction (as by MSHTML, or a
deliberately broken UA if it can't handle it properly) it is regarded as an
SGML/HTML/XML comment and goes *ignored*, plain and simple. IOW, when a CC
fails you do *not* do anything.
I know how CC's work, I'm not stupid.

My point is simple - a browser other than IE could implement CC's,
process them correctly, and behave as if it is IE6/7/8. If you are
using CC to determine the capabilities of the browser and inferring
its CSS quirks, then you would apply incorrect logic to "fix" such a
browser that isn't broken.

In the same way, the user-agent string of IE6/7/8 is predictable. If a
browser fakes it and appears to be IE6/7/8, then your browser-sniffing
code may "fix" quirks that aren't really broken.

Both approaches are making an assumption about X based on some other
unrelated Y.

Granted, user-agent sniffing is more error-prone than CC. But I would
hope you would realize that both strategies have some room for
failure.

In my experience, both approaches should be used as a last-resort. And
I would use CC before browser sniffing. But also in my experience,
I've never come across a problem (in many years) caused by incorrect
browser-sniffing when used as a last resort. So it's not something I'm
afraid of, either.
Much in contrast, using browser sniffing and sniffing to the wrong effect is
inevitably harmful because you end up *doing* things that you did not
intended to do in the UA that you did not know to spoof you successfully
yet.
And as explained, using CC is vulnerable to the same thing.

And further on this point, I personally don't care about browsers that
are trying to spoof me and pretend to be IE or whatever. If they want
to proclaim to the world that they are IE, then they get treated as IE
in cases when it's necessary to differentiate. If users don't like it,
they should use a browser that doesn't pretend to be something it's
not.
>And you will have to adapt your code each time it happens to handle
that case, *after* some kind soul *might* have told you that your code
messed with their UA. I hope you can agree that being aware of those issues
and continue sniffing anyway (unless it is *absolutely required*) is just
plain stupid.
It is stupid to sniff when it is not the most efficient way to
accomplish a task. And in most cases, it is the wrong strategy.

Matt Kruse
Nov 4 '08 #28

P: n/a
On Nov 4, 4:09 pm, David Mark <dmark.cins...@gmail.com> wrote:
That's not an answer either. My point was that it couldn't be feature
tested, and you say use GIFs and stylesheets. That's not a test, it's a
You have no point at all. You can't feature test the color shirt the
user is wearing either. Sheesh.
What about the need to fix for control z-index bleed-thru (ie, select
box showing above popup divs, etc).
I use CC to apply the fix for that. I've not come across a good test
for it.

Matt Kruse
Nov 4 '08 #29

P: n/a
David Mark <dm***********@gmail.com> writes:
[…] If you mean conditional
compilation, wrong again. I would never put that into the library as
(for one) it screws up the YUI minifier.
Only clueless idiots use minifiers (what else is new :-).

Nov 5 '08 #30

P: n/a
On Nov 4, 7:58 pm, Eric B. Bednarz <bedn...@fahr-zur-hoelle.org>
wrote:
David Mark <dmark.cins...@gmail.com> writes:
[] If you mean conditional
compilation, wrong again. I would never put that into the library as
(for one) it screws up the YUI minifier.

Only clueless idiots use minifiers (what else is new :-).
Oh brother. Who told you that?
Nov 5 '08 #31

P: n/a
On Nov 4, 6:51 pm, Matt Kruse <m...@thekrusefamily.com> wrote:
On Nov 4, 4:09 pm, David Mark <dmark.cins...@gmail.com> wrote:
That's not an answer either. My point was that it couldn't be feature
tested, and you say use GIFs and stylesheets. That's not a test, it's a
You have no point at all. You can't feature test the color shirt the
user is wearing either. Sheesh.

What about the need to fix for control z-index bleed-thru (ie, select
box showing above popup divs, etc).
There are numerous ways to design that out of the system. If you
refuse to do that, then you will have to either deal with it the same
way in all browsers (e.g. hide the selects when popping up a div)
or...
I use CC to apply the fix for that. I've not come across a good test
for it.
There is nothing inherently wrong with CC. If you find yourself using
it for more than this and perhaps two other things I can think of, you
are probably using it as a crutch. In any event, CC is not the same
thing as "detecting" the user agent.
>
Matt Kruse
Nov 5 '08 #32

P: n/a
On Nov 5, 7:43 am, Matt Kruse <m...@thekrusefamily.com> wrote:
[...]
In my experience, I've never come across a browser that faked its UA
string and caused a problem for the user. So by your logic, can't I
assume it is a safe practice?
I think there are two bigger issues.

1. When a new version of a browser fixes[1] whatever quirk the sniff
was directed at, either the browser continues to get the "assume it's
broken" fork when it should not or the code authors have to add a
version-specific sniff.

2. Developers become lazy and start the old "this site must be viewed
in browser X" crap when browser X is actually perfectly capable of
viewing the site.

My ISP continues to deliver different content to Safari users based on
a UA sniff, despite the fact that they could very easily have used a
feature test and it was fixed about version 1.2 or so. I change my UA
string to mimic Firefox and everything is fine.
1. Where "fixes" can mean conforms to whatever norm is expected, it
need not actually be a bug or missing feature.

--
Rob
Nov 5 '08 #33

P: n/a
David Mark <dm***********@gmail.com> writes:
On Nov 4, 7:58 pm, Eric B. Bednarz <bedn...@fahr-zur-hoelle.org>:
>David Mark <dmark.cins...@gmail.com> writes:
[…] If you mean conditional
compilation, wrong again. I would never put that into the library as
(for one) it screws up the YUI minifier.

Only clueless idiots use minifiers (what else is new :-).

Oh brother. Who told you that?
The same back-end guy who told me that creative mangling^Woptimizing
of source code will introduce new bugs, and that if your toolbox is
broken you fix your toolbox without making too much noise. Not that it’s
really relevant, I rather enjoy shooting without aiming, just like you.
--
||| hexadecimal EBB
o-o decimal 3771
--oOo--( )--oOo-- octal 7273
205 goodbye binary 111010111011
Nov 5 '08 #34

P: n/a
On Nov 4, 6:49 pm, Matt Kruse <m...@thekrusefamily.com> wrote:
On Nov 4, 4:20 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
Using Conditional Comments does not break anything because when it is not
regarded as a special processing instruction (as by MSHTML, or a
deliberately borken UA if it can't handle it properly) it is regarded an
SGML/HTML/XML comment and goes *ignored*, plain and simple. IOW, when a CC
fails you do *not* do anything.

I know how CC's work; I'm not stupid.
That is a debate for another time.
>
My point is simple - a browser other than IE could implement CC's,
A browser could be made out of cake too. Highly unlikely, but
theoretically possible. IE could stop implementing them in a future
version as well (also highly unlikely.)
process them correctly, and behave as if it is IE6/7/8. If you are
using CC to determine the capabilities of the browser and inferring
its CSS quirks, then you would apply incorrect logic to "fix" such a
browser that isn't broken.
The browser you described is broken as designed. But regardless,
consider the case at hand, which comes up over and over. How to deal
with the fact that IE6 cannot render transparent PNG's properly.

Solution #1

Use CC's to include an additional style sheet for IE6. As discussed,
other browsers ignore comments (as they must!) The images will look
slightly less impressive (if they have any translucent pixels, that is)
in IE6 (or any hypothetical browser that chooses to interpret comments
as directives.) No script required. No chance of degrading the user
experience in any way (other than maybe a slightly less impressive
graphic.)

Sounds hard to beat, doesn't it? Inexplicably, many script developers
endeavor to create scripted solutions for problems that will never be
perfectly solved by script.

Solution #2

Sniff the user agent string. Set a global variable to true to
indicate when the string "MSIE" is contained therein. If this
variable is set, call a proprietary DirectX hack that is sure to throw
an exception in any agent other than IE. For those who don't know,
lots of agents have "MSIE" in their user agent strings (e.g. mobile
devices, old versions of Opera, FF with the UA string changed to
thwart browser sniffing scripts, etc.)
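To make that failure mode concrete, here is a minimal sketch (not lifted from any particular library) of the sniff just described; the Opera string below is a genuine default UA of that era:

```javascript
// Naive sniff as described above: any UA string containing "MSIE" is
// treated as Internet Explorer, and would then be fed the IE-only
// DirectX filter hack.
function looksLikeIE(ua) {
  return ua.indexOf('MSIE') !== -1;
}

// Old Opera identified itself as MSIE by default, so the sniff
// misfires and the DirectX call would throw an exception there.
var ie6UA = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
var operaUA = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1) Opera 7.54 [en]';
var firefoxUA = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9) Gecko Firefox/3.0';

console.log(looksLikeIE(ie6UA));     // true, as intended
console.log(looksLikeIE(operaUA));   // true, wrongly
console.log(looksLikeIE(firefoxUA)); // false
```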

The inevitable, ridiculous argument that comes back from these library
developers and their proponents is that they "did what they had to do"
to make it "work." If, for example, a PNG correction routine is
present in Prototype or jQuery, it almost certainly uses browser
sniffing to "work", so clearly the feature should have been left out
altogether (it is better solved in other ways.) Ask why they didn't
opt for the obvious and the answer will be that it wouldn't have been
"cool."

And for those that don't know, in jQuery and Prototype, you can find
the sloppy fingerprints from this sort of handiwork throughout.
Really. Tangled all throughout, this sort of "logic" waits to
explode on anyone foolish enough to browse your site with something
other than the latest versions of FF, IE, Safari or Opera. I guess a
disclaimer wouldn't have been "cool" either.
>
In the same way, the user-agent string of IE6/7/8 is predictable. If a
browser fakes it and appears to be IE6/7/8, then your browser-sniffing
code may "fix" quirks that aren't really broken.
Whose browser sniffing code? This is an "apples vs. oranges" argument
anyway.
>
Both approaches are making an assumption about X based on some other
unrelated Y.
The assumption that browsers will treat comments as comments is far
better than any inference you can make from the user agent string.
>
Granted, user-agent sniffing is more error-prone than CC. But I would
Apples and oranges.
hope you would realize that both strategies have some room for
failure.
One has virtually zero chance of failure and for the other, failure is
a virtual certainty. Take your pick.
>
In my experience, both approaches should be used as a last-resort. And
One should be used as a last resort to serve proprietary rules to
IE6. The other should never be used for anything. There just aren't
any parallels here.
I would use CC before browser sniffing. But also in my experience,
I would use Notepad before Outlook Express.
I've never come across a problem (in many years) caused by incorrect
browser-sniffing when used as a last resort. So it's not something I'm
afraid of, either.
Your clients should be scared to death though, particularly if you
built public sites for them.
>
Much in contrast, using browser sniffing and sniffing to the wrong effect is
inevitably harmful because you end up *doing* things that you did not
intended to do in the UA that you did not know to spoof you successfully
yet.

And as explained, using CC is vulnerable to the same thing.
Notepad and OE both crash.
>
And further on this point, I personally don't care about browsers that
are trying to spoof me and pretend to be IE or whatever. If they want
Oh, Christ on a crutch, here we go with the "I don't care" argument.
You are not your users.
to proclaim to the world that they are IE, then they get treated as IE
Are you really this clueless or just trying to make conversation here?
in cases when it's necessary to differentiate. If users don't like it,
they should use a browser that doesn't pretend to be something it's
not.
There is the classic and idiotic assumption that the user knows what
browser and configuration they are using and/or has any means to
change these circumstances. Blame the user for "pretending" to cover
up for your own shortcomings as a Web developer.
>
And you will have to adapt your code each time it happens to handle
that case, *after* some kind soul *might* have told you that your code
messed with their UA. I hope you can agree that being aware of those issues
and continue sniffing anyway (unless it is *absolutely required*) is just
plain stupid.

It is stupid to sniff when it is not the most efficient way to
accomplish a task. And in most cases, it is the wrong strategy.
I am still waiting (after ten odd years) to hear of a single case
where it is the right strategy.
Nov 5 '08 #35

P: n/a
On Nov 4, 8:35 pm, Eric B. Bednarz <bedn...@fahr-zur-hoelle.org>
wrote:
David Mark <dmark.cins...@gmail.com> writes:
On Nov 4, 7:58 pm, Eric B. Bednarz <bedn...@fahr-zur-hoelle.org>:
David Mark <dmark.cins...@gmail.com> writes:
[] If you mean conditional
compilation, wrong again. I would never put that into the library as
(for one) it screws up the YUI minifier.
Only clueless idiots use minifiers (what else is new :-).
Oh brother. Who told you that?

The same back-end guy who told me that creative mangling^Woptimizing
of source code will introduce new bugs, and that if your toolbox is
broken you fix your toolbox without making too much noise. Not that it's
really relevant, I rather enjoy shooting without aiming, just like you.
Never me.
Nov 5 '08 #36

P: n/a
David Mark <dm***********@gmail.com> writes:
On Nov 4, 8:35 pm, Eric B. Bednarz <bedn...@fahr-zur-hoelle.org>
^^^
I *do* have one good thing to say about jQuery: I hate G2 much more.
>[…] I rather enjoy shooting without aiming, just like you.

Never me.
Not? The only thing that keeps me from considering you and Thomas Lahn
to be jQuery’s most effective – albeit somewhat unintentional –
ambassadors in this NG is having read its source code myself;
accidental readers are unlikely to share this advantage.
--
||| hexadecimal EBB
o-o decimal 3771
--oOo--( )--oOo-- octal 7273
205 goodbye binary 111010111011
Nov 5 '08 #37

P: n/a
On Nov 4, 9:06 pm, Eric B. Bednarz <bedn...@fahr-zur-hoelle.org>
wrote:
David Mark <dmark.cins...@gmail.com> writes:
On Nov 4, 8:35 pm, Eric B. Bednarz <bedn...@fahr-zur-hoelle.org>

                                          ^^^
I *do* have one good thing to say about jQuery: I hate G2 much more.
You hate something I have never heard of more than jQuery. How does
that promote jQuery?
>
[] I rather enjoy shooting without aiming, just like you.
Never me.

Not? The only thing that keeps me from considering you and Thomas Lahn
What does he have to do with it? Do you think he is the only other
member of the group to question the competence of the jQuery project?
to be jQuery's most effective – albeit somewhat unintentional –
ambassadors in this NG is having read its source code myself;
Perhaps you are reading a different newsgroup?
accidental readers are unlikely to share this advantage.
I don't follow the logic. You don't have to read all of the code.
Just read a few choice excerpts that have been posted here repeatedly.
Nov 5 '08 #38

P: n/a
Conrad Lender wrote:
On 2008-11-04 19:35, David Mark wrote:
>>If you've got a better solution, how about telling us?
A better solution to using outerHTML? How about anything?

That's not an answer. Until you actually show me a better solution, I'm
just going to assume that you don't have one.

In the example I've posted, I'm checking whether I should use the
IE-proprietary way of calling createElement, so it's perfectly
acceptable to use a proprietary property in the check.
No you are not. You are checking to see if creating a named control does
not result in a lower-case "name=" in an outerHTML property. Does your
application really care what the outerHTML string looks like?

The easiest way to avoid the problem of not being able to find an
element by name is simply not give the element a - name - and use ID
instead.

createElement(invalid_string) is supposed to raise a DOMException

If you used that, and you got a DOMException, it would be entirely your
fault; that is exactly what should happen in that case. A browser that
had the same problem of creating named form controls would get the same
treatment; createElement(invalid_string) would trigger that situation.

If creating an element and setting a name creates a problem, the problem
should be identified clearly in a feature test.

The pseudo code might be:-

create a named anchor
check to see if the element is found in the way you are looking for it
(getElementsByName).

>>Again, what's _your_ solution?
[...] The simplest solution is to provide GIF
equivalents and override the PNG's in the inevitable IE6-specific
style sheets. And those are typically hidden with IE conditional
comments.

That's not an answer either. My point was that it couldn't be feature
tested, and you say use GIFs and stylesheets. That's not a test, it's a
workaround, and an ugly one at that, given that GIFs don't support alpha
transparency.
A device may support PNG without supporting the alpha channel. It's
optional.
http://www.w3.org/TR/PNG/

When a PNG is desired, but cannot be displayed, the grey fuzzy junk is
not acceptable. There should be a way to have a fallback.

The scope of the problem is larger than IE. I don't have the answer.

>>Where did I say anything about the user agent string?
That's what browser sniffing uses to "check the browser version."

There are better and more reliable ways to test for IE versions. And you
know it - you're using conditional evaluation in your own library...
>>people who don't share your fierce hatred of anything connected to John
Resig. At least he put his code out there where other people can use it
Fierce hatred of who?! I don't know him at all. His code is awful
though.

I've seen much worse. Probably anyone who's had a job has. Search random
websites and view the source.

Garrett
>
- Conrad

--
comp.lang.javascript FAQ <URL: http://jibbering.com/faq/ >
Nov 5 '08 #39

P: n/a
On Nov 4, 7:06 pm, David Mark <dmark.cins...@gmail.com> wrote:
What about the need to fix for control z-index bleed-thru (ie, select
box showing above popup divs, etc).
There are numerous ways to design that out of the system.
Are you aware of every system?
Sounds like "I can't solve this problem, so I'll just avoid it
instead."
That works well in some cases, and not so well in others.
If you
refuse to do that, then you will have to either deal with it the same
way in all browsers (e.g. hide the selects when popping up a div)
Terrible approach. Especially for browsers that don't exhibit the
problem.
I use CC to apply the fix for that. I've not come across a good test
for it.
There is nothing inherently wrong with CC. If you find yourself using
it for more than this and perhaps two other things I can think of, you
are probably using it as a crutch. In any event, CC is not the same
thing as "detecting" the user agent.
But it kind of is "detecting" the user agent. You can use tags to
check against the OS, browser version, etc.

To use CC's, you are:
1. Trusting that the browser is what it says it is
2. Assuming that behavior X exists because of what your experience
tells you about that browser

When you use sniffing, you are:
1. Trusting that the browser is what it says it is
2. Assuming that behavior X exists because of what your experience
tells you about that browser

The point is valid that #1 is more reliable for CC than it is for
sniffing. True in practice. But not necessarily so. CC's could be
spoofed, just as user agent strings can be spoofed.

Point #2 is a necessary evil for both, because there are some things
you simply can't reliably test for. The GOAL is to handle as many
cases as possible and offer users the best possible experience.

A point that seems to get lost is that I'm not justifying the browser
sniffing in jQuery at all. It's unnecessary and amateur-ish. But just
because it exists doesn't invalidate the rest of the code for me. And
because it works consistently, reliably, and conveniently for me in
every situation I choose to use it in, I find value in it. It's far
from perfect and it has flaws, but I can accept that.

Matt Kruse
Nov 5 '08 #40

P: n/a
On Oct 29, 10:14 pm, Conrad Lender wrote:
On 2008-10-29 20:43, David Mark wrote:
>>http://ejohn.org/blog/future-proofin...ipt-libraries/
>That one is my personal favorite [..]

Actually, I thought it was quite interesting.
I found it quite informative, but on its author rather than its subject.
He wrote that scripting libraries like JQuery or Dojo are
used, among other things, to "pave over" browser bugs and
incompatibilities,
'Plaster over' rather than "pave over". In solving a small subset of the
cross-browser scripting problem these libraries are in a position to make
out that the people using them no longer have to consider the issues at
all. Their users find this appealing, and prefer not to perceive the
degree to which these things fall short of genuinely addressing the issues.
As do their authors; John Resig is very keen to speak of the 3 or 4
recent desktop web browsers that JQuery actually does support as "all
browsers", and when you are fully supporting "all browsers" the result
must then be cross-browser.
so that the users can concentrate on the real task at hand
There is a general failure to appreciate that the "real task at hand" is
much more than just churning out web pages/applications. All projects to
which web development 'skills' are applied have a pre-determined purpose
and a context. One of the bigger tasks at hand is the application of
knowledge and experience to the design such that the outcome best
satisfies that purpose in context.

To illustrate; the primary purpose of an e-commerce site is to take
money off people in exchange for goods and/or services (that did not
take much working out). The state of web technologies is such that it is
possible to take money off everyone with an HTTP(S), HTML web browser
that understands forms. Every time someone in that set is excluded from
the group from whom money can be taken that is a direct result of a
design decision. The design decisions have a direct impact on the
potential of the outcome to satisfy its purpose. There will be aspects
of the design that may have a positive impact on the outcome, such as a
more 'professional' look giving the potential customers greater
confidence and increasing their willingness to part with their money. So
we come to trade-offs; is the increased turnover that will result from a
more professional presentation greater than the loss in turnover that
may result form designing out the possibility of the user of some
browsers from accessing the site at all? How do you make those
judgments, and who should be making those decisions, and based on what
information, knowledge and experience?

Libraries such as JQuery allow people with little or no experience in web
development to create sites that, on a limited number of web browsers, in
their default configurations, appear to 'work' and present a (more or
less) impressive presentation. But these people have no understanding of
the consequences of their actions (indeed mostly seem unaware that there
are any issues arising from their decisions at all). And the customers
of such sites are likely using one of the popular browsers in its
default configuration, and relying on the 'professionalism' of their web
developers to have had any design issues addressed in their best
interests. That is an understandable expectation from the site
customers, but not realistic in the light of the general level of web
development skills exhibited in the real world.
instead of hand-rolling yet another
cross-browser event abstraction layer, for example.
This, oft repeated, assertion that the only alternative to using a third
party general purpose library is to write everything from scratch for
yourself is a total nonsense. In-between those two alternatives lies a
whole spectrum of code re-use opportunities, and only the very
masochistic are not re-using pre-written code for the bulk of their
projects even if they are not using the 'popular' general purpose
libraries.
He also demonstrated that sometimes object detection is
not possible, or not enough to provide a solid abstraction.
"Demonstrated", did he? While there inevitably will be features that
cannot be detected (though far fewer than many people would like to make
out) I have to question whether John Resig demonstrated any of them in
that article.

The first example he mentions as "fail to return correct results from a
getElementsByTagName query" refers to an article that asserts that it is
sometimes impossible to retrieve collections of PARAM elements from
inside OBJECT elements. OBJECT elements tend to be inconsistent, odd and
divergent little sods so this is an entirely plausible assertion. It is
not a demonstration of the ineffectiveness of feature detection as if
you have an OBJECT element and know it to contain PARAM elements it is
trivial to apply the element retrieval method and deduce the
applicability of the issue from the success of that attempt. That is, if
you fail to retrieve elements that you know are there you then know that
you are in an environment where such retrievals are ineffective (so can
attempt alternative retrieval techniques or follow your graceful
degradation path).

From John Resig's perspective this may not sound like a useful approach
because general purpose library code has no way of knowing whether any
OBJECT elements on a page have PARAM children, so not finding any cannot
be used to deduce anything. However, this is not an issue of future
detecting, it is an issue introduced by the attempt to be 'general
purpose'. And if you are going to address issues in browser scripting it
is quite important to be clear about where those issues are coming from.

Remember that only a small proportion of web sites/applications have
OBJECT elements, only a proportion of those have PARAM elements inside
those OBJECT elements, and only a small proportion of those
sites/applications have an interest in retrieving those PARAM elements
with scripts. So in the real world this is a non-issue for the huge
majority.

All of that assumes the issue reported on the linked page is accurately
reported, which may still not be the case as the quality of issue
analysis among the 'blogging' javascript community tends to be extremely
poor. Often they are just reiterating the misconceived impressions of
others in the same 'community' without even testing them.

And on the subject of testing, let's get back to John Resig's page, where
we read another "demonstration":-

| Additionally, object detection has the ability to completely fail.
| Safari currently has a super-nasty bug related to object
| detection. For example, assuming that you have a variable and you
| need to determine if it contains a single DOM Element, or a DOM
| NodeList. One would think that it would be as simple as:
|
| if ( elem.nodeName ) {
| // it's an element
| } else {
| // it's a nodelist
| }
| However, in the current version of Safari, this causes the browser
| to completely crash, for reasons unknown. (However, I'm fairly
| certain that this has already been fixed in the nightlies.)

That would not be a very good test to discriminate an Element from a
Node list as some IceBrowser NodeLists have non-empty nodeName -
properties, but we wouldn't expect John Resig to be aware of the
characteristics of any browsers that are not in his "all browsers" list.
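A check that survives the IceBrowser case keys on nodeType instead; this is a sketch against plain stand-in objects (real host objects vary), not a claim about any library's implementation:

```javascript
// Element nodes expose nodeType === 1; NodeLists define no nodeType,
// so this check does not trip over a NodeList that happens to carry
// a non-empty nodeName, as some IceBrowser NodeLists reportedly do.
function isElementNode(x) {
  return !!x && x.nodeType === 1;
}

// Stand-ins for host objects, for illustration only:
var fakeElement = { nodeType: 1, nodeName: 'DIV' };
var fakeNodeList = { length: 2, nodeName: 'list' }; // IceBrowser-like

console.log(isElementNode(fakeElement));  // true
console.log(isElementNode(fakeNodeList)); // false
```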

Anyway, if true, that sounds like it could be a serious problem. So is
it true? When something is true it can be demonstrated in a way that is
reproducible by others (an attitude popular in science but not at all
out of place in software development). Of course if someone asserts
that a particular browser/version exhibits particular behaviour it
becomes very helpful if they also state precisely which browser/version
they have seen the issue on. Doing that makes verifying the observation
much easier because not seeing the issue on any different version does
not necessarily prove it does not exist.

John Resig does not provide that information, only the assertion that
the version is 'current' and that the issue is absent from 'nightlies,
and that the article was posted March 1st 2007. I have a Mac mini that
was purchased in late January 2007 and has never had any software
updates. That makes it seem likely that its Safari version is the one
that was 'current' in the run-up to March 1st 2007. It is version is
2.0.4 (419.3) and when exposed to this simple test page:-

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01
Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title></title>
<script type="text/javascript">
window.onload = function(){
var ndList = document.getElementsByTagName('BODY');
var emptyList = document.getElementsByTagName('DIV');
var node = document.body;
if(node.nodeName){
alert(
'[expected: BODY] node.nodeName = '+
node.nodeName
);
}
if(ndList.nodeName){
alert(
'[unexpect] ndList.nodeName = '+
ndList.nodeName
);
}else{
alert(
'[expect: undefined] ndList.nodeName = '+
ndList.nodeName
);
}
if(emptyList.nodeName){
alert(
'[unexpect] emptyList.nodeName = '+
emptyList.nodeName
);
}else{
alert(
'[expect: undefined] emptyList.nodeName = '+
emptyList.nodeName
);
}
};
</script>
</head>
<body>
</body>
</html>

- it happily completed the execution of the script and showed all of the
expected alerts.

So was that issue real, or just a bogus report made, untested, based on
rumour from a third party? As an isolated report of an issue you might
conclude that there was a Safari release that came after the versions
that I have and before the 'fix' in the 'nightlies' made it into a
post-March 2007 release that did indeed exhibit this issue (introduced in
one version and then fixed). On the other hand you might take this
apparently bogus report in context, and specifically in the context of an
article that goes on to assert:-

| Additionally, in Internet Explorer, doing object detection checks
| can, sometimes, cause actual function executions to occur. For
| example:
|
| if ( elem.getAttribute ) {
| // will die in Internet Explorer
| }
| That line will cause problems as Internet Explorer attempts to
| execute the getAttribute function with no arguments (which is
| invalid). (The obvious solution is to use
| "typeof elem.getAttribute == 'undefined'" instead.)

Now I know that that is pure BS because I have been using tests along
the lines of - if(elem.getAttribute){ .... } - on Element nodes for
years (at least 5) and have never seen them fail, and I do test/use that
code in (lots of versions of) IE. But still, let's make it easy for
everyone to test the proposition for themselves and so verify its
veracity. A simple test page might be:-

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01
Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title></title>
<script type="text/javascript">
window.onload = function(){
if(document.body.getAttribute){
alert(
'document.body.getAttribute = '+
document.body.getAttribute
);
}
};
</script>
</head>
<body>
</body>
</html>

Now to start with we need to know what IE's - getAttribute - would do if
it were executed with no arguments. The DOM spec is not that clear,
except that it says no exceptions will be thrown. It is easy enough to
test, and it turns out that:-

alert(''+document.body.getAttribute());

- alerts "null", so the method call returns null. So, if -
elem.getAttribute - calls the method with no arguments the result is
likely be null. If the result is null then in the above test page the -
if(document.body.getAttribute){ - test will be false, the - if - branch
will not be entered and the alert will not be shown. But on every IE
version where I have tried the above test the alert is shown, plus the
alert shows "document.body.getAttribute = function
getAttribute(){[native code]}", not the "null" that would be result of
calling the method with no arguments.

Can you find an IE version that does exhibit this 'issue'? I am pretty
certain that I have tested code that used this test on IE versions from
4 to 8 without issue, so I doubt it.
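For completeness, the typeof-style existence check the quoted article recommends can be generalised like this; the helper name and the stand-in object are mine, and the 'object'/'unknown' branches reflect how IE has been reported to type some of its host methods:

```javascript
// typeof reads the property's type without invoking it as a function,
// so even a host that really did auto-execute methods on a boolean
// test would be safe with this pattern.
function isHostMethod(obj, name) {
  var t = typeof obj[name];
  return t === 'function' ||
         (t === 'object' && obj[name] !== null) || // some IE DOM methods
         t === 'unknown';                          // IE ActiveX methods
}

// Plain stand-in for a DOM node, for illustration only:
var fakeNode = { getAttribute: function (name) { return null; } };

console.log(isHostMethod(fakeNode, 'getAttribute')); // true
console.log(isHostMethod(fakeNode, 'setAttribute')); // false
```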

Given that this assertion is demonstrably BS, my attitude toward the
previous "demonstrated" issue on Safari, in light of my test, leans
heavily toward dismissing that as also being BS.

So what is this article? If it is a reasoned analysis of feature
detection as a technique why are its examples bogus? An explanation
might be that this is not an examination of feature detection at all,
but rather an attempt to justify not trying feature detection written by
someone who has realised that they have no talent for it. And if you
have no concern for the veracity of the arguments you use it is possible
to find justification for anything.

One more quote from Resig's article:-

| The point of these examples isn't to rag on Safari or Internet Explorer
| in particular, but to point out that rendering-engine checks can end up
| becoming very convoluted - and thusly, more vulnerable to future
| changes within a browser.

- which can be summarised as 'complexity equals vulnerability'.
Certainly complexity will relate to vulnerability, in a very general
sense, but in the context of feature detection if a test is the correct
test, no matter how complex it may be, it will not be vulnerable to
updates in the browsers because significant changes in the browsers will
directly impact on the outcomes of those tests, which is the point of
feature detection.

From my own experience; when IE 7 was released I was working on a web
application with (at the time) 100,000 lines of client-side code (code
employing nothing but feature detection where it is necessary to react
to divergent environments). The total number of changes in the scripts
needed to accommodate IE 7 was zero. When Chrome was released the total
number of changes necessary to accommodate that was also zero. And last
week QA started testing that application on IE 8. They may take another
week to run through all their tests but so far they have found no
issues, and I have a realistic expectation that they will not (as pretty
much everything that would be likely to be problematic would inevitably
get used in the first day or so).

Now that is "Future Proof" javascript, and low maintenance javascript.
In those cases, feature tests can help; for the remaining
cases they have no option but to go by browser version.
<snip>

Chrome and JQuery made an interesting point about User Agent string based
browser detection; Chrome works (more or less) with JQuery because its
authors pragmatically gave it a UA string that would result in most
current browser sniffing scripts identifying it as Safari, and treating
Chrome as Safari is probably the best thing to do if you are going to
script in that style at all. But this means that UA string based browser
sniffing was effective in this case not because it enabled the accurate
determination of the browser and its version, but instead was effective
precisely because it misidentified the browser and its version.
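That observation can be made concrete (a hedged illustration; the sniffer is a deliberately naive 2008-era pattern of my own, and the UA string is a representative early-Chrome string, not quoted from any post here):

```javascript
// Chrome's UA string deliberately ends in "Safari/...", so a naive
// sniffer keyed on the token "Safari" classifies Chrome as Safari.
function naiveIsSafari(ua) {
    return /safari/i.test(ua);
}

var chromeUA = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) ' +
    'AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13';

console.log(naiveIsSafari(chromeUA)); // true - misidentified, yet "works"
```

The sniff succeeds here only because Chrome was misidentified as the browser it most closely behaves like, which is the point being made above.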
>Though for comic relief, you can't beat this one:
>>http://ejohn.org/blog/most-bizarre-ie-quirk/
>What kind of idiot would delegate the most critical
browser scripting tasks to people like that?

I thought it was funny. He found a weird behavior in IE,
and joked about finding a use for it. You didn't take
that seriously, did you?
I think you have missed the point. He observed a weird behaviour in IE,
applied an obviously faulty process to the analysis of that behaviour and
that then resulted in his coming to a series of utterly false
conclusions about what the observed behaviour was. Joking about
applications for that behaviour became irrelevant as having
misidentified the behaviour in the first place any proposed applications
of that misidentified behaviour would be worthless even if taken
seriously.

And, as you are likely to ask: his mistake was using an - alert - in the
test. Generally, alerts aren't much use in examining pointing device
interactions in web browsers because they tend to block script execution
(and so real-time event handling) and they can shift focus (keyboard
'enter' goes to the button on the box). Most people learn this lesson
quite early on as they learn browser scripting, as a consequence of
trying things out for themselves and trying to examine how they work.
The BS examples on the 'future proof javascript' page might suggest that
John Resig is not someone who goes in for trying things out for himself
that often.

Here is an alternative test page for the IE setTimeout quirk:-

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01
Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title></title>
<style type="text/css">
BODY {
width:100%;
height:100%;
}
</style>
<script type="text/javascript">
var count = 0;
setInterval(
function(){
window.status = ++count;
},
-1
);
</script>
</head>
<body>
<div style="width:100%;height:100%;"></div>
</body>
</html>

- so give it a go. Make sure your IE status bar is visible and watch the
displayed number as you click, mouse-down, move the mouse, release the
mouse button. And when you have done that see what you think of Resig's
conclusion that "What happened is positively bizarre: The callback
function will be executed every time the user left clicks the mouse,
anywhere in the document." In reality clicking, and click events, have
nothing to do with IE's behaviour, beyond their representing a mousedown
followed by a mouseup.

While you may conclude that the reaction to the self proclaimed
"JavaScript Ninja" on this group is biased and personal, in reality it
is mostly a direct reaction to what he writes/does, and a reaction
informed by pertinent knowledge and experience in the field of browser
scripting.

Richard.

Nov 5 '08 #41

P: n/a
On Nov 5, 11:07 am, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
[snip detailed analysis]
Richard, as always your analysis is detailed, analytical, accurate,
and insightful (and long).
I wish I had someone at my disposal to review my work in such detail.
Surely you recognize that there are few people who would be capable of
such an analysis.
John Resig is very keen to speak of the 3 or 4
recent desktop web browsers that JQuery actually does support as "all
browsers", and when you are fully supporting "all browsers" the result
must then be cross-browser.
I think the most valid non-technical criticism of jQuery is that it is
promoted as being a generalized solution that is applicable to most
(or all) public web sites. It is not. But must we continually repeat
that fact as if it completely invalidates it as a tool for other
purposes?
Libraries such as JQuery allow people with little or no experience in web
development to create sites that, on a limited number of web browsers, in
their default configurations, appear to 'work' and present a (more or
less) impressive presentation. But these people have no understanding of
the consequences of their actions (indeed they mostly seem unaware that
there are any issues arising from their decisions at all).
This is true of many things in life, not just jQuery. You just happen
to be an expert in the area that jQuery addresses and can see its
faults. Technical perfection is NOT the only measure of success or
evaluation criteria.

Take a retail situation as an example. A store may not be able to
accept all forms of payment, thereby limiting their potential
customers. They may not be open at convenient hours, further limiting
potential customers. They may not have adequate parking. Maybe some of
their products are out of reach of some customers. Maybe they have
shopping carts that fit most people, but some find inadequate so they
leave. Maybe their store layout is too confusing so some people leave
because they can't find what they want. Maybe their sales tactics are
obnoxious and they annoy customers so much that they walk out.

BUT WAIT! Don't they know that each of these things is a design
decision that could prevent a small percentage of potential customers
from giving them money?! Why don't they fix them all and optimize it
so every single person can use their store and buy as much as
possible?

Why? Because it's made by humans, and humans aren't perfect. Sometimes
you have to say "close enough" and get on with it. If a site "appears
to work" for most people, and the customer finds it acceptable, and
even if they lose out on a small percentage of potential sales, maybe
that is truly "good enough". Maybe they can't afford to do it exactly
right. Maybe the analysis and perfection of the system just isn't
warranted, because with just a little effort they can get a lot of
business, and that more than makes up for the potential losses of
customers. That's the real world.
instead of hand-rolling yet another
cross-browser event abstraction layer, for example.
This, oft repeated, assertion that the only alternative to using a third
party general purpose library is to write everything from scratch for
yourself is a total nonsense.
It's not black-and-white to you, but to others in different situations
it just might be, because they don't have the same options as you do.
In-between those two alternatives lies a
whole spectrum of code re-use opportunities
You are in a position to determine that because you have a good
understanding of the technology. You criticize jQuery because it
doesn't handle all cases and has design flaws that may impact a public
site that is using it, because its developers are too naive to know
the pitfalls of the library. But then you expect these same naive
developers to understand how to develop, test, modularize, and re-use
their own code as a better option?
From my own experience; when IE 7 was released I was working on a web
application with (at the time) 100,000 lines of client-side code (code
employing nothing but feature detection where it is necessary to react
to divergent environments).
You realize that this kind of example shows that you are at the very
extreme edge of js development? You are very clearly not the audience
for jQuery or any other popular library approach. I suspect you have a
hard time relating to the average developer who is struggling just to
do simple animations or toggle divs.
And last week QA started testing that application on IE 8.
And do you know how many projects even _have_ QA departments? Consider
yourself lucky to work in such an environment! You are the exception,
not the norm.
Now that is "Future Proof" javascript, and low maintenance javascript.
Planned, designed, and written by an expert in the field. We just need
everyone to be experts and all this discussion would go away! :)
While you may conclude that the reaction to the self proclaimed
"JavaScript Ninja" on this group is biased and personal, in reality it
is mostly a direct reaction to what he writes/does, and a reaction
informed by pertinent knowledge and experience in the field of browser
scripting.
From my perspective, John Resig is clearly not as knowledgeable and
experienced as you are with regards to Javascript. But, whether you
like it or not, he has much more knowledge than the vast majority of
people attempting to write Javascript. He's not perfect (who is?) but
he's out there, sticking his neck out, showing his cards, sharing what
he knows (or thinks he knows). That takes a lot of guts, and I'm sure
he's learning along the way. If everyone needs to be an expert at your
level before they can share anything they know, no one would be
learning!

The developer world needs people like him, and libraries like jQuery,
to bridge the gap between the average user who is lost and confused,
and the expert developer such as yourself. You may not think that
jQuery is a positive thing in the scripting world, but countless other
people disagree with you, and things are moving in that direction
whether you like it or not. Even Microsoft (not your biggest measure
of success, I'm sure) has adopted jQuery into its development
platform. Surely this doesn't affect how you do your work. But you
need to recognize how it changes the game for the rest of the
developer world.

It seems that your perspective prevents you from understanding the
needs of people who are in a very different situation from yourself.
And if you want to have a more positive impact on the scripting world,
you need to learn from people like John Resig just as much as he needs
to learn from you.

All IMO, of course.

Matt Kruse
Nov 5 '08 #42

P: n/a
Eric B. Bednarz wrote:
[...] The only thing that keeps me from considering you and Thomas Lahn
to be jQuery’s most effective – albeit somewhat unintentional –
ambassadors in this NG is having read its source code myself;
accidental readers are unlikely to share this advantage.
That argument is fallacious as it is based on the false assumption that
discussing the shortcomings of a piece of software attracts a majority of
relevant users to exactly that software.
PointedEars
--
var bugRiddenCrashPronePieceOfJunk = (
navigator.userAgent.indexOf('MSIE 5') != -1
&& navigator.userAgent.indexOf('Mac') != -1
) // Plone, register_function.js:16
Nov 5 '08 #43

P: n/a
On Nov 5, 11:16 am, Matt Kruse <m...@thekrusefamily.com> wrote:
On Nov 4, 7:06 pm, David Mark <dmark.cins...@gmail.com> wrote:
What about the need to fix for control z-index bleed-thru (ie, select
box showing above popup divs, etc).
There are numerous ways to design that out of the system.

Are you aware of every system?
Sounds like "I can't solve this problem, so I'll just avoid it
instead."
That works well in some cases, and not so well in others.
If you
refuse to do that, then you will have to either deal with it the same
way in all browsers (e.g. hide the selects when popping up a div)

Terrible approach. Especially for browsers that don't exhibit the
problem.
I use CC to apply the fix for that. I've not come across a good test
for it.
There is nothing inherently wrong with CC. If you find yourself using
it for more than this and perhaps two other things I can think of, you
are probably using it as a crutch. In any event, CC is not the same
thing as "detecting" the user agent.
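For readers unfamiliar with IE's conditional compilation (CC), a minimal illustrative sketch (not code from the thread); a useful property of CC is that in any non-JScript engine the block is simply a comment:

```javascript
// IE conditional compilation: the JScript engine evaluates the
// /*@cc_on ... @*/ block, while every other engine treats it as an
// ordinary block comment, so isJScript stays false outside IE.
var isJScript = false;
/*@cc_on
    isJScript = true;
    // In JScript, @_jscript_version reports e.g. 5.7 (IE7) or 5.8 (IE8).
@*/

console.log(isJScript); // false in any non-JScript engine
```

This is why CC cannot be spoofed by changing a UA string: it keys on the script engine itself, not on what the browser claims to be.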

But it kind of is "detecting" the user agent. You can use tags to
check against the OS, browser version, etc.

To use CC's, you are:
1. Trusting that the browser is what it says it is
2. Assuming that behavior X exists because of what your experience
tells you about that browser

When you use sniffing, you are:
1. Trusting that the browser is what it says it is
2. Assuming that behavior X exists because of what your experience
tells you about that browser

The point is valid that #1 is more reliable for CC than it is for
sniffing. True in practice. But not necessarily so. CC's could be
spoofed, just as user agent strings can be spoofed.
We've been over that.
>
Point #2 is a necessary evil for both, because there are some things
It isn't necessary. That is my whole point. When somebody asks the
jQuery or Prototype "teams" to implement (for example) a PNG fix
script, they should simply refuse as it is impossible to detect the
condition. Perhaps they could even recommend the obvious non-script
solutions. But no, they are in a perceived arms race to create the
biggest general-purpose script ever. It is just ridiculous.
you simply can't reliably test for. The GOAL is to handle as many
cases as possible and offer users the best possible experience.

A point that seems to get lost is that I'm not justifying the browser
sniffing in jQuery at all. It's unnecessary and amateur-ish. But just
I know you aren't and I know you tried to convince the jQuery support
group to see the light on this. I also know they didn't see the light
and therefore their users are still in the dark.
because it exists doesn't invalidate the rest of the code for me. And
because it works consistently, reliably, and conveniently for me in
every situation I choose to use it in, I find value in it. It's far
from perfect and it has flaws, but I can accept that.
I think you know that I don't really care if you use it for your
private application. It is just that the library is "marketed" (often
with religious zeal) to Web developers, most of whom work on the
public Internet. That is upsetting for a number of reasons, not the
least of which is that I can't browse the Web without tripping over
incompetent scripts (can't turn script off either of course.) Not all
of them use jQuery of course (lots do), but the whole culture of Web
development seems to have taken a major step backwards in the last few
years.

So I would just like to see some disclaimers when people talk about
it. For example:

jQuery Rules!!!!!!!!!*

* Provided you are using the default configuration of a handful of
modern browsers, then, well, it still doesn't rule per se, but it is
there. Warning: may explode in the middle of "chained" calls, leaving
the document in an unexpected (and possibly non-working) state.
Nov 5 '08 #44

P: n/a
On Nov 5, 1:16 pm, David Mark <dmark.cins...@gmail.com> wrote:
So I would just like to see some disclaimers when people talk about
it. For example:

jQuery Rules!!!!!!!!!*

* Provided you are using the default configuration of a handful of
modern browsers, then, well, it still doesn't rule per se, but it is
there. Warning: may explode in the middle of "chained" calls, leaving
the document in an unexpected (and possibly non-working) state.
Heh. Now that I agree with.

Matt Kruse
Nov 5 '08 #45

P: n/a
On Nov 5, 12:07 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
On Oct 29, 10:14 pm, Conrad Lender wrote:
On 2008-10-29 20:43, David Mark wrote:
>http://ejohn.org/blog/future-proofin...ipt-libraries/
That one is my personal favorite [..]
Actually, I thought it was quite interesting.

I found it quite informative, but on its author rather than its subject.
He wrote that scripting libraries like JQuery or Dojo are
used, among other things, to "pave over" browser bugs and
incompatibilities,

'Plaster over' rather than "pave over". In solving a small subset of the
Exactly. Their unit tests throw an exception and somebody comes up
with a half-baked plan to patch it with yet another browser sniff.
Prototype looks like a structure ready to collapse as so many hacks
have been piled on top of other hacks. They are reduced to testing
minor version numbers at one point. Of course, nobody seems to use
Prototype anymore (other than with those Rails "helper" things.)
jQuery has been widely mistaken as a viable alternative.

[snip]
| example:
|
| if ( elem.getAttribute ) {
|     // will die in Internet Explorer
| }
| That line will cause problems as Internet Explorer attempts to
| execute the getAttribute function with no arguments (which is
| invalid). (The obvious solution is to use
| "typeof elem.getAttribute == 'undefined'" instead.)

Now I know that that is pure BS because I have been using tests along
You are correct. Pure and unadulterated BS and demonstrably so.
the lines of - if(elem.getAttribute){ .... } - on Element nodes for
years (at least 5) and have never seen them fail, and I do test/use that
IIRC, it happens only with XML elements (appears they are ActiveX
objects under the hood.) No surprise that jQuery attempts to deal
with such objects. Clearly Resig bumped into my "unknown" bug and
typical jumped to the wrong conclusion, based on guesswork,
meditation, etc.
code in (lots of versions of) IE. But still, let's make it easy for
everyone to test the proposition for themselves and so verify its
veracity. A simple test page might be:-

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01
  Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
    <head>
        <title></title>
<script type="text/javascript">
window.onload = function(){
    if(document.body.getAttribute){
        alert(
            'document.body.getAttribute = '+
            document.body.getAttribute
        );
    }};

</script>
    </head>
    <body>
    </body>
</html>
Here is a slightly modified demonstration that shows how another
property can explode in similar fashion (and how simple it is to test
this case.) This property isn't even a method, so it proves that
Resig's theory about accidentally calling methods by type conversion
is nonsense.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01
Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title></title>
<script type="text/javascript">
window.onload = function(){
var el = document.createElement('div');
document.body.appendChild(el);

document.body.innerHTML = '';

if(typeof el.offsetParent == 'unknown'){
alert('Warning. About to explode...');
}
alert('offsetParent = '+ el.offsetParent);
};

</script>
</head>
<body>

</body>
</html>

Clearly a competently designed and written Web application will never
run into this. However, since MS can change the rules at any time, I
advocate using the typeof operator to test all host object methods.
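A minimal sketch of the typeof-first guard being advocated here (the helper name isHostMethod is my own assumption, not code from this thread); the 'unknown' string matters because in IE some ActiveX-implemented host members report that typeof and throw if evaluated or called any other way:

```javascript
// Sketch of a typeof-based test for host object methods.
function isHostMethod(obj, name) {
    var t = typeof obj[name];
    // 'function': ordinary methods; 'object' (non-null): some IE host
    // methods; 'unknown': ActiveX-implemented members in IE.
    return t === 'function' ||
           (t === 'object' && obj[name] !== null) ||
           t === 'unknown';
}

// Usage with a stand-in object (no browser required for the demo):
var fakeEl = { getAttribute: function (name) { return null; } };
console.log(isHostMethod(fakeEl, 'getAttribute')); // true
console.log(isHostMethod(fakeEl, 'setAttribute')); // false
```

Note that the typeof operation itself is safe even on 'unknown' members, which is exactly why the test leads with it rather than with a truthiness check.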
Now to start with we need to know what IE's - getAttribute - would do if
it were executed with no arguments. The DOM spec is not that clear,
except that it says no exceptions will be thrown. It is easy enough to
test, and it turns out that:-

alert(''+document.body.getAttribute());

- alerts "null", so the method call returns null. So, if -
elem.getAttribute - calls the method with no arguments then the result is
likely to be null. If the result is null then in the above test page the -
if(document.body.getAttribute){ - test will be false, the - if - branch
will not be entered and the alert will not be shown. But on every IE
version where I have tried the above test the alert is shown, plus the
alert shows "document.body.getAttribute = function
getAttribute(){[native code]}", not the "null" that would be the result of
calling the method with no arguments.
Clearly Resig prefers voodoo to scientific methods (like testing) or
he would never have published his article.
>
Can you find an IE version that does exhibit this 'issue'? I am pretty
certain that I have tested code that used this test on IE versions from
4 to 8 without issue, so I doubt it.
Not the issue as such (clearly it is being mistaken for another
issue.)
>
Given that this assertion is demonstrably BS, my attitude toward the
previous "demonstrated" issue on Safari, in light of my test, leans
heavily toward dismissing that as also being BS.
My feeling exactly. I have never finished that article. It is
sickening to think how many people have visited this Fantasyland over
the years and gone on to spread such "wisdom" in the real world. If
there is one guy in the world who should not be talking about feature
testing, it is this guy. Ironic that he is near constantly blithering
about the subject.
>
So what is this article? If it is a reasoned analysis of feature
A delusion masquerading as a reasoned argument. As you mentioned, the
argument serves only to justify his own incompetence vis-a-vis feature
detection. One of the Prototype twits authored a similar call to
browser sniffing back when that library was the flavor of the day.
People have a right to their own ignorant opinions, but what compels
these people to spread them like they are gospel?

[snip]
>
One more quote from Resig's article:-

| The point of these examples isn't to rag on Safari or Internet
| Explorer in particular, but to point out that rendering-engine checks
| can end up becoming very convoluted - and thusly, more vulnerable to
| future changes within a browser.
And thus, John Resig is both incompetent and incoherent.
>
- which can be summarised as 'complexity equals vulnerability'.
Interesting take.
Certainly complexity will relate to vulnerability, in a very general
sense, but in the context of feature detection if a test is the correct
test, no matter how complex it may be, it will not be vulnerable to
updates in the browsers because significant changes in the browsers will
directly impact on the outcomes of those tests, which is the point of
feature detection.

From my own experience; when IE 7 was released I was working on a web
application with (at the time) 100,000 lines of client-side code (code
employing nothing but feature detection where it is necessary to react
to divergent environments). The total number of changes in the scripts
needed to accommodate IE 7 was zero. When Chrome was released the total
number of changes necessary to accommodate that was also zero. And last
week QA started testing that application on IE 8. They may take another
week to run through all their tests but so far they have found no
issues, and I have a realistic expectation that they will not (as pretty
much everything that would be likely to be problematic would inevitably
get used in the first day or so).
Yes. My experiences with Chrome, Windows Safari, FF3, Opera, etc.
have been similar. Do things right the first time and you don't have
to re-do them.
>
Now that is "Future Proof" javascript, and low maintenance javascript.
The title of that article is ironic indeed. Even worse, jQuery's two
main selling points are reduced maintenance and that it is "fast." We
know browser sniffing only creates future maintenance headaches and,
of course, there is no slower way to do anything in browser scripting
than to use jQuery (the design ensures that.) He can rewrite his
silly CSS selector queries until the end of the earth but it won't put
a dent in the overall inefficiency (he is looking at a total rewrite
from scratch to accomplish that.)
>
In those cases, feature tests can help; for the remaining
cases they have no option but to go by browser version.

<snip>

Chrome and JQuery made an interesting point about User Agent string based
browser detection; Chrome works (more or less) with JQuery because its
authors pragmatically gave it a UA string that would result in most
current browser sniffing scripts identifying it as Safari, and treating
Chrome as Safari is probably the best thing to do if you are going to
script in that style at all. But this means that UA string based browser
sniffing was effective in this case not because it enabled the accurate
determination of the browser and its version, but instead was effective
precisely because it misidentified the browser and its version.
Of course. And the purveyors of jQuery, Prototype, etc. ran their
unit tests through it and rejoiced that their code still "worked."
Somehow they see the coincidental circumstances that led to this
latest "success" as validation of their baseless methods. People like
that should not be writing software. Period. They certainly should
not be degrading Web documents with their collective delusions.
>
Though for comic relief, you can't beat this one:
>http://ejohn.org/blog/most-bizarre-ie-quirk/
What kind of idiot would delegate the most critical
browser scripting tasks to people like that?
I thought it was funny. He found a weird behavior in IE,
and joked about finding a use for it. You didn't take
that seriously, did you?

I think you have missed the point. He observed a weird behaviour in IE,
He did.
applied an obviously faulty process to the analysis of that behaviour and
that then resulted in his coming to a series of utterly false
conclusions about what the observed behaviour was. Joking about
applications for that behaviour became irrelevant as having
misidentified the behaviour in the first place any proposed applications
of that misidentified behaviour would be worthless even if taken
seriously.

And, as you are likely to ask: his mistake was using an - alert - in the
test. Generally, alerts aren't much use in examining pointing device
interactions in web browsers because they tend to block script execution
(and so real-time event handling) and they can shift focus (keyboard
'enter' goes to the button on the box). Most people learn this lesson
quite early on as they learn browser scripting, as a consequence of
trying things out for themselves and trying to examine how they work.
The BS examples on the 'future proof javascript' page might suggest that
John Resig is not someone who goes in for trying things out for himself
that often.
Or paying the slightest attention to anything critical of his work.
His mantra seems to be "stop hating me!" Last time I pointed out an
obvious mistake to him, he responded with the age-old "argument" of
library authors: "where is your way-cool cross-browser library?" Now
that that "platform" has collapsed, he is predictably absent from all
such discussions.
>
Here is an alternative test page for the IE setTimeout quirk:-
[snip]
>
While you may conclude that the reaction to the self proclaimed
"JavaScript Ninja" on this group is biased and personal, in reality it
And who on earth would take technical advice from a self-described
"JavaScript Ninja?" I wonder if he has a JScript belt too?
is mostly a direct reaction to what he writes/does, and a reaction
informed by pertinent knowledge and experience in the field of browser
scripting.
I've had the displeasure of talking to him briefly (here) and came to
the conclusion that he is a few sandwiches short of a picnic. But it
is his code, books, blogs, etc. that irk me as he is spreading
outrageous misconceptions.
Nov 5 '08 #46

P: n/a
Thanks for your thorough and interesting reply. I agree with much of
what you wrote, so I'll focus on the direct questions.

On 2008-11-05 18:07, Richard Cornford wrote:
[e-commerce sites] Every time someone in that set is excluded from
the group from whom money can be taken that is a direct result of a
design decision. [..]
So we come to trade-offs; is the increased turnover that will result from
a more professional presentation greater than the loss in turnover that
may result from designing out the possibility of the users of some
browsers from accessing the site at all? How do you make those
judgments, and who should be making those decisions, and based on what
information, knowledge and experience?
This boils down to the question of which user agents an application is
designed to support, and who makes that decision. Only the client is in
a position to decide who to shut out, and he will (or should) base this
decision on his customer base, collected browser statistics, and the
advice of his technical staff and developers. Amazon will reach a
different conclusion here than, say, YouTube, or a gaming site, or a
portal for Ajax fans. Dropping support for ancient user agents can help
keep the code base manageable and reduce the maintenance effort. I would
never say that JQuery is suitable for any site; but for some sites the
target audience will be overwhelmingly likely to use one of the browser
versions that JQuery supports.

For personal sites, the owner gets to decide. A private blog, for
example, would be much less affected by the reactions of the <1% of
visitors whose user agent isn't in JQuery's list.

Anyway, I wasn't trying to recommend or defend JQuery, I was only
talking about the linked articles.
>instead of hand-rolling yet another
cross-browser event abstraction layer, for example.

This, oft repeated, assertion that the only alternative to using a third
party general purpose library is to write everything from scratch for
yourself is a total nonsense. In-between those two alternatives lies a
whole spectrum of code re-use opportunities, and only the very
masochistic are not re-using pre-written code for the bulk of their
projects even if they are not using the 'popular' general purpose
libraries.
Perhaps so, but where *is* all that pre-written and tested code? Every
time somebody asks for a good library (as in collection of code that
helps to deal with browser quirks and incompatibilities), they're told
that there is no such thing. If they dare to mention one of the popular
libraries, they get flamed to cinders. If such a collection exists, I
would very much like to know where to find it.
"Demonstrated", did he? While there inevitably will be features that
cannot be detected (though far fewer than many people would like to make
out) I have to question whether John Resig demonstrated any of them in
that article.
(snipping your test cases) You're right. I had already tried to
duplicate the problems mentioned by Resig and couldn't reproduce them
either. Maybe the errors only happened with very obscure configurations,
or maybe he's just wrong. Who knows. It doesn't really matter though,
because the point was that object detection isn't a cure-all for
cross-browser scripting, and I agree with that.
Given that this assertion is demonstrably BS, my attitude toward the
previous "demonstrated" issue on Safari, in light of my test, leans
heavily toward dismissing that as also being BS.
Probably. Still, using bad (or let's be nice and say unreproducible)
examples doesn't necessarily invalidate the main focus of the article,
or at least the point that I found interesting - which is adding the
JQuery test suite to the Mozilla test suite. It's *possible* that a new
browser version could introduce bugs like the ones Resig used in his
examples (even if they didn't exist now), or uncover bugs in the JQuery
library. Testing a browser with a widely used library is a good thing.
From my own experience; when IE 7 was released I was working on a web
application with (at the time) 100,000 lines of client-side code (code
employing nothing but feature detection where it is necessary to react
to divergent environments). The total number of changes in the scripts
needed to accommodate IE 7 was zero. When Chrome was released the total
number of changes necessary to accommodate that was also zero. And last
week QA started testing that application on IE 8. They may take another
week to run through all their tests but so far they have found no
issues, and I have a realistic expectation that they will not (as pretty
much everything that would be likely to be problematic would inevitably
get used in the first day or so).

Now that is "Future Proof" javascript, and low maintenance javascript.
That's very impressive. In an application of that size, I'd expect to
find issues even without accounting for new browsers. I assume that your
requirements are vastly different from those who want to build a flashy
site with moving objects and whistles and bells.
Make sure your IE status bar is visible and watch the
displayed number as you click, mouse-down, move the mouse, release the
mouse button. And when you have done that see what you think of Resig's
conclusion that "What happened is positively bizarre: The callback
function will be executed every time the user left clicks the mouse,
anywhere in the document." In reality clicking, and click events have
nothing to do with IE's behaviour, beyond their representing a mousedown
followed by a mouseup.
Looks like he was on the wrong track with his test, and you're the first
person who's figured it out... (and the quirk looks even weirder now).
I'm not about to defend his misjudgement, I just don't think he took it
very seriously. He probably considers himself part of the "Web 2.0
wave", whatever that is, so he tossed out a blog entry about a curious
quirk without much research, to let the community figure it out and play
with it. I really don't think this could or should be used to discredit
him. What he does in his library is a different matter.
As I said, I'm still convinced that checking browser types/versions
can't always be avoided. But since both you and David say that it not
only CAN be done, but HAS to be done, I'll try a little experiment and
ask the group whenever I'm tempted to write [if browser is MSIE6]. If I
can find solid cross-browser solutions for these problems, I'll gladly
retract my statement.
- Conrad
Nov 5 '08 #47

P: n/a
On Nov 5, 3:29 pm, Conrad Lender <crlen...@yahoo.com> wrote:
Thanks for your thorough and interesting reply. I agree with much of
what you wrote, so I'll focus on the direct questions.

On 2008-11-05 18:07, Richard Cornford wrote:
[e-commerce sites] Every time someone in that set is excluded from
the group from whom money can be taken that is a direct result of a
design decision. [..]
So we come to trade-offs; is the increased turnover that will result from
a more professional presentation greater than the loss in turnover that
may result from designing out the possibility of the user of some
browsers from accessing the site at all? How do you make those
judgments, and who should be making those decisions, and based on what
information, knowledge and experience?

This boils down to the question which user agents an application is
designed to support, and who makes that decision. Only the client is in
a position to decide who to shut out, and he will (or should) base this
decision on his customer base, collected browser statistics, and the
Browser statistics?! Are you kidding?
advice of his technical staff and developers. Amazon will reach a
different conclusion here than, say, YouTube, or a gaming site, or a
They are both public Web sites. The idea that they should make a
conscious decision to "shut out" users is absurd.
portal for Ajax fans. Dropping support for ancient user agents can help
As long as the documents are still usable in those ancient user
agents, then that is fine. But you *cannot* degrade gracefully
through browser sniffing. End of story.
keep the code base manageable and reduce the maintenance effort. I would
The last thing that adding a browser sniffing script like jQuery to
your app is going to do is save on maintenance. Scripts like that are
patched to death, so trying to keep up with the ever-changing browser
landscape is an exercise in futility. The more workarounds added to
"fix" unit tests, the worse it gets. Try to remove any of these
patches and the whole thing unravels. And you are going to delegate
these tasks to people who do not understand the first thing about
browser scripting? That is not a sound strategy.
never say that JQuery is suitable for any site; but for some sites the
target audience will be overwhelmingly likely to use one of the browser
versions that JQuery supports.
Absolute rubbish. And even on an Intranet where one configuration of
IE is used, it is still a poorly designed, inefficient and ugly piece
of code. Does that sound like something you want?
>
For personal sites, the owner gets to decide. A private blog, for
example, would be much less affected by the reactions of the <1% of
visitors whose user agent isn't in JQuery's list.
But there are so many other options out there. What is the
fascination with this crappy little script from three years ago?
>
Anyway, I wasn't trying to recommend or defend JQuery, I was only
talking about the linked articles.
Instead, you should have been reading the linked articles. If you
didn't follow, then you should have searched the group for the links.
All of this is re-hash.
>
instead of hand-rolling yet another
cross-browser event abstraction layer, for example.
This, oft repeated, assertion that the only alternative to using a third
party general purpose library is to write everything from scratch for
yourself is a total nonsense. In-between those two alternatives lies a
whole spectrum of code re-use opportunities, and only the very
masochistic are not re-using pre-written code for the bulk of their
projects even if they are not using the 'popular' general purpose
libraries.

Perhaps so, but where *is* all that pre-written and tested code? Every
These discussions always devolve into a demand for "pre-written and
tested" code. To have pre-written code, you have to have written some
code at some time that is re-usable. If you don't have that, it is
likely because you have been cobbling together bits of other people's
code.
time somebody asks for a good library (as in collection of code that
helps to deal with browser quirks and incompatibilities), they're told
that there is no such thing. If they dare to mention one of the popular
You know that is not true. In general, general-purpose browser
scripting libraries are a bad idea. Collections of code (good code)
are not a bad idea and have been published (here and other places) by
numerous contributors.
libraries, they get flamed to cinders. If such a collection exists, I
Public contradiction of dubious advice is invariably referred to as
"flaming" by those whose "sensibilities" it offends. If you had spent
months or years in the comfort of your own little vacuum, writing
browser scripts to great accolades from your peers, and then find
yourself summarily contradicted in the real world, it is likely to be
an unwelcome shock. You can react one of two ways (think again or
complain about some perceived hatred and flaming.)
would very much like to know where to find it.
Didn't we just have this discussion?
>
"Demonstrated", did he? While there inevitably will be features that
cannot be detected (though far fewer than many people would like to make
out) I have to question whether John Resig demonstrated any of them in
that article.

(snipping your test cases) You're right. I had already tried to
duplicate the problems mentioned by Resig and couldn't reproduce them
either. Maybe the errors only happened with very obscure configurations,
See my post on the subject. It is a behavior that I stumbled upon
years ago. I never wrote a blog entry about it, but I have posted
about it numerous times in this group. John Resig stumbled on the
same thing, failed to understand it at all, and posted his comedy bit
about phantom calls to getAttribute.
or maybe he's just wrong. Who knows. It doesn't really matter though,
He's just wrong. With him, that is virtually always a good
assumption. Never mind how many idiots in the publishing industry
think otherwise.
because the point was that object detection isn't a cure-all for
cross-browser scripting, and I agree with that.
He had no point and therefore proved nothing.
>
Given that this assertion is demonstrably BS my attitude toward the
previous "demonstrated" issue on Safari, in light of my test, leans
heavily toward dismissing that as also being BS.

Probably. Still, using bad (or let's be nice and say unreproducible)
No, bad.
examples doesn't necessarily invalidate the main focus of the article,
Certainly it does. The whole thing is an attempt to justify his own
incompetence. The Prototype people have published similarly ill-
conceived rants (my favorite involved the display style.) You have to
wonder why anybody would trust code from any of these "programmers."
I wouldn't take a batch file from any of them, let alone a script that
might turn users away from my site. (!)
or at least the point that I found interesting - which is adding the
JQuery test suite to the Mozilla test suite. It's *possible* that a new
Worthless as jQuery is almost entirely hinged on user agent
detection. Think about that.
browser version could introduce bugs like the ones Resig used in his
examples (even if they didn't exist now), or uncover bugs in the JQuery
library. Testing a browser with a widely used library is a good thing.
See my numerous posts on feature detection and testing. Peter
published a nice blog entry on the subject as well:

http://peter.michaux.ca/articles/fea...wser-scripting

(though the CSS seems to be malfunctioning!)
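A minimal sketch of the feature-testing style those posts describe (the helper names and structure are mine, not from jQuery or from any poster's code):

```javascript
// Host methods may report typeof "function", "object" (older IE), or
// "unknown" (ActiveX methods), so test loosely rather than with a
// bare typeof check for "function".
function isHostMethod(object, property) {
  var t = typeof object[property];
  return t === 'function' ||
    (t === 'object' && object[property] !== null) ||
    t === 'unknown';
}

// Resolve a cross-browser "add listener" capability once, at load
// time; return null when the environment supports neither model, so
// callers can degrade gracefully instead of guessing from the UA.
function getListenerMethod(doc) {
  if (isHostMethod(doc, 'addEventListener')) {
    return function (el, type, fn) { el.addEventListener(type, fn, false); };
  }
  if (isHostMethod(doc, 'attachEvent')) {
    return function (el, type, fn) { el.attachEvent('on' + type, fn); };
  }
  return null;
}
```

The point is that the decision is made by probing the object in hand; a spoofed or unfamiliar UA string never enters into it.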
>
From my own experience; when IE 7 was released I was working on a web
application with (at the time) 100,000 lines of client-side code (code
employing nothing but feature detection where it is necessary to react
to divergent environments). The total number of changes in the scripts
needed to accommodate IE 7 was zero. When Chrome was released the total
number of changes necessary to accommodate that was also zero. And last
week QA started testing that application on IE 8. They may take another
week to run through all their tests but so far they have found no
issues, and I have a realistic expectation that they will not (as pretty
much everything that would be likely to be problematic would inevitably
get used in the first day or so).
Now that is "Future Proof" javascript, and low maintenance javascript.

That's very impressive. In an application of that size, I'd expect to
find issues even without accounting for new browsers. I assume that your
Your expectations are set realistically low. I would expect much the
same from your code at this point (you have a lot to learn.)
requirements are vastly different from those who want to build a flashy
site with moving objects and whistles and bells.
Irrelevant. You don't get more "whistles and bells" than the project
I have spent the last half year on. Yes, moving things, special
effects, etc. can be future-proof (sure as hell not with jQuery
though!)
>
Make sure your IE status bar is visible and watch the
displayed number as you click, mouse-down, move the mouse, release the
mouse button. And when you have done that see what you think of Resig's
conclusion that "What happened is positively bizarre: The callback
function will be executed every time the user left clicks the mouse,
anywhere in the document." In reality clicking, and click events have
nothing to do with IE's behaviour, beyond their representing a mousedown
followed by a mouseup.

Looks like he was on the wrong track with his test, and you're the first
He was in the wrong rail yard.
person who's figured it out... (and the quirk looks even weirder now).
I'm not about to defend his misjudgement, I just don't think he took it
That would be difficult. He doesn't do it, so why should you try?
very seriously. He probably considers himself part of the "Web 2.0
wave", whatever that is, so he tossed out a blog entry about a curious
Ah yes, "New Wave JavaScript." He is part of a new wave of
incompetent programmers who insist on "teaching" others how to do
everything in the worst possible way.
quirk without much research, to let the community figure it out and play
with it. I really don't think this could or should be used to discredit
him. What he does in his library is a different matter.
He writes code as he writes blogs (incompetently.) What should he get
credit for? Exhorting and enabling hordes of incompetent Web
developers to add browser scripting to their acts?
>
As I said, I'm still convinced that checking browser types/versions
can't always be avoided. But since both you and David say that it not
Give me one example where it absolutely *cannot* be avoided.
only CAN be done, but HAS to be done, I'll try a little experiment and
ask the group whenever I'm tempted to write [if browser is MSIE6]. If I
can find solid cross-browser solutions for these problems, I'll gladly
retract my statement.
Then you are a better man than John Resig.
Nov 5 '08 #48

David, I'm not going to continue this discussion with you, until you
decide to skip the insults and at least pretend to be civil. The
constant digs at the "incompetence" of JQuery and its author are also
getting old, and don't need any further comments. But this does:

On 2008-11-05 22:27, David Mark wrote:
>Perhaps so, but where *is* all that pre-written and tested code?

These discussions always devolve into a demand for "pre-written and
tested" code. To have pre-written code, you have to have written
some code at some time that is re-usable.
So, to build a nice, dynamic site one has to build up a collection of
tried-and-tested cross-browser functions first. Of course, getting to
know all the little gotchas will take years, but that's okay, as long as
you don't use a library.
Every other language has its libraries and module archives; but we're
different. We're building it all ourselves. Because the library writers
are all "incompetent".
>would very much like to know where to find it.

Didn't we just have this discussion?
If you're hinting at your own library (again), I'll have to reply
(again) that without a Free Software license it's not much use for most
people. If we're following the previous pattern, this is the point where
you get angry with me.
http://peter.michaux.ca/articles/fea...wser-scripting

(though the CSS seems to be malfunctioning!)
It's not: http://peter.michaux.ca/articles/omg...-is-old-school

Great idea, btw.
- Conrad
Nov 5 '08 #49

Conrad Lender wrote:
Thanks for your thorough and interesting reply. I agree with
much of what you wrote, so I'll focus on the direct questions.

On 2008-11-05 18:07, Richard Cornford wrote:
>[e-commerce sites] Every time someone in that set is excluded
from the group from whom money can be taken that is a direct
result of a design decision. [..]
So we come to trade-offs; is the increased turnover that will
result from a more professional presentation greater than the
loss in turnover that may result from designing out the
possibility of the user of some browsers from accessing the
site at all? How do you make those judgments, and who should
be making those decisions, and based on what information,
knowledge and experience?

This boils down to the question which user agents an
application is designed to support,
But that particular aspect of the design problem is never (or should
never be) the first step in the process, and may be a long way
downstream of the start of the design process. It may be very
significant to everything that comes after it but it would be a mistake
to promote it to a primary position in the design process.
and who makes that decision. Only the client is in
a position to decide who to shut out,
Yes, it is a business decision and should be made by the person with the
business responsibility based on the best information available.
and he will (or should) base this
decision on his customer base,
Yes, the customer base, or "target audience". The question is, though,
what is the relationship between potential customer demographic and the
type of web browser they use? Beyond trivial observations such as that
the majority of any population will be using the most popular web
browsers (by definition), or that IT (and particularly web development)
professionals are the group most likely to either be using a non-common
web browser or a non-default configuration of a popular browser.

Last week I bought a wristwatch over the Internet, from the third site
attempting to sell them as the first two had designed out the
possibility of taking money off users of non-default configurations of
IE 6. So what then is the relationship between the web browser choices
and the wearing of wristwatches?
collected browser statistics,
Despite the fact that HTTP precludes the possibility of gathering
accurate statistics by its very nature, that web statistics gathering
points are not necessarily representative (generally, or of any
particular 'target audience'), that the statistics gathering process is
self-biasing and that the resolution of most gathered web browser usage
statistics is limited to the ability to discriminate between web
browsers by examining the User Agent string (bearing in mind that there
is no technical justification of a belief that web browsers can be
discriminated between by using the UA string, and there are demonstrated
cases where such discrimination is impossible)?

How many business people will understand the true status of any web
statistics they are shown? And how many developers will accurately
explain their status? Experience of discussing this point over the years
suggests that a huge percentage of web developers take web statistics
purely on faith, assume they are meaningful and then employ them as a
convenient justification for doing precisely what they were going to do
anyway (the 'chicken and egg' situation that self-biases browser usage
statistics).
and the advice of his technical staff and developers.
Which rather assumes that these technical staff and developers have
sufficient understanding of the subject to give meaningful advice.
Everything that acts to facilitate the inexperienced or
non-knowledgeable getting into those positions reduces the odds of their
advice being accurate/worthwhile, and there is certainly no point in
expecting the business people to understand the issues.
Amazon will reach a different conclusion here than, say,
YouTube, or a gaming site, or a portal for Ajax fans.
Assuming that for them the outcome was an active conclusion, rather
than just the coincidental side effect of the developers they employ,
they probably should have come to differing conclusions about how much
browser support is in their best interests. Ironically, though, given
that web developers are the single group most likely to be using
non-standard browsers, "portal for Ajax fans" do have a horrible
tendency to plump for one of the popular libraries and so render
themselves non-functional for a proportion of what you would imagine
would be their "target audience".
Dropping support for ancient user agents can help keep the
code base managable and reduce the maintainance effort.
Can, but does not necessarily. An architecture that is a hierarchical
structure of discrete components would only exhibit the browser specific
code in the lowest layers so the removal of support for ancient browsers
would not impact on the manageability of the totality, and code branches
for ancient browsers are, inevitably, old and well tried and tested, so
their presence should not impact much on maintainability, and so their
removal would not necessarily improve maintainability.
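As an illustration of that layering (a sketch under my own naming, not code from the application described): the browser-dependent branch is resolved once in the lowest layer, and nothing above it ever mentions a browser.

```javascript
// Lowest layer: resolve the divergent capability once at load time.
// Everything above calls dom.listen() and never branches on browser
// again, so dropping support for an ancient browser means deleting
// one branch here and touching nothing upstream.
var dom = (function () {
  var hasStandardModel = typeof document !== 'undefined' &&
    typeof document.addEventListener !== 'undefined';
  return {
    listen: hasStandardModel
      ? function (el, type, fn) { el.addEventListener(type, fn, false); }
      : function (el, type, fn) { el['on' + type] = fn; } // legacy fallback
  };
})();
```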
I would never say that JQuery is suitable for any site; but
for some sites the target audience will be overwhelmingly
likely to use one of the browser versions that JQuery
supports.
For personal sites, the owner gets to decide. A private blog,
for example, would be much less affected by the reactions of
the <1% of visitors whose user agent isn't in JQuery's list.
That rather assumes the real number to be <1%. Where would that
statistic be coming from?

If there is one trend that should be obvious in the generality of public
internet web development it is the rise of HTTP (rather than WAP)
browsers on ever more capable mobile phones. How many of those browsers
are on the "JQuery list"?
Anyway, I wasn't trying to recommend or defend JQuery, I
was only talking about the linked articles.
Fine, but I hope you are starting to understand why they are not held in
much regard here.
>>instead of hand-rolling yet another
cross-browser event abstraction layer, for example.

This, oft repeated, assertion that the only alternative
to using a third party general purpose library is to write
everything from scratch for yourself is a total nonsense.
In-between those two alternatives lies a whole spectrum of
code re-use opportunities, and only the very masochistic
are not re-using pre-written code for the bulk of their
projects even if they are not using the 'popular' general
purpose libraries.

Perhaps so, but where *is* all that pre-written and tested
code?
It is everywhere. Quite a lot of it is in the archives for this group.
Now if you want it all bundled up together and handed to you on a plate
then you may be asking too much of whichever unknown other you expect to
do that work. Personally, I have a well paid full time job writing
javascript and a preference to spend what spare time I have to devote to
the subject promoting the understanding of browser scripting rather than
handing out code.
Every time somebody asks for a good library (as in collection
of code that helps to deal with browser quirks and
incompatibilities), they're told that there is no such thing.
Which would be an accurate description of the situation.
If they dare to mention one of the popular
libraries, they get flamed to cinders. If such a collection
exists, I would very much like to know where to find it.
Well, mine is on a number of hard disks and backup CD here and at work.
That does not do you much good, but I suspect that if you made the
effort to find where they have been posted to the group you would learn
a grate deal more from the search than you would by just being handed
the end result.
>"Demonstrated", did he? While there inevitably will be
features that cannot be detected (though far fewer than
may people would like to make out) I have to question
whether John Resig demonstrated any of them in that
article.

(snipping your test cases) You're right. I had already tried to
duplicate the problems mentioned by Resig and couldn't reproduce
them either. Maybe the errors only happened with very obscure
configurations,
That is quite unlikely. John Resig is not motivated to play with browser
configurations much as doing that would expose how vulnerable JQuery
code is when they are not in their default state. And try as you might
there is not a great deal that you can deduce about a browser's
configuration from its UA string.
or maybe he's just wrong.
So you are not buying David's "incompetent idiot" theory?
Who knows. It doesn't really matter though,
because the point was that object detection isn't a cure-all
for cross-browser scripting, and I agree with that.
I would agree with that too, but the truth is that feature detection is
considerably more general, reliable and effective than any of the
alternatives, and so the flaws that it does have do not in any way
indicate abandoning it in favour of any of its inferior alternatives.
(And then if either of the alternatives is to be used it should be
object inference browser detection rather than UA string based browser
detection, as that is by far the superior of the available
alternatives).
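To make that ranking concrete, here is a small illustrative contrast for one classic task, reading an element's text (the element shapes below are simplified stand-ins, not real DOM nodes):

```javascript
// UA sniffing -- the weakest option described above: the string is
// freely spoofable and says nothing about actual capabilities.
function getTextBySniffing(el, userAgent) {
  return /MSIE/.test(userAgent) ? el.innerText : el.textContent;
}

// Feature detection -- ask the object in hand what it supports, with
// a graceful fallback when neither property exists.
function getTextByFeature(el) {
  if (typeof el.textContent === 'string') { return el.textContent; }
  if (typeof el.innerText === 'string') { return el.innerText; }
  return '';
}
```

A spoofed UA string sends the sniffing version down the wrong branch, while the feature-testing version cannot be misled that way.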
>Given that this assertion is demonstrably BS my attitude
toward the previous "demonstrated" issue on Safari, in light
of my test, leans heavily toward dismissing that as also being
BS.

Probably. Still, using bad (or let's be nice and say
unreproducible) examples doesn't necessarily invalidate the
main focus of the article,
Employing bogus arguments does not do the credibility of the author any
good.
or at least the point that I found interesting - which is adding
the JQuery test suite to the Mozilla test suite. It's *possible*
that a new browser version could introduce bugs like the ones
Resig used in his examples (even if they didn't exist now), or
uncover bugs in the JQuery library.
So would that be the position that "future proofing" should not be
addressed by writing code that does not need updating with each browser
release but instead responsibility should be handed off to browser
manufacturers? That would be an optimistic notion given that browser
manufacturers already strive not to produce browsers that look broken
when exposed to the web, and yet the situation we find ourselves in is
the situation that it is.

You should remember that the current set of 'popular' libraries were
designed by relative novices who only had a very superficial familiarity
with the reality of web browsers (all of JQuery, Prototype.js and Dojo
certainly fall into that set) and that the totality of their manifest
misconceptions may not be reconcilable.
Testing a browser with a widely used library is a good thing.
From the point of view of a browser manufacturer, it is a sensible thing
to do for pragmatic reasons. It does them no good to produce something
that promptly looks broken on a significant number of web sites. From
the library author's point of view, it is also a good thing because it
will become a barrier to their having their misconceptions and mistakes
exposed. Unfortunately, the latter does not serve the greater good as it
will get in the way of the library authors learning from their mistakes.
>From my own experience; when IE 7 was released I was working
on a web application with (at the time) 100,000 lines of
client-side code (code employing nothing but feature detection
where it is necessary to react to divergent environments). The
total number of changes in the scripts needed to accommodate
IE 7 was zero. When Chrome was released the total number of
changes necessary to accommodate that was also zero. And last
week QA started testing that application on IE 8. They may
take another week to run through all their tests but so far
they have found no issues, and I have a realistic expectation
that they will not (as pretty much everything that would be
likely to be problematic would inevitably get used in the
first day or so).

Now that is "Future Proof" javascript, and low maintenance
javascript.

That's very impressive. In an application of that size, I'd
expect to find issues even without accounting for new browsers.
Yes, but my point only relates to changes that needed to be made to
accommodate newly released web browsers, as that is the aspect pertinent
to the subject of future proofing and maintainability.
I assume that your requirements are vastly different from those
who want to build a flashy site with moving objects and whistles
and bells.
Yes and no. The application is a serious business application sold to
large commercial concerns. However, the CEO loves bells, whistles and
moving objects. He maintains that they make the application easier to
sell because they catch the attention of the management who will be
making the purchasing decisions. On the other hand, he also loves the
application to perform as quickly as possible, asserting that also makes
it easier to sell. Those tend to be contradictory demands.

<snip>
>.... And when you have done that see what you think of Resig's
conclusion that "What happened is positively bizarre: The
callback function will be executed every time the user left
clicks the mouse, anywhere in the document." In reality
ckicking, and click events have nothing to with IE's behaviour,
beyond their representing a mousedown followed by a mouseup.

Looks like he was on the wrong track with his test,
Yes, but the point is that the false conclusions were the inevitable
outcome of his making a novice mistake in his testing strategy.
and you're the first
person who's figured it out...
Unlikely. Anyone with any reasonable practical experience of examining
web browsers' behaviour would have seen that the conclusions were likely
to be wrong due to the flawed testing strategy. It is then just one
simple step to fix the testing strategy and expose the reality.
(and the quirk looks even weirder now).
I'm not about to defend his misjudgement, I just don't
think he took it very seriously.
And this is you not defending him?
He probably considers himself part of the "Web 2.0
wave", whatever that is, so he tossed out a blog entry
about a curious quirk without much research, to let the
community figure it out and play with it.
Seems unlikely. Not least because his 'community' is far too credulous
not to take whatever they see at face value.
I really don't think this could or should be used to
discredit him.
What? A self-proclaimed "JavaScript Ninja" making novice mistakes does
not speak to credibility?
What he does in his library is a different matter.
I don't think where the novice mistakes get made makes that much
difference.
As I said, I'm still convinced that checking browser
types/versions can't always be avoided. But since both you
and David say that it not only CAN be done, but HAS to be done,
I'll try a little experiment and ask the group whenever I'm tempted
to write [if browser is MSIE6].
We still have not seen how your 'if MSIE6' decision would be made.
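For illustration only (this is no poster's actual code), the decision usually reduces to something like the following, which shows why it cannot be made reliably:

```javascript
// A typical [if browser is MSIE6] test. Everything it "knows" comes
// from a string any agent may spoof: Opera shipped identifying itself
// as MSIE, so a second patch excluding Opera is already needed -- and
// each further masquerading agent demands yet another patch.
function looksLikeIE6(userAgent) {
  return /MSIE 6\./.test(userAgent) && !/Opera/.test(userAgent);
}
```

Compare that open-ended patching with a single feature test for the behaviour actually in question, which needs no revisiting when a new agent appears.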
If I can find solid cross-browser solutions for these problems,
I'll gladly retract my statement.
Yes, why not give it a try. But don't be surprised if you are expected
to interactively participate in the process (i.e. actually answer the
questions you get asked). There is no such thing as a free lunch on
comp.lang.javascript.

Richard.

Nov 6 '08 #50
