to learn jQuery if already using prototype

I am learning more and more Prototype and Script.aculo.us and got the
Bungee book... and wonder if I should get some books on jQuery (jQuery
in Action, and Learning jQuery) and start learning about it too?

Once I saw a website comparing Prototype to Java and jQuery to Ruby...
but now that I read more and more about Prototype, it is said that
Prototype actually came from Ruby on Rails development and the creator
of Prototype created it with making Prototype work like Ruby in mind.
Is jQuery also like Ruby? Thanks so much for your help.

Jun 27 '08
On Apr 20, 5:46 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
There is no disadvantage from understanding javascript. In the event
that I was put in a position of being forced to use any of these
'popular' libraries the ability to understand how they work internally
would be a huge advantage in actually using them. Though that is
extremely unlikely to happen where I work now because none of the
existing popular libraries perform anywhere near fast enough for the web
applications I work on, and they never could.
Is it possible to see the above mentioned web applications?

Thanks,
kangax
Jun 27 '08 #51
On Apr 20, 4:44 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
beegee wrote:
Uh, no. Do you understand what compilation is?

Yes, do you?
Transforming an english-like higher level programming language coded
program into a lower-level programming language coded program for the
benefits of speed and size.

I mean, there are javascript compilers but as far as I've heard, none in
a browser yet.

There are JIT-compilers.
And what lower level language or instruction set are these JIT-
compilers compiling Javascript into? Setting up a symbol table and
transforming to more efficient Javascript in the pre-processing pass
of Javascript is not compiling. All modern interpreters do this.
Again, there are real compilers for Javascript (Rhino) but they are
not in browsers yet although it's possible that FF 3.0 has one.

I think you miss the point. YUI is *supposedly* only "more like
'Javascript'" (whatever that might be) than Prototype or jQuery in the sense
that its developers *supposedly* knew enough about the programming languages
to unleash their full potential without having to resort to inefficient and
error-prone detours of inventing "classes" and "initializers" where there
are already prototypes and constructors.
You certainly do like to argue, don't you? It takes quite a talent to
obfuscate agreement to the point of it sounding like disagreement.

We don't have an agreement here.
Really? Are you saying the creators of YUI do not know enough about
javascript to avoid the pitfalls of JQuery and Prototype? I'm the
first to admit that YUI has some speed and object model problems but
I'm not sure that mocking the library for only *supposedly* being
better than the others, without some kind of specifics, is really a
point of view.

Bob
Jun 27 '08 #52
On 21 Apr, 17:44, beegee <bgul...@gmail.com> wrote:
On Apr 20, 4:44 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
beegee wrote:
Uh, no. Do you understand what compilation is?
Yes, do you?

Transforming an english-like higher level programming language coded
program into a lower-level programming language coded program for the
benefits of speed and size.
Brendan Eich (JavaScript):
"Client JavaScript reads source from SCRIPT tag contents, but compiles
it into an internal bytecode for faster interpretation."

Eric Lippert (JScript):
"JScript Classic acts like a compiled language in the sense that
before any JScript Classic program runs, we fully syntax check the
code, generate a full parse tree, and generate a bytecode. We then run
the bytecode through a bytecode interpreter. In that sense, JScript is
every bit as "compiled" as Java. The difference is that JScript does
not allow you to persist or examine our proprietary bytecode. Also,
the bytecode is much higher-level than the JVM bytecode -- the JScript
Classic bytecode language is little more than a linearization of the
parse tree, whereas the JVM bytecode is clearly intended to operate on
a low-level stack machine."
Jun 27 '08 #53
On Apr 19, 11:07 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
Who is going to be deciding what 'constructive' means in this context?
Each individual on his own. Or, in other words: say what you want to
say, and I'll brush off anything I think is unwarranted. I'm not
setting conditions for prior restraint here.
You mean that if someone is in a 'minority' then they must be wrong?
That is hardly an attitude that would allow progress through the
adoption of new ideas.
I never said anything of the sort. I said the minority need to do more
_persuading_. You stated that these libraries were junk as though it
were common knowledge. Clearly it isn't common knowledge.
You have not demonstrated that these are questions of taste. The last
time we discussed the prototype.js code here in detail (which was
version 1.6, in November last year) it demonstrated evidence of its
author or authors (collectively, as nobody had corrected the code) not
understanding how the code they were writing was going to behave. Seeing
that brings everything into question, from the original design concepts
to every detail of its implementation. And those are not then questions
of taste but the inevitable consequence of evident ignorance among its
developers.
I hold that any technology decision is a question of taste. There is
no objective "better" in the sense of Ruby vs. Python, or vi vs.
emacs; there is only the subjective "better" — whichever best serves
the user's own needs.

Naturally, this does _not_ mean that everything is relative, or that
it's not worth having passionate arguments thereabout. I implied as
much in my music analogy: friends argue among themselves over which
band is "better," but they all realize that taste is the ultimate
arbiter. These arguments become tiresome only when people dig trenches
and start speaking in absolutes.
How do you know that? It seems likely to me that Thomas was using his
memory of the many (more or less detailed) discussions of Prototype.js
code that have happened here over the past few years to inform a general
assessment of the code.
And I submit that is a matter of taste. Bugs are bugs, of course, and
we welcome bug reports. But you've gone further than that; you've
inferred from "evidence" that code in Prototype does not do what its
author means for it to do.
I think it's far more constructive to say "Most of us
are not library fans, so you're unlikely to find useful
answers here,"

That would not be a true statement (at least the second part of it, and
the first part if you take the word 'library' in its most general
sense).
Then write your own words. That way they'll be _from the heart_. You
know the point I'm trying to make.
>
then point the way to the jQuery mailing list.

I have read enough of the posts on the JQuery groups to be pretty sure
nobody is likely to find much understanding there either. There is too much
of the blind leading the blind (and a huge proportion of questions never
seem to get answered at all).

>
That'd accomplish the same goal with far less
drama.

Accomplish which goal? Don't you think the OP, if he/she is still
reading this, has learnt a great deal from this discussion despite
its initial response? There is a lot to be said for the uncensored
exchange of ideas in public.
The word "censorship" doesn't come within miles of this thread. I do
not own a telecommunications company; I don't have the means or
authority to "censor" anyone.

I can only imagine the OP was interested in the free exchange of ideas
when he asked you why you thought jQuery and Prototype were junk. I'm
arguing that he'd have learned far more from a link to a page called
"Why I, Richard Cornford, Think Prototype and jQuery Are Junk" than
from this low-signal, high-noise snark-off.
Perhaps the comp.lang.javascript FAQ needs an entry for this,
then.

It has been discussed, and a number of draft entries proposed. But there
remains some dispute as to what an appropriate response to the question
should be. From the point of view of someone wanting a better
understanding of javascript or browser scripting who just happens to be
using some 'popular' library then they really would get the best answers
to their questions here, if they could present their questions in
isolation (from all of the unrelated and irrelevant stuff that those
libraries contain).
That last sentence is the answer to such a FAQ. Even a link to that
question and answer would be more helpful than what has happened in
this thread.
A large portion of those who post on this newsgroup aren't
participants or even lurkers;

If they post questions then they are participants.
I mean that they weren't participants before their first post. Many
posters, I would venture, only come here when they need help, and
therefore aren't already familiar with the quirks of the community.
they come by only when they have problems. They can't
be expected to catch up on the backstory.

They can. Expecting someone to do a few web searches before they ask to
be spoon fed is not too unreasonable.
Please search this newsgroup for the terms "Prototype" and/or "jQuery"
and see how quickly you find a well-summarized critique of either
library.
In this case we have a very obviously stupid use of user
agent string based browser detection to make a decision that
would be more logically made with feature detection.
Actually, no - no browser implements String#(un)escapeHTML,
so feature detection would be pointless. We define that
function earlier in the file,

Yes, I noticed the other two assignments to the - String.prototype -
later and realised that I was wrong about that.
but redefine them for WebKit and IE because the String#replace
approach is much, much faster in these two browsers (but much,
much slower in FF and Opera).

Can you post a test-case that demonstrates that assertion? Historically
IE has been renowned for its lousy performance with string
manipulation, while Mozilla outperformed everyone else in that area.
I don't have a test-case. The change was made one year ago by Thomas
Fuchs [1]. You're welcome to ask him, though I suspect he'll punch me
in the sternum for having dragged him into this.
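
(For what it's worth, a rough timing harness along the following lines
could put numbers on that claim. This is a hypothetical sketch, not
Prototype's code; the two functions only stand in for the replace-based
and DOM-based strategies being compared, and results will vary with
browser and input.)

var sample = new Array(500).join('<p class="x">"fish" & chips</p>');

// Replace-based escaping: one of the two strategies under discussion.
function escapeViaReplace(s) {
  return s.replace(/&/g, '&amp;')
          .replace(/</g, '&lt;')
          .replace(/>/g, '&gt;');
}

// DOM-based escaping: let the browser serialise a text node.
var div = document.createElement('div');
var text = document.createTextNode('');
div.appendChild(text);
function escapeViaDOM(s) {
  text.data = s;
  return div.innerHTML;
}

function time(fn, reps) {
  var start = new Date().getTime();
  for (var i = 0; i < reps; i++) { fn(sample); }
  return (new Date().getTime() - start) + 'ms';
}

alert('replace: ' + time(escapeViaReplace, 200) +
      ', DOM: ' + time(escapeViaDOM, 200));
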
>
But I noticed that the other two escaping/unescaping methods are nothing
like analogous to the two I posted. That is pretty bad in itself because
it means that the same source code is going to have different outcomes
depending on which browser it encounters, and the only way to avoid
falling foul of that would be to explicitly test any application using
the code on each and every browser and with a sufficiently diverse set of
data to expose the situations where those inconsistencies might be
problematic. And that is something that is unlikely to actually happen
even in organisations that do do formal QA. After all, you missed the
fact that Safari/IE versions were defective yourselves so how could you
expect web developers who know no more than how to use a library to find
the potential issues (or understand them if they did manifest
themselves).
Nothing I say in our defense will satisfy you, because you seem to
want a level of certitude that I can't guarantee. I could say that we
have extensive unit tests, but clearly they were not extensive enough
to catch the bug that you pointed out — though that bug was noticed in
the wild, submitted to our tracker, and patched, and more unit tests
were added in the process. This is how open-source software works.

I can't guarantee that any of us will write bug-free code on the first
pass, any more than you can guarantee that you won't spell another
word incorrectly for the rest of your life. Instead, we have a testing
process meant to ferret out as many self-introduced bugs as possible.
When that fails, we rely on the community to file tickets. No doubt
you have rolled your eyes by now, so I'll just move on.
>
It would be safer to forget about performance in this respect and just
use one set of escaping and unescaping methods. After all, these methods
are not used by the library itself, and are unlikely to be that heavily
used in applications.
To be sure, there are other UA sniffs in Prototype that
hard-liners would decry as unnecessary.

It is not 'unnecessary' that is the questionable aspect of browser
sniffing, but rather that it is technically baseless and demonstrably
ineffective.
You haven't demonstrated that anything is baseless or ineffective;
you've only revealed a different set of priorities. You'd rather have
100% guaranteed behavior even if it meant a wildly-varying performance
graph across browsers. I'd rather have the reverse.
But I suppose that you would not agree that being able to find an
obvious rookie mistake in less than three seconds of looking (at a
library that is already a good few years old) tends to support the
"junk" assessment.
The check-in is only one year old. It is Thomas's bug, but he is no
rookie, so I can only surmise that we all make silly mistakes
sometimes. Bad luck for him that he managed to stumble upon your
Shibboleth Bug(TM).
A bug was filed on this not too long ago; I believe a
fix has been checked into trunk and will be in the
next release.

Well, it is a pretty simple fix, and I notice that the subject of my
last substantial criticism of the prototype's code has also been
removed.
We listen to criticism, we read bug reports, and we constantly search
for ways to improve the feedback loop. So does John Resig, by the way,
so I'd suggest you file a bug on jQuery's Trac about the "makeArray"
mistake.

Cheers,
Andrew
Jun 27 '08 #54
On Apr 19, 3:06 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
Interestingly I observed Matt Kruse (who is the nearest thing to a
supporter JQuery has among the regular contributors to this group, and
someone who can easily outgun anyone directly involved in JQuery
development when it comes to javascript)
Wow, given your view of the jQuery dev team, I'm not sure if that's
even close to a compliment ;)
directly asked whoever was
responsible for the - if ( typeof array != "array" ) - code to own up to
it in a post on the JQuery development group. However, when I checked
back a week later to see if anyone had the intellectual integrity to own
up to their mistake I found that Matt's post had been deleted from the
group.
If you're referring to this post:
http://groups.google.com/group/jquer...b54712bd48ec83
then I can still find it.

Unfortunately, it hasn't generated any further discussion as I thought
it certainly warranted.
It's this kind of indifference about truly embarrassing code in the
jQuery source that I find most troubling.
They seem to be more interested in adding a new CSS3 selector or
saving 20 bytes by condensing some code than in correcting bad
programming practices.

I think John Resig loves Javascript and is probably learning more
about it as time goes on, but I suspect that his primary goal of
evangelizing jQuery is to enable the writing of more books and
attending more speaking engagements. Otherwise I can't comprehend why
some of these issues haven't gotten instant attention as I believe
they deserve.

Matt Kruse
Jun 27 '08 #55
On Apr 21, 3:11 pm, Andrew Dupont <goo...@andrewdupont.net> wrote:
We listen to criticism, we read bug reports, and we constantly search
for ways to improve the feedback loop. So does John Resig, by the way,
so I'd suggest you file a bug on jQuery's Trac about the "makeArray"
mistake.
I know nothing about Prototype's "feedback loop" but I'll say that my
impression of the jQuery loop has been that it is very fond of praise
and quick to dismiss or ignore genuine critiques about its core code
or design decisions.

I've made a number of attempts to make suggestions that I consider to
be no-brainers... getting rid of unnecessary browser sniffs,
optimizing loops which are extremely inefficient, making the
isFunction() function work more more closely to how it was
(misguidedly) intended, removing some code that is completely
unnecessary (makeArray), and reporting some bugs.

I've gotten a lackluster response to most posts, even though I would
consider some of these issues to be extremely important in cleaning up
the jQuery code, making it faster, and improving its overall quality.

I'm quite fond of much of the API that jQuery offers, the coding style
that it often enables, and the ease with which it enables new/amateur
developers to create working code in the right environments. If I had
the time and interest, I would probably be inclined to branch the code
into something more solid, fix many of the issues that exist, and
remove some of the overloading that makes the API kind of a mess.

But since I lack both the time and interest, I'm still trying to push
the jquery dev team into improving the code and hopefully bringing
jQuery up to par as a library that can withstand the scrutiny of
experienced js developers. Even if they would still not choose to use
it.

Matt Kruse
Jun 27 '08 #56
On Apr 19, 9:02 am, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
Andrew Dupont wrote:
On Apr 17, 10:57 am, Thomas 'PointedEars' Lahn wrote:
One of the arguments paraded in favour of these libraries is that they
are examined, worked on and used by very large numbers of people, and so
they should be of reasonably high quality because with many eyes looking
at the code obvious mistakes should not go unnoticed. My experience of
looking at code from the various 'popular' libraries suggests that this
is a fallacy, because (with the exception of YUI (for obvious reasons))
Not obvious.

There's plenty of bugs in YUI. A good number of them are reported in the
bugtracker (public) yet others are reported on the Yahoo internal bug
tracker (Bugzilla). Others may still be unfiled.

I don't 100% agree that the problem is not that the authors aren't
smart enough. A lot of these problems come from the process. Things
like having one guy own such-and-such piece of code, or the code freezes.
Or testing the happy path of maybe 20% of an object's methods.
all of the 'popular' libraries contain numerous obviously stupid mistakes.
Including YUI.

They also have bug trackers. As Andrew pointed out, Prototype does
too.

A fix for the bug that was demonstrated seems to be by simply putting
the &amp; last.

String.prototype.unescapeHTML = function() {
return this.replace(/&lt;/g,'<')
.replace(/&gt;/g,'>')
.replace(/&amp;/g,'&');
};

That would need to be tested out though.

Right?

Garrett
Jun 27 '08 #57
On Apr 21, 2:50 pm, Zeroglif <zerog...@gmail.com> wrote:
On 21 Apr, 17:44, beegee <bgul...@gmail.com> wrote:
On Apr 20, 4:44 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
beegee wrote:
Uh, no. Do you understand what compilation is?
Yes, do you?
Transforming an english-like higher level programming language coded
program into a lower-level programming language coded program for the
benefits of speed and size.

Brendan Eich (JavaScript):
"Client JavaScript reads source from SCRIPT tag contents, but compiles
it into an internal bytecode for faster interpretation."

Eric Lippert (JScript):
"JScript Classic acts like a compiled language in the sense that
before any JScript Classic program runs, we fully syntax check the
code, generate a full parse tree, and generate a bytecode. We then run
the bytecode through a bytecode interpreter. In that sense, JScript is
every bit as "compiled" as Java. The difference is that JScript does
not allow you to persist or examine our proprietary bytecode. Also,
the bytecode is much higher-level than the JVM bytecode -- the JScript
Classic bytecode language is little more than a linearization of the
parse tree, whereas the JVM bytecode is clearly intended to operate on
a low-level stack machine."
Thanks for the quotes. I thought, at first, from reading them that I
had been totally wrong about compilation of javascript. Then I read
deeper into the second quote about what the bytecode is. And really,
it is not a true compilation. It is still a first-pass optimization.
In fact I wouldn't be surprised if the same machine that can interpret
unprocessed javascript interprets the bytecode.

Bob
Jun 27 '08 #58
beegee wrote:
Thanks for the quotes. I thought, at first, from reading them that I had
been totally wrong about compilation of javascript. Then I read deeper
in the second quote about what the byte code is. And really, it is not a
true compilation. It is still a first pass optimization.
Winding around your error, are you? It is true compilation, as true as
byte-code compilation in Java is, for example.
In fact I wouldn't be surprised if the same machine that can interpret
unprocessed javascript, interprets the bytecode.
Bytecode is platform-independent, of course, because a Virtual Machine
interprets it. As I have said, in that sense at least JavaScript[tm],
and as it turns out JScript also, are compiled languages.
PointedEars
--
var bugRiddenCrashPronePieceOfJunk = (
navigator.userAgent.indexOf('MSIE 5') != -1
&& navigator.userAgent.indexOf('Mac') != -1
) // Plone, register_function.js:16
Jun 27 '08 #59
VK
On Apr 22, 10:09 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
Bytecode is platform-independent, of course, because a Virtual Machine
interprets it. As I have said, in that sense at least JavaScript[tm],
and as it turns out JScript also, are compiled languages.
???

What is an interpreted language to you, then? Read the next line of code -
parse for tokens - create internal code - execute - forget this code -
clear the memory - move to the next line? Something like that?
There has been no such engine since the Enigma and the Cyclometer at least :-)

Javascript is being stored and delivered to the engine in the raw text
source code format. The engine naturally compiles it to be able to use
it, so the engine works with compiled code - but Javascript is not a
compiled language. Try to see the difference, if you can.

P.S. JScript.NET _is_ a compiled language.
Jun 27 '08 #60
VK wrote:
[...] Thomas 'PointedEars' Lahn [...] wrote:
>Bytecode is platform-independent, of course, because a Virtual Machine
interprets it. As I have said, in that sense at least JavaScript[tm],
and as it turns out JScript also, are compiled languages.

[...]
Please spare us your fantasies about what other people might think. Thank
you in advance.
Javascript is being stored and delivered to the engine in the raw text
source code format. The engine naturally compiles it to be able to use
so the engine works with compiled code - but Javascript is not a
compiled language.
Yes, it is. What you describe is called Just-In-Time (JIT) compilation.
Try to see the difference, if you can.
There is no difference at all, Often Wrong. One can even compile the same
source code into a file in bytecode format, and have the same JavaScript VM
execute that (as it is possible on NES-compatible servers).
P.S. JScript.NET _is_ a compiled language.
As are JavaScript and JScript. As hard as it may be for you to understand,
compilation does not require a file on the filesystem as its result.
PointedEars
--
realism: HTML 4.01 Strict
evangelism: XHTML 1.0 Strict
madness: XHTML 1.1 as application/xhtml+xml
-- Bjoern Hoehrmann
Jun 27 '08 #61
On Apr 22, 3:34 pm, VK <schools_r...@yahoo.com> wrote:
On Apr 22, 10:09 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
Bytecode is platform-independent, of course, because a Virtual Machine
interprets it. As I have said, in that sense at least JavaScript[tm],
and as it turns out JScript also, are compiled languages.
Javascript is being stored and delivered to the engine in the raw text
source code format. The engine naturally compiles it to be able to use
so the engine works with compiled code - but Javascript is not a
compiled language.
I hate to agree with VK, but I do.

The terms "compiled" and "interpreted" to refer to programming
languages are both vague and don't have exact meanings to begin with.
They are just labels.

Javascript is interpreted rather than compiled because the raw source
code is delivered to the end user. Of course it is turned into a
machine-readable form before execution - all languages are. Else the
term "interpreted language" would be meaningless.

Javascript is not pre-compiled into byte code which is then delivered
to the client's VM to execute (typically). Is there a standard
javascript bytecode definition that all javascript VM's will execute
identically? In contrast, a language like Java is considered to be a
"compiled" language because you can deliver the pre-compiled .class
files, even though they still require a VM to execute, but compiled
code is not the original source. The line there is even blurry because
you _can_ deliver the source and have it compiled "on the fly".

In the end, the discussion of the labels "compiled" and "interpreted"
is pointless because the reality of how it works is known. Especially
when the terms do not have "scientific" meanings and are open to
interpretation.

Matt Kruse
Jun 27 '08 #62
Matt Kruse wrote:
On Apr 22, 3:34 pm, VK <schools_r...@yahoo.com> wrote:
>On Apr 22, 10:09 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
>>Bytecode is platform-independent, of course, because a Virtual
Machine interprets it. As I have said, in that sense at least
JavaScript[tm], and as it turns out JScript also, are compiled
languages.
Javascript is being stored and delivered to the engine in the raw text
source code format. The engine naturally compiles it to be able to use
so the engine works with compiled code - but Javascript is not a
compiled language.

I hate to agree with VK, but I do.
Your loss.
Javascript is not pre-compiled into byte code which is then delivered to
the client's VM to execute (typically).
Typically (JavaScript[tm] and JScript would cover, say, 95% of the market of
ECMAScript-compliant script engines), it is. You appear to have overlooked
the quotations by their inventors.
Is there a standard javascript bytecode definition that all javascript
VM's will execute identically?
There isn't even *a* "javascript VM" to begin with.

All JavaScript[tm] engines will have to use SpiderMonkey or Rhino; for the
former, we now have its inventor's assertion that there is a bytecode
specification; for the latter, since it is Java-based, we can assume it has one.

All JScript and JScript .NET engines would have to be that which Microsoft
provides with the Microsoft Script Engine (since it's closed source), so no
surprise there either.
The terms "compiled" and "interpreted" to refer to programming languages
are both vague and don't have exact meanings to begin with. They are just
labels.
This is simply wrong. RTFM.
[...] In the end, the discussion of the labels "compiled" and
"interpreted" is pointless because the reality of how it works is known.
Parse error.
Especially when the terms do not have "scientific" meanings and are open
to interpretation.
No, see above. But what matters here (if it matters at all; to remind you,
the cause of this subthread was a website that compared Prototype.js to Java
and jQuery to Ruby) is that JavaScript and JScript code is compiled *first*.
This does not apply to all languages that are finally interpreted.

What the both of you seem to overlook is that compilation and interpretation
can complement each other.
PointedEars
--
var bugRiddenCrashPronePieceOfJunk = (
navigator.userAgent.indexOf('MSIE 5') != -1
&& navigator.userAgent.indexOf('Mac') != -1
) // Plone, register_function.js:16
Jun 27 '08 #63
On Apr 21, 10:21 pm, dhtml wrote:
On Apr 19, 9:02 am, Richard Cornford wrote:
>One of the arguments paraded in favour of these libraries is
that they are examined, worked on and used by very large
numbers of people, and so they should be of reasonably high
quality because with many eyes looking at the code obvious
mistakes should not go unnoticed. My experience of looking
at code from the various 'popular' libraries suggests that
this is a fallacy, because (with the exception of YUI (for
obvious reasons))

Not obvious.

There's plenty of bugs YUI.
Who is talking about bugs? Take this code from the dojo library:-

| if(
| elem == null ||
| ((elem == undefined)&&(typeof elem == "undefined"))
| ){
| dojo.raise("No element given to dojo.dom.setAttributeNS");
| }

The rules for javascript dictate that whenever the -
(elem == undefined) - expression is evaluated (that is, whenever
- elem == null - is false) the result of the expression must be
false, and so the - (typeof elem == "undefined") - expression just
cannot ever be evaluated. The bottom line is that if the author of
that code had understood javascript when writing it the whole thing
would have been:-

if(elem == null){
dojo.raise("No element given to dojo.dom.setAttributeNS");
}

- or possibly:-

if(elem){
dojo.raise("No element given to dojo.dom.setAttributeNS");
}

- as there should be no issues following from pushing other
primitive values that have falseness through the exception
throwing path as well as null and undefined.

The first is not a bug; it does exactly what it was written to, and
does it reliably and consistently. But it is a stupid mistake on the
part of its 'programmer', and survived in the dojo source for long
enough to be observed because nobody involved with dojo knew enough
actual javascript to see that it was a stupid mistake and correct it.

YUI may contain bugs but it does not contain this type of stupid
mistake because at least one person (and it only takes one) knows
javascript well enough to be able to see this type of thing and
stop it (presumably at source by ensuring any potential
transgressors become better informed about the language they are
using).

Now JQuery contains the infamous - ( typeof array != "array" )-
stupid mistake, and Prototype.js (at least version 1.6 (which is
not that long ago)) contained the attempt to conditionally employ
function declarations that only worked by coincidence. Neither
of those are bugs as such (they don't stop the respective code
from 'working' (at least to the limited degree to which it is
designed to 'work')), but they are precisely the type of stupid
mistake that follows from code authors having a minimal
understanding of the language they are using. And where those
authors are part of a collective they don't speak for the
knowledge of the specific author responsible but instead
indicate the level of understanding of the _most_
knowledgeable person involved.
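
(As an aside, a minimal illustration of why that jQuery test can never
do anything useful: typeof has a fixed set of result strings and
"array" is not among them. The isArray helper below is one common
alternative, shown only for contrast; it is not jQuery's code.)

// typeof never yields "array", so (typeof array != "array") is true
// for every possible value and the branch it guards always runs.
alert(typeof []);           // "object"
alert(typeof new Array());  // "object"

// One way to actually test for an array:
function isArray(x) {
  return Object.prototype.toString.call(x) === '[object Array]';
}
alert(isArray([]));      // true
alert(isArray('abc'));   // false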

<snip>
A fix for the bug that was demonstrated seems to be by
simply putting the &amp; last.

String.prototype.unescapeHTML = function() {
return this.replace(/&lt;/g,'<')
.replace(/&gt;/g,'>')
.replace(/&amp;/g,'&');
};

That would need to be tested out though.
No, it does not need to be tested; it is correct. The general
rule is that the character significant in escaping needs to
be processed first when escaping and last when unescaping.
Right?
Absolutely. It is a simple bug, and a mistake that in my
experience is made by nearly every programmer who comes to
the issues of encoding/escaping for the web for the first
time (pretty much no matter what their previous level of
experience in other areas). It is something that I have
learnt to double check, habitually, and that is the reason
that I spotted it so quickly.
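
(A short, hypothetical round trip illustrates the ordering rule; the
variable names are mine, not Prototype's.)

var original = 'a &lt; b';   // the user literally typed "&lt;"

var escaped = original.replace(/&/g, '&amp;')
                      .replace(/</g, '&lt;')
                      .replace(/>/g, '&gt;');   // "a &amp;lt; b"

// Wrong order: handling '&amp;' first creates a new "&lt;" token,
// which the next step then wrongly decodes.
var wrong = escaped.replace(/&amp;/g, '&')
                   .replace(/&lt;/g, '<')
                   .replace(/&gt;/g, '>');      // "a < b" - data corrupted

// Right order: '&amp;' last restores the original text.
var right = escaped.replace(/&lt;/g, '<')
                   .replace(/&gt;/g, '>')
                   .replace(/&amp;/g, '&');     // "a &lt; b"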

Richard.
Jun 27 '08 #64
Andrew Dupont wrote:
On Apr 19, 11:07 pm, Richard Cornford wrote:
>Who is going to be deciding what 'constructive' means in
this context?

Each individual on his own.
Maybe, but there are circumstances where the best advice possible is to
delete something and start again from scratch, but most individuals who
hear that advice don't regard it as constructive when they do.
Or, in other words: say what you want to say, and I'll
brush off anything I think is unwarranted.
Presumably you mean you will brush off anything that you regard as
unwarranted?
I'm not
setting conditions for prior restraint here.
Requiring what you get to be "constructive" is not a condition?
>You mean that if someone is in a 'minority' then they must
be wrong? That is hardly an attitude that would allow
progress through the adoption of new ideas.

I never said anything of the sort. I said the minority need
to do more _persuading_.
OK. Why, what is in it for them?
You stated that these libraries were junk
I very much doubt that I did.
as though it were common knowledge.
If I had it would not be because it was common knowledge, but rather
because it was the case.
Clearly it isn't common knowledge.
There are lots of things that are true but are not common knowledge. And
that is even if you are not taking 'common knowledge' as referring to
what is commonly known by ordinary people (ordinary people mostly being
people who have no idea what javascript is in the first place, and
little interest in knowing).
>You have not demonstrated that these are questions of taste.
The last time we discussed the prototype.js code here in
detail (which was version 1.6, in November last year) it
demonstrated evidence of or its authors (collectively, as
nobody had corrected the code) not understanding how the
code they were writing was going to behave. Seeing that
brings everything into question, form the original design
concepts to every detail of its implementation. And those
are not then questions of taste but the inevitable
consequence of evident ignorance among its developers.

I hold that any technology decision is a question of taste.
Decisions suggest an informed process of deciding. Otherwise we may be
dealing with no more than the accumulated outcome of sequences of random
influences, misconceptions and learnt incantations. If someone writes:-

<script type="javascript">
var url = " ... ";
...
document.write('<scr'+'ipt type="javascript"
src="'+url+'"></scr'+'ipt>');
</script>

- there are things about that that are not a question of taste at all.
That the mark-up is invalid is an objective fact. That there are two
unnecessary concatenation operations is a fact, and that the apparent
justification for those additional concatenation operations has missed
the point is also a fact.

Some decisions to do things, or not to do things are not a question of
taste, but rather the consequences of understanding.
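
(For comparison, a hypothetical corrected sketch of the same idea; the
url value here is purely illustrative, and only the closing tag
sequence actually needs breaking up.)

<script type="text/javascript">
var url = "example.js";   // illustrative value only
document.write('<script type="text/javascript" src="' + url +
               '"><\/script>');
</script>
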
There is no objective "better" in the sense of Ruby vs.
Python, or vi vs. emacs;
Maybe, but there is an objective "better" in the sense of using:-

if(elem == null){
dojo.raise("No element given to dojo.dom.setAttributeNS");
}

- in place of:-

if(
elem == null ||
((elem == undefined)&&(typeof elem == "undefined"))
){
dojo.raise("No element given to dojo.dom.setAttributeNS");
}

- because the latter is just silly in comparison to the former (as they
both have precisely the same outcome).
there is only the subjective "better" - whichever best
serves the user's own needs.
There seems to be an unfortunate tendency among web developers to lose
sight of who the "user" actually is. The user (for web developers) is the
poor sod looking at an alarming little grey box with a yellow 'warning'
triangle just above their web browser's window that says "Your browser
does not support AJAX" and wondering what the hell they are expected to
do about it (get tickets for the next match or something?).
Naturally, this does _not_ mean that everything is relative,
or that it's not worth having passionate arguments thereabout.
I implied as much in my music analogy: friends argue among
themselves over which band is "better," but they all realize
that taste is the ultimate arbiter. These arguments become
tiresome only when people dig trenches and start speaking
in absolutes.
>How do you know that? It seems likely to me that Thomas was
using his memory of the many (more or less detailed)
discussions of Prototype.js code that have happened here
over the past few years to inform a general assessment
of the code.

And I submit that is a matter of taste.
Thomas's memory is a matter of taste?
Bugs are bugs,
Not really. There are bugs and there are bugs. A typo in the middle of a
large block of code is something that can happen to anyone, and it could
also easily be missed by others reviewing that code. A glaring error in
something that experience would teach you to always double check and
also should be exposed in any reasonable testing is something else
entirely.
of course, and we welcome bug reports. But you've gone
further than that; you've inferred from "evidence" that
code in Prototype does not do what its author means for
it to do.
No, I said that the evidence was that Prototype.js was (at least in
November last year) only doing what was (apparently) expected by
coincidence; that it had not actually been programmed to do what it was
doing. I also implied that where that evidence existed it was reasonable
to question the understanding of javascript that informed all of the
design decisions that occurred prior to that code being written; such as
the underlying design approach and the resulting API.

(I have also pointed out that Prototype.js is incredibly slow at doing
pretty much anything complex)
>>I think it's far more constructive to say "Most of us
are not library fans, so you're unlikely to find useful
answers here,"

That would not be a true statement (at least the second
part of it, and the first part if you take the word
'library' in its most general sense).

Then write your own words.
I did.
That way they'll be _from the heart_.
No they would not. They would be from the head.
You know the point I'm trying to make.
Not really.

<snip>
>... . There is a lot to be said for the uncensored
exchange of ideas in public.

The word "censorship" doesn't come within miles of this
thread.
Well this is Usenet so there is no censorship.
I do not own a telecommunications company; I don't have
the means or authority to "censor" anyone.
You would not have the means to censor Usenet even if you did own a
telecommunications company.
I can only imagine the OP was interested in the free
exchange of ideas when he asked you why you thought
jQuery and Prototype were junk.
He (do you have any evidence that he is a 'he'?) did not ask me
anything.

<snip>
>... . From the point of view of someone wanting a better
understanding of javascript or browser scripting who just
happens to be using some 'popular' library then they really
would get the best answers to their questions here, if they
could present their questions in isolation (from all of the
unrelated and irrelevant stuff that those libraries contain).

That last sentence is the answer to such a FAQ.
It is already in the FAQ in as many words.
Even a link to that question and answer would be more
helpful than what has happened in this thread.
Not really. The OP is not asking for specific information on javascript,
and there is no code to post in relation to the question. The question
asked was along the lines of "having learnt something about Prototype.js
should I then spend some time learning something about JQuery". To
which the direct answer appears to have been "no" (if a little more
strongly/colourfully expressed). My answer, in as far as I answered the
question at all, was 'learn javascript and browser scripting first and
then you can make up your own mind'.
>>A large portion of those who post on this newsgroup aren't
participants or even lurkers;

If they post questions then they are participants.

I mean that they weren't participants before their first post.
And they weren't human before they were conceived.
Many posters, I would venture, only come here when they
need help, and therefore aren't already familiar with the
quirks of the community.
There is no need to "venture" that, it is self-evidently true.
>>they come by only when they have problems. They can't
be expected to catch up on the backstory.

They can. Expecting someone to do a few web searches before they ask to
be spoon fed is not too unreasonable.

Please search this newsgroup for the terms "Prototype"
What are you expecting? You give a library the same name as a
significant aspect of the language it is written in and then cannot find
specific references to it in the archives of a newsgroup dedicated to
that language. It was a predictably bad choice of name.
and/or "jQuery" and see how quickly you find a well-summarized
critique of either library.
Who said finding that sort of thing out was going to be quick? I bet the
search would still turn out to be informative even if it could not be
instantaneous.

<snip>
>but redefine them for WebKit and IE because the String#replace
approach is much, much faster in these two browsers (but much,
much slower in FF and Opera).

Can you post a test-case that demonstrates that assertion?
Historically IE has been renowned for its lousy performance
with string manipulation, while Mozilla outperformed everyone
else in that area.

I don't have a test-case. The change was made one year ago by
Thomas Fuchs [1]. You're welcome to ask him, though I suspect
he'll punch me in the sternum for having dragged him into this.
He won't have to. I will just dismiss this as yet another
unsubstantiated rumour.

<snip>
>>To be sure, there are other UA sniffs in Prototype that
hard-liners would decry as unnecessary.

It is not 'unnecessary' that is the questionable aspect of
browser sniffing, but rather that it is technically baseless
and demonstrably ineffective.

You haven't demonstrated that anything is baseless
How would you expect me to demonstrate the lack of any technical
foundation for UA string based browser sniffing? I can hardly point to
something that doesn't exist and say "there is the absence of any
technical foundation for all to see". Of course if there was any
technical foundation then that could be pointed at quite easily, but as
the navigator.userAgent string is a reflection of the HTTP User Agent
header then any such direction must lead to the definition of the header
in the HTTP specification, and that definition pretty much says that the
User Agent header is an arbitrary sequence of zero or more characters
that is not even required to be consistent from one request to the next
(i.e. that it is not specified as being a source of information at all).
or ineffective;
Does that need to be demonstrated (again)? It is known that web browsers
use User Agent headers that are indistinguishable from the default UA
header of IE, so how could it be effective to discriminate between
browsers using the UA string whenever two different browsers use UA
headers that are indistinguishable?
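
(To make the contrast concrete, a hypothetical sketch, not taken from
any of the libraries under discussion:)

// Browser sniffing: trusts an arbitrary, spoofable string. Opera,
// among others, can send an IE-style User Agent string, so this says
// little about what the browser can actually do.
var isIE = navigator.userAgent.indexOf('MSIE') != -1;

// Feature detection: test the facility that is actually needed.
function addListener(el, type, fn) {
  if (el.addEventListener) {
    el.addEventListener(type, fn, false);
  } else if (el.attachEvent) {
    el.attachEvent('on' + type, fn);
  }
}
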
you've only revealed a different set of priorities.
You'd rather have 100% guaranteed behavior
I would certainly rather have consistent and predictable behaviour
before worrying about performance.
even if it meant a wildly-varying performance
graph across browsers.
Where is the evidence for "wildly-varying"? I don't think
escaping/unescaping methods are going to be used frequently enough for
their specific performance to matter that much at all. If you used
them internally, or they were fundamental to using the library in the
first place then their performance would be much more significant.
I'd rather have the reverse.
So you would not be certain what the code was going to do, but you would
know that whatever it did it would take about the same amount of time to
do it wherever it was running? I certainly do not have a taste for that
design philosophy.
>But I suppose that you would not agree that being able to
find an obvious rookie mistake in less than three seconds
of looking (at a library that is already a good few years
old) tends to support the "junk" assessment.

The check-in is only one year old. It is Thomas's bug, but he
is no rookie,
Handsome is as handsome does. But that was not really my point. One of the
things that gets proposed as a justification for libraries of this sort
(a reason for their not being junk by virtue of what they are) is that
with many individuals contributing there are plenty of eyes looking at
the code to be able to find these sorts of things and fix them up front.
But if it takes me three seconds to find what nobody else had noticed
then it must be the case that there is nobody involved looking with my
eyes.
so I can only surmise that we all make silly mistakes
sometimes. Bad luck for him that he managed to stumble
upon your Shibboleth Bug(TM).
Bad luck for everyone else who manage to let it pass unnoticed.
>>A bug was filed on this not too long ago; I believe a
fix has been checked into trunk and will be in the
next release.
>Well, it is a pretty simple fix, and I notice that the subject
of my last substantial criticism of the prototype's code has
also been removed.

We listen to criticism, we read bug reports, and we constantly
search for ways to improve the feedback loop.
That all sounds very 'marketing-speak'.
So does John Resig, by the way, so I'd suggest you file a bug
on jQuery's Trac about the "makeArray" mistake.
Why? Polishing the handrails on the Titanic may have made it more
appealing to look at but didn't change the rate at which it sank after
the design flaw coincided with the iceberg.

Richard.

Jun 27 '08 #65
Matt Kruse wrote:
>On Apr 19, 3:06 pm, Richard Cornford wrote:
>Interestingly I observed Matt Kruse (who is the nearest
thing to a supporter JQuery has among the regular contributors
to this group, and someone who can easily outgun anyone
directly involved in JQuery development when it comes to
javascript)

Wow, given your view of the jQuery dev team, I'm not sure if
that's even close to a compliment ;)
Well, you know that I like to call things the way that I see them ;-)

They need you a lot more than you need them.
>directly asked whoever was responsible for the -
if ( typeof array != "array" ) - code to own up to it in a
post on the JQuery development group. However, when I checked
back a week later to see if anyone had the intellectual
integrity to own up to their mistake I found that Matt's
post had been deleted from the group.

If you're referring to this post:
http://groups.google.com/group/jquer...b54712bd48ec83
then I can still find it.
<snip>

That is odd. I can see your post from my computer at home but still not
from work. I cannot believe that our firewall is capable of being
sufficiently subtle as to be censoring a single post in a thread (and
certainly not without messing the rest of the page up in the process).
It does have some very silly aspects to its configuration, like we
cannot view the MSDN page on the - responseXML - property of HTTP XML
request objects because its URL contains the character sequence made
from the last two letters of "response" and the first letter of "XML",
but that sort of thing should not come into play in this case.

Richard.
Jun 27 '08 #66
kangax wrote:
On Apr 20, 5:46 pm, Richard Cornford wrote:
<snip>
>... none of the existing popular libraries perform anywhere
near fast enough for the web applications I work on, and
they never could.

Is it possible to see the above mentioned web applications?
An invisible web application would not be very easy to use (or sell).

But you mean is it possible for you to see them. If you can convince our
marketing department that you are a potential customer they will happily
demonstrate it to you (in your own offices, anywhere on the planet, and
at your convenience). Their question will be "how much property do you
own/manage?", but if the answers is much less than 1000 building they
are probably not going to be interested.

Richard.

Jun 27 '08 #67
On Apr 15, 10:04 am, liketofindoutwhy <liketofindout...@gmail.com>
wrote:
I am learning more and more Prototype and Script.aculo.us and got the
Bungee book... and wonder if I should get some books on jQuery (jQuery
in Action, and Learning jQuery) and start learning about it too?

Once I saw a website comparing Prototype to Java and jQuery to Ruby...
but now that I read more and more about Prototype, it is said that
Prototype actually came from Ruby on Rails development and the creator
of Prototype created it with making Prototype work like Ruby in mind.
Is jQuery also like Ruby? Thanks so much for your help.
They are not bad... if you don't want to write javascript yourself. It is
better to learn JS. Then, when you read a library's code, you'll know what
it does.

But imagine...
Using Prototype, jQuery etc. you need to load these scripts in all
your pages... whether you use all of their functions or not. They create
variables to store long built-in JS properties. This is unnecessary.

It is better to code your own objects and functions and load only what is
necessary to do specific things. And... about code... if you use many
long calls like document.objasdjhk() and want to keep the code smaller...
compress it with a javascript compressor and keep the original code for
editing.

Prototype and jQuery are "good" works from people who know JS. But they
may not be the best way to implement JS in your pages.
Jun 27 '08 #68
On Apr 23, 3:06 am, Lasse Reichstein Nielsen <l...@hotpop.com> wrote:

<snip>
The combination of first compiling and then executing implements an
interpreter. Nothing prevents an interpreter from using a compiler as
a component, but its behavior is still clearly that of an interpreter:
it executes the program.
Yes, that is a very clear definition. Now, what is the difference
between an interpreter and a virtual machine? PointedEars suggests
they are the same thing. Certainly both execute a program and both
use a compiler as a component.

Instinctively, I know there is a difference between languages such as
Javascript and Ruby on one hand and Java and .NET C# on the other. There is
a trade off of speed for expression that I at first attributed to
compilation vs. interpretation. Recently, I've noticed that C# has
added lambdas and a variant (typeless) type to the latest version.
The syntax is kind of a nightmare compared to Javascript but it means
that the VM is doing the same kind of "interpretation" that the Javascript
interpreter is doing. So maybe the difference between these languages
is that one type is oriented toward compilation and the other is
oriented toward interpretation even though they have evolved towards
each other.

Bob
Jun 27 '08 #69
beegee wrote:
Now, what is the difference between an interpreter and a virtual machine?
PointedEars suggests they are the same thing. Certainly both execute a
program and both use a compiler as a component.
Not necessarily.
Instinctively, I know there is a difference between languages such as
Javascript, Ruby on one hand and Java,.NET C# on the other.
So at least partially you would be let down by your instincts. At least as
for JavaScript, JScript, JScript .NET, and Java, there is no difference
regarding this as should have been clear to you by now.
PointedEars
--
realism: HTML 4.01 Strict
evangelism: XHTML 1.0 Strict
madness: XHTML 1.1 as application/xhtml+xml
-- Bjoern Hoehrmann
Jun 27 '08 #70
beegee <bg*****@gmail.com> writes:
Recently, I've noticed that C# has
added lambdas and a variant (typeless) type to the latest version.
The syntax is kind of a nightmare compared to Javascript but it means
that VM is doing the same kind of "interpretation" that the Javascript
interpreter is doing. So maybe the difference between these languages
is that one type is oriented toward compilation and the other is
oriented toward interpretation even though they have evolved towards
each other.
This is nothing new. Take a look at Common Lisp for a language that has
both extreme expressiveness and compilers that can produce very fast
code. Anyway the border between interpreted and compiled implementations
is very fuzzy (unless you just mean that you can transform the source
code to some pre-processed byte stream or stand-alone executable, which
is really quite easy and doesn't really mean much). Many languages that are
typically viewed as interpreted (perl, for instance) can do that.

This whole discussion is pretty meaningless.

--
Joost Diepenmaat | blog: http://joost.zeekat.nl/ | work: http://zeekat.nl/
Jun 27 '08 #71
On Apr 22, 5:39 pm, Richard Cornford <Richard.Cornf...@googlemail.com>
wrote:
On Apr 21, 10:21 pm, dhtml wrote:
On Apr 19, 9:02 am, Richard Cornford wrote:
One of the arguments paraded in favour of these libraries is
that they are examined, worked on and used by very large
numbers of people, and so they should be of reasonably high
quality because with many eyes looking at the code obvious
mistakes should not go unnoticed. My experience of looking
at code from the various 'popular' libraries suggests that
this is a fallacy, because (with the exception of YUI (for
obvious reasons))
Not obvious.
There's plenty of bugs YUI.

Who is talking about bugs? Take this code from the dojo library:-

| if(
| elem == null ||
| ((elem == undefined)&&(typeof elem == "undefined"))
| ){
| dojo.raise("No element given to dojo.dom.setAttributeNS");
| }

The rules for javascript dictate that whenever the -
(elem == undefined) - expression is evaluated (that is, whenever
- elem == null - is false) the result of the expression must be
false, and so the - (typeof elem == "undefined") - expression just
cannot ever be evaluated. The bottom line is that if the author of
that code had understood javascript when writing it the whole thing
would have been:-

if(elem == null){
dojo.raise("No element given to dojo.dom.setAttributeNS");

}

- or possibly:-

if(elem){
dojo.raise("No element given to dojo.dom.setAttributeNS");

}

- as there should be no issues following from pushing other
primitive values that have falseness through the exception
throwing path as well null and undefined.

The first is not a bug; it does exactly what it was written to, and
does it reliably and consistently. But it is a stupid mistake on the
part of its 'programmer', and survived in the dojo source for long
enough to be observed because nobody involved with dojo knew enough
actual javascript to see that it was a stupid mistake and correct it.

YUI may contain bugs but it does not contain this type of stupid
mistake because at least one person (and it only takes one) knows
javascript well enough to be able to see this type of thing and
stop it (presumably at source by ensuring any potential
transgressors become better informed bout the language they are
using).
I really didn't want to be goaded into posting a dumb-and-dumber
competition with other people's code, but you've left me with not very
good choices.
YUI has some pretty bad/obvious bugs in crucial places: augmentObject,
hasOwnProperty, Dom.contains. For example, hasOwnProperty:-

hasOwnProperty: function(o, prop) {
if (Object.prototype.hasOwnProperty) {
return o.hasOwnProperty(prop);
}

return !YAHOO.lang.isUndefined(o[prop]) &&
o.constructor.prototype[prop] !== o[prop];
},

- Which will throw errors in IE when - o - is a host object and
return wrong results in Opera when - o - is window. augmentObject:-

augmentObject: function(r, s) {
    if (!s||!r) {
        throw new Error("Absorb failed, verify dependencies.");
    }
    var a=arguments, i, p, override=a[2];
    if (override && override!==true) { // only absorb the specified properties
        for (i=2; i<a.length; i=i+1) {
            r[a[i]] = s[a[i]];
        }
    } else { // take everything, overwriting only if the third parameter is true
        for (p in s) {
            if (override || !r[p]) {
                r[p] = s[p];
            }
        }

        YAHOO.lang._IEEnumFix(r, s);
    }
},
It is a questionable strategy to do object augmentation on the prototype
chain of the supplier. It would be better to use hasOwnProperty to
filter the stuff in the supplier's prototype chain out. Next, if the
receiver has a property p with a false-ish value, then the override
flag is irrelevant.
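
(A hypothetical illustration of that last point, assuming YUI is
loaded and augmentObject behaves as quoted above; the object names
are mine.)

var receiver = { count: 0, label: '' };
var supplier = { count: 99, label: 'x' };

// No override requested, but because the guard is (override || !r[p])
// rather than a test for the property's existence, the falsy values
// 0 and '' are silently overwritten anyway.
YAHOO.lang.augmentObject(receiver, supplier);
alert(receiver.count);   // 99
alert(receiver.label);   // "x"

// An existence test, e.g. (override || !(p in r)), would not have
// that problem.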

Dojo calls object augmentation "extend" which to me seems to be
misleading.

isAncestor: function(haystack, needle) {
haystack = Y.Dom.get(haystack);
needle = Y.Dom.get(needle);

if (!haystack || !needle) {
return false;
}

if (haystack.contains && needle.nodeType && !isSafari)
{ // safari contains is broken
YAHOO.log('isAncestor returning ' +
haystack.contains(needle), 'info', 'Dom');
return haystack.contains(needle);
}
else if ( haystack.compareDocumentPosition &&
needle.nodeType ) {
YAHOO.log('isAncestor returning ' + !!
(haystack.compareDocumentPosition(needle) & 16), 'info', 'Dom');
return !!(haystack.compareDocumentPosition(needle) &
16);
} else if (needle.nodeType) {
// fallback to crawling up (safari)
return !!this.getAncestorBy(needle, function(el) {
return el == haystack;
});
}
YAHOO.log('isAncestor failed; most likely needle is not an HTMLElement', 'error', 'Dom');
return false;
}

This would return inconsistent results, depending on the browser.
For example:
YAHOO.util.Dom.isAncestor(document.body, document.body);

Though the last is not as obvious a mistake as the others.

There are considerably questionable practices in the Event library.

The connection manager is horribly designed. The fact that it attempts
to do form serialization within itself is just a horrible decision.
If the author had been forced to write a test for that, he'd probably
have moved that form serialization code somewhere else, to make it
easier to test (easier coverage verification).

[snip]
And where those
authors are part of a collective they don't speak for the
knowledge of the specific author responsible but instead
indicate the level of understanding of the _most_
knowledgeable person involved.
This can lead to blocking code reviews and scapegoating. With a test
driven approach, the only thing to blame is the process, and that's
fixable (besides the fact that the test never has any hard feelings).

Without tests, you get things like blood commits and code freezes.
Some libraries actually do code freezes. And they have at least one
expert. And they have bugs. Dumb ones.
<snip>
A fix for the bug that was demonstrated seems to be by
simply putting the &amp; last.
String.prototype.unescapeHTML = function() {
return this.replace(/&lt;/g,'<')
.replace(/&gt;/g,'>')
.replace(/&amp;/g,'&');
};
That would need to be tested out though.

No it does not need to be tested, it is correct. The general
rule is that the character significant in escaping needs to
be processed first when escaping and last when unescaping.
It addresses the problem that was demonstrated in your example. It
does not, however, take into consideration the possibility that - this
- could contain * any * other entities.

If the need to handle - &quot; - or - &#38; - (which is also '&') got
added in later, they'd need to be reviewed by the one, sole expert, to
make sure the person who wrote the amending code didn't make a rookie
mistake. A test could clearly prove it worked.

var s = "&quot;".replace(/&quot;/g, '"');

So the fix addresses only one concern.

another consideration is that String.prototype.escapeHTML should be
stable, but if there's a bug, and dependencies on that bug, then the
fixing of the bug becomes complicated. It may very well be the case
that some novice programmer used escapeHTML, found that it didn't work
right, made some adjustments in his implementation to compensate for
that bug. In essence, his implementation is now depending on that bug.
This is where I see adding things to built-in prototypes to be risky.

If the programmer had made a method, then that method could always be
deprecated in a future release, if found to be problematic.

So, to sum it up, my recommendations:
1) write a test
2) don't put the methods on String.prototype because they might change
later.
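
For example (hypothetical names, just to illustrate both points), the
method could live on a library object and a crude test could check the
round-trip cases:

var MYLIB = {
    unescapeHTML: function(s) {
        return s.replace(/&lt;/g, '<')
                .replace(/&gt;/g, '>')
                .replace(/&amp;/g, '&');
    }
};

// each pair is [input, expected output]
var cases = [
    ['&amp;lt;', '&lt;'],   // must not be double-unescaped
    ['&amp;', '&'],
    ['&lt;b&gt;', '<b>']
];
for (var i = 0; i < cases.length; i++) {
    if (MYLIB.unescapeHTML(cases[i][0]) !== cases[i][1]) {
        alert('FAIL: ' + cases[i][0]);
    }
}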
Right?

Absolutely. It is a simple bug, and a mistake that in my
experience is made by nearly every programmer who comes to
the issues of encoding/escaping for the web for the first
time (pretty much no matter what their previous level of
experience in other areas). It is something that I have
learnt to double check, habitually, and that is the reason
that I spotted it so quickly.
That was my first time writing an unescape function in Javascript. I
think I might have written one in Java several years ago in a response
filter exercise, though.

If I had to write something more comprehensive to account for more
entities, I'd probably consider looking into inverting control to the
browser's parser using a combination of newDiv.innerHTML and
newDiv.textContent|innerText

document.body.textContent = "&";
document.body.innerHTML; // &amp;

document.body.innerHTML = "&quot;"
document.body.textContent; // "

Obviously not using document.body, but a newly created node. I would
probably write some tests for that, including the cases you posted,
make sure they all fail, then write out some code (the code could be
changed in the future, since there are tests).
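
A sketch of that sort of thing (an untested outline only; textContent
for most browsers, innerText as a fallback for older IE, with the
caveat that any markup in the input gets parsed rather than preserved):

function decodeEntities(html) {
    var div = document.createElement('div');
    div.innerHTML = html;
    return (typeof div.textContent == 'string') ?
            div.textContent : div.innerText;
}

function encodeEntities(text) {
    var div = document.createElement('div');
    if (typeof div.textContent == 'string') {
        div.textContent = text;
    } else {
        div.innerText = text;
    }
    return div.innerHTML;
}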

Garrett
Richard.
Jun 27 '08 #72
On Apr 20, 1:52 pm, Lasse Reichstein Nielsen <l...@hotpop.com> wrote:

[snip]
People here will keep complaining about the shoddy quality of
libraries while that quality, slowly, increases.
I keep wondering why the process is so slow. The cores of all the
major libraries like Prototype.js, jQuery, etc are all relatively
small (~a few thousand lines). In a month I'm sure someone could go
through these libraries in great detail and find and fix many
problems.

One reason I can think up is the library granularity, packaging and
API are not of interest to experts who would know how to fix the
internals. That is, even if these libraries were technically perfect
the experts that complain about them still wouldn't use them. I think
this is primarily true for Prototype.js because augmenting objects you
don't own is something almost everyone stops doing very early on. I
don't know how the Prototype.js folks can find this practice
acceptable as it has burned them quite a few times.
This goes on until
the time when libraries have won, and anybody not using them is
putting themselves at a disadvantage.
I think that time has already come. It seems most c.l.j regulars who
complain about the mainstream libraries do maintain their own
libraries in various forms. Some of the various forms have interesting
APIs and priorities that are quite different than popular libraries.

I think the time will come very soon that maintaining one's own
library may not be feasible because the size of the library will be
too big. The boss will want big fancy widgets faster than before
because someone using a library can produce them quickly even though
they may have known bugs or have bloated code with large unused
portions being served to the client.

Peter
Jun 27 '08 #73
On Apr 26, 7:11 pm, VK <schools_r...@yahoo.com> wrote:
On Apr 26, 9:42 am, Peter Michaux <petermich...@gmail.com> wrote:
I keep wondering why the process is so slow. The cores of all the
major libraries like Prototype.js, jQuery, etc are all relatively
small (~a few thousand lines). In a month I'm sure someone could go
through these libraries in great detail and find and fix many
problems.

Easy to say - really hard to implement. It may take a couple of hours
to "streamline" some code to match a programmer's idea of the best. But
such freedom of action is available only to a new player on the market.
For a library long used in serious commercial solutions the main
priority is not "beauty" or even effectiveness of code; these are all
secondary matters. The main mandatory priority is backward
compatibility - not only with the previous versions of the library
itself, but also with all current solutions using this library.
By that criterion, Prototype.js is a poor choice. The assertion
doesn't stack up anyway - a site that has been developed with a
particular version of a library is under no compulsion to change to
newer versions as they become available.

Anyway, I understood Peter's comment to be in regard to fixing bugs
and poor design in the actual code, not to change the API.

Sometimes it blocks the possibility of updating a segment even if it's
clearly ineffective or wrong, because some big solutions are using
workarounds dependent on that particular segment's structure.
Anyone who writes code that is dependent on aberrant behaviour
deserves what they get. I see no evidence in Prototype.js that they
refuse to fix bugs because it will cause previous versions to break if
the new version is substituted (noting that there is no compulsion to
use newer versions anyway).
This is the main rule and trend of commercial programming: long
used software has an established brand and customer base, but is unable
to be very flexible due to backward compatibility requirements.
Provide a single example of where the authors of Prototype.js have
refused to fix a bug because it will break backward compatibility.
[...]
have bloated code with large unused
portions being served to the client.

This kind of complaint, often seen at c.l.j., is really strange. OOP
by itself doesn't provide easy mechanics for "per chunk" code usage.
Rubbish. The fundamental design of a library can be extremely modular
if that is the designer's choice. It just happens that some popular
libraries are not designed to be modular.

With the common namespace protection
pattern where the whole library is one global object one would need to
write an AI enforced compiler to get only the needed parts and to get
them back together properly.
Not at all - it has been shown here that using:

var XXLIB = {
fnOne: function(){...},
fnTwo: function(){...},
...
};

provides no more (and possibly less) of a "name space" than the
effectively equivalent:

function XXLIB_fnOne(){...}
function XXLIB_fnTwo(){...}

however the former pattern is used more frequently as it seems more OO
than the latter. Neither pattern forces any kind of internal
dependency. You should try to track down the various ways of creating
an array in Prototype.js.

String.prototype.toArray is the trivial and limited:

toArray: function() {
return this.split('');
},

the $A function uses the bog standard:

function $A(iterable) {
if (!iterable) return [];
if (iterable.toArray) return iterable.toArray();
var length = iterable.length || 0, results = new Array(length);
while (length--) results[length] = iterable[length];
return results;
}

however Enumerable.toArray follows a torturous route through map, Hash
and several other parts of the library. So what does that tell you
about the author's intentions to modularise the code? Clearly it
wasn't a priority (which isn't necessarily a criticism, it's a
statement of fact).

Saying it is impossible to write modular OO code just because a few
popular libraries aren't modular is not a particularly convincing
argument.

And still, in the majority of cases it
will result in copying the entire inheritance chain - so such a
compiler would be nothing but an error-prone waste of time.
There is no reason why a library must be based on an inheritance
chain, nor does that approach necessarily make modularisation more
difficult. Prototype.js takes the approach of extending nearly all
the built-in objects other than Object, however that doesn't
necessarily make one part of the code dependent on another - it is a
consequence of how the library has been written.

OOP library usage is
per library, not per method based.
Continually repeating the same statement does not make it so.
Internal dependencies are not necessarily a fundamental feature of OO
programming per se - the reverse *should* be the norm.
I do not understand why the generic
feature of the OOP paradigm is used to criticize Javascript libraries
alone as if it would be some exclusive Javascript default.
Because it isn't a fundamental "feature" of OO design.
In OOP the
regular solution to the problem is having some commonly agreed core
libraries guaranteed to be present and then developing one's own
libraries as extra layer(s) atop the core.
The same old assertion.
Javascript is slowly moving in this
direction.
It is?
The first necessary step was to let the market clean up
the initial anarchy of N different libraries on each corner. This step
is pretty much accomplished as of this year, with Prototype being an
industry standard and jQuery the second industry standard compliant
library.
"Industry standard compliant library". That is there any such thing
as "industry standard" in client-side browser scripting?
--
Rob
Jun 27 '08 #74
VK
On Apr 26, 4:01 pm, RobG <rg...@iinet.net.au> wrote:
it has been shown here that using:

var XXLIB = {
fnOne: function(){...},
fnTwo: function(){...},
...

};

provides no more (and possibly less) of a "name space" than the
effectively equivalent:

function XXLIB_fnOne(){...}
function XXLIB_fnTwo(){...}
With the latter ("Macromedia notation") even more efficient at least
for JScript where DispIDs are not reusable so any lookup chain
abbreviation brings better performance. Alas the ol'good "Macromedia
notation" is currently a victim of programming fashion. Namely it is
"out of fashion".
OOP library usage is
per library, not per method based.

Continually repeating the same statement does not make it so.
Internal dependencies are not necessarily a fundamental feature of OO
programming per se - the reverse *should* be the norm.
Possibly we are talking about different OOP ideas. The one proposed in
the conventional CS departments assumes classes created on the basis of
other classes (superclasses), where the choice of the superclass to
extend is based on the required minimum of augmentation to receive a new
class with the needed features. This way the modularity of a particular
class depends solely and exclusively on the position of such a class in
the inheritance chain. If some class extends Object directly then it
is rather simple to include it directly in some other block. With a
class being at the top or even the middle of the chain, its inclusion also
requires the inclusion of the whole underlying chain segment. Because no
educated guess can be made a priori about the position of the class X
in library Y, OO modularity overall is low - yet its maintainability
is high, as I said. It is possible to imagine (and to make) a library
where each and every class directly extends Object, no matter how
close some classes would be to each other. Just don't call it an OO based
library then.
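
Roughly, in javascript terms (purely illustrative):

function Widget() {}                 // extends Object directly

function Panel() {}
Panel.prototype = new Widget();      // Panel depends on Widget

function TabPanel() {}
TabPanel.prototype = new Panel();    // TabPanel drags in Panel and Widget

// Widget can be copied out on its own; reusing TabPanel elsewhere
// means shipping all three constructors.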
In OOP the
regular solution to the problem is having some commonly agreed core
libraries guaranteed to be present and then developing one's own
libraries as extra layer(s) atop the core.

The same old assertion.
"The common knowledge basics mentioning" would be more appropriate :-)
Again - unless we are talking about different OO ideas.
Javascript is slowly moving in this
direction.

It is?
Yep.
The first necessary step was to let the market clean up
the initial anarchy of N different libraries on each corner. This step
is pretty much accomplished as of this year, with Prototype being an
industry standard and jQuery the second industry standard compliant
library.

"Industry standard compliant library". That is there any such thing
as "industry standard" in client-side browser scripting?
Of course. Say - just a small sample - make a library where the $
identifier is used for your own purposes, not related to DOM ID
lookup. Now try to sell it without fixing it. Report the results. ;-)
Jun 27 '08 #75
On Apr 26, 5:35 am, VK <schools_r...@yahoo.com> wrote:
On Apr 26, 4:01 pm, RobG <rg...@iinet.net.au> wrote:
it has been shown here that using:
var XXLIB = {
fnOne: function(){...},
fnTwo: function(){...},
...
};
provides no more (and possibly less) of a "name space" than the
effectively equivalent:
function XXLIB_fnOne(){...}
function XXLIB_fnTwo(){...}
I agree there is no difference in the namespace protection gained by
either solution above. I've brought this up occasionally on c.l.js.
The response has been that there is no difference in namespace
protection, but there is one in performance. (There may be a slow
hashing algorithm in that browser?) Richard Cornford wrote that at
least one browser does not do
well when there are many global objects. I believe it is somewhere in
this thread

<URL: http://groups.google.com/group/comp.lang.javascript/browse_frm/thread/494e1757fa51fe3f/a504c64b42db8c8d>

Richard seems to like a third option which I think works like this

var localFnOne = XXLIB('fnOne');

which gives the library a chance to "build" the fnOne function. This
also seems to encourage making local copies of library functions
inside a function, which is faster when the function calling the
library runs. These local copies, however, do seem to encourage early
binding. That is, the library "fnOne" function cannot be redefined
unless there is special effort (not that difficult) in the library
design to allow for that. I've thought about Richard's system quite a
bit and haven't thought of a compelling advantage that the earlier two
versions don't have.
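
A very rough sketch of that kind of interface (hypothetical names, not
Richard's actual code) might be:

var XXLIB = (function() {
    var builders = {
        fnOne: function() {
            // feature tests could go here, so the returned function
            // is chosen once, when it is first asked for
            return function() { /* ... */ };
        },
        fnTwo: function() {
            return function() { /* ... */ };
        }
    };
    return function(name) {
        return builders[name] ? builders[name]() : null;
    };
})();

// usage: take a local copy once, then call it repeatedly
var localFnOne = XXLIB('fnOne');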
With the latter ("Macromedia notation") even more efficient at least
for JScript where DispIDs are not reusable so any lookup chain
abbreviation brings better performance. Alas the ol'good "Macromedia
notation" is currently a victim of programming fashion. Namely it is
"out of fashion".
As I wrote above, I have been told that some browser(s) are slower with
many global symbols. I haven't verified that myself. I much prefer the
idea of the Macromedia style as I don't use underscores, so they could
easily be reserved for the concept of namespacing, and the dot could be
saved for when a conceptually real OOP object with mutable state is
involved.

One thing I don't like about using a dot for namespacing is that
someone might use "this" in one of the functions to refer to the
namespace object. That means local copies cannot be made trivially. For
example,

var XXLIB = {
fnOne: function(){...},
fnTwo: function(){this.fnOne()},
// ...
};

and then in local code

var fnTwo = XXLIB.fnTwo;

requires using apply

fnTwo.apply(XXLIB, [])
This would be difficult to make necessary with the Macromedia solution
or with Richard's solution.

Peter
Jun 27 '08 #76
VK
On Apr 26, 7:42 pm, Peter Michaux <petermich...@gmail.com> wrote:
Richard Cornford wrote that at least one browser does not do
well when there are many global objects. I believe it is somewhere in
this thread
<URL:http://groups.google.com/group/comp.lang.javascript/browse_frm/thread...>
There is nothing obvious about this problem in the linked thread.
Maybe I looked at the wrong place? Overall, name me a browser that
would _increase_ performance with more global vars created :-)

At the same time I am not aware of a browser that would be
_particularly_ bad with numerous global vars - up to the point of an
obvious performance decrease in comparison with other browsers.

At the same time, long lookup chains a.b.c.d.e.f etc. do noticeably
impact at least one browser with non-reusable DispIDs - IE/
JScript.
While preparing to sell my SVL library (Superimposed Vector Language,
a layer interface for SVG+VML) I couldn't get a satisfactorily smooth
rotation of complex 3D shapes on 1.x GHz machines, which was not
acceptable. Then I just rebuilt the entire library in the old top-level
based "Macromedia notation" style and things came to life right away. I
don't mean one could write new levels of Quake in SVL after that :-) -
but the performance became commercially satisfactory for the customer.
Jun 27 '08 #77
On Apr 26, 10:03 am, VK <schools_r...@yahoo.com> wrote:
On Apr 26, 7:42 pm, Peter Michaux <petermich...@gmail.com> wrote:
Richard Cornford wrote that at least one browser does not do
well when there are many global objects. I believe it is somewhere in
this thread
<URL:http://groups.google.com/group/comp.lang.javascript/browse_frm/thread...>

There is nothing obvious about this problem in the linked thread.
It seems that IE6 performs very well when one level of object
namespacing is used. These results in favor of IE and a level of
object namespacing seem too good to be true. I've tried slight
variations on the tests below with similar results.

TEST ONE --------------------------------------------------------
430 ms FF2
430 ms IE6

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>Test 1</title>
</head>
<body>

<script type="text/javascript">

var symbols = []
for (var i=0; i<2000; i++) {
var name = 'sym' + i;
symbols.push(name)
this[name] = function() {};
}

var tic = (new Date());
for (var i=0; i<20; i++) {
for (var j=0, jlen=symbols.length; j<jlen; j++) {
this[symbols[j]]();
}
}
document.write((new Date()).getTime() - (tic).getTime());

</script>

</body>
</html>
TEST TWO ---------------------------------------------------------
440 ms FF2
120 ms IE6 <------ Wow!

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>Test 2</title>
</head>
<body>

<script type="text/javascript">

var symbols = []
var namespace = {};
for (var i=0; i<2000; i++) {
var name = 'sym' + i;
symbols.push(name)
namespace[name] = function() {};
}

var tic = (new Date());
for (var i=0; i<20; i++) {
for (var j=0, jlen=symbols.length; j<jlen; j++) {
namespace[symbols[j]]();
}
}
document.write((new Date()).getTime() - (tic).getTime());

</script>

</body>
</html>

Jun 27 '08 #78
VK
On Apr 27, 11:36 pm, Peter Michaux <petermich...@gmail.com> wrote:
It seems that IE6 performs very well when one level of object
namespacing is used. These results in favor of IE and a level of
object namespacing seem too good to be true. I've tried slight
variations on the tests below with similar results.

TEST ONE --------------------------------------------------------
430 ms FF2
430 ms IE6

[snip - Test 1 markup, quoted in full in the previous post]

TEST TWO ---------------------------------------------------------
440 ms FF2
120 ms IE6 <------ Wow!

[snip - Test 2 markup, quoted in full in the previous post]
The results are amazing :-) but not for the reason you are possibly
thinking of. It doesn't matter if one has top-level or lower-level
functions. The "slowdown key" is in using the window host object in DispID
resolution. I know that you - as many - have been zombified by
"window = this = Global". It is not true in general, and it is
especially wrong for IE, which has to be ready to accommodate and
intercommunicate between two completely different languages: JScript and
VBScript. Here window acts as a mediator level between the two Globals of
the two engines, and any intensive use of it for DispID resolution
hits the performance hard. In your second
sample replace
namespace[symbols[j]]();
with
this.namespace[symbols[j]]();
and "enjoy" the result.

Also try this for fun (just don't take it as an eval promo, please):

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>Test 1</title>
</head>
<body>

<script type="text/javascript">
for (var i=0; i<2000; i++) {
var name = 'n' + i;
eval(name+'=new Function');
}

var tic = (new Date());

for (var i=0; i<2000; i++) {
var name = 'n' + i;
eval(name+'()');
}
document.write((new Date()).getTime() - (tic).getTime());

</script>

</body>
</html>
Jun 27 '08 #79
On Apr 26, 10:35 pm, VK <schools_r...@yahoo.com> wrote:
On Apr 26, 4:01 pm, RobG <rg...@iinet.net.au> wrote:
[...]
OOP library usage is
per library, not per method based.
Internal dependencies are not necessarily a fundamental feature of OO
programming per se - the reverse *should* be the norm.

Possibly we are talking about different OOP ideas. The one proposed in
the conventional CS departments assumes classes created on the basis of
other classes (superclasses), where the choice of the superclass to
extend is based on the required minimum of augmentation to receive a new
class with the needed features.
I was just trying to point out that that isn't necessarily a
consequence of OO programming. Similar dependencies can be created in
a library of functions that aren't object oriented where there are
dependencies of functions based on lower (or higher, depending on your
perspective) tiers of functions, which is essentially what a class
hierarchy is.

This way the modularity of a particular
class depends solely and exclusively on the position of such a class in
the inheritance chain. If some class extends Object directly then it
is rather simple to include it directly in some other block. With a
class being at the top or even the middle of the chain, its inclusion also
requires the inclusion of the whole underlying chain segment. Because no
educated guess can be made a priori about the position of the class X
in library Y, OO modularity overall is low - yet its maintainability
is high, as I said.
"Maintainability" is a double edged sword in this case. It becomes
very difficult to modify a higher level class if there are many, many
sub-classes based on it (as you said yourself in regard to backwards
compatibility).
It is possible to imagine (and to make) a library
where each and every class directly extends Object, no matter how
close some classes would be to each other. Just don't call it an OO based
library then.
I think the term "object based" fits javascript much better.

In OOP the
regular solution to the problem is having some commonly agreed core
libraries guaranteed to be present and then developing one's own
libraries as extra layer(s) atop the core.
The same old assertion.

"The common knowledge basics mentioning" would be more appropriate :-)
Again - unless we are talking about different OO ideas.
No, I was just in a crappy mood. :-)

[...]
"Industry standard compliant library". That is there any such thing
as "industry standard" in client-side browser scripting?

Of course. Say - just a small sample - make a library where the $
identifier is used for your own purposes, not related to DOM ID
lookup. Now try to sell it without fixing it. Report the results. ;-)
You might have chosen a better example; I wouldn't use $ in any part
of an identifier name in a javascript program as the projects I work
on use a large number of server-generated identifiers like
xyz00$pppMmmmm and xyz00$UserStatus1$FullName.

It also reminds me of other programming environments where $ has a
particular meaning. As far as I know, the usage stems from using $
(elementID) clientside and $elementID serverside. It may have seemed
cool at the time, but could be classed as a newbie mistake that never
got fixed because a whole bunch of other noobs picked up on it as
being cool.
--
Rob
Jun 27 '08 #80
dhtml wrote:
On Apr 22, 5:39 pm, Richard Cornford wrote:
>On Apr 21, 10:21 pm, dhtml wrote:
>>On Apr 19, 9:02 am, Richard Cornford wrote:
One of the arguments paraded in favour of these libraries is
that they are examined, worked on and used by very large
numbers of people, and so they should be of reasonably high
quality because with many eyes looking at the code obvious
mistakes should not go unnoticed. My experience of looking
at code from the various 'popular' libraries suggests that
this is a fallacy, because (with the exception of YUI (for
obvious reasons))
>>Not obvious.
>>There's plenty of bugs in YUI.

Who is talking about bugs? Take this code from the dojo
library:-
<snip>
>The rules for javascript dictate that whenever the -
(elem == undefined) - expression is evaluated (that is,
whenever - elem == null - is false) the result of the
expression must be false, and so the -
(typeof elem == "undefined") - expression just cannot
ever be evaluated. The bottom line is that if the author
of that code had understood javascript when writing it
the whole thing would have been:-

if(elem == null){
dojo.raise("No element given to dojo.dom.setAttributeNS");

}

- or possibly:-

if(elem){
dojo.raise("No element given to dojo.dom.setAttributeNS");

}

- as there should be no issues following from pushing other
primitive values that have falseness through the exception
throwing path as well as null and undefined.

The first is not a bug; it does exactly what it was written
to, and does it reliably and consistently. But it is a
stupid mistake on the part of its 'programmer', and
survived in the dojo source for long enough to be observed
because nobody involved with dojo knew enough actual
javascript to see that it was a stupid mistake and correct
it.

YUI may contain bugs but it does not contain this type of
stupid mistake because at least one person (and it only
takes one) knows javascript well enough to be able to see
this type of thing and stop it (presumably at source by
ensuring any potential transgressors become better informed
about the language they are using).

I really didn't want to be goaded into posting a dumb and
dumber competition with other people's code, but you've left
me with not very good choices.
You had a choice.
YUI has some pretty bad/obvious bugs in crucial places.
augmentObject, hasOwnProperty. Dom.contains:-

hasOwnProperty: function(o, prop) {
if (Object.prototype.hasOwnProperty) {
return o.hasOwnProperty(prop);
}

return !YAHOO.lang.isUndefined(o[prop]) &&
o.constructor.prototype[prop] !== o[prop];
},

- Which will throw errors in IE when - o - is a host
object and return wrong results in Opera when - o - is
window. augmentObject:-
So? Does the documentation propose that this method is for use with
objects other than native ECMAScript objects?
augmentObject: function(r, s) {
if (!s||!r) {
throw new Error("Absorb failed, verify dependencies.");
}
var a=arguments, i, p, override=a[2];
if (override && override!==true) { // only absorb the
specified properties
Why do you find it so difficult to cope with posting code that is not
mangled by line wrapping?
for (i=2; i<a.length; i=i+1) {
r[a[i]] = s[a[i]];
}
} else { // take everything, overwriting only if the
third parameter is true
for (p in s) {
if (override || !r[p]) {
r[p] = s[p];
}
}

YAHOO.lang._IEEnumFix(r, s);
}
},
It is a questionable strategy to do object augmentation
on the prototype chain of the supplier.
So a "questionable strategy" not a bug?
It would be better to use hasOwnProperty to
filter the stuff in the supplier's prototype chain out.
No, it would not. It may, under some circumstances, be better, but it
also may not be; it depends on what outcome you are after.
Next, if the receiver has a property p with a false-ish
value, then the override flag is irrelevant.
That may be how the code was programmed, but what makes it a bug?
Dojo calls object augmentation "extend" which to me seems
to be misleading.
Misleading code may not be great, much as obscure code may not be great,
but where are the bugs you spoke of?
isAncestor: function(haystack, needle) {
haystack = Y.Dom.get(haystack);
needle = Y.Dom.get(needle);

if (!haystack || !needle) {
return false;
}

if (haystack.contains && needle.nodeType && !isSafari)
{ // safari contains is broken
YAHOO.log('isAncestor returning ' +
haystack.contains(needle), 'info', 'Dom');
return haystack.contains(needle);
}
else if ( haystack.compareDocumentPosition &&
needle.nodeType ) {
YAHOO.log('isAncestor returning ' + !!
(haystack.compareDocumentPosition(needle) & 16), 'info', 'Dom');
return !!(haystack.compareDocumentPosition(needle)
& 16);
} else if (needle.nodeType) {
// fallback to crawling up (safari)
return !!this.getAncestorBy(needle, function(el) {
return el == haystack;
});
}
YAHOO.log('isAncestor failed; most likely needle is
not an HTMLElement', 'error', 'Dom');
return false;
}

This would return inconsistent results, depending on the browser.
For example:
YAHOO.util.Dom.isAncestor(document.body, document.body);
What does the documentation have to say about the expected arguments?
Though the last is not as obvious a mistake as the others.
So there was no evidence of any bugs in the others and this one is less
obvious than they were?
There are considerably questionable practices in the Event
library.
What have "questionable practices" got to do with your suggestion that
you may be presenting "pretty bad/obvious bugs in crucial places"?
The connection manager is horribly designed. The fact that
it attempts to do form serialization within itself is
just a horrible decision. If the author had been forced
to write a test for that, he'd probably have moved that
form serialization code somewhere else, to make it easier
to test (easier coverage verification).
So you don't see the difference between writing code that logically
cannot be executed and your opinions about how things should be
designed? Unfortunately your not perceiving that distinction goes quite
some way towards bringing into question your opinions on how code should be
designed.
[snip]
>And where those
authors are part of a collective they don't speak for the
knowledge of the specific author responsible but instead
indicate the level of understanding of the _most_
knowledgeable person involved.

This can lead to blocking code reviews and scapegoating.
What "can lead to ..."?
With a test driven approach, the only thing to blame is the
process, and that's fixable (besides the fact that the test
never has any hard feelings).
A test driven approach helps nothing when the people designing those
tests are not capable of stressing their code to the point of showing
where and why it falls over.
Without tests, you get things like blood commits and code
freezes. Some libraries actually do code freezes. And they
have at least one expert. And they have bugs. Dumb ones.
So if YUI is not the library with these bugs (just "questionable
practices", in your opinion) why aren't you naming it here?
><snip>
>>A fix for the bug that was demonstrated seems to be by
simply putting the &amp; last.
>>String.prototype.unescapeHTML = function() {
return this.replace(/&lt;/g,'<')
.replace(/&gt;/g,'>')
.replace(/&amp;/g,'&');
};
That would need to be tested out though.

No it does not need to be tested, it is correct. The general
rule is that the character significant in escaping needs to
be processed first when escaping and last when unescaping.

It addresses the problem that was demonstrated in your
example. It does not, however, take into consideration
the possibility that - this - could contain * any * other
entities.
No, but it was not coded to do that.
If the need to handle - &quot; - or - &#38; - (which is also
'&') got added in later, they'd need to be reviewed by the
one, sole expert, to make sure the person who wrote the
amending code didn't make a rookie mistake. A test could
clearly prove it worked.
Obviously.
var s = "&quot;".replace(/&quot;/g, '"');

So the fix addresses only one concern.
Yes, it addresses the thing that stops the pair of encoding/decoding
methods being symmetrical.
another consideration is that String.prototype.escapeHTML
should be stable, but if there's a bug, and dependencies on
that bug, then the fixing of the bug becomes complicated.
It may very well be the case that some novice programmer used
escapeHTML, found that it didn't work right, made some
adjustments in his implementation to compensate for that bug.
In essence, his implementation is now depending on that bug.
This is where I see adding things to built-in prototypes to
be risky.
That is a truly lousy argument for not augmenting built-in prototypes,
as it is just as true for any aspect of any public API in any
general-purpose library.
If the programmer had made a method, then that method
could always be deprecated in a future release, if found
to be problematic.
What is to stop a library that augments the built-in prototypes from
deprecating the methods it adds to the prototype?
So, to sum it up, my recommendations:
1) write a test
A reasonable suggestion, but a little superficial. The assertion is that
there was a test for these methods, but that in itself did not expose
the issue.
2) don't put the methods on String.prototype because they
might change later.
You might as well say 'don't put methods anywhere as they might change
later'.
>>Right?

Absolutely. It is a simple bug, and a mistake that in my
experience is made by nearly every programmer who comes to
the issues of encoding/escaping for the web for the first
time (pretty much no matter what their previous level of
experience in other areas). It is something that I have
learnt to double check, habitually, and that is the reason
that I spotted it so quickly.
That was my first time writing an unescape function in
Javascript.
But you did have the advantage of knowing that there was something up
with the code as it had been written, which usually makes finding and
fixing an error easier.
I think I might have written one in Java several years ago
in a response filter exercise, though.

If I had to write something more comprehensive to account
for more entities, I'd probably consider looking into
inverting control to the browser's parser using a combination
of newDiv.innerHTML and newDiv.textContent|innerText
That is what Prototype.js's other versions of this method do (hence the
cross-browser inconsistencies as it means that those methods will
handle other entities where these version will not).
document.body.textContent = "&";
document.body.innerHTML; // &amp;

document.body.innerHTML = "&quot;"
document.body.textContent; // "

Obviously not using document.body, but a newly created node.
I would probably write some tests for that, including the
cases you posted, make sure they all fail, then write out
some code (the code could be changed in the future, since
there are tests).
A general entity encoding/decoding method is probably either over the
top or totally unnecessary for most real-world contexts.

Richard.

Jun 27 '08 #81
VK
On May 6, 3:26 am, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
But anyone then asserting that, for example, the best way forward with
Prototype.js would be to delete it and start again from scratch will be
disregarded even if they think that advice is constructive (and it is
virtually the only way that it would be possible to correct the mistake
of violating the language's specification's injunction against using the
'$' symbol as the initial character in Identifiers except when they were
machine generated).
So are you proposing to trash a whole library because one of the
identifiers it uses is _not suggested_ yet fully valid? IMO it is an
act of lunacy. Prototype.js has a number of faults of its own - but
making $ usage the main reason to drop it makes the poster look
ridiculous. And the last thing I want to see in c.l.j. is _Richard
Cornford_ looking ridiculous or funny. Upset, sarcastic, nasty - any
time, just please don't make fun of yourself. Can you switch to some
_substantial_ library criticism instead?
Jun 27 '08 #82
On May 5, 4:26 pm, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
But anyone then asserting that, for example, the best way forward with
Prototype.js would be to delete it and start again from scratch will be
disregarded even if they think that advice is constructive (and it is
virtually the only way that it would be possible to correct the mistake
of violating the language's specification's injunction against using the
'$' symbol as the initial character in Identifiers except when they were
machine generated).
ES3 spec:

"The dollar sign is intended for use only in mechanically generated
code."

Both "intended" and "mechanically generated" make the sentence
ambiguous. It is only a recommendation at best.

Since ES is a spec that is based on existing implementations and
language use, I think the safe bet is that such a reservation about $
in identifiers will be removed when ES4 is published.

Peter
Jun 27 '08 #83
Peter Michaux wrote:
On May 5, 4:26 pm, Richard Cornford wrote:
>But anyone then asserting that, for example, the best way
forward with Prototype.js would be to delete it and start
again from scratch will be disregarded even if they think
that advice is constructive (and it is virtually the only
way that it would be possible to correct the mistake of
violating the language's specification's injunction against
using the '$' symbol as the initial character in Identifiers
except when they were machine generated).

ES3 spec:

"The dollar sign is intended for use only in mechanically
generated code."

Both "intended" and "mechanically generated" make the sentence
ambiguous.
Not that ambiguous (and particularly with the word "only" in there).
That sentence explains why a character that really did not need to be in
the set of characters allowed in identifiers was included in that set.
It is only a recommendation at best.
Yet the reaction to an almost identical assertion about Java identifiers
in its specification results in Java programmers finding the idea of
using a $ symbol unthinkable; something that only complete novices and
amateurs would do, and an error that would be stamped out as soon as they
got into a professional context.
Since ES is a spec that is based on existing implementations
and language use, I think the safe bet is that such a reservation
about $ in identifiers will be removed when ES4 is published.
ES4 looks like it is going to be a serious (if predictable) mistake. How
much of a mistake will not be clear until there is a specification to
read.

Richard.

Jun 27 '08 #84
