
Output VALUE of INPUT textfield using document.write

Hello,

I really, really, need some help here - I've spent hours trying to find a
solution.

In a nutshell, I'm trying to have a user input a value in a form's
textfield. The value should then be assigned to a variable and output
using document.write.

(Note, there is no submit button or other form elements. Basically
whatever the user places in the INPUT textfield, I want echoed elsewhere
on the webpage.)

---

Here's what hopefully should happen:

[ myName ] (web visitor types value in Input Text Field)

Welcome myName! (value of text field appears in web page using
document.write.)

---

What I have so far:

I know I can declare a value and use document.write.

var myName = "Tom";

document.write(myName);

But how do I assign the VALUE of the INPUT textfield to a variable name
and have it output using document.write? Please help - I've spent hours on
this!

Thanks in advance.
Stumped & Confused.

--
Using Opera's revolutionary e-mail client: http://www.opera.com/m2/
Jul 23 '05 #1
Stumped and Confused wrote:
In a nutshell, I'm trying to have a user input a value in a form's
textfield. The value should then be assigned to a variable and output
using document.write.


document.write will open a new document, clearing the current document,
which is probably not what you want. The dynWrite function from the FAQ
should give you better results:-)

<URL:http://jibbering.com/faq/#FAQ4_15>

However if you just want to write text and not HTML, it's probably
better to use standard DOM methods.
<form action="#">
What's your name&nbsp;?
<input>
<input type="button" value=":-)"
  onclick="hello(this.form.elements[0].value,'foo')">
</form>
<span id="foo"></span>

<script type="text/javascript">
function hello(what,where){
  var target;
  if(document.getElementById &&
     document.createTextNode &&
     document.body &&
     document.body.appendChild &&
     typeof document.body.firstChild!="undefined"
    ){ //all the methods we want to use are supported

    target=document.getElementById(where);
    if(target) {
      if(target.firstChild==null) {
        //first time calling
        //add the text node
        target.appendChild(document.createTextNode(""));
      }
      if(/\S/.test(what)) { //not empty
        target.firstChild.nodeValue="Hello, "+what;
      }
    }

  }
}
</script>
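
For comparison, here is a stripped-down variant of the same DOM approach
that updates the greeting as the visitor types. It is only a minimal
sketch; the id "greeting", the field name and the function name are
illustrative choices, not taken from this post:

<form action="#">
What's your name?
<input name="visitor" onkeyup="echoName(this.value)">
</form>
<p>Welcome <span id="greeting"></span>!</p>

<script type="text/javascript">
// Copy the field's current value into the span, if the DOM methods exist.
function echoName(value){
  var target;
  if(document.getElementById && document.createTextNode &&
     (target=document.getElementById("greeting"))){
    if(target.firstChild==null){
      target.appendChild(document.createTextNode(""));
    }
    target.firstChild.nodeValue=value;
  }
}
</script>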
Jul 23 '05 #2
"Stumped and Confused" <no****@jollydo nkey.com> wrote in message
news:opsehi7fmi 0q60t7@shaktima n
Hello,

I really, really, need some help here - I've spent hours trying to
find a solution.

In a nutshell, I'm trying to have a user input a value in a form's
textfield. The value should then be assigned to a variable and output
using document.write.


I had a conversion script which may be coerced into doing what you want.
Have to wait until Monday though because it's on the PC at work.

It's bound to be crap because it was one of the first useable scripts I
wrote but if you've not found a solution by Monday I'll post the relevant
bits here.
Jul 23 '05 #3
Thanks a bunch!

--
Using Opera's revolutionary e-mail client: http://www.opera.com/m2/
Jul 23 '05 #4
Thank you - although I have found a solution (thank you, Yann-Erwan
Perio), I'll be interested in any possible alternative solutions -
especially, if it helps my learning.

Cheers and thank you!

--
Using Opera's revolutionary e-mail client: http://www.opera.com/m2/
Jul 23 '05 #5
On Fri, 17 Sep 2004 22:00:58 +0200, Yann-Erwan Perio
<y-*******@em-lyon.com> wrote:

[snip]
<form action="#">
A quick aside:

I know that IE is incapable of understanding that in a link href="" refers
to the current document[1], but it does treat action="" properly (in IE 6
- don't know about earlier versions).

[snip]
if(document.getElementById &&
document.createTextNode &&
document.body &&
document.body.appendChild &&
typeof document.body.firstChild!="undefined"
){ //all the methods we want to use are supported


I did bring this up once before a long time ago, but I forget my wording,
the thread (I think I hijacked one), and the result, so I'll ask again.

Is it safe to assume that because a method, like appendChild, is supported
on one type of node, that it will be supported on all nodes? Logic would
suggest that such an assumption is flawed, but is it in practice? Recent
major W3C DOM-supporting browsers should present no problems, but not
having experience with a wide cross-section of user agents and versions,
I'm not absolutely certain.

In the script that you presented, it would be trivial to rewrite it to be
more cautious, but your displayed approach, and the one I would
use, become disparate when something like the iteration of a collection,
and the application of methods to the contained nodes, is required.

[snip]

Mike
[1] I was quite shocked when learning that.

--
Michael Winter
Replace ".invalid" with ".uk" to reply by e-mail.
Jul 23 '05 #6
Michael Winter wrote:
Is it safe to assume that because a method, like appendChild, is
supported on one type of node, that it will be supported on all nodes?
Logic would suggest that such an assumption is flawed, but is it in
practice? Recent major W3C DOM-supporting browsers should present no
problems, but not having experience with a wide cross-section of user
agents and versions, I'm not absolutely certain.


I don't have that much experience myself with a wide range of browsers;
I suppose that Jim, Martin or Richard could tell us more about this.

At first sight, your arguments seem very convincing. It is indeed
trivial to change the detection a little bit so that it tests whether
the methods needed are supported on the object I want to use, and not on
another object.

History demonstrates that object models have differed across user
agents, and that the same model could be implemented very differently
across browsers. Moreover, numerous agents have been created, which
makes it nearly impossible to test a piece of code on every possible
agent (and probably not acceptable from an economic point of view).

Faced with this issue of unknown environment, testing features on the
very object to be used makes sense, and is actually the safest option.

This approach isn't without problems, though. Testing extensively the
features on each object can render the code unreadable, and break the
business flow of the script. This is the same problem as with localized
exceptions handled by try/catch constructs; it forces you to handle the
exception at many levels, probably too many to have something neat. This
is quite contradictory if you consider that the script will have three
possible states: run fine, run in a degraded mode, or don't run (and
degrade fine).

Another problem is that it can prevent code optimisation if you leave
the tests in place in each situation, not redefining the methods (see
the Russian Doll pattern;-)).

In addition, there's also the cost introduced by such techniques, less
technical but as important; it requires more time, more attention, more
experience etc., so definitely costs more (I don't know many people who
could understand and write advanced javascript without problems - in
fact I know of none apart from in clj - but that's not my job either).

I'm therefore less and less convinced of the approach of
feature-detecting as "much" as possible. To me the best approach is to
do something in between, first performing a rigorous initialization,
testing for all methods on a sample object (the document.body in the
example code), and then moving on to the business logic, without testing
more than required (object existence).
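
As a concrete illustration of that "in between" shape - one gateway test
at initialisation, then business logic that only checks object existence -
here is a minimal sketch; the variable and function names are mine, not
from the post:

// Gateway test run once, using document.body as the sample object.
var domOK = !!(document.getElementById &&
               document.createTextNode &&
               document.body &&
               document.body.appendChild &&
               typeof document.body.firstChild!="undefined");

function showGreeting(id,text){
  if(!domOK){ return; }                 // degraded mode: do nothing
  var target=document.getElementById(id);
  if(target){                           // business logic only tests existence
    if(target.firstChild==null){
      target.appendChild(document.createTextNode(""));
    }
    target.firstChild.nodeValue=text;
  }
}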

The years leading up to the version-7 browsers were like a "bubbling
cauldron of ideas"[1], where specifications were designed, tested, thrown away.
However things differ now, there are existing standards and models which
are recognized among vendors and implemented in the same way. We've
entered a new phase, where things are not written from scratch but built
on solid ground - therefore evolving less quickly (sadly). So in the
future it is unlikely to see diverging models (they wouldn't be profitable).

This means that "DOM-conformant" agents should/will really be
conformant. The strategy of using another object to test for methods
support appears to be valid, since these two objects would implement the
same DOM interfaces (a node is a node); you might indeed have methods
working on an object and not the other, but that'll be more rare with
time (and you could as well have a problem the other way around (reading
a property making the script fail, like mimeTypes in not-so-old Operas)).

Eventually, it comes back to the definition of detection; what model is
supported by the user agent, and to which extent. In the end, only
experience will draw the line; the ability to adopt the two-fold model
requires a good knowledge of the whole history (browsers, DOMs,
implementations) so that the testing phase really covers all major
potential issues; but when you have it, I believe that the strategy of
testing "everything" is simply not worth the effort anymore; I'm more
inclined in doing iterative development, doing 95% of the detection at
first (giving the degraded mode to non-supporting agents), and then
correcting if anything wrong comes by (a hole in my experience). There's
a cost of non-quality which, I've come to realise, is simply too high.
It's getting late here, so I hope all of this made sense (wasn't too
disappointing/boring) and that I have, if not answered your question, at
least raised some, without drifting too far away.
Regards,
Yep.

---
[1] as, IIRC, Jack Welch put it about GE.
Jul 23 '05 #7
"Stumped and Confused" <no****@jollydo nkey.com> wrote in message
news:opsehnetl6 0q60t7@shaktima n
Thank you - although I have found a solution (thank you, Yann-Erwan
Perio), I'll be interested in any possible alternative solutions -
especially, if it helps my learning.

Cheers and thank you!


Found it on a disk I'd brought home.
It's one of my earliest efforts. It works but there isn't any error checking
and there's no DOCTYPE. I think the <!-- --> comments are no longer
necessary unless you have a very old browser but it doesn't seem to hurt to
leave them in.

Full working source code cut-n-pasted below.

=========================

<html>
<head>
<title>Conversion Calculator</title>
<script type="text/javascript">
<!--
function places(value)
{
value=Math.round(value*100)/100
return value
}

function convert(measure)
{
switch (measure)
{
case "mm" :
x=document.myform.mm.value
document.myform.inch.value = places(x/25.4)
document.myform.feet.value= places((x/25.4)/12)
document.myform.cm.value= places(x/10)
document.myform.metres.value= places(x/1000)
break

case "inch" :
x=document.myform.inch.value
document.myform.mm.value=places(x*25.4)
document.myform.feet.value=places(x/12)
document.myform.cm.value=places(x*2.54)
document.myform.metres.value=places(x*.0254)
break

case "metres" :
x=document.myform.metres.value
document.myform.mm.value=places(x*1000)
document.myform.cm.value=places(x*100)
document.myform.inch.value=places(x/.0254)
document.myform.feet.value=places((x/.0254)/12)
break

case "feet" :
x=document.myform.feet.value
document.myform.inch.value=places(x*12)
document.myform.mm.value=places(x*12*25.4)
document.myform.cm.value=places(x*12*2.54)
document.myform.metres.value=places(x*12*.0254)
break

case "cm" :
x=document.myform.cm.value
document.myform.mm.value=places(x*10)
document.myform.metres.value=places(x/100)
document.myform.inch.value=places(x/2.54)
document.myform.feet.value=places((x/2.54)/12)
break

default:
}
}
-->
</script>
<noscript>
<p>You need to enable Javascripts to use this utility.</p>
</noscript>

</head>
<body>

<p>
<b>Insert a number in any input field and<br>
it will be automatically converted into all the others.</b>
<br>
This requires Javascript
</p>

<form name="myform">
<input name="mm" onkeyup="convert('mm')"> Millimetres<br>
<br>
<input name="cm" onkeyup="convert('cm')"> Centimetres<br>
<br>
<input name="metres" onkeyup="convert('metres')"> Metres<br>
<br>
<input name="inch" onkeyup="convert('inch')"> Inches<br>
<br>
<input name="feet" onkeyup="convert('feet')"> Feet<br>
<br>
<input type="reset" value="Clear">
</form>

</body>
</html>
Jul 23 '05 #8
On Sat, 18 Sep 2004 02:02:49 +0200, Yann-Erwan Perio
<y-*******@em-lyon.com> wrote:
Michael Winter wrote:
Is it safe to assume that because a method, like appendChild, is
supported on one type of node, that it will be supported on all nodes?
Logic would suggest that such an assumption is flawed, but is it in
practice? Recent major W3C DOM-supporting browsers should present no
problems, but not having experience with a wide cross-section of user
agents and versions, I'm not absolutely certain.

[snip]
At first sight, your arguments seem very convincing.
And your counter is just as persuasive. I'll do my best to respond.

[snip]
Faced with this issue of unknown environment, testing features on the
very object to be used makes sense, and is actually the safest option.
Indeed, which is why I favour a full testing strategy, but...
This approach isn't without problems, though. Testing extensively the
features on each object can render the code unreadable, and break the
business flow of the script. This is the same problem as with localized
exceptions handled by try/catch constructs; it forces you to handle the
exception at many levels, probably too many to have something neat. This
is quite contradictory if you consider that the script will have three
possible states: run fine, run in a degraded mode, or don't run (and
degrade fine).
...this is certainly a potential stumbling block. That said, this problem has
always existed in programming. Perhaps with full support for exception
handling, it would be easier to create degradable scripts by moving
decisions regarding fallback to a more abstract level. I suppose that a
detailed example would be needed to investigate that properly, but
unfortunately, the continued use of wish-they-were-dead browsers like NN4
would scuttle any workable solution, should support for them be required.
Another problem is that it can prevent code optimisation if you leave
the tests in place in each situation, not redefining the methods (see
the Russian Doll pattern;-)).
I still haven't read that thread, yet. I started it, but became distracted.
In addition, there's also the cost introduced by such techniques, less
technical but as important; it requires more time, more attention, more
experience etc., so definitely costs more (I don't know many people who
could understand and write advanced javascript without problems - in
fact I know of none apart from in clj - but that's not my job either).
But if you're familiar with such a cautious approach, does it really add
extra cost? People often argue that writing "good" code takes extra time,
but that is simply because they aren't used to writing it. Certainly, a
very complex script will add overhead if there are many possible fallback
routes to cover, but would such a situation arise in your average web site?

There is also the factor of education. It's fine for those versed in
cross-browser scripting to say, "I don't have to worry about testing for
that [whatever "that" may be], because I know it will be there", but it
this piecemeal testing something that should be passed on to others?
Without experience, how can they judge what is needed and what isn't? By
our own admissions, neither of us are fully qualified to make such a
determination with any degree of authority.
I'm therefore less and less convinced of the approach of
feature-detecting as "much" as possible. To me the best approach is to
do something in between, first performing a rigorous intialization,
testing for all methods on a sample object (the document.body in the
example code), and then moving on to the business logic, without testing
more than required (object existence).
There are merits to that, but I'm still not certain it's something to be
adopted at the moment.
The years leading up to the version-7 browsers were like a "bubbling
cauldron of ideas"[1], where specifications were designed, tested, thrown away.
However things differ now, there are existing standards and models which
are recognized among vendors and implemented in the same way. We've
entered a new phase, where things are not written from scratch but built
on solid ground - therefore evolving less quickly (sadly). So in the
future it is unlikely to see diverging models (they wouldn't be
profitable).
[Rant]

And we're back to the original issue: the specifications aren't implemented
consistently. Mozilla claims to fully comply with the various DOM
specifications through the hasFeature method, something which is reserved
for the truly compliant, but some of its bugs completely break
specification. Opera has very good support for the DOM, but it misses some
of the lesser-used, but basic, methods and properties. And let's not
forget IE (Oh, how I really wish we could). Until implementations are
complete and fully adopted by end-users - something that will take many
years - we're stuck with relying on only one thing: feature detection, and
due to the incomplete support, is any testing that is less than
comprehensive reliable?

If vendors decided for once that they'd wait until they finished
development before releasing a product, we might simply be writing:

var imp;
if((imp = document.implementation) && imp.hasFeature
&& imp.hasFeature('HTML', '2.0'))
{
// Yay! Full HTML DOM support.
}

when looking for the various DOM methods. However, the typical rush "not
to be left behind" has littered the landscape with little, if anything,
that can truly pass the test above.

[/Rant]
This means that "DOM-conformant" agents should/will really be conformant.
In time, but not now.
The strategy of using another object to test for methods support appears
to be valid, since these two objects would implement the same DOM
interfaces (a node is a node); you might indeed have methods working on
an object and not the other, but that'll be more rare with time (and you
could as well have a problem the other way around (reading a property
making the script fail, like mimeTypes in not-so-old Operas)).
Yes, you would hope so.
Eventually, it comes back to the definition of detection; what model is
supported by the user agent, and to which extent. In the end, only
experience will draw the line; the ability to adopt the two-fold model
requires a good knowledge of the whole history (browsers, DOMs,
implementations) so that the testing phase really covers all major
potential issues; but when you have it, I believe that the strategy of
testing "everything" is simply not worth the effort anymore; I'm more
inclined in doing iterative development, doing 95% of the detection at
first (giving the degraded mode to non-supporting agents), and then
correcting if anything wrong comes by (a hole in my experience). There's
a cost of non-quality which, I've come to realise, is simply too high.
The only issue here is, how can you tell if you have omitted something? It
relies on you finding it yourself or it being reported, but neither is
likely to happen. You can't possibly test with all user agents, and few
visitors would ever bother reporting something, as they'd never grasp what
was wrong.

I appreciate your position and I would adopt it, but only if it can be
proved reliable. On a website, it would be a simple matter of updating
code, but posts to this group can't be so easily rectified.
It's getting late here, so I hope all of this made sense (wasn't too
disappointing/boring) and that I have, if not answered your question, at
least raised some, without drifting too far away.


You certainly have made good points. I'm curious to know what the other
regulars here have to say on the matter.

Apologies for the brief rant,
Mike

--
Michael Winter
Replace ".invalid" with ".uk" to reply by e-mail.
Jul 23 '05 #9
Yann-Erwan Perio wrote:
Michael Winter wrote:
Is it safe to assume that because a method, like
appendChild, is supported on one type of node,
that it will be supported on all nodes? Logic would
suggest that such an assumption is flawed, but is
it in practice? ...
<snip> I don't have that much experience myself with a wide
range of browsers; I suppose that Jim, Martin or
Richard could tell us more about this.
We start with two well established principles relating to browser
scripting:-

1. Making assumptions about the browser environment is
extremely risky.
2. Feature detecting tests should be performed in a way
that is as closely related to the problem as possible
(preferably a direct one-to-one relationship).

We also have the realisation that an overly dogmatic application of
those principles in all circumstances will potentially stand in the way
of being able to create viable scripts.

There may be cases where an assumption that is not strictly valid, but
for which no example of a contrary environment has been identified,
facilitates, for example, controlled clean degradation where it might
otherwise be problematic. Such as the assumption that a browser that
dynamically supports the switching of the CSS - display - property will
exhibit a named property of - style - objects that is typeof 'string'.
Allowing the assumption that if the style object has no such property
then the browser is not going to respond to attempts to set - display -
to 'none'.

Personally I am yet to see a browser that could dynamically switch the
display of an element via the - display - property where - typeof
styleObj.display == 'string' - is not true, and also a non-dynamic
browser where it is not false (assuming a normalised - style - object
for Net 4, etc). However, it remains an assumption.
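
A minimal sketch of acting on that assumption (the function name is mine,
not from the post):

// Only attempt to toggle visibility where style.display is exposed as a
// string; otherwise degrade cleanly by leaving the element alone.
function toggleDisplay(el){
  if(el && el.style && typeof el.style.display=="string"){
    el.style.display=(el.style.display=="none")?"":"none";
    return true;    // the browser is assumed to be dynamic
  }
  return false;     // no such property: do not attempt to set display
}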

Trying to get feature detection as close to the problem as possible
could imply testing everything each and every time it is used. But code
that attempts that is burdening itself heavily, and may even end up
doing more testing than acting. When you are writing DHTML to be as
fluid as is achievable using the combination of HTML and CSS there is a
great deal that may need to be continually examined in terms of the
sizes and positions of elements (as users re-size their browser windows,
change the font-size settings, etc, and can do so at any moment).

Adding, on top of that requirement, full feature detection on every
action stands a chance of rendering the result non-viable (slowing the
script to the point where it is unacceptable to its users). Leaving, for
example, the only viable menu scripts as the ones that fall apart
whenever the font size is changed, or the ones that encourage page authors
to attempt to pin down the dynamic aspects of web pages so the menus will
not disintegrate.

The necessary, and inescapable, aspect of feature detection is that if a
feature is to be used at all it should be tested to verify that it is
available in the environment prior to its use. But prior to its use does
not necessarily mean prior to each and every use. While it is an
assumption that the environment of any given browser will not
significantly change while a script is executing, it is not that
unreasonable an assumption.

One strategy for reducing the level of feature detection testing going
on while a script is running is to give it a single "gateway" test that
is executed during an initialisation phase. Testing for the features
that the script will be using and then, if the test is passed, using
those features without additional verification. This is based on the
assumption that the environment will not change while the script is
running.

Unfortunately some aspects of tests performed during an initialisation
phase could not be as close to the problem as to qualify as a one-to-one
relationship. Testing DOM elements being one example. Instead of
examining the properties of some unknown element that is actually going
to be used by the script it is sometimes necessary to test the
corresponding properties of an element that is known to exist at that
point. The - document.body - element being a good candidate as it is
virtually guaranteed to exist (once the opening tag has been passed or
implied).

So the question is; what is it reasonable to infer from an element such
as - document.body - about the nature of other elements in the DOM. Such
an inference will be based on an assumption and so should be subject to
careful consideration.

Mike's question is really about the test made in the posted code.
Specifically - document.body.appendChild - and - typeof
document.body.firstChild!="undefined" - because the tests are applied to
the - document.body - element and the corresponding method and property
are used on a SPAN element (and could be applied to any element that
allowed text content).

My experience of web browsers suggests that those tests are safe (in
that I know of no browsers where the inference drawn from those two
tests on - document.body - will not hold true for any SPAN element in
the same environment). But I also think that the logic of the test is
reasonable because of the nature of the properties being examined. They
are both part of W3C Core DOM Node interface, and it is the intention of
the W3C that all of the elements in the DOM implement the Node interface
(along with much else). So it doesn't seem unreasonable to infer that if
any specific element implements the significant part of that interface
then all other elements within the same DOM should also be expected to.
_With_some_caveats_:-

Internet Explorer 4 has a non-W3C standard - appendChild - method on (at
least some of) its elements so it is important that no assumptions be
made based on - appendChild - alone. IE 4 also implements a -
document.createElement - method, as does Opera 6, so that is also a
dangerous property to be inferring anything from. Given a script that
only really needs those two features to be supported on W3C standard
browsers I usually throw in an additional test for - replaceChild - just
to ensure that IE 4 does not execute the code. (IE 4 cannot pass the
tests used because it does not implement - document.getElementById -
or - document.createTextNode -.)
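
A sketch of that kind of test, with the extra - replaceChild - check added
purely to keep IE 4 out; the variable name is illustrative, not from the
post:

// A script that only needs createElement and appendChild adds a
// replaceChild test so that IE 4's partial implementation is excluded.
var canBuildNodes = !!(document.createElement &&
                      document.body &&
                      document.body.appendChild &&
                      document.body.replaceChild);

if(canBuildNodes){
  var note=document.createElement("span");
  document.body.appendChild(note);   // safe to use the verified features here
}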

While I would be happy with examining a BODY element and making
deductions about a SPAN element from it (within the confines of a single
W3C specified interface, or a single property/method (or paired
property, e.g.:- width/height) that when implemented is common to all
elements) there are boundaries that I would not be happy to carry that
deduction across. I would not want to deduce anything about the document
element from the body, or about the body from the document, although the
document should also implement the Node interface. IE 5.0 is the problem
here as its document did not implement the Node interface (and others
may have copied Microsoft's structure at that time).

I would also be cautious about carrying the deduction from an Element to
a Text, Attribute, CDATASection, etc, Node. While the W3C intends all to
implement the Node interface I would want to re-verify the interface on
the type in question. Remember; Text nodes cannot have children so while
they should have an - appendChild - method the expectation would be that
they never be used, so the browser manufacturer might consider it safe
to just omit them. Indeed there are IE 6 versions that *crash* if you,
for example, attempt to apply typeof to the - appendChild - method of an
attribute.

I have also observed (generally older) browsers where the elements
within the HEAD behaved quite differently from the displayed elements
within the BODY, being less amenable to dynamic manipulation, etc. This
would make me reluctant to apply deductions made from BODY elements (and
their descendants) to HEAD elements (and their descendants).

With those caveats, generally I would say that if the expectation is
that when one element implements a particular interface, or single
property/method, all other elements also implement it, then it is
probably safe to assume that positive verification on any one element
can be regarded as grounds for assuming that interface/property/method
to be implemented on all others. That would apply to W3C Node and
Element interfaces, the HTMLElement interface and various proprietary
features known to be common to elements in certain browsers.

I would, for example, happily assume that if the first element examined
had a numeric - offsetWidth - property then all subsequent elements
would also possess that property (though in that case I would not make
the deduction from the BODY element, as it is likely to be a special
case). And I would also be fairly happy to assume; if - offsetWidth -
then - offsetHeight -, as they wouldn't mean much in isolation.
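
A rough sketch of making that one-off deduction from the first element
examined (the names here are illustrative, not from the post):

// Test offsetWidth/offsetHeight once, on the first element in the
// collection, then use them on the rest without re-testing.
var els=document.getElementsByTagName("div");
var hasOffsets = els.length &&
                 typeof els[0].offsetWidth=="number" &&
                 typeof els[0].offsetHeight=="number";

if(hasOffsets){
  for(var i=0;i<els.length;i++){
    // assumed safe, by the inference above, to read the dimensions here
    var area=els[i].offsetWidth*els[i].offsetHeight;
  }
}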

<snip> Faced with this issue of unknown environment, testing
features on the very object to be used makes sense,
and is actually the safest option.

This approach isn't without problems, though. Testing
extensively the features on each object can render the
code unreadable,
The readability argument is often overstated. It is not unusual for
people to comment on not being able to make head or tail of some of the
code I write, because I exploit what I have learnt about javascript over
the past years and that leaves individuals who are not familiar with the
techniques unable to comprehend the code. (making it particularly
amusing when people ask about how they should go about obfuscating code
(if it was worth obfuscating there would be no need as the only people
capable of understanding it would be able to write it for themselves)
:). Three years ago I would not have been capable of understanding the
code I write now.

But is it reasonable to suggest that I should presently be writing
code that I would have been capable of understanding three years ago,
when I didn't know a fraction of what I currently know about javascript?
Should I be writing code that I know to be sub-optimal because there are
people in the world who want to be able to write javascript without
learning how best to do so (without even being interested in doing so)?

So I write objects that appear complex. They are complex because they
attempt to address all of the issues that I have learnt need addressing
(maybe not always all, but at least most). Any code addressing those
same issues would exhibit similar complexity, though maybe in a
different form.

They also seem more complex than they really are to individuals who
don't know the techniques I choose to use to address those issues, but I
make an informed choice of the techniques to apply based on my judgement
of which is best suited to the situation (very often for optimum
performance).

Above all else it is important that any apparent complexity in the code
I write is internal to objects that have very simple public interfaces
(and document those interfaces). Making internal complexity
insignificant to third parties who use the code, so long as they don't
have to put any work into maintaining it, which would only become
necessary if I fail to write complete cross-browser code with planned
behaviour in all environments (obviously that is never my intention).
and break the business flow of the script.
Making that level of a script as clear as possible is always a good idea
as it is where any requested changes would be needed. Either pushing the
complexities needed to handle differing browser environments down so
they are hidden behind simple interfaces, or doing that work up-front
once, certainly does leave that level of a script clearer and more
unified.

<snip> Another problem is that it can prevent code optimisation if
you leave the tests in place in each situation, not redefining
the methods (see the Russian Doll pattern;-)).
This applies particularly to general DHTML libraries made up of numerous
functions, where each function tests the browser for its supported
methods prior to using them. It may be possible to reduce the logic of
the running code to little more than the use of those functions but the
overhead of re-testing on each call, for conditions that are unlikely to
have changed between calls, can rapidly add up to the point where it
becomes a problem in itself.
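
One way around that per-call overhead is the self-redefining ("Russian
doll") shape referred to earlier: the function performs its feature test
on the first call and then replaces itself. A hedged sketch, with names
of my own choosing rather than anything from the thread:

var setText=function(el,text){
  if(document.createTextNode && el && el.appendChild &&
     typeof el.firstChild!="undefined"){
    // supported: swap in a lean version and delegate to it
    setText=function(el,text){
      if(el.firstChild==null){
        el.appendChild(document.createTextNode(""));
      }
      el.firstChild.nodeValue=text;
    };
    setText(el,text);
  }else{
    // not supported: degrade by doing nothing on this and later calls
    setText=function(){};
  }
};
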
In addition, there's also the cost introduced by such
techniques, less technical but as important; it requires
more time, more attention, more experience etc., so definitely
costs more (I don't know many people who could understand and
write advanced javascript without problems - in fact I know
of none apart from in clj - but that's not my job either).
Writing complete code; code that addresses all the relevant issues as it
operates and cleanly degrades when it cannot, is going to be more time
consuming than writing code that disregards the issues and fails
unpredictably. Any additional cost arising from doing the job properly
cannot be a good reason for not bothering.

And badly authored code must carry an additional burden in costs arising
from its unreliability and fragility. Though that may be harder to
quantify, and possibly go unnoticed. Such as a commercial site I looked
at recently where the most unreliable and javascript dependent aspect of
the entire site appeared to be the mechanism for reporting problems,
virtually guaranteeing that owners would not become aware of users
experiencing problems as a result of bad javascript authoring (and so
unaware of any needless loss of revenue resulting from it).

Rather than attempting to reduce the cost of javascript authoring by
tolerating the creation and use of inadequate scripts, I would rather be
concentrating on strategies for reducing costs through easy code re-use.
Which is why I have been writing a lot of low-level interface objects
recently. Because they offer a way of abstracting the complexity of
handling the variations in browser environments behind a simple
interface, and result in easily re-usable code without the code bloat
that follows from the use of large and interdependent javascript
libraries. It is also why I am getting interested in optimising the
configuration of independent chunks of code, because I want those
interface objects to be as self-contained as possible (so they can be
dropped into code that needs them with few (preferably zero) concerns
for interdependencies).

There is also a point where the expertise required to comprehend the
more advanced techniques, or design a complete script, while possibly
being perceived as expensive, actually reduces development costs itself.
It is not unusual for the inexperienced to get a script to broadly
'work' on one browser and then spend a lot of time thrashing about
trying to extend support to another. I have done it myself, and we see
plenty of questions on that particular subject posted to the group.

These days it is extremely rare for me to encounter new problems (and
then only when testing with the less common browsers/configurations); I
design and write cross-browser code and when I test it it mostly
exhibits the designed behaviour first time. And I can write in a day
what I would have taken a week or more to write 3 years ago. Giving me
more freedom to consider the design of the script and its
implementation. And providing a direct return in reduced hours spent in
script creation, followed by the reduced maintenance costs that follow
from good script design.
I'm therefore less and less convinced of the approach of
feature-detecting as "much" as possible. To me the best
approach is to do something in between, first performing
a rigorous initialization, testing for all methods on a
sample object (the document.body in the example code),
and then moving on to the business logic, without
testing more than required (object existence).

<snip>

Broadly I concur. Javascript is not particularly fast; the price of a
dynamic, interpreted language. Many optimisations are achieved by not
doing the same thing repeatedly when you can do it once and hold on to
the result, and (at least some, probably most) feature detection is
amenable to handling in that way.

Posting example code to the group is the area where the integration of
feature detecting techniques troubles me most. Most questions are so
simple that they do not warrant a full implementation and instead can be
addressed with little more than a simple function, or just a specific
code example.

It would be remiss to omit the feature detection entirely; that might
give the impression that doing so was acceptable. But an optimum
implementation would usually be above the level of the example code
used, design wise (particularly the "gateway" initialisation style).

A more local test and initialise pattern (such as the 'Russian doll') is
potentially beyond the comprehension of many questioners, so they may
use the code and find that it works, but they would not necessarily
learn anything from it.

That leaves posting example functions with the feature detection
demonstrated directly in the function, but in a way that means it will
be re-executed on each call (and the implication that that is an
appropriate and 'correct' style for javascript authoring).

On the whole I think it is best that code that demonstrates optimum
patterns does get posted in response to questions on the group. And if
the OPs find the result incomprehensible then at least they will have
learnt that there is more to javascript than they currently understand.
It is not as if those examples will ever be the only ones posted; the
over trivial, incomplete and/or more direct but potentially sub-optimal
examples will always appear along side the more elaborate examples (and
people have different opinions of what constitutes a good implementation
anyway).

Richard.
Jul 23 '05 #10
