Bytes | Software Development & Data Engineering Community
if I wanted to never use innerHTML, what else would I use?

In the course of my research I stumbled upon this article by Alex
Russel and Tim Scarfe:

http://www.developer-x.com/content/i...l/default.html

The case is made that innerHTML should never be used. I'm wondering: if
I wanted all the content of BODY as a string, how else could I get it
except through innerHTML?
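One standards-based alternative is to walk the DOM tree yourself and rebuild the string. The serialize() helper below is a hypothetical sketch (not from the article): it relies only on the standard Node interface (nodeType, nodeName, attributes, childNodes, nodeValue), ignores comments, entities, and void elements, and is meant to illustrate the idea rather than replace a real serializer.

```javascript
// Hypothetical sketch: reconstruct markup by recursing over a DOM
// subtree instead of reading innerHTML. Only element (1) and text (3)
// nodes are handled; attributes are emitted verbatim.
function serialize(node) {
  if (node.nodeType === 3) return node.nodeValue;   // text node
  if (node.nodeType !== 1) return "";               // skip other node types
  var tag = node.nodeName.toLowerCase();
  var out = "<" + tag;
  for (var i = 0; i < node.attributes.length; i++) {
    out += " " + node.attributes[i].name + '="' + node.attributes[i].value + '"';
  }
  out += ">";
  for (var j = 0; j < node.childNodes.length; j++) {
    out += serialize(node.childNodes[j]);
  }
  return out + "</" + tag + ">";
}
```

In a browser you would call serialize(document.body); Mozilla-based browsers also offer new XMLSerializer().serializeToString(document.body) as a built-in alternative.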

Feb 7 '06
"John W. Kennedy" <jw*****@attglobal.net> wrote:
John Bokma wrote:
"John W. Kennedy" <jw*****@attglobal.net> wrote:
It is /not/ a fundamental problem with XHTML, it is a fundamental
problem with so-called "web browsers" that can't be bothered to
follow standards


The recommendation (it's not a standard) for XML is that if the
document is not well-formed, the parser should *stop* and report.

You really think my mom is waiting for stuff like:

Error at line #121 open tag found without close tag.


No, I think your mom is waiting for websites coded by the competent.


We're all human after all. I am afraid that making sure that each and
every piece of code coming out of a program is 100% well formed is
impossible. (Sounds like solving the halting problem to me.)
(in some cases, due to incompetence, but, in Microsoft's case,
because it is their deliberate policy to ignore and sabotage
standards wherever possible).


You're mistaken; most of the standards you call standards are
recommendations and working drafts.


A verbal quibble to pass off Microsoft's vicious behavior as
acceptable, and you know it.


IMO, the W3C is wrong to waste time on a "standard" that will never make
it (I call *that* incompetence). They *should* focus on improving HTML,
not on dreaming up something that no webmaster is ever going to use, *unless*
UA's are able to handle non-well-formed documents.

And I am sure UA's are not going to be made to choke on each and every
document that's not well-formed, so there goes the XHTML dream.

Computers are here, IMNSHO, to make life easier. Giving no output because
someone made a typing error is just plain crazy in an end-user situation.

It's like selling an email client which rejects all email with a spelling
mistake.

--
John MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
Feb 10 '06 #51
Lasse Reichstein Nielsen <lr*@hotpop.com> wrote:
So, to make writing XML even remotely pleasant, the editor should
at least be able to check the syntax against a document definition.
Syntax highlighting is also important, since black-on-white XML
isn't very readable either.
Well written! I maintain my site in XML, and am already looking for a way
to hide the XML details internally in an editor I am about to write.
(The development of syntax:

Math:
x |-> x * 2

Scheme:
(lambda (x) (* x 2))

MathML:
<lambda>
<bvar><ci> x </ci></bvar>
<apply>
<times/>
<ci> x </ci>
<cn> 2 </cn>
</apply>
</lambda>

:)


Good one, and quite true. Gazing at XML for hours, daily, really does make
you wonder.

Feb 10 '06 #52
John Bokma wrote:
Ian Collins <ia******@hotmail.com> wrote:

John Bokma wrote:
Ian Collins <ia******@hotmail.com> wrote:
If the internet is to move on, we have to look beyond HTML. The
combination of XML/CSS and JavaScript offers unlimited potential.

Can you explain why HTML + JavaScript can't offer such a thing? Also,
I see no reason why an HTML parser can't fix errors instead of
dying at the first one, and create a parse tree that is well formed.


You're freed from the constraints of the HTML DTD.

I was always under the impression that the HTML DTD could be extended.

There's a big difference between extending and creating something new.
When you extend, you still have all the baggage from the base document,
and you can't (I don't think) change existing entities.

--
Ian Collins.
Feb 10 '06 #53
John Bokma wrote:
"John W. Kennedy" <jw*****@attglobal.net> wrote:
John Bokma wrote:
"John W. Kennedy" <jw*****@attglobal.net> wrote:

It is /not/ a fundamental problem with XHTML, it is a fundamental
problem with so-called "web browsers" that can't be bothered to
follow standards
The recommendation (it's not a standard) for XML is that if the
document is not well-formed, the parser should *stop* and report.

You really think my mom is waiting for stuff like:

Error at line #121 open tag found without close tag.

No, I think your mom is waiting for websites coded by the competent.


We're all human after all. I am afraid that making sure that each and
every piece of code coming out of a program is 100% well formed is
impossible. (Sounds like solving the halting problem to me.)


Don't be ridiculous; making XML well-formed is trivial. Most languages
with rapid release cycles already have bulletproof XML support either
built-in or in well-known public archives.
(in some cases, due to incompetence, but, in Microsoft's case,
because it is their deliberate policy to ignore and sabotage
standards wherever possible).
You're mistaken, most standards you call standards are
recommendations and working drafts.

A verbal quibble to pass off Microsoft's vicious behavior as
acceptable, and you know it.


IMO, the W3C is wrong to waste time on a "standard" that will never make
it (I call *that* incompetence). They *should* focus on improving HTML,
not on dreaming up something that no webmaster is ever going to use, *unless*
UA's are able to handle non-well-formed documents.


Because webmasters are too lazy to do their job right? Screw 'em!

I just had to spend six hours of my life hand-editing someone's tag soup
into a usable form.

--
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
-- Charles Williams. "Judgement at Chelmsford"
Feb 10 '06 #54
VK

John W. Kennedy wrote:
I just had to spend six hours of my life hand-editing someone's tag soup
into a usable form.


The soup is in the head, *not* in the standard.

One can make a soup-less HTML delivery system.

Do you think that XML implies soup-less thinking and data management?
How much do you want to bet against practical jokes from my collection?

Feb 10 '06 #55
Ian Collins <ia******@hotmail.com> wrote:
John Bokma wrote:
Ian Collins <ia******@hotmail.com> wrote:
[..]
I was always under the impression that the HTML DTD could be extended.

There's a big difference between extending and creating something new.
When you extend, you still have all the baggage from the base document,
and you can't (I don't think) change existing entities.


You can point to your own DTD. The major question is: will each recent
browser honor that request?

Feb 10 '06 #56
"John W. Kennedy" <jw*****@attglobal.net> wrote:
John Bokma wrote:
"John W. Kennedy" <jw*****@attglobal.net> wrote:
John Bokma wrote:
"John W. Kennedy" <jw*****@attglobal.net> wrote:

> It is /not/ a fundamental problem with XHTML, it is a fundamental
> problem with so-called "web browsers" that can't be bothered to
> follow standards
The recommendation (it's not a standard) for XML is that if the
document is not well-formed, the parser should *stop* and report.

You really think my mom is waiting for stuff like:

Error at line #121 open tag found without close tag.
No, I think your mom is waiting for websites coded by the competent.
We're all human after all. I am afraid that making sure that each and
every piece of code coming out of a program is 100% well formed is
impossible. (Sounds like solving the halting problem to me.)


Don't be ridiculous; making XML well-formed is trivial.


*sigh*, yes, handcoding and verifying it is.

But you have to think about XML generated on the fly from a database. If
it's trivial to make that well-formed, maybe you can explain why so much
software has so many bugs?

For example, imagine a program that, while fetching from a database and
spitting out XML, suddenly reports:

"Error: value of x should be > 10 and < 23"

That would result in a nice white page, now wouldn't it? And even if you
encode > and <, you still risk not writing out closing elements.
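A minimal sketch of guarding against both failure modes (escapeXml and element are hypothetical helpers, not anyone's actual code): if every value passes through one escaping function, and every element is emitted by one builder that always writes its close tag, then neither stray markup characters nor missing close tags can reach the output.

```javascript
// Hypothetical sketch: centralize escaping and tag emission so that
// malformed values or forgotten close tags can't reach the output.
function escapeXml(s) {
  return String(s).replace(/&/g, "&amp;")
                  .replace(/</g, "&lt;")
                  .replace(/>/g, "&gt;");
}
function element(tag, text) {
  // The close tag is written unconditionally, so it can't be forgotten.
  return "<" + tag + ">" + escapeXml(text) + "</" + tag + ">";
}
```

With this, even the error message above comes out well formed: element("error", "value of x should be > 10 and < 23") yields a properly escaped, properly closed element.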
Most languages
with rapid release cycles already have bulletproof XML support either
built-in or in well-known public archives.
Uhm, I use Perl, and I certainly wouldn't call its XML support
bulletproof. Maybe you're talking about a different language?
IMO, the W3C is wrong to waste time on a "standard" that will never
make it (I call *that* incompetence). They *should* focus on
improving HTML, not on dreaming up something that no webmaster is ever
going to use, *unless* UA's are able to handle non-well-formed
documents.


Because webmasters are too lazy to do their job right? Screw 'em!


LOL, yeah

"Lift Your Skinny Fists Like Antennas to Heaven" [1]
I just had to spend six hours of my life hand-editing someone's tag
soup into a usable form.


Now add up all the hours people have to spend cleaning up their sites
and wonder: should a small bunch of people at the W3C waste that much time
and money because they think that a parser should be more strict?

Finally, what's the point in sites showing white pages? A better option
would be to have the browser report parsing issues back to the webmaster
*and* do its best to render the page.

My own site was down for a few hours some time ago. You think people
always report such problems? I was lucky that a friend visited the site.
So what's the point of strict parsers? They don't improve code quality.
[1] Godspeed You Black Emperor! album title

--
John MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
Feb 10 '06 #57
John Bokma wrote:

Finally, what's the point in sites showing white pages? A better option
would be to have the browser report parsing issues back to the webmaster
*and* do its best to render the page.

My own site was down for a few hours some time ago. You think people
always report such problems? I was lucky that a friend visited the site.
So what's the point of strict parsers? They don't improve code quality.

At the risk of drifting back on topic: for sites which rely on dynamic
content generated in the user agent, an accurate DOM representation is
required, so the markup has to be well formed.

Now this can be done (and currently is, to varying degrees) by the user
agent (look at the innerHTML of some malformed HTML) or by an
application like HTML Tidy, but I'd consider myself a fool if I
generated bad markup and then wondered why the page's dynamic content
didn't work.

So getting back to your last sentence, well formed markup is the first
step to code quality. You can't build on shaky foundations.
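As a crude illustration of that first step (a hypothetical checker, far weaker than HTML Tidy or a real parser), even a simple tag-balance scan catches the most common soup, namely unclosed and mis-nested elements:

```javascript
// Hypothetical sketch: a naive well-formedness check via a tag stack.
// It knows nothing about void elements (<br>, <img>) or attributes
// containing ">", so treat it as an illustration, not a validator.
function tagsBalanced(markup) {
  var stack = [];
  var re = /<(\/?)([a-zA-Z][a-zA-Z0-9]*)[^>]*>/g;
  var m;
  while ((m = re.exec(markup)) !== null) {
    if (m[1] === "/") {                       // close tag: must match top of stack
      if (stack.pop() !== m[2].toLowerCase()) return false;
    } else {
      stack.push(m[2].toLowerCase());         // open tag: remember it
    }
  }
  return stack.length === 0;                  // everything closed?
}
```

For instance, tagsBalanced("<p>one<p>two") flags the unclosed paragraphs that a tag-soup parser would silently repair.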

--
Ian Collins.
Feb 11 '06 #58
Ian Collins <ia******@hotmail.com> wrote:
John Bokma wrote:

Finally, what's the point in sites showing white pages? A better
option would be to have the browser report parsing issues back to the
webmaster *and* do its best to render the page.

My own site was down for a few hours some time ago. You think people
always report such problems? I was lucky that a friend visited the
site. So what's the point of strict parsers? They don't improve code
quality.

At risk of drifting back on topic, for sites which rely on dynamic
content generated in the user agent, an accurate DOM representation is
required. So the markup has to be well formed.

Now this can be done (and currently is, to varying degrees) by the user
agent (look at the innerHTML of some malformed HTML) or by an
application like HTML Tidy, but I'd consider myself a fool if I
generated bad markup and then wondered why the page's dynamic content
didn't work.

So getting back to your last sentence, well formed markup is the first
step to code quality. You can't build on shaky foundations.


Which referred to strict parsers in a user agent. The web works as it is
now; why suddenly spin it back 10 years? Just because some people
*think* that a strict UA will suddenly improve the web? Those people
clearly have little serious programming experience. It's not the rules
that generate quality; it's the person doing the work.

I am not saying that a developer shouldn't use tools to catch mistakes,
but IMNSHO it's crazy to think that by giving someone tools, he/she
suddenly becomes an experienced coder. (I have seen people work around
compiler warnings instead of fixing the issue at hand.)

Again: a well-formed document doesn't mean that it's well formed from a
human point of view.

And IMNSHO, visitors shouldn't be bothered with such issues.

Feb 11 '06 #59
John Bokma wrote:
True, however that is not a valid argument against XHTML as a
hopefully _future_ "mainstream" markup language. And I was talking
about a possible, and for me desirable, future only.


The major valid argument is that the parser is too strict to be
practically useful in the real world.


Which is THE good thing about XHTML, and why I would like it to be much more
common (or at least for its authoring to be better) -- XHTML being XML
allows me to do all kinds of wild things with it, like processing with XSLT,
etc. (Yes, I know I could quite often process HTML with XSLT, but it is
always something close to playing in Las Vegas.)

Matej
Feb 11 '06 #60
ce****@gmail.com wrote:
John Bokma wrote:
True, however that is not a valid argument against XHTML as a
hopefully _future_ "mainstream" markup language. And I was talking
about a possible, and for me desirable, future only.


The major valid argument is that the parser is too strict to be
practically useful in the real world.


Which is THE good thing about XHTML, and why I would like it to be much
more common (or at least for its authoring to be better) -- XHTML
being XML allows me to do all kinds of wild things with it, like processing
with XSLT, etc. (Yes, I know I could quite often process HTML with XSLT,
but it is always something close to playing in Las Vegas.)


Nobody is going to stop you from doing the real work in XML, and
converting it, when needed, into HTML. That's what I do.

But I don't feel the need to move the debugging process to my visitors,
if they would even report back to me (which the majority probably won't).

Feb 11 '06 #61
VK

John Bokma wrote:
Which referred to strict parsers in a user agent. The web works as it is
now; why suddenly spin it back 10 years? Just because some people
*think* that a strict UA will suddenly improve the web? Those people
clearly have little serious programming experience. It's not the rules
that generate quality; it's the person doing the work.


Right. In addition (I missed it in my first post) I would like to say:

If anyone is really starving for a "non-forgiving" browser, then it
already exists. Simply go to <http://www.w3.org/Amaya/> and get your
copy of Amaya.

<quote>
The main motivation for developing Amaya was to provide a framework
that can integrate as many W3C technologies as possible. It is used to
demonstrate these technologies in action while taking advantage of
their combination in a single, consistent environment.
</quote>

After several days... sorry... hours... sorry... I would say minutes...
of browsing, one can switch back to the regular "error-tolerant"
browser.

It would be great to hear one's *sincere* feedback. Does one still
dream about a "stricter UA", or was she magically cured? :-)

Feb 11 '06 #62
Good for you! I'd rather have one copy of my website than two.

Matej

Feb 13 '06 #63
Jim Ley wrote:
On Wed, 08 Feb 2006 23:45:16 GMT, RobG <rg***@iinet.net.au> wrote:
One thing it does is to highlight that DOM 3 Load and Save does more
than innerHTML and outerHTML combined in a more robust and supportable
manner.

[...]
How much support for DOM 3 will be in IE 7? It's struggling to implement
DOM 2 fully.


DOM 3 is about XML, [...]


No, it is not. DOM 3 _Load and Save_ is about _XML document types_ only.
PointedEars
Feb 15 '06 #64
