
if I wanted to never use innerHTML, what else would I use?

In the course of my research I stumbled upon this article by Alex
Russell and Tim Scarfe:

http://www.developer-x.com/content/i...l/default.html

The case is made that innerHTML should never be used. I'm wondering: if
I wanted all the content of BODY as a string, how else could I get it
except through innerHTML?
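
For what it's worth, two DOM-only answers to the literal question: on
Mozilla-family browsers, new XMLSerializer().serializeToString(document.body)
gives you a string without touching innerHTML; failing that, you can walk the
tree yourself. A minimal sketch of the walk (text and element nodes only;
void elements such as <br> would need special-casing):

function serializeNode(node) {
  if (node.nodeType === 3) {                       // text node
    return node.nodeValue.replace(/&/g, "&amp;")
                         .replace(/</g, "&lt;");
  }
  if (node.nodeType !== 1) return "";              // skip comments etc.
  var tag = node.tagName.toLowerCase();
  var out = "<" + tag;
  for (var i = 0; i < node.attributes.length; i++) {
    var a = node.attributes[i];
    out += " " + a.name + '="' +
           String(a.value).replace(/"/g, "&quot;") + '"';
  }
  out += ">";
  for (var j = 0; j < node.childNodes.length; j++) {
    out += serializeNode(node.childNodes[j]);
  }
  return out + "</" + tag + ">";
}

// Roughly what document.body.innerHTML would have returned:
var markup = "";
for (var k = 0; k < document.body.childNodes.length; k++) {
  markup += serializeNode(document.body.childNodes[k]);
}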

Feb 7 '06
"John W. Kennedy" <jw*****@attglobal.net> wrote:
John Bokma wrote:
"John W. Kennedy" <jw*****@attglobal.net> wrote:
It is /not/ a fundamental problem with XHTML, it is a fundamental
problem with so-called "web browsers" that can't be bothered to
follow standards


The recommendation (it's not a standard) for XML is that if the
document is not well-formed, the parser should *stop* and report.

You really think my mom is waiting for stuff like:

Error at line #121 open tag found without close tag.


No, I think your mom is waiting for websites coded by the competent.


We're all human after all. I am afraid that making sure that each and
every piece of code coming out of a program is 100% well formed is
impossible. (Sounds like solving the halting problem to me.)
(in some cases, due to incompetence, but, in Microsoft's case,
because it is their deliberate policy to ignore and sabotage
standards wherever possible).


You're mistaken: most of the things you call standards are
recommendations and working drafts.


A verbal quibble to pass off Microsoft's vicious behavior as
acceptable, and you know it.


IMO, the W3C is wrong to waste time on a "standard" that will never make
it (I call *that* incompetence). They *should* focus on improving HTML,
not on dreaming up something that no webmaster is ever going to use, *unless*
UAs are able to handle non-well-formed documents.

And I am sure UAs are not going to be made to choke on each and every
document that's not well-formed, so there goes the XHTML dream.

Computers are here, IMNSHO, to make life easier. Giving no output because
someone made a typing error is just plain crazy in an end-user situation.

It's like selling an email client which rejects all email with a spelling
mistake.
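
And to be concrete about what "no output" means: here is draconian handling
as seen from script, sketched with Mozilla's DOMParser of the day (the
<parsererror> result document is Mozilla's convention, not something the XML
recommendation prescribes):

var parser = new DOMParser();
var doc = parser.parseFromString("<a><b></a>", "text/xml"); // missing </b>

if (doc.documentElement.nodeName === "parsererror") {
  // No usable tree at all: one typo and the whole document is refused.
  alert("Not well-formed; parsing stopped:\n" +
        doc.documentElement.textContent);
}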

--
John MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
Feb 10 '06 #51
Lasse Reichstein Nielsen <lr*@hotpop.com> wrote:
So, to make writing XML even remotely pleasant, the editor should
at least be able to check the syntax against a document definition.
Syntax highlighting is also important, since black-on-white XML
isn't very readable either.
Well written! I maintain my site in XML, and am already looking for a way
to hide the XML details internally in an editor I am about to write.
(The development of syntax:

Math:
x |-> x * 2

Scheme:
(lambda (x) (* x 2))

MathML:
<lambda>
<bvar><ci> x </ci></bvar>
<apply>
<times/>
<ci> x </ci>
<cn> 2 </cn>
</apply>
</lambda>

:)


Good one, and quite true. Gazing at XML for hours, daily, really does make
you wonder.

--
John MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
Feb 10 '06 #52
John Bokma wrote:
Ian Collins <ia******@hotmail.com> wrote:

John Bokma wrote:
Ian Collins <ia******@hotmail.com> wrote:
If the internet is to move on, we have to look beyond HTML. The
combination of XML/CSS and JavaScript offers unlimited potential.

Can you explain why HTML + JavaScript can't offer such a thing? Also,
I see no reason why an HTML parser can't fix errors instead of
dying at the first one, and create a parse tree that is well formed.


You're freed from the constraints of the HTML DTD.

I was always under the impression that the HTML DTD could be extended.

There's a big difference between extending and creating something new.
When you extend, you still have all the baggage from the base document,
and you can't (I don't think) change existing entities.

--
Ian Collins.
Feb 10 '06 #53
John Bokma wrote:
"John W. Kennedy" <jw*****@attglobal.net> wrote:
John Bokma wrote:
"John W. Kennedy" <jw*****@attglobal.net> wrote:

It is /not/ a fundamental problem with XHTML, it is a fundamental
problem with so-called "web browsers" that can't be bothered to
follow standards
The recommendation (it's not a standard) for XML is that if the
document is not well-formed, the parser should *stop* and report.

You really think my mom is waiting for stuff like:

Error at line #121 open tag found without close tag.

No, I think your mom is waiting for websites coded by the competent.


We're all human after all. I am afraid that making sure that each and
every piece of code coming out of a program is 100% well formed is
impossible. (Sounds like solving the halting problem to me.)


Don't be ridiculous; making XML well-formed is trivial. Most languages
with rapid release cycles already have bulletproof XML support either
built-in or in well-known public archives.
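
What "bulletproof support" buys you, sketched in the browser JavaScript of
the day (document.implementation.createDocument and XMLSerializer are
Mozilla-family APIs): build the tree through a DOM interface and let the
serializer write it out, so well-formedness is the library's responsibility
rather than the template author's.

var doc = document.implementation.createDocument("", "items", null);
var item = doc.createElement("item");
item.setAttribute("id", "1");
item.appendChild(doc.createTextNode('5 < 10 & "quoted"')); // escaped for us
doc.documentElement.appendChild(item);

var xml = new XMLSerializer().serializeToString(doc);
// => <items><item id="1">5 &lt; 10 &amp; "quoted"</item></items>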
(in some cases, due to incompetence, but, in Microsoft's case,
because it is their deliberate policy to ignore and sabotage
standards wherever possible).
You're mistaken, most standards you call standards are
recommendations and working drafts.

A verbal quibble to pass off Microsoft's vicious behavior as
acceptable, and you know it.


IMO, the W3C is wrong to waste time on a "standard" that will never make
it (I call *that* incompetence). They *should* focus on improving HTML,
not on dreaming up something that no webmaster is ever going to use, *unless*
UAs are able to handle non-well-formed documents.


Because webmasters are too lazy to do their job right? Screw 'em!

I just had to spend six hours of my life hand-editing someone's tag soup
into a usable form.

--
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
-- Charles Williams. "Judgement at Chelmsford"
Feb 10 '06 #54
VK

John W. Kennedy wrote:
I just had to spend six hours of my life hand-editing someone's tag soup
into a usable form.


The soup is in the head, *not* in the standard.

One can make a soup-less HTML delivery system.

You think that XML implies soup-less thinking and data management?
How much would you bet against the practical jokes in my collection?

Feb 10 '06 #55
Ian Collins <ia******@hotmail.com> wrote:
John Bokma wrote:
Ian Collins <ia******@hotmail.com> wrote:
[..]
I was always under the impression that the HTML DTD could be extended.

There's a big difference between extending and creating something new.
When you extend, you still have all the baggage from the base document,
and you can't (I don't think) change existing entities.


You can point to your own DTD. The major question is: will every recent
browser honor that request?
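
For what it's worth, "pointing to your own DTD" looks like the following
(the DTD URL is hypothetical). The catch is exactly the question above:
mainstream browsers don't fetch external DTDs at all -- they only sniff the
doctype to choose a rendering mode -- so the <note> element below gets no
special treatment from them, only from a validating parser.

<!DOCTYPE html SYSTEM "http://example.com/dtd/xhtml-extended.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Extended</title></head>
  <body><note>Declared only in the custom DTD above.</note></body>
</html>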

--
John MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
Feb 10 '06 #56
"John W. Kennedy" <jw*****@attglobal.net> wrote:
John Bokma wrote:
"John W. Kennedy" <jw*****@attglobal.net> wrote:
John Bokma wrote:
"John W. Kennedy" <jw*****@attglobal.net> wrote:

> It is /not/ a fundamental problem with XHTML, it is a fundamental
> problem with so-called "web browsers" that can't be bothered to
> follow standards
The recommendation (it's not a standard) for XML is that if the
document is not well-formed, the parser should *stop* and report.

You really think my mom is waiting for stuff like:

Error at line #121 open tag found without close tag.
No, I think your mom is waiting for websites coded by the competent.
We're all human after all. I am afraid that making sure that each and
every piece of code coming out of a program is 100% well formed is
impossible. (Sounds like solving the halting problem to me.)


Don't be ridiculous; making XML well-formed is trivial.


*sigh*, yes, handcoding and verifying it is.

But you have to think about XML generated on the fly from a database. If
it's trivial to make that well-formed, maybe you can explain why so much
software has so many bugs?

For example, imagine a program that, while fetching from a database and
spitting out XML, suddenly reports:

"Error: value of x should be > 10 and < 23"

That would result in a nice white page now, wouldn't it? And even if you
encode > and <, you still risk not writing out closing elements.
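
To illustrate: escaping fixes character-level problems only. A sketch of such
a generator (names hypothetical); if the process dies mid-loop, the output is
still truncated, unclosed XML, which is exactly the risk described above.

function escapeXml(s) {
  return String(s).replace(/&/g, "&amp;")
                  .replace(/</g, "&lt;")
                  .replace(/>/g, "&gt;");
}

function rowsToXml(rows) {                   // rows: e.g. from a DB fetch
  var out = "<rows>";
  for (var i = 0; i < rows.length; i++) {
    out += "<row>" + escapeXml(rows[i].value) + "</row>"; // may throw here
  }
  return out + "</rows>";                    // never reached if a row throws
}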
Most languages
with rapid release cycles already have bulletproof XML support either
built-in or in well-known public archives.
Uhm, I use Perl, and I certainly wouldn't call its XML support
bulletproof. Maybe you're talking about a different language?
IMO, the W3C is wrong to waste time on a "standard" that will never
make it (I call *that* incompetence). They *should* focus on
improving HTML, not on dreaming up something that no webmaster is ever
going to use, *unless* UAs are able to handle non-well-formed
documents.


Because webmasters are too lazy to do their job right? Screw 'em!


LOL, yeah

"Lift Your Skinny Fists Like Antennas to Heaven" [1]
I just had to spend six hours of my life hand-editing someone's tag
soup into a usable form.


Now add up all the hours people have to spend cleaning up their sites,
and wonder: should a small bunch of people at the W3C waste that much time
and money because they think that a parser should be more strict?

Finally, what's the point in sites showing white pages? A better option
would be to have the browser report parsing issues back to the webmaster
*and* do its best to render the page.

My own site was down for a few hours some time ago. You think people
always report such problems? I was lucky that a friend visited the site.
So what's the point of strict parsers? They don't improve code quality.
[1] Godspeed You Black Emperor! album title
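
The "report back to the webmaster" half of that idea is easy to sketch in
script, though only for script errors -- no browser of the day exposed markup
parse problems this way, and the /log-error endpoint is hypothetical:

window.onerror = function (msg, url, line) {
  // Fire-and-forget beacon to a logging endpoint the webmaster watches.
  new Image().src = "/log-error?m=" + encodeURIComponent(msg) +
                    "&u=" + encodeURIComponent(url) + "&l=" + line;
  return true;   // suppress the user-visible error and carry on
};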

--
John MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
Feb 10 '06 #57
John Bokma wrote:

Finally, what's the point in sites showing white pages? A better option
would be to have the browser report parsing issues back to the webmaster
*and* do its best to render the page.

My own site was down for a few hours some time ago. You think people
always report such problems? I was lucky that a friend visited the site.
So what's the point of strict parsers? They don't improve code quality.

At risk of drifting back on topic, for sites which rely on dynamic
content generated in the user agent, an accurate DOM representation is
required. So the markup has to be well formed.

Now this can (and currently is, to varying degrees) be done by the user
agent (look at the innerHTML of some malformed HTML) or by an
application like HTML Tidy, but I'd consider myself a fool if I
generated bad markup and then wondered why the page's dynamic content
didn't work.
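
A quick way to see that repair work: feed the parser some tag soup through
innerHTML and read back what it actually built (a sketch; the exact fix-up
and tag casing vary per user agent):

var div = document.createElement("div");
div.innerHTML = "<p>one<p>two<b>bold";   // unclosed tags in
alert(div.innerHTML);
// Typically <p>one</p><p>two<b>bold</b></p> comes back out.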

So getting back to your last sentence, well formed markup is the first
step to code quality. You can't build on shaky foundations.

--
Ian Collins.
Feb 11 '06 #58
Ian Collins <ia******@hotmail.com> wrote:
John Bokma wrote:

Finally, what's the point in sites showing white pages? A better
option would be to have the browser report parsing issues back to the
webmaster *and* do its best to render the page.

My own site was down for a few hours some time ago. You think people
always report such problems? I was lucky that a friend visited the
site. So what's the point of strict parsers? They don't improve code
quality.

At risk of drifting back on topic, for sites which rely on dynamic
content generated in the user agent, an accurate DOM representation is
required. So the markup has to be well formed.

Now this can (and currently is, to varying degrees) be done by the user
agent (look at the innerHTML of some malformed HTML) or by an
application like HTML Tidy, but I'd consider myself a fool if I
generated bad markup and then wondered why the page's dynamic content
didn't work.

So getting back to your last sentence, well formed markup is the first
step to code quality. You can't build on shaky foundations.


Which referred to strict parsers in a user agent. The web works as it is
now; why suddenly spin it back 10 years? Just because some people
*think* that a strict UA will suddenly improve the web? Those people
clearly have little serious programming experience. It's not the rules
that generate quality; it's the person doing the work.

I am not saying that a developer shouldn't use tools to catch mistakes,
but IMNSHO it's crazy to think that by giving someone tools, he/she
suddenly becomes an experienced coder. (I have seen people work around
compiler warnings instead of fixing the issue at hand.)

Again: a well-formed document doesn't mean that it's well formed from a
human point of view.

And IMNSHO, visitors shouldn't be bothered with such issues.

--
John MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
Feb 11 '06 #59
John Bokma wrote:
True, however that is not a valid argument against XHTML as a
hopefully _future_ "mainstream" markup language. And I was talking
about a possible, and for me desirable, future only.


The major valid argument is that the parser is too strict to be
practically useful in the real world.


Which is THE good thing about XHTML, and why I would like it to be much more
common (or at least for its authoring to be better) -- XHTML being XML
allows me to do all kinds of wild things with it, like processing with XSLT
etc. (yes, I know I could quite often process HTML with XSLT, but it is
always something close to playing in Las Vegas).
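
A sketch of that payoff in script, using Mozilla's XSLTProcessor (both URLs
are hypothetical; this works only because the source document is well-formed
XML):

var req = new XMLHttpRequest();
req.open("GET", "/style/to-summary.xsl", false);  // hypothetical stylesheet
req.send(null);

var xslt = new XSLTProcessor();
xslt.importStylesheet(req.responseXML);

var req2 = new XMLHttpRequest();
req2.open("GET", "/page.xhtml", false);           // hypothetical document
req2.send(null);

var result = xslt.transformToDocument(req2.responseXML);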

Matej
Feb 11 '06 #60
ce****@gmail.com wrote:
John Bokma wrote:
True, however that is not a valid argument against XHTML as a
hopefully _future_ "mainstream" markup language. And I was talking
about a possible, and for me desirable, future only.


The major valid argument is that the parser is too strict to be
practically useful in the real world.


Which is THE good thing about XHTML, and why I would like it to be much
more common (or at least for its authoring to be better) -- XHTML
being XML allows me to do all kinds of wild things with it, like processing
with XSLT etc. (yes, I know I could quite often process HTML with XSLT, but
it is always something close to playing in Las Vegas).


Nobody is going to stop you from doing the real work in XML, and
converting it, when needed, into HTML. That's what I do.

But I don't feel the need to move the debugging process to my visitors,
even if they would ever report back to me (which the majority probably won't).

--
John MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
Feb 11 '06 #61
VK

John Bokma wrote:
Which referred to strict parsers in a user agent. The web works as it is
now; why suddenly spin it back 10 years? Just because some people
*think* that a strict UA will suddenly improve the web? Those people
clearly have little serious programming experience. It's not the rules
that generate quality; it's the person doing the work.


Right. In addition (I missed it in my first post) I would like to say:

If anyone is really longing for a "non-forgiving" browser, it
already exists. Simply go to <http://www.w3.org/Amaya/> and get your
copy of Amaya.

<quote>
The main motivation for developing Amaya was to provide a framework
that can integrate as many W3C technologies as possible. It is used to
demonstrate these technologies in action while taking advantage of
their combination in a single, consistent environment.
</quote>

After several days... sorry... hours... sorry... I would say minutes...
of browsing, one can switch back to the regular "error-tolerant"
browser.

It would be great to hear one's *sincere* feedback. Does one still
dream about a "stricter UA", or was she magically cured? :-)

Feb 11 '06 #62
Good for you! I'd rather have one copy of my website than two.

Matej

Feb 13 '06 #63
Jim Ley wrote:
On Wed, 08 Feb 2006 23:45:16 GMT, RobG <rg***@iinet.net.au> wrote:
One thing it does is highlight that DOM 3 Load and Save does more
than innerHTML and outerHTML combined, in a more robust and supportable
manner.

[...]
How much support for DOM 3 will be in IE 7? It's struggling to implement
DOM 2 fully.


DOM 3 is about XML, [...]


No, it is not. DOM 3 _Load and Save_ is about _XML document types_ only.
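
For reference, the interfaces in question, per the W3C DOM 3 Load and Save
ECMAScript binding -- spec on paper rather than working browser code, since
hardly any user agent of the day implemented it:

var impl = document.implementation;               // must support "LS 3.0"

var serializer = impl.createLSSerializer();
var markup = serializer.writeToString(document.documentElement);

var parser = impl.createLSParser(impl.MODE_SYNCHRONOUS, null);
var input = impl.createLSInput();
input.stringData = "<root><child/></root>";
var doc2 = parser.parse(input);                   // refuses ill-formed XML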
PointedEars
Feb 15 '06 #64
