Bytes IT Community

Validation: XHTML Transitional vs. HTLM 4.01 Strict

What are the pluses and minuses of constructing and validating between
XHTML Transitional vs. HTLM 4.01 Strict

Thanks, CMA


Jul 20 '05 #1
25 Replies


On Fri, 23 Jul 2004 17:10:46 GMT, CMAR <cm***@yahoo.com> wrote:
What are the pluses and minuses of constructing and validating between
XHTML Transitional vs. HTLM 4.01 Strict

Thanks, CMA


If you use XML tools to build the page, XHTML is worthwhile if you can
serve it properly (which includes accounting for that set of browsers
which cannot handle properly-served XHTML).

Otherwise, there is no benefit I'm aware of.
Jul 20 '05 #2

CMAR wrote:
What are the pluses and minuses of constructing and validating
between XHTML Transitional vs. HTLM 4.01 Strict


Your comparing different variants of different languages. Let's
separate them.

transitional v. strict
----------------------
transitional is meant to east the transition (get it?) from HTML 3.2
pseudo desktop publishing markup to more semantic, SGML-style markup. There
is very little presentational aid in strict, so you'll be relying on
CSS almost exclusively for the layout/colors. I think that's an
advantage. CSS gives you more options, and is more efficient. It is
also a depressing experiment in browser bugs, so brace yourself.

winner? strict

XHTML v HTML
------------
XHTML is a reformulation of HTML as an XML application. Lots of people
think it's superior because it came after HTML 4, and is thus the
newest standard. But they don't have any reason to use it other than
"it's the latest".

XHTML offers no advantages over HTML in terms of markup. There are no
additional elements, no radically different constructs. Nonetheless,
it is supposed to be served with a different MIME type. HTML is
text/html, whereas XHTML is application/xhtml+xml. The problem is that
MSIE does not understand this new MIME type. So servers send it with
text/html. That leads to its own problems:

http://www.hixie.ch/advocacy/xhtml

Since it offers no benefits, but does have drawbacks, the answer seems
obvious, unless you have specific reasons for needing XHTML.

winner? HTML

In short, use HTML 4.01/strict.
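For what it's worth, the negotiation dance implied above (send application/xhtml+xml only to agents that claim to accept it, text/html to everyone else) can be sketched in a few lines. This is an illustrative Python function of my own, not anything from the thread, and it deliberately ignores q-values and wildcard media ranges that a real implementation must honour:

```python
# Illustrative sketch: pick a Content-Type for an XHTML page based on
# the client's Accept header. Browsers of the MSIE/Win era never list
# application/xhtml+xml, so they fall back to text/html.
# NOTE: q-values and wildcard ranges are deliberately ignored here.
def pick_content_type(accept_header: str) -> str:
    accepted = [part.split(";")[0].strip()
                for part in accept_header.split(",")]
    if "application/xhtml+xml" in accepted:
        return "application/xhtml+xml"
    return "text/html"

print(pick_content_type("text/html,application/xhtml+xml,*/*;q=0.8"))
print(pick_content_type("text/html,*/*"))
```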

HTH

--
Brian (remove ".invalid" to email me)
http://www.tsmchughs.com/
Jul 20 '05 #3

Brian wrote:
Your comparing different variants of different languages.


Jeez Louise. My sister just misused "your" instead of "you're" in an
email to me, and apparently it's contagious. That should be "You're
comparing...." (Sure, it's pathetic to blame someone else for my
screwups, but that one is too embarrassing to take the fall by myself.)

--
Brian (remove ".invalid" to email me)
http://www.tsmchughs.com/
Jul 20 '05 #4

Brian wrote:
Brian wrote:
Your comparing different variants of different languages.


Jeez Louise. My sister just misused "your" instead of "you're" in an
email to me, and apparently it's contagious. That should be "You're
comparing...." (Sure, it's pathetic to blame someone else for my
screwups, but that one is too embarrassing to take the fall by myself.)


People also ought to attempt to spell "HTML" correctly when posting to
an HTML newsgroup, shouldn't they?

--
Dan
Jul 20 '05 #5

"CMAR" <cm***@yahoo.com> wrote in news:qEbMc.56758$yd5.32501
@twister.nyroc.rr.com:

in theory, web browsers can handle xhtml specifics - self closing tags for
instance, but that, as always is up to the software houses to sort.

The advantage of xhtml is the better structure you're forced to, with a far
stricter ruleset for how all the tags have to be written. xhtml 2, just
released in draft, goes somewhat further with some important tag changes;
this is, of course, the rub. You still find pages out there that are written
in html 2, but they look dreadful. Keeping up with the current standards,
as xhtml is (html 4 has been around about as long as M$ NT4, and M$ have
stopped support for that now) is important.

In the end, validating xhtml is tougher, but you end up with a better
formatted document.
What are the pluses and minuses of constructing and validating between
XHTML Transitional vs. HTLM 4.01 Strict

Thanks, CMA


Jul 20 '05 #6

Daniel R. Tobias wrote:
Brian wrote:
That should be "You're comparing...."


People also ought to attempt to spell "HTML" correctly when posting
to an HTML newsgroup, shouldn't they?


Uh, I'll take what's coming to me for mixing up "your" and "you're",
but I'm afraid the op is on the hook for transposing the letters in
the subject, so take it up with him.

--
Brian (remove ".invalid" to email me)
http://www.tsmchughs.com/
Jul 20 '05 #7

> "CMAR" <cm***@yahoo.com> wrote
What are the pluses and minuses of constructing and validating
between
XHTML Transitional vs. HTLM 4.01 Strict


s_m_b wrote:
in theory, web browsers can handle xhtml specifics - self closing tags for
instance, but that, as always is up to the software houses to sort.
Theory does not match reality. The most popular software used to
browse web pages, MSIE/Win cannot handle xhtml when served up properly.

http://www.hixie.ch/advocacy/xhtml
The advantage of xhtml is the better structure you're forced to, with a far
stricter ruleset for how all the tags have to be written.
If you want stricter syntax (e.g., explicitly closed p and li
elements), then just close them. That's no reason to choose XHTML.
xhtml 2, just
released in draft, goes somewhat further with some important tag changes;
Since it breaks backward compatibility, it's hard to see what
advantages it has as an authoring language.
Keeping up with the current standards,
as xhtml is (html 4 has been around about as long as M$ NT4, and M$ have
stopped support for that now) is important.


One should not decide on XHTML simply because it's new. If there isn't
a reason, then HTML is the way to go.

--
Brian (remove ".invalid" to email me)
http://www.tsmchughs.com/
Jul 20 '05 #8

In article <10*************@corp.supernews.com>,
Brian <us*****@julietremblay.com.invalid> wrote:
s_m_b wrote:
in theory, web browsers can handle xhtml specifics - self closing tags for
instance, but that, as always is up to the software houses to sort.


Theory does not match reality. The most popular software used to
browse web pages, MSIE/Win cannot handle xhtml when served up properly.

http://www.hixie.ch/advocacy/xhtml


I found it somewhat disturbing that it does not talk about a future
transition to 'real' XHTML of one's markup. It costs authors little
trouble to write XHTML right now and serve it as HTML tagsoup,
because current browser limitations enforce that.

One of the caveats is that authors could easily come to think that they
produce documents that can be served as application/xhtml+xml at any time
of their choosing without a problem. Often, a document is not well-formed
from the start, or it loses well-formedness over time because of updates
to its content. Also, scripts embedded in or attached to the document are
often not good enough to continue functioning in browsers when the switch
to real XHTML is made.
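That "loses well-formedness over time" trap is easy to test for mechanically. A minimal sketch using Python's standard XML parser (the function name is mine, purely illustrative):

```python
# Sketch: a page that renders fine as text/html tag soup can still
# fail XML well-formedness. An unclosed inline element, or a bare "&",
# is enough to make an XML parser reject the whole document.
import xml.etree.ElementTree as ET

def is_well_formed(markup: str) -> bool:
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<p>fine</p>"))                   # True
print(is_well_formed("<p>unclosed <em>emphasis</p>"))  # False
print(is_well_formed("<p>AT&T</p>"))                   # False: bare ampersand
```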
The advantage of xhtml is the better structure you're forced to, with a far
stricter ruleset for how all the tags have to be written.


If you want stricter syntax (e.g., explicitly closed p and li
elements), then just close them. That's no reason to choose XHTML.


Choosing the Strict version of HTML 4.01 or XHTML 1.0 makes a
bigger difference than choosing between HTML 4.01 and XHTML 1.0. Strict is
the key to better strictness, and that should be self-evident from its
name.

If authors serve their XHTML as text/html, which in turn makes UAs
interpret it as HTML tagsoup, there is no 'forced to more strictness',
because the author does not get penalized with an XML parsing error for
making a mistake when checking the page.
Keeping up with the current standards,
as xhtml is (html 4 has been around about as long as M$ NT4, at M$ have
stopped support for that now) is important.


One should not decide on XHTML simply because it's new. If there isn't
a reason, then HTML is the way to go.


No comment on that. Found it important to have it quoted here for anyone
who is considering using XHTML.

--
Kris
<kr*******@xs4all.netherlands> (nl)
Jul 20 '05 #9

On Sat, 24 Jul 2004 11:00:36 +0200, Kris
<kr*******@xs4all.netherlands> wrote:
In article <10*************@corp.supernews.com>,
Brian <us*****@julietremblay.com.invalid> wrote:
s_m_b wrote:
> in theory, web browsers can handle xhtml specifics - self closing tags for
> instance, but that, as always is up to the software houses to sort.
Theory does not match reality. The most popular software used to
browse web pages, MSIE/Win cannot handle xhtml when served up properly.

http://www.hixie.ch/advocacy/xhtml


I found it somewhat disturbing that it does not talk about a future
transition to 'real' XHTML of one's markup.


That's because it's trivial to convert valid HTML 4.01 to XHTML, no
trouble at all; in fact it's a lot easier than authoring XHTML
directly in a way that follows the observations of Appendix C.
If authors serve their XHTML as text/html, which in turn makes UAs
interpret it as HTML tagsoup, there is no 'forced to more strictness',
because the author does not get penalized with an XML parsing error for
making a mistake when checking the page.


It's the user that is penalised in the above scenario, not the author.

XML WF constraints should not exist on user focused languages.

Jim.
--
comp.lang.javascript FAQ - http://jibbering.com/faq/

Jul 20 '05 #10

On Sat, 24 Jul 2004, Jim Ley wrote:
XML WF constraints should not exist on user focused languages.


But how do you know what the author intended, if they haven't followed
the elementary rules of the language? Getting "whatever was displayed
on the author's screen by version X of browser Y" isn't a very clear
design criterion, especially if you don't know the values of X and Y.

CSS at least has some mandatory requirements for error handling (even
/those/ are ignored by the Operating System Component that thinks it's
a web browser), and some general principles: I would summarise as
approx. "ignore everything that can't be understood unambiguously,
lest your guess prove disastrous". But that approach doesn't work for
broken (X)HTML.

The root problem is that the mass of authors are allowing themselves
to use a browser (or two) as arbiters of what is correct. If they
can't be disabused of that (and it doesn't look as if it's happening
any time soon), then browsers -should- be putting up a clear
indication whenever something is wrong, for the benefit both of
authors and of their poor readers.
Jul 20 '05 #11

On Sat, 24 Jul 2004 11:41:12 +0100, "Alan J. Flavell"
<fl*****@ph.gla.ac.uk> wrote:
On Sat, 24 Jul 2004, Jim Ley wrote:
XML WF constraints should not exist on user focused languages.
But how do you know what the author intended, if they haven't followed
the elementary rules of the language?


They didn't follow the elementary rules of the language, it doesn't
matter what they intended, what's important is what the user wants.

What matters is that the user isn't penalised for their incompetence,
so the User Agent should be allowed to have a guess at what at least
makes sense to it. Generally you'll find that the guess is good
enough, and if it's not then it'll be obvious to the human that's
using the document.

Take the trivial example of a missing </html> on the end of an XHTML
document: the guess that it's really there will not cause anything
disastrous. Or guessing at an undeclared entity in a standalone XML doc
is probably a pretty workable error recovery.

You may be able to construct a few documents where XML WF error
recovery results in documents that are not understandable, or not
obviously broken to the user, but I think you'll struggle - and I
think in the vast majority of cases the user will benefit from the
behaviour.
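The two recovery models being debated here can be put side by side with Python's standard parsers: the lenient HTML parser shrugs off a botched close tag, while the XML parser treats the same document as a fatal error. This only demonstrates the principle, not any particular browser's behaviour:

```python
# The same slip - <em> opened but never closed - handled two ways.
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

broken = "<p>missing <em>close tag</p>"

# Tag-soup model: the parser carries on without complaint.
HTMLParser().feed(broken)

# XML well-formedness model: one mistake and the document is refused.
try:
    ET.fromstring(broken)
except ET.ParseError as err:
    print("XML refuses the document:", err)
```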
But that approach doesn't work for broken (X)HTML.
I think the observations in Appendix C. suggest that it does work for
broken HTML.
The root problem is that the mass of authors are allowing themselves
to use a browser (or two) as arbiters of what is correct.


Yep, that's a great shame, but it's not a reason to penalise the
authors, and if you want to penalise the authors - penalise them on
the servers! make it an HTTP requirement that documents are not
served if invalid - at least that will mean users will never be able
to know there's a document there they can't read.

Jim.
--
comp.lang.javascript FAQ - http://jibbering.com/faq/

Jul 20 '05 #12

Jim Ley wrote:
They didn't follow the elementary rules of the language, it doesn't
matter what they intended, what's important is what the user wants.


What browser version has a mind-reading module to determine this?

--
== Dan ==
Dan's Mail Format Site: http://mailformat.dan.info/
Dan's Web Tips: http://webtips.dan.info/
Dan's Domain Site: http://domains.dan.info/
Jul 20 '05 #13

On Sat, 24 Jul 2004 16:52:47 -0400, "Daniel R. Tobias"
<da*@tobias.name> wrote:
Jim Ley wrote:
They didn't follow the elementary rules of the language, it doesn't
matter what they intended, what's important is what the user wants.


What browser version has a mind-reading module to determine this?


Determine what?

I'm talking about problems with XML that make it inappropriate for use
on user focused documents. Users want content, they don't want "bog
off you can't read this it contains a single character missing off the
end of the document"

Of course if you claim that that requires mind-reading, then I think
you're being ridiculous.

Jim.
--
comp.lang.javascript FAQ - http://jibbering.com/faq/

Jul 20 '05 #14

In <41***************@news.individual.net>, on 07/24/2004
at 11:11 AM, ji*@jibbering.com (Jim Ley) said:
Yep, that's a great shame, but it's not a reason to penalise the
authors,


It may not be a good reason to penalize the innocent users, but it is
certainly a good reason to penalize the authors.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to sp******@library.lspace.org

Jul 20 '05 #15

GwG

"Brian" <us*****@julietremblay.com.invalid> wrote in message
news:10*************@corp.supernews.com...
Daniel R. Tobias wrote:
Brian wrote:
That should be "You're comparing...."


People also ought to attempt to spell "HTML" correctly when posting
to an HTML newsgroup, shouldn't they?


Uh, I'll take what's coming to me for mixing up "your" and "you're",
but I'm afraid the op is on the hook for transposing the letters in
the subject, so take it up with him.


"transitional is meant to east the transition (get it?)"

Not really ;-)

Jul 20 '05 #16

Jim Ley wrote:
That's because it's trivial to convert valid HTML 4.01 to XHTML, no
trouble at all, in fact it's a lot easier than authoring XHTML
straight than follows the observations of Appendix C.

If authors serve their XHTML as text/html, which in turn makes UAs
interpret it as HTML tagsoup, there is no 'forced to more strictness',
because the author does not get penalized with an XML parsing error for
making a mistake when checking the page.


I'm one of the clueless newbies who used XHTML because that was what W3C
recommended.

Since following several threads here & reading Appendix C, I've come
around to going with HTML 4.01 Strict.

However I like the idea of imposing upon myself the restrictions that
XHTML defines, which I would be free to do with HTML 4.01, such as
closing all tags & keeping the HTML free of presentational markup.

One of my reasons for using XHTML was that it allowed me to check I'd
done the above via validation. Is there any way I can write HTML 4.01
within these constraints & use a validator to tell me I've not made any
errors, rather than merely tell me I've complied with the requirements
of HTML 4.01?

--
Michael
m r o z a t u k g a t e w a y d o t n e t
Jul 20 '05 #17

On Sat, 24 Jul 2004, Jim Ley wrote:
On Sat, 24 Jul 2004 11:41:12 +0100, "Alan J. Flavell"
<fl*****@ph.gla.ac.uk> wrote:
But how do you know what the author intended, if they haven't followed
the elementary rules of the language?
They didn't follow the elementary rules of the language, it doesn't
matter what they intended, what's important is what the user wants.


Oh dear, we've been around this before, haven't we?

The user wants <strong under no circumstances</strong> to find out
what the author was trying to tell them. Hmmm?

Only the other day, I accidentally botched the closing of an <h1>. As
a consequence, the browser showed the entire rest of the page as if it
had been a heading-1. Was that what I intended? Some misguided
authors stick a <b> tag at the start of a whole chapter, -intending-
it to remain in effect across paragraph etc. boundaries (no surprises
that they use <b> in preference to <strong>, ho hum). Should the
browser pander to this wish, or not? Others who -know- the rules
nevertheless occasionally botch their closing </strong> tag and find
the browser continues the <strong> effect way beyond what they had
really intended. OK, so this might not be disastrous to the content,
but what if it had been "display: none;" that was involved? Or they
put textual content inside a table but outside of a table cell, and
the browser renders it completely out of its intended context?
What matters is that the user isn't penalised for their incompetence,
I agree with the principle: but it's sheer chance whether the fixed-up
result will show obvious defects, be merely incomprehensible, or
occasionally convey *the wrong message*. So, take care not to
"penalise the user" by displaying some botched content which might be
completely wrong, but without any form of warning that it's been
botched.
so the User Agent should be allowed to have a guess at what at least
makes sense to it. Generally you'll find that the guess is good
enough,
Often you will, but is that good enough? - it would be unwise to rely
on it. And in that regard an error-fixup indicator *could* be a
useful alert. After all, many browsers have javascript error logs,
Java error logs, popup rejection logs, etc. etc.; the one major thing
they seem to lack is HTML/CSS rendering error logs - or indeed, *any*
kind of indicator of rendering errors, even if a full-scale log would
be thought overkill and might slow-down the action.
and if it's not then it'll be obvious to the human that's
using the document.
That's *not* guaranteed (unfortunately I haven't collected the examples
I've met in the past). If it's only 1 in 100 defective documents
which carry a significantly misleading message, it's still 1 too many,
and you have no idea which one of that 100 it's going to be. I don't
want to over-dramatise this, but as the web gets read more widely and
taken more seriously as a source of practical advice, the results are
potentially life-threatening in some cases.
Take the trivial example of a missing </html> on the end of an XHTML
document,
....which an HTML rendering agent would take in its stride anyway...
the guess that it's really there will not cause anything
disastrous. Or guessing at an undeclared entity in a standalone XML doc
is probably a pretty workable error recovery.
Sounds like the usual HTML error recovery. Why not take an example
which has -real- consequences, instead of picking easy cases?
You may be able to construct a few documents where XML WF error
recovery results in documents that are not understandable, or not
obviously broken to the user, but I think you'll struggle - and I
think the vast majority of cases the user will be benefitted from the
behaviour.


And IMHO none of those would be harmed by some kind of error
indicator, which could be a valuable marker in the instances - perhaps
the minority - where the defect does not produce obvious blemishes to
alert the reader to the defects in the rendered result. And by the
same token to alert many of those misguided authors, and discourage
them from putting their crap on the web until they fixed it. Which
would be beneficial to *both* sides.

IMHO and YMMV.
Jul 20 '05 #18

Michael Rozdoba wrote:
Since following several threads here & reading Appendix C, I've
come around to going with HTML 4.01 Strict.

However I like the idea of imposing upon myself the restrictions
that XHTML defines, which I would be free to do with HTML 4.01,
such as closing all tags & keeping the HTML free of presentational
markup.
This is wrong. You've conflated strict-v.-transitional with
HTML-v.-XHTML. HTML strict does not offer any presentational
markup that XHTML strict does not. The only difference is closing tags
that are optional in HTML but required in XHTML. (There are other
differences that are not relevant to your query.)
One of my reasons for using XHTML was that it allowed me to check
I'd done the above via validation. Is there any way I can write
HTML 4.01 within these constraints & use a validator to tell me
I've not made any errors, rather than merely tell me I've complied
with the requirements of HTML 4.01?


You could write your own dtd, referencing the w3c's strict dtd but
adding additional requirements e.g. for closing </p> and </li> tags.

[This thread has nothing to do with css, so I've set followups to ciwah.]

--
Brian (remove ".invalid" to email me)
http://www.tsmchughs.com/
Jul 20 '05 #19

On Sun, 25 Jul 2004 17:45:31 +0100, "Alan J. Flavell"
<fl*****@ph.gla.ac.uk> wrote:
Only the other day, I accidentally botched the closing of an <h1>. As
a consequence, the browser showed the entire rest of the page as if it
had been a heading-1. Was that what I intended?
My point is that _XML_ is not suitable for user-focused languages;
in your example, if you'd used XML, all the user would've seen is
nothing but "haha there's some content here, but I'm not going to show
you". The problem is the XML WF constraints on stopping display of the
information.
What matters is that the user isn't penalised for their incompetence,


I agree with the principle: but it's sheer chance whether the fixed-up
result will show obvious defects, be merely incomprehensible, or
occasionally convey *the wrong message*.


I've yet to see or hear of a scenario where fixup of invalid content
has resulted in the wrong message, and I visit and hear about a lot of
invalid webpages - also remember it's not invalidity I'm arguing against
here, it's non-well-formedness (assuming UA's remain non-validating).
so the User Agent should be allowed to have a guess at what at least
makes sense to it. Generally you'll find that the guess is good
enough,


Often you will, but is that good enough? - it would be unwise to rely
on it.


It would indeed be unwise to rely on it, which is why authors should
ensure their mark-up is invalid.

Users should absolutely have a caveat-emptor view on anything they
read, so I don't see any additional risk to the user.
That's *not* guaranteed (unfortuntely I haven't collected the examples
I've met in the past). If it's only 1 in 100 defective documents
which carry a significantly misleading message, it's still 1 too many,
and you have no idea which one of that 100 it's going to be.
I don't agree with this; there are a lot more than 1 in 100 (and I would
suggest there are a lot more 0's on that figure) web documents that
contain wrong information - so the additional misleadingness is
unlikely to be relevant.
Take the trivial example of a missing </html> on the end of an XHTML
document,


...which an HTML rendering agent would take in its stride anyway...


Of course, I'm only discussing why XML should not be used for
user-focused languages - HTML is an excellent example of a more fault
tolerant SGML application that I would encourage people to use for
user-focused languages.
Sounds like the usual HTML error recovery. Why not take an example
which has -real- consequences, instead of picking easy cases?
Because XML WF errors are pretty rare other than the trivial cases. I
think you're absolutely agreeing with me, that HTML is a better model
than XML.
And IMHO none of those would be harmed by some kind of error
indicator, which could be a valuable marker in the instances - perhaps
the minority


I don't disagree, <url: http://ieqabar.sourceforge.net/ > but at the
same time, I don't think it would have any real value to a user, since
only a tiny minority would be shown - at the same time this is
irrelevant to what I understood us to be discussing - XML WF
constraints on normal processing (rendering in the case of XHTML)

Jim.
--
comp.lang.javascript FAQ - http://jibbering.com/faq/

Jul 20 '05 #20

On Sat, 24 Jul 2004 16:52:47 -0400, Daniel R. Tobias <da*@tobias.name>
wrote:
Jim Ley wrote:
They didn't follow the elementary rules of the language, it doesn't
matter what they intended, what's important is what the user wants.


What browser version has a mind-reading module to determine this?


Browsers already do try and fix user mistakes. However, here is a short
list of easy, sensible ways to mind-read a mis-formed XHTML document:

Error: No closing tag
Example: <p><em>Hi this is blah....</p>
Fix: Put a closing tag right before the first instance where nesting
breaks: <p><em>Hi this is blah....</em></p>.

Error: No closing tag (for normally self-terminating elements)
Example: ... <hr></div>
Fix: Assume the element was meant to self-terminate: ... <hr/></div>

Error: Open elements at end of document
Example: Lack of a </div>, </body>, or </html> perhaps.
Fix: Close all open elements in proper order.
That's not exactly mind-reading. With an XHTML document the browser
_could_ warn the user that it is malformed and that it might not exactly
represent the author's intentions, and then display it anyway.
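Sam's rules above are simple enough to sketch with a stack. The class below is illustrative only - built on Python's lenient HTMLParser, not how any real browser implements recovery - and it drops attributes for brevity:

```python
# Illustrative stack-based fix-up implementing the rules sketched above:
# close a mismatched element at the first point nesting breaks,
# self-terminate void elements, and close everything still open at end
# of document. Attributes are dropped for brevity.
from html.parser import HTMLParser

VOID = {"br", "hr", "img", "input", "link", "meta"}

class TagFixer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.stack = []

    def handle_starttag(self, tag, attrs):
        if tag in VOID:
            self.out.append(f"<{tag}/>")   # assume self-termination
        else:
            self.stack.append(tag)
            self.out.append(f"<{tag}>")

    def handle_data(self, data):
        self.out.append(data)

    def handle_endtag(self, tag):
        if tag in VOID:
            return
        # close anything opened since the matching start tag
        while self.stack and self.stack[-1] != tag:
            self.out.append(f"</{self.stack.pop()}>")
        if self.stack:
            self.stack.pop()
            self.out.append(f"</{tag}>")

    def result(self):
        self.close()  # flush any buffered text
        # close all elements still open at end of document, in order
        return "".join(self.out) + "".join(
            f"</{t}>" for t in reversed(self.stack))

fixer = TagFixer()
fixer.feed("<p><em>Hi this is blah....</p>")
print(fixer.result())  # <p><em>Hi this is blah....</em></p>
```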

--
Accessible web designs go easily unnoticed;
the others are remembered and avoided forever.
Jul 20 '05 #21

On Sun, 25 Jul 2004 15:00:14 -0400, Sam Hughes <hu****@rpi.edu> wrote:
On Sat, 24 Jul 2004 16:52:47 -0400, Daniel R. Tobias <da*@tobias.name>
wrote:
Jim Ley wrote:
They didn't follow the elementary rules of the language, it doesn't
matter what they intended, what's important is what the user wants.


What browser version has a mind-reading module to determine this?


Browsers already do try and fix user mistakes. However, here is a short
list of easy, sensible ways to mind-read a mis-formed XHTML document:

Error: No closing tag
Example: <p><em>Hi this is blah....</p>
Fix: Put a closing tag right before the first instance where nesting
breaks: <p><em>Hi this is blah....</em></p>.

Error: No closing tag (for normally self-terminating elements)
Example: ... <hr></div>
Fix: Assume the element was meant to self-terminate: ... <hr/></div>

Error: Open elements at end of document
Example: Lack of a </div>, </body>, or </html> perhaps.
Fix: Close all open elements in proper order.
That's not exactly mind-reading. With an XHTML document the browser
_could_ warn the user that it is malformed and that it might not exactly
represent the author's intentions, and then display it anyway.


But mind-reading shouldn't do things such as fix authors' misnesting of
inline and block-level elements; if there is something such as <b><p>...,
it should be fixed by <b></b><p>....

Really, my definition of "mind-reading" is just the inserting of missing
end-tags at the last possible byte. :)

--
Accessible web designs go easily unnoticed;
the others are remembered and avoided forever.
Jul 20 '05 #22

On Sun, 25 Jul 2004 15:00:14 -0400, "Sam Hughes" <hu****@rpi.edu>
wrote:
On Sat, 24 Jul 2004 16:52:47 -0400, Daniel R. Tobias <da*@tobias.name>
wrote:
Jim Ley wrote:
They didn't follow the elementary rules of the language, it doesn't
matter what they intended, what's important is what the user wants.
What browser version has a mind-reading module to determine this?


Browsers already do try and fix user mistakes.


Browsers are not XML UA's (unless you believe the HTML WG XHTML 2.0
preamble) - I think their success provides both a model that can be
used, and an example that it is practical.
Error: No closing tag
Example: <p><em>Hi this is blah....</p>
Fix: Put a closing tag right before the first instance where nesting
breaks: <p><em>Hi this is blah....</em></p>.
Yep, but XML does not allow this.
<url: http://www.w3.org/TR/2000/REC-xml-20001006#dt-fatal >
That's not exactly mind-reading. With an XHTML document the browser
_could_ warn the user that it is malformed and that it might not exactly
represent the author's intentions, and then display it anyway.


No it couldn't... well, by my original interpretation it could, but
others have convinced me this is wrong.

Jim.
--
comp.lang.javascript FAQ - http://jibbering.com/faq/

Jul 20 '05 #23

In article <41**********************@news.zen.co.uk>,
Michael Rozdoba <mr**@nowhere.invalid> writes:
Is there any way I can write HTML 4.01
within these constraints & use a validator to tell me I've not made any
errors, rather than merely tell me I've complied with the requirements
of HTML 4.01?


http://valet.webthing.com/page/parsemode.html

--
Nick Kew

Nick's manifesto: http://www.htmlhelp.com/~nick/
Jul 20 '05 #24

Nick Kew wrote:

[snip]
http://valet.webthing.com/page/parsemode.html


Ah, so 'fussy' will do what I'm after, it seems. Thanks :)

--
Michael
m r o z a t u k g a t e w a y d o t n e t
Jul 20 '05 #25

In <41****************@news.individual.net>, on 07/25/2004
at 06:17 PM, ji*@jibbering.com (Jim Ley) said:
It would indeed be unwise to rely on it, which is why authors should
ensure their mark-up is invalid.


ITYM why authors should ensure their mark-up is *VALID*.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to sp******@library.lspace.org

Jul 20 '05 #26
