
XHTML Problems

My take on problems composing, serving and rendering XHTML
documents/web pages:

1. Typical conscientious web authors are producing XHTML documents (Web
pages) that feature valid markup, with the content (MIME) type
specified as text/html
(http://keystonewebsites.com/articles/mime_type.php). These pages are
then loaded on to their Server where they are served to Rendering
Agents (browsers) as HTML (SGML application) documents with no problem
-- most Web Service Provider implementations associate HTML documents
with the content (MIME) type text/html. These pages are rendered
successfully by all extant graphical browsers -- but they are not XHTML
documents -- they are HTML documents without XML functionality
(http://hixie.ch/advocacy/xhtml). HTML documents that include DOCTYPE
declarations display in standards mode -- those that do not display in
"quirks" (non-standard) mode. MSIE browsers render XHTML documents that
include the XML declaration in "quirks" mode; therefore, it seems the
declaration should be omitted for XHTML documents served as HTML --
although the W3C doesn't omit it on their Home page.
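For concreteness, a minimal sketch of the kind of document point 1 describes -- valid XHTML 1.0 Strict written Appendix-C style, with the XML declaration omitted so MSIE stays in standards mode (the title and body text here are invented for illustration):

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <!-- Appendix C style: declare the charset via meta, since there is
       no XML declaration to carry an encoding="..." attribute. -->
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title>Example page</title>
</head>
<body>
  <p>Served as text/html, this parses as tag-soup HTML, not as XML.</p>
</body>
</html>
```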

2. Web authors who produce XHTML documents (Web pages) that feature
valid markup, with the content (MIME) type specified as
application/xhtml+xml (prescribed by the W3C), inevitably face serious
problems (http://www.w3.org/International/arti...erving-xhtml/).
Most Web Service Provider implementations do not recognize this content
(MIME) type. In that event, the web author must contact his or her Web
Service Provider to try and convince them to adopt
application/xhtml+xml as the content (MIME) type to associate with
XHTML documents. If that doesn't work -- and it often doesn't -- then
the web author is faced with the task of producing and loading up to
the server a .htaccess file that provides the association -- a tricky
endeavor for many web authors. But the problems do not end there.
Current XML compliant browsers such as Mozilla, Netscape, Opera, et al.
retrieve and render these pages with no problem. However, older
browsers -- and more importantly by far -- the most frequently used
graphical browsers today -- MSIE 5.x/6.x -- will not render these
documents correctly. MSIE presents them as download files, and there is
no backward compatibility for older browsers
(http://www.w3.org/People/mimasa/test...-types/results)

3. This situation is a dilemma for web authors -- and the W3C. The W3C
has attempted to resolve this situation by installing a facility called
Content-Negotiation on their Server (to be a model for others?) that is
supposed to offer a choice of content (MIME) type text/html or
application/xhtml+xml XHTML documents to browsers so that they can
render them according to their capabilities
(http://www.w3.org/2003/01/xhtml-mime...t-negotiation). The
idea is to provide backward compatibility for older browsers and
accommodate current non XML compliant MSIE browsers. Of course, all
these would be HTML documents -- not XHTML documents. In theory, XML
compliant browsers would be served fully functional XHTML documents.
The W3C offers their Home page as an exemplar of this functionality.
BTW, the W3C Content Negotiation page only addresses the Apache Server
implementation in depth, Jigsaw only briefly and others such as Zeus
not at all.
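The negotiation that point 3 describes can be sketched in a few lines of server-side code. This is a hypothetical illustration of the general idea, not the W3C's actual implementation: serve application/xhtml+xml only when the client's Accept header explicitly advertises it with a non-zero q-value.

```python
# Hypothetical sketch of server-driven content negotiation: pick
# application/xhtml+xml only when the client's Accept header
# explicitly lists it with a q-value above zero.
def choose_content_type(accept_header):
    # Parse the Accept header into media-type -> q-value pairs.
    preferences = {}
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip()
        q = 1.0  # q defaults to 1.0 when absent
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        preferences[media_type] = q
    # Serve XHTML only to clients that explicitly accept it.
    if preferences.get("application/xhtml+xml", 0.0) > 0.0:
        return "application/xhtml+xml"
    return "text/html"
```

Note that MSIE 5.x/6.x sends an Accept header containing */* but never application/xhtml+xml, so a sketch like this would hand it text/html -- exactly the fallback behaviour described above.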

4. The above procedure is not working for me. The page displays
correctly in my (XML compliant) Mozilla Firefox browser, but when I run
it through the W3C Markup validator the Content-Type displays as
text/html -- it is not being served as an XHTML document as intended.
Of course, if the W3C fix did work, the pages would still only be
served as HTML. It seems to me these problems must be sorted out ASAP
by the W3C -- certainly before they release XHTML 2.0. Now if only
Microsoft would produce a browser (and offer modifications to their
existing ones) to recognize content (MIME) type application/xhtml+xml
and serve real XHTML documents -- just like Firefox, Opera, et al. do!

James Pickering
Pickering Pages
http://www.jp29.org/

Jul 24 '05 #1
32 Replies


jp**@cox.net wrote:
[snip: original post quoted in full]


AFAIK content negotiation is only an issue for XHTML 1.0 - XHTML 1.1 and up
are required to be served as application/xhtml+xml.

I don't see any issues here that the W3C should solve. IMHO it was a mistake
to recommend using XHTML served as text/html -- as long as you (as a
webmaster) cannot assume that the majority of user agents parse it as XML,
there is no point in using XHTML 1.0 at all (there are no real differences
between XHTML 1.0 and HTML 4.01).
If you want to use XHTML anyway, then it's the webmaster's job to set up the
server correctly (or the software that generates dynamic pages). That's not
the W3C's job.

--
Benjamin Niemann
Email: pink at odahoda dot de
WWW: http://www.odahoda.de/
Jul 24 '05 #2

jp**@cox.net wrote:
(http://www.w3.org/2003/01/xhtml-mime...nt-negotiation)
when I run it through the W3C Markup validator the Content-Type displays
as text/html
The markup validator does not appear to send an accept header... but then it
doesn't, IIRC, have a proper XML parser anyway.
It seems to me these problems must be sorted out ASAP
by the W3C -- certainly before they release XHTML 2.0.
You're welcome to download the source of the validator and offer a patch.
Now if only
Microsoft would produce a browser (and offer modifications to their
existing ones) to recognize content (MIME) type application/xhtml+xml
and serve real XHTML documents -- just like Firefox, Opera, et al. do!


Yes, well, GoogleBot, Safari, Konqueror, Links, Lynx, and W3M don't support
XHTML yet either AFAIK.

--
David Dorward <http://blog.dorward.me.uk/> <http://dorward.me.uk/>
Home is where the ~/.bashrc is
Jul 24 '05 #3



jp**@cox.net wrote:
Current XML compliant browsers such as Mozilla, Netscape, Opera, et al.
retrieve and render these pages with no problem.


Support for XHTML as application/xhtml+xml is improving, but there are
certainly still problems, in particular as on the web you cannot expect
every user to show up with the latest versions of Opera and Mozilla. And
half of the Opera 7.xy releases, for instance, do not support script in
application/xhtml+xml documents. And Mozilla does render the document,
but compared to a text/html document not incrementally, which is quite a
shortcoming:
<http://www.mozilla.org/docs/web-developer/faq.html#xhtmldiff>

--

Martin Honnen
http://JavaScript.FAQTs.com/
Jul 24 '05 #4

I will relate my own experiences in experimenting with XHTML Web pages
served with W3C "recommended practice" Content/MIME-type of
application/xhtml+xml .

Of course, just using the header Markup ..........

<meta http-equiv="Content-Type" content="application/xhtml+xml;
charset=utf-8" />

........... in the header of a test XHTML page didn't do the job -- the
Header output via http://www.web-caching.com/showheaders.html showed
Content-Type: text/html in my Mozilla Firefox Browser. I called my Web
Service Provider to inquire if their Server (Zeus/3.4) recognized
MIME-type application/xhtml+xml to associate with XHTML documents. The
technician I spoke with informed me that only MIME-type text/html was
recognized for association with HTML documents -- they were unwilling
to modify their implementation and suggested I compose and load to the
Server an appropriate .htaccess file to add MIME-type
application/xhtml+xml to my directory -- he offered to walk me through
the procedure, but I knew how to do that so I proceeded on my own. I
did compose and load up to the Server the following .htaccess file
...........

AddType application/xhtml+xml html

...... That did the trick. I tested my exemplar XHTML 1.0 (strict) page
http://www.jp29.org/indexx.html in
http://www.web-caching.com/showheaders.html and the header showed
Content-Type: application/xhtml+xml in my Firefox browser (of course,
MSIE 6.0 just presents a download dialog). I use the file
extension .htm for my HTML pages and .html for my (experimental) XHTML
pages in order to use both MIME-types on my Server.

IMO http://www.wats.ca/resources/.htaccessandmimetypes/32 is an
excellent reference and guide for those XHTML authors who wish to
compose and load their own .htaccess file in order to add a required
MIME-type and are not familiar with the procedure.
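For reference, a slightly fuller .htaccess sketch along the same lines (the AddCharset line is an optional extra added here for illustration, not part of the procedure described above):

```apache
# Serve .html files as XHTML while .htm files stay text/html
# (the server's default mapping for .htm is assumed).
AddType application/xhtml+xml .html

# Optionally declare the encoding at the HTTP level too.
AddCharset utf-8 .html
```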

James Pickering
Pickering Pages
http://www.jp29.org/

Jul 24 '05 #5

jp**@cox.net wrote:
[snip]
I use the file
extension .htm for my HTML pages and .html for my (experimental) XHTML
pages in order to use both MIME-types on my Server.


Why not use .html and .xhtml? At least on my local apache
installation .xhtml files are served as application/xhtml+xml - I guess
that this is the default, and probably for other servers, too.

--
Benjamin Niemann
Email: pink at odahoda dot de
WWW: http://www.odahoda.de/
Jul 24 '05 #6

Benjamin Niemann wrote:

........... Why not use .html and .xhtml? ..........

Laziness and convenience -- HTML-Kit (my editor) uses .htm & .html as
automatic file extensions.
........... my local apache installation .xhtml files are served as
application/xhtml+xml - I guess that this is the default, and probably
for other servers, too ..........

Not on my Web Service Provider Zeus 3.4 implementation -- the only
MIME-Type association they provide is text/html (for HTML).

James Pickering
http://www.jp29.org/

Jul 24 '05 #7

David Dorward wrote:

...... The markup validator does not appear to send an accept header
......

The extended interface does:

http://validator.w3.org/detailed.html

James Pickering
http://www.jp29.org/

Jul 24 '05 #8

James Pickering wrote:
David Dorward wrote:
The markup validator does not appear to send an accept header
The extended interface does:
http://validator.w3.org/detailed.html


Nope. Just tested that too. No sign of an accept header.

--
David Dorward <http://blog.dorward.me.uk/> <http://dorward.me.uk/>
Home is where the ~/.bashrc is
Jul 24 '05 #9

David Dorward wrote:
James Pickering wrote:
David Dorward wrote:
The markup validator does not appear to send an accept header

The extended interface does:
http://validator.w3.org/detailed.html


Nope. Just tested that too. No sign of an accept header.


Try:

http://www.web-caching.com/showheaders.html

James Pickering
http://www.jp29.org/

Jul 24 '05 #10

In article <da*******************@news.demon.co.uk>,
David Dorward <do*****@yahoo.com> wrote:
Yes, well, GoogleBot, Safari, Konqueror, Links, Lynx, and W3M don't support
XHTML yet either AFAIK.


Safari supports XHTML as application/xhtml+xml but does not advertise it
in the Accept header. I have not tried scripting with XHTML, though.

In the absence of an explicit character encoding declaration, Safari
fails to decode characters properly.

--
Henri Sivonen
hs******@iki.fi
http://hsivonen.iki.fi/
Mozilla Web Author FAQ: http://mozilla.org/docs/web-developer/faq.html
Jul 24 '05 #11

On 03/07/2005 21:52, James Pickering wrote:

[snip]
David Dorward wrote:

The markup validator does not appear to send an accept header

David is correct.

[snip]
Try:

http://www.web-caching.com/showheaders.html


That will show you what headers the *query page* sends in response, but
not what the validator sends in its request. To do that, you should
write a short server-side script that outputs the request headers, then
point the validator to it with the 'Show Source' option selected.

There were no Accept headers of any kind in my test.
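The short server-side script suggested above might look something like this in Python CGI form (the helper name and setup are hypothetical illustrations, not an existing script):

```python
# Hypothetical CGI script that echoes the request headers back as the
# page body, so you can see exactly what a client (or the validator)
# sent. CGI passes request headers in the environment as HTTP_* vars.
import os

def format_request_headers(environ):
    lines = []
    for key, value in sorted(environ.items()):
        if key.startswith("HTTP_"):
            # e.g. HTTP_ACCEPT_LANGUAGE -> Accept-Language
            name = key[5:].replace("_", "-").title()
            lines.append("%s: %s" % (name, value))
    return "\n".join(lines)

if __name__ == "__main__":
    print("Content-Type: text/plain")
    print()
    print(format_request_headers(os.environ))
```

Upload it to a CGI-enabled directory, point the validator at its URL with 'Show Source' selected, and the "page source" the validator displays is the request it actually sent.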

Mike

--
Michael Winter
Prefix subject with [News] before replying by e-mail.
Jul 24 '05 #12

On Sun, 03 Jul 2005 21:35:20 +0200, Benjamin Niemann <pi**@odahoda.de>
wrote:
Why not use .html and .xhtml?


It tends to produce unstable URLs for the pages. A URI should refer to
the resource, not to irrelevant (for the source of the link) details of
its implementation.

A site with a mix of .html and .xhtml extensions in the links would be a
nightmare to manage, even worse if pages were gradually changing from
one to the other.

There may be some clever Apache-fu that works around this, serving
either file extension under the same URI, depending on the presence of
files with particular extensions.
Jul 24 '05 #13

Tim
Benjamin Niemann <pi**@odahoda.de> wrote:
Why not use .html and .xhtml?

Andy Dingley wrote:
It tends to produce unstable URLs for the pages. A URI should refer to the
resource, not to irrelevant (for the source of the link) details of its
implementation.

A site with a mix of .html and .xhtml extensions in the links would be a
nightmare to manage, even worse if pages were gradually changing from one
to the other.
Is one of you talking about filenames, and the other about URIs? You can
have both filenames in use (depending on content), and not refer to the
filename specifically with requests (i.e. sans-suffix).

e.g. Request http://example.com/pagename
And get pagename.html or pagename.xhtml, depending on what's stored
on the server, and what suits the browser (should there be a choice).
There may be some clever Apache-fu that works around this, serving either
file extension under the same URI, depending on the presence of files with
particular extensions.


"Content negotiation"

--
If you insist on e-mailing me, use the reply-to address (it's real but
temporary). But please reply to the group, like you're supposed to.

This message was sent without a virus, please delete some files yourself.

Jul 24 '05 #14

Henri Sivonen wrote:
In article <da*******************@news.demon.co.uk>,
David Dorward <do*****@yahoo.com> wrote:

Yes, well, GoogleBot, Safari, Konqueror, Links, Lynx, and W3M don't
support XHTML yet either AFAIK.


Safari supports XHTML as application/xhtml+xml but does not advertise it
in the Accept header.


I don't have a copy of Safari to hand for testing, but Konqueror (the open
source cousin to Safari) accepts an application/xhtml+xml content-type ...
but then shoves it through a tag soup slurper rather than an XML parser. It
renders http://dorward.me.uk/tmp/error.xhtml despite the well-formedness
errors.
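The difference is easy to demonstrate with any real XML parser; a sketch using Python's xml.etree (the markup strings are invented examples, not the contents of Dorward's test page):

```python
# A conforming XML parser must refuse a document with a
# well-formedness error, such as improperly nested tags; a tag-soup
# HTML parser happily renders the same bytes.
import xml.etree.ElementTree as ET

malformed = "<p>not <b>well-formed</p></b>"
well_formed = "<p>perfectly <b>well-formed</b></p>"

def parses_as_xml(source):
    try:
        ET.fromstring(source)
        return True
    except ET.ParseError:
        return False
```

A browser that runs application/xhtml+xml through its HTML engine will render both strings; an XML-parsing browser must show an error for the first.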

--
David Dorward <http://blog.dorward.me.uk/> <http://dorward.me.uk/>
Home is where the ~/.bashrc is
Jul 24 '05 #15

On Mon, 04 Jul 2005 13:58:51 +0900, Tim <ti*@mail.localhost.invalid>
wrote:
Are one of you talking about filenames, and the other about URIs?
I'm talking about both. Two sorts of file extension is good, as it
labels the content. Two sorts of extension embedded in the URL is bad,
as it makes link-management messy.

If there's some means of solving both situations (i.e. distinguishing
them when we talk about files, hiding it when we talk about URLs) then
we can avoid these problems.

You can
have both filenames in use (depending on content), and not refer to the
filename specifically with requests (i.e. sans-suffix).

e.g. Request http://example.com/pagename
And get pagename.html or pagename.xhtml, depending on what's stored
on the server, and what suits the browser (should there be a choice).


My Apache-fu is weak.

Is it practical to do this when there's only one file (.xhtml) and the
browser wants only text/html? I know the server can choose to serve a
.html file instead of the .xhtml, but AFAIK this would require two
copies of the content on the server.

What I'm looking for is content negotiation that can silently choose to
deliver either XHTML, or Appendix C XHTML-as-HTML, according to browser
acceptance. Is this possible ? Would you happen to have an example of
it that we might learn from ?
Jul 24 '05 #16

On Mon, 4 Jul 2005, Andy Dingley wrote:
On Mon, 04 Jul 2005 13:58:51 +0900, Tim <ti*@mail.localhost.invalid>
wrote:
Are one of you talking about filenames, and the other about URIs?
I'm talking about both. Two sorts of file extension is good, as it
labels the content. Two sorts of extension embedded in the URL is
bad, as it makes link-management messy.


I'm not sure exactly what that's supposed to mean, but for any reader
who isn't accustomed to your style, may I just stress that in WWW
terms, the content-type of anything retrieved by HTTP is determined by
the HTTP Content-type header, and the file "extension" does not play
any *direct* role in that interworking interface. [Sure, within the
actual server it may very well be that the *server itself* uses the
file "extension" as a way of labelling the content-type; the key
feature is that this is not part of the defined HTTP interworking
interface between the server and the client.] You knew all that, of
course, I'm just worried that some readers might be confused about it,
a confusion perhaps exacerbated by MSIE's violation of RFC2616, and MS's
mischievous citation of RFC2616 in the part of their documentation
where they describe, without admitting it, that they are in violation
of this mandatory requirement.

[...]
Is it practical to do this when there's only one file (.xhtml) and
the browser wants only text/html ? I know the server can choose to
serve a .html file instead of the .xhtml, but AFAIK this would
require two copies of the content on the server.
My experiments with Apache 1.3.* showed that strange things could
sometimes happen with MultiViews, but generally speaking, a symlink
would be sufficient, you wouldn't need literally two copies. If you
work via a type-map file (which could be written programmatically, e.g
as part of your publish-to-web process in a makefile), then you can
eliminate even this part of the problem.
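For readers who haven't met type-map files, a sketch of what one might look like (filenames and qs values are invented; check the Apache mod_negotiation documentation for the exact syntax):

```apache
URI: page

URI: page.xhtml
Content-type: application/xhtml+xml; qs=0.9

URI: page.html
Content-type: text/html; qs=0.8
```

The qs values express the server's own preference among the variants; the client's Accept header supplies the other side of the negotiation.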
What I'm looking for is content negotiation that can silently choose
to deliver either XHTML, or Appendix C XHTML-as-HTML, according to
browser acceptance.
I understand you here to mean delivering the same actual file with
different content-type headers as the outcome of content negotiation,
right?
Is this possible ?


It's definitely possible, yes. However, since I wrote the relevant
pages (which were concentrating on language negotiation rather than on
content-type negotiation, but the principle is closely similar), we've
moved our server from Apache 1.3.* to 2.0.*, so I can't offer you a
100% convincing demonstration yet - all that I can offer you is an
assurance that it can be done. As I say, if MultiViews proves
intractable for your purpose (I'm not absolutely sure if this is so or
not), then using a type-map file will, I am convinced, offer a
solution.

http://ppewww.ph.gla.ac.uk/~flavell/www/lang-neg.html

see also http://ppewww.ph.gla.ac.uk/~flavell/...tent-type.html

good luck
Jul 24 '05 #17

I solicit comments on the following article:

http://insight.zdnet.co.uk/internet/...2135615,00.htm

Jul 24 '05 #18

"James Pickering" <jp**@cox.net> wrote:
I solicit comments on the following article:

http://insight.zdnet.co.uk/internet/...2135615,00.htm


Hmm, comments. It's a two year old article and I don't see the
relevance to this thread about XHTML and content-type headers in
general and to Alan's post in particular.

Steve

--
"Grab reality by the balls and squeeze." - Tempus Thales

Steve Pugh <st***@pugh.net> <http://steve.pugh.net/>
Jul 24 '05 #19

On 4 Jul 2005 10:09:11 -0700, "James Pickering" <jp**@cox.net> wrote:
I solicit comments on the following article:


How about "completely irrelevant" ?
Jul 24 '05 #20

On Mon, 4 Jul 2005 17:32:17 +0100, "Alan J. Flavell"
<fl*****@ph.gla.ac.uk> wrote:
in WWW
terms, the content-type of anything retrieved by HTTP is determined by
the HTTP Content-type header, and the file "extension" does not play
any *direct* role in that interworking interface.
There's a mapping from characters in the URL to the file extension of
the file selected to supply the content. This isn't part of HTTP, it's
entirely server and server-config dependent. But it happens a lot, we
all do it, and it's the subject of this thread - so I don't want to
pretend that we exist in some vacuum of standards-only behaviour where
the dirty behind-the-scenes stuff just doesn't exist.
Is it practical to do this when there's only one file (.xhtml) and
the browser wants only text/html ?

a symlink would be sufficient, you wouldn't need literally two copies.
Hmmm - that's in some ways inelegant, but I can see how it would work.
Why should I get my filesystem to tell lies, just to fool a server into
accomplishing what's at heart a server-specific task? What happens if a
filesystem-based CMS task is trying to look and see if an XHTML or HTML
version is available, by using a simple directory listing?

I understand you here to mean delivering the same actual file with
different content-type headers as the outcome of content negotiation,
right?


Yes. The _same_ file, if there is no more relevant file (i.e. pure
HTML) available.

Jul 24 '05 #21

Tim
Tim <ti*@mail.localhost.invalid> wrote:
Are one of you talking about filenames, and the other about URIs?
Andy Dingley <di*****@codesmiths.com> posted:
I'm talking about both. Two sorts of file extension is good, as it
labels the content.
Unnecessary, except to the page author. People clicking links don't need
to know whether it's HTML, XHTML, a flat file or dynamically generated,
etc.

It also *doesn't* label the content. You might think that it does, but
it's only a hint at what you might get. I can serve you anything with a
.html suffixed link. And how many of us have tried to right-click and
download some file, seeing a something.exe link, only to end up saving an
HTML page which was a click-through to the download?
Two sorts of extension embedded in the URL is bad, as it makes
link-management messy.
Unnecessary, again. That's not how it works. Going back to my prior
example, as below. The links are ambiguous.

For now, and evermore, I can offer a link about "configuring my DNS
software" at <http://example.com/configuring_my_DNS_software> and everyone
will be able to read it, whether I write it in HTML today, XHTML next
month, or something else in two years time.

There's one link, and the server provides what's needed, behind the scenes.
If I provide just one file, that's all everybody gets (as the requested
URI). If I provide different versions, their browser and my server can
decide what's best. If we can't decide, the browser *can* offer a list of
choices.

I don't ever have to get people to change their bookmarks, re-write
documentation, etc.
You can have both filenames in use (depending on content), and not refer
to the filename specifically with requests (i.e. sans-suffix).

e.g. Request http://example.com/pagename
And get pagename.html or pagename.xhtml, depending on what's stored
on the server, and what suits the browser (should there be a choice).

My Apache-fu is weak.

Is it practical to do this when there's only one file (.xhtml) and the
browser wants only text/html ? I know the server can choose to serve a
.html file instead of the .xhtml, but AFAIK this would require two
copies of the content on the server.
Why bother? HTML and XHTML provide the same information to the reader.
Carry on using HTML on existing documents, when you add new documents or
modify old ones, you can make them XHTML, and forget about dual serving.

Personally, I strongly advise against using XHTML. It's just so seriously
broken in the most prevalent client, and the usual daft way of serving it
destroys any benefits of using it with better clients.
What I'm looking for is content negotiation that can silently choose to
deliver either XHTML, or Appendix C XHTML-as-HTML, according to browser
acceptance. Is this possible ? Would you happen to have an example of
it that we might learn from ?


Possible, but why bother. And I can't think of an example, because it's
pointless.

Also, trying to serve one to this and the other to that will fall right
into that age-old recipe for disaster - browser sniffing (something that's
unreliable).

--
If you insist on e-mailing me, use the reply-to address (it's real but
temporary). But please reply to the group, like you're supposed to.

This message was sent without a virus, please delete some files yourself.
Jul 24 '05 #22


On Mon, 4 Jul 2005, Andy Dingley wrote:
On Mon, 4 Jul 2005 17:32:17 +0100, "Alan J. Flavell"
<fl*****@ph.gla.ac.uk> wrote:
in WWW
terms, the content-type of anything retrieved by HTTP is determined by
the HTTP Content-type header, and the file "extension" does not play
any *direct* role in that interworking interface.
There's a mapping from characters in the URL to the file extension of
the file selected to supply the content.


There can be, and often is, yes, which is why it's IMHO so important to
understand how this fits into the general scheme of things.
This isn't part of HTTP, it's entirely server and server-config
dependent.
Exactly so. The file "extension" is of no concern to the client, they
*must* (according to rfc2616) go solely on the HTTP content-type when
there is one (and in practice there always is).
But it happens a lot, we all do it,
Sure, many of "us" use file extensions like .cgi and .php to generate HTML
:-}
and it's the subject of this thread
Is it? Anyway, I think this issue *does* need to be stressed, for the
reasons that I gave.
- so I don't want to pretend that we exist in some vacuum of
standards-only behaviour where the dirty behind-the-scenes stuff just
doesn't exist.
Now you've lost me. The standards-only behaviour works fine (between the
server and the client), whether the file extension was .htm, .html, .cgi,
.pl or .php (or .asp, for that matter).
Is it practical to do this when there's only one file (.xhtml) and
the browser wants only text/html ?
a symlink would be sufficient, you wouldn't need literally two copies.


Hmmm - that's in some ways inelegant,


You've every right to say that, but if you use MultiViews then you do need
some way to tell the server what's what, and that seems to me to be the
way to do it. If, on the other hand, you use a typemap file for
negotiation, then the typemap file can do the job, based on just a single
file with the content in it, as you're requesting. No matter how many
different dimensions of negotiation are involved.
Why should I get my filesystem to tell lies, just to fool a server into
accomplishing what's at heart a server-specific task?
I can only repeat "see above". And the interaction between the (httpd)
server and the server's file system is purely internal, as noted before:
it has no relevance for the client.
What happens if a
filesystem-based CMS task is trying to look and see if an XHTML or HTML
version is available, by using a simple directory listing?


Interesting question, and maybe server dependent.

Give me a while (I have a few pressing problems to solve right now to earn
my crust) and I'll make a test case for this, since I don't seem to have
one yet.
I understand you here to mean delivering the same actual file with
different content-type headers as the outcome of content negotiation,
right?


Yes. The _same_ file, if there is no more relevant file (i.e. pure
HTML) available.


Then maybe you would indeed be happier with the type-map approach instead
of MultiViews.

hope this helps
Jul 24 '05 #23


On Tue, 5 Jul 2005, Tim wrote:
Andy Dingley <di*****@codesmiths.com> posted: [...]
What I'm looking for is content negotiation that can silently choose to
deliver either XHTML, or Appendix C XHTML-as-HTML, according to browser
acceptance. Is this possible ? Would you happen to have an example of
it that we might learn from ?


Possible, but why bother. And I can't think of an example, because it's
pointless.


As I say on my cited page, if you implement server-side negotiation then I
would recommend also implementing some other way of getting to the
content. Server-side negotiation works fine in properly-configured
web-compatible browsers, but many users will be using something that fails
one or both of those criteria.

http://ppewww.ph.gla.ac.uk/~flavell/www/lang-neg.html
Also, trying to serve one to this and the other to that will fall right
into that age-old recipe for disaster - browser sniffing


No, it won't. Server-side negotiation does no such thing. Quite the
opposite, in fact: making the user-agent string a dimension of
negotiation is completely impractical and useless, and I don't recall anyone
ever seriously suggesting it, let alone implementing it.

Of course MSIE doesn't implement anything that could be useful for
server-side negotiation (quite the opposite in fact), but it rules itself
out as a web-compatible browser in so many other ways too. If you follow
the advice to implement some alternative way of accessing the content,
this need not be a specific problem.

You're entitled to your views on the pointlessness of serving XHTML under
the provisions of Appendix C (in fact I rather sympathise with those views
myself), but that should be argued as an issue in its own right, and it in
no way discredits the idea of server-side negotiation as such.
Jul 24 '05 #24

P: n/a
Tim
Andy Dingley <di*****@codesmiths.com> posted:
What I'm looking for is content negotiation that can silently choose to
deliver either XHTML, or Appendix C XHTML-as-HTML, according to browser
acceptance. Is this possible ? Would you happen to have an example of
it that we might learn from ?

Tim wrote:
Possible, but why bother. And I can't think of an example, because it's
pointless.
"Alan J. Flavell" <fl*****@physics.gla.ac.uk> posted:
As I say on my cited page, if you implement server-side negotiation then I
would recommend also implementing some other way of getting to the
content. Server-side negotiation works fine in properly-configured
web-compatible browsers, but many users will be using something that fails
one or both of those criteria.

http://ppewww.ph.gla.ac.uk/~flavell/www/lang-neg.html
I mean, I couldn't see the point in serving the same XHTML page with one
MIME type or another based on some presumption that one will do the job
better than the other. I can't see the point in doing *that* both ways.

I also wouldn't care for the extra issues involved in trying to make sure
that you do it right.
Also, trying to serve one to this and the other to that will fall right
into that age-old recipe for disaster - browser sniffing

No, it won't. Server-side negotiation does no such thing. Quite the
opposite, in fact: making the user-agent string a dimension of
negotiation is completely impractical and useless, and I don't recall anyone
ever seriously suggesting it, let alone implementing it.

Of course MSIE doesn't implement anything that could be useful for
server-side negotiation (quite the opposite in fact), but it rules itself
out as a web-compatible browser in so many other ways too. If you follow
the advice to implement some alternative way of accessing the content,
this need not be a specific problem.


Which is what I was getting at. If I were to try to serve XHTML
as XHTML, or to fake it as HTML, I couldn't rely on a browser saying it can
handle it or not (because some just lie). So the usual stupid trick is
for the webmaster (sneer) to decide that this browser can, that browser
can't, and to try to work out which browser is currently browsing the site.

--
If you insist on e-mailing me, use the reply-to address (it's real but
temporary). But please reply to the group, like you're supposed to.

This message was sent without a virus, please delete some files yourself.
Jul 24 '05 #25

P: n/a
The following message exchange (extracts) produced a disappointing
result for me:

A query to my Web Hosting Service.....

".......... The W3C has attempted to resolve this situation by
installing Content-Negotiation on their Server (to be a model for
others) that is supposed to offer a choice of content (MIME) type
text/html or application/xhtml+xml XHTML documents to browsers so that
they can render them according to their capabilities
(http://www.w3.org/2003/01/xhtml-mime...nt-negotiation)
........... I have tried some scripting routines of my own to effect
content negotiation on your server without success -- now I solicit
your help -- please .........."

Their response ..........

"I'm sorry but we do not have any directions or suggestions for setting
up such a thing on your account. As a note - we have several load
balanced webservers but the Zeus server version on some are different
than others which may make advanced hardly used features such as
content negotiation unreliable or unavailable. I hope I have answered
your question".

Now I have to retain a new Web Host Service (that utilizes Apache,
for instance) if I wish to implement content-negotiation. I wonder how
many other Web authors are facing the same situation?

James Pickering
Pickering Pages
http://www.jp29.org/

Jul 24 '05 #26

P: n/a
On Wed, 6 Jul 2005, Tim wrote:
Andy Dingley <di*****@codesmiths.com> posted:
What I'm looking for is content negotiation that can silently
choose to deliver either XHTML, or Appendix C XHTML-as-HTML,
according to browser acceptance. Is this possible ? Would you
happen to have an example of it that we might learn from ?
Tim wrote:
Possible, but why bother. And I can't think of an example,
because it's pointless.
"Alan J. Flavell" <fl*****@physics.gla.ac.uk> posted:
As I say on my cited page, if you implement server-side negotiation
then I would recommend also implementing some other way of getting
to the content. Server-side negotiation works fine in
properly-configured web-compatible browsers, but many users will
be using something that fails one or both of those criteria.

http://ppewww.ph.gla.ac.uk/~flavell/www/lang-neg.html
I mean, I couldn't see the point in serving the same XHTML page with
one MIME type or another based on some presumption that one will do
the job better than the other. I can't see the point in doing
*that* both ways.


OK, I can't disagree with that specific point, but it doesn't negate
the general principle of server-side negotiation, which (under the
limitations that I already mentioned) really does work.

You'd have every right to discourage the questioner from wanting to
serve XHTML "properly" to browsers which can cope with it, and
compatibly as text/html to those that can't, if that's your intention.

I'm just trying to make sure you don't use this as a weapon to diss
the whole principle of server-side negotiation. *If* the questioner
has decided they want to do it despite your discouragement, then I
still say that server-side negotiation can be used. Some kind of
fallback is needed: e.g. when browsers send nothing more informative
than "*/*" in their Accept header, they evidently have to be sent
text/html in the interests of compatibility. I'm
confident that this can be done (if necessary using a type-map file,
as I said), although I don't currently have a demonstration to offer
you. Mark Tranchant has a page about it, let's see:

http://tranchant.plus.com/notes/multiviews

Oh yes, this was specifically aimed at PHP, but the principles are
much the same I think.
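For what it's worth, the MultiViews flavour of this can be sketched in a few lines of Apache configuration. This is an assumption-laden sketch, not a tested recipe: it presumes Apache with mod_negotiation enabled, and hypothetical files page.html and page.xhtml sitting side by side, so that a request for /page is answered with whichever variant the Accept header favours:

```apache
# Hypothetical .htaccess sketch (assumes Apache + mod_negotiation).
# With page.html and page.xhtml in the same directory, a request for
# "/page" is negotiated between the two based on the Accept header.
Options +MultiViews
AddType text/html .html
AddType application/xhtml+xml .xhtml
```

The usual caveat from earlier in the thread applies: you still need a fallback route to the content for clients that negotiate badly or not at all.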
Of course MSIE doesn't implement anything that could be useful for
server-side negotiation (quite the opposite in fact), but it rules
itself out as a web-compatible browser in so many other ways too.
If you follow the advice to implement some alternative way of
accessing the content, this need not be a specific problem.


Which is what I was getting at. If I were to try to serve XHTML
as XHTML, or to fake it as HTML, I couldn't rely on a browser saying
it can handle it or not (because some just lie).


If they lie about that, then they will get what they asked for, I'm
afraid. On the other hand they do have every right to lie about their
user-agent, and many of them do so, so one would be MOST unwise to put
any reliance on that; but if they lie about their content-type
capability in their Accept header, then that's nobody's fault but
their own.

F.y.i., MSIE will typically say in its Accept header that it likes MS
Word files and Excel files (if you've got MS Office or similar
products installed), but does not Accept text/html, other than under
the provisions of "*/*". So if you have an MS Word version of your
document, server-side negotiation will NEVER offer the HTML version to
an MSIE user; it will always show your MS Word version as the only
suitable format, since the "*/*" would only be resorted to if an
explicit match has not been found. I say again, you also need to
supply some other route to your resources, if only for such crippled
users to find the alternatives.
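The matching rule described above (an explicit entry in the Accept header always outranks "*/*") can be sketched in a few lines of Python. This is an illustrative model only, with function names of my own invention; it ignores server-side source quality (qs), wildcard subtypes like text/*, and the other refinements a real server applies:

```python
def parse_accept(header):
    """Parse an Accept header into a {media_type: qvalue} dict."""
    result = {}
    for part in header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        result[mtype] = q
    return result

def choose_variant(accept_header, variants):
    """Pick the best media type from `variants`: an explicit match always
    outranks "*/*"; higher q-values break ties; earlier variants win
    otherwise. Returns None if nothing is acceptable at all."""
    accepts = parse_accept(accept_header)
    best, best_key = None, (-1, -1.0)
    for mtype in variants:
        if mtype in accepts:
            key = (1, accepts[mtype])   # explicit match outranks */*
        elif "*/*" in accepts:
            key = (0, accepts["*/*"])   # only reachable via the wildcard
        else:
            continue                    # not acceptable at all
        if key > best_key:
            best, best_key = mtype, key
    return best
```

On this model, an MSIE-style header like "application/msword, */*" offered both an HTML and a Word variant picks the Word file, exactly as described above, because the HTML variant is only reachable through the wildcard.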
So, the usual stupid trick is for the webmaster (sneer) to decide
that this browser can, that browser can't, and try and work out
which browser is currently browsing the site.


So "don't do that".

all the best
Jul 24 '05 #27

P: n/a
James Pickering wrote:
Now I have to retain a new Web Host Service (that utilizes Apache,
for instance) if I wish to implement content-negotiation. I wonder how
many other Web authors are facing the same situation?


The reply wasn't clear to me: if you could point them to the relevant
documentation, would they set it up?

I don't know Zeus, but I'd be surprised if it doesn't support
content negotiation.

--
Nick Kew
Jul 24 '05 #28

P: n/a
Nick Kew wrote:
James Pickering wrote:
Now I have to retain a new Web Host Service (that utilizes Apache,
for instance) if I wish to implement content-negotiation. I wonder how
many other Web authors are facing the same situation?


The reply wasn't clear to me: if you could point them to the relevant
documentation, would they set it up?

I don't know Zeus, but I'd be surprised if it doesn't support
content negotiation.

--
Nick Kew


Let me assure you that I provided them with very comprehensive references,
Nick, including the article with which I led off this thread, links to
all of the W3C pages and documents -- and numerous private Web pages --
plus several phone conversations that I initiated with my Web Host's
technical staff relating to content negotiation. I only included
fragments of my message exchanges here.

Their responses were very clear and unequivocal -- they
couldn't and wouldn't implement content-negotiation on their Zeus
implementation.

Here is my last interchange with them ..........

Thank you for your prompt reply. Unfortunately, I need
content-negotiation and so I will have to find a new Web Hosting
service.

Best regards,

James Pickering

"Hello,

I'm sorry to hear that. The cancellation form is at .........."

Jul 24 '05 #29

P: n/a
I previously wrote:
.......... I have tried some scripting routines of my own to effect
content negotiation on my Zeus server without success ........


Please check http://www.jp29.org/content-neg.php in your browser(s) and
report whether it is served as Content-Type: application/xhtml+xml with
correct functionality in your XML-compliant browsers (Firefox, et al.)
and as Content-Type: text/html in MSIE and legacy browsers.

James Pickering

Jul 24 '05 #30

P: n/a
I neglected to include one of the very best (IMO) references/discussion
relating to Content-Type:

http://ppewww.ph.gla.ac.uk/~flavell/...tent-type.html

James Pickering
http://www.jp29.org/index.htm

Jul 24 '05 #31

P: n/a
On Tue, 5 Jul 2005 12:57:32 +0930, Tim <ti*@mail.localhost.invalid>
wrote:
Unnecessary, except to the page author.


Even page authors need a hug.
Jul 24 '05 #32

P: n/a
Tim
Tim <ti*@mail.localhost.invalid> wrote:
Unnecessary, except to the page author.

Andy Dingley <di*****@codesmiths.com> posted:
Even page authors need a hug.


And some webmasters need a whipping... >;-)

--
If you insist on e-mailing me, use the reply-to address (it's real but
temporary). But please reply to the group, like you're supposed to.

This message was sent without a virus, please delete some files yourself.
Jul 24 '05 #33
