
utf-8 and xhtml 1.0 strict

My web site has not been spidered by Googlebot since April 2003. The site in
question is at www.TheBicyclingGuitarist.net/ I received much help from this
NG and the stylesheets NG when updating the code before then.

My host's tech guy just sent me the following. Isn't it okay to specify
UTF-8 as the charset in the HTTP headers at the server level? Isn't it okay
to have validated XHTML 1.0 strict code?

**************************************************

If it was a misconfiguration with IIS, the problem would be presenting
itself for every site that is hosted on that server under that instance of
IIS, which isn't the case here. I have only been able to find two
differences between your site, which Google isn't updating, and the sites
that are.

1) You have a custom charset also specified in the HTTP headers at the
server level
2) You are using XHTML strict.

I am curious why you chose XHTML strict rather than traditional? Here's a
quote from broadbandreports.com with the full link at
http://www.broadbandreports.com/faq/...ebmonks?text=1

"If you are using XHTML you should strive to make your pages validate as
XHTML 1.0 Transitional. The XHTML 1.0 Strict standard is a bit too confining
for real world web sites."

My suggestion is still that you talk to Google to find out why their bot
is both getting a 406 error and not updating the content it isn't getting
an error on. If you would like, I would be happy to reset the HTTP
headers to the default setting so your site identically matches every other
site hosted on this server as far as IIS goes.

----- Original Message -----
From: Chris Watson
To: XXXXXXXXXXXXX
Sent: Tuesday, October 26, 2004 5:16 PM
Subject: RE: Jesse, you really need to see this.
I posted the information from your last two emails in one message at a
search engines newsgroup. What about this guy's answer? It's short and
sweet.

The Bicycling Guitarist wrote:
The following are two messages from the tech guy at my host concerning my
problems with Googlebot or vice versa.


The problem seems to be with your IIS configuration. Google sends
Accept: text/html,text/plain; which of course makes good sense for a
robot as it doesn't want anything else. Your IIS appears to be
incorrectly configured to send a 406 not acceptable message when it sees
this.

If you accept text/* you get your page. It doesn't seem to be linked to
the charset.
Jul 23 '05 #1
35 Replies


On Thu, 28 Oct 2004 06:26:27 GMT, The Bicycling Guitarist
<Ch***@TheBicyclingGuitarist.net> wrote:
My web site has not been spidered by Googlebot since April 2003. The site in
question is at www.TheBicyclingGuitarist.net/ I received much help from this
NG and the stylesheets NG when updating the code before then.

My host's tech guy just sent me the following. Isn't it okay to specify
UTF-8 as the charset in the HTTP headers at the server level? Isn't it okay
to have validated XHTML 1.0 strict code?


I replied in alt.html - yep, UTF-8 and XHTML (served as text/html) has put
my site at PR4 and is on top for my keywords. The problem must lie
elsewhere.
Jul 23 '05 #2

On Thu, 28 Oct 2004, The Bicycling Guitarist wrote:
My web site has not been spidered by Googlebot since April 2003.

My host's tech guy just sent me the following. Isn't it okay to specify
UTF-8 as the charset in the HTTP headers at the server level?
It certainly is.
<http://google.com/search?q=www.unics.uni-hannover.de/nhtcapri/multilingual1.html>
Isn't it okay
to have validated XHTML 1.0 strict code?
I think it's okay. But why are you writing XHTML 1.0 instead of HTML 4.01?
Is there any good reason or is it just "kewl"?
2) You are using XHTML strict.
I am curious why you chose XHTML strict rather than traditional?

"If you are using XHTML you should strive to make your pages validate as
XHTML 1.0 Transitional. The XHTML 1.0 Strict standard is a bit too confining
for real world web sites."


This is absurd because any document that validates as [X]HTML Strict also
validates as [X]HTML Transitional. This person is clueless.

I have no indication that Google struggles with XHTML - but what's your
reason to write XHTML 1.0 rather than HTML 4.01 Strict?

--
Top-posting.
What's the most irritating thing on Usenet?

Jul 23 '05 #3

"Andreas Prilop" <nh******@rrzn-user.uni-hannover.de> wrote in message
news:Pine.GSO.4.44.0410281439180.12878-100000@s5b003...
On Thu, 28 Oct 2004, The Bicycling Guitarist wrote:
My web site has not been spidered by Googlebot since April 2003.

My host's tech guy just sent me the following. [...]
2) You are using XHTML strict.
I am curious why you chose XHTML strict rather than traditional?

"If you are using XHTML you should strive to make your pages validate as
XHTML 1.0 Transitional. The XHTML 1.0 Strict standard is a bit too
confining
for real world web sites."
This is absurd because any document that validates as [X]HTML Strict also
validates as [X]HTML Transitional. This person is clueless.


I will point this out to him.

I have no indication that Google struggles with XHTML - but what's your
reason to write XHTML 1.0 rather than HTML 4.01 Strict?


For Duty and Humanity! I wish to make my pages more available to different
user-agents as they develop, and to make it easier to convert to XML in the
future.

If the tech really is clueless, then perhaps he hasn't configured the server
correctly even if he thinks he has. Can this be tested by somebody other
than him?

Chris Watson
www.TheBicyclingGuitarist.net/

Jul 23 '05 #4

On Thu, 28 Oct 2004, The Bicycling Guitarist wrote:
My host's tech guy just sent me the following.
Sounds to me like "grasp at any conceivable excuse and see if
the customer is clueless enough to swallow it".
Isn't it okay to specify UTF-8 as the charset in the HTTP headers at
the server level?
Not only is it "OK", but it's also strongly recommended.
Isn't it okay to have validated XHTML 1.0 strict code?
Only a super-purist (i.e. me on a bad day) would complain that Appendix
C is hostile to the very intentions of XML. However, a more practical
person might counsel you to carry on using HTML/4.01 (by all means
"strict") until you're ready to go for the full XHTML+XML stuff. Or
rather, until the rest of the world is ready for you ;-))
If it was a misconfiguration with IIS
Oh, blimey. Is -that- the server they're using....
the problem would be presenting itself for every site that is hosted
on that server under that instance of IIS which isn't the case here.
The chap's plausible, at least... but is it technically well
founded?...
1) You have a custom charset also specified in the HTTP headers at the
server level
I wouldn't exactly refer to utf-8 as "custom" !!!
2) You are using XHTML strict.

I am curious why you chose XHTML strict rather than traditional?
I'm confident that this issue is irrelevant to your original question.
Here's a
quote from broadbandreports.com [...]

"If you are using XHTML you should strive to make your pages validate as
XHTML 1.0 Transitional. The XHTML 1.0 Strict standard is a bit too confining
for real world web sites."
I would take the opposite point of view. If starting from HTML4.*
transitional, my first priority would be to head in the direction of
"strict" (with CSS for presentation), rather than to worry for the moment
about XHTML as such.

As far as I'm concerned, XHTML/1.0 "transitional" is merely a formal
rewriting into XML notation of an HTML/4.01 specification that was
already obsolescent at the time, and is surely inappropriate now for
new developments.
My suggestion is still that you talk to Google to find out why their
bot both is getting a 406 error,
Uh-uh, now we're getting to the -real- problem. Let's see this on the
lab bench:

$ telnet www.thebicyclingguitarist.net 80
Trying 216.229.101.149...
Connected to www.thebicyclingguitarist.net.
Escape character is '^]'.
GET / HTTP/1.0
Host: www.thebicyclingguitarist.net
Accept: text/html,text/plain

HTTP/1.1 406 No acceptable objects were found
Server: Microsoft-IIS/5.0
Date: Thu, 28 Oct 2004 13:51:57 GMT
Content-Length: 3906
Content-Type: text/html

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html dir=ltr>

[...and so on, yeuch...]
Once more with feeling:

$ telnet www.thebicyclingguitarist.net 80
Trying 216.229.101.149...
Connected to www.thebicyclingguitarist.net.
Escape character is '^]'.
GET / HTTP/1.0
Host: www.thebicyclingguitarist.net
Accept: text/html

HTTP/1.1 406 No acceptable objects were found
Server: Microsoft-IIS/5.0
Date: Thu, 28 Oct 2004 13:53:59 GMT

[...bleagh...]
You need to concentrate on why content-type negotiation is failing.
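For reference, the matching rule the server appears to get wrong can be sketched in a few lines of Python (my own illustration, not IIS code): under HTTP/1.1, a response is matched against the Accept header on the media type alone, so a charset parameter on Content-Type should never provoke a 406.

```python
# A sketch (not IIS's actual logic) of how HTTP/1.1 Accept matching is
# supposed to work: the charset parameter on Content-Type plays no part.

def media_type(content_type):
    """Strip parameters such as ;charset=utf-8 from a Content-Type value."""
    return content_type.split(";")[0].strip().lower()

def acceptable(accept_header, content_type):
    """Return True if content_type satisfies the client's Accept header."""
    offered = media_type(content_type)
    major = offered.split("/")[0]
    for item in accept_header.split(","):
        wanted = media_type(item)  # Accept items may also carry parameters
        if wanted in ("*/*", offered, major + "/*"):
            return True
    return False

# Googlebot's request against what this server is offering:
print(acceptable("text/html,text/plain", "text/html;charset=utf-8"))  # True
```

By this rule the 406 above is simply wrong: "text/html;charset=utf-8" satisfies "Accept: text/html,text/plain".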

<prejudice type="Halloween Papers">If it was me, my first priority
would be to migrate to an Apache server.</>
If you would like I would be happy to reset the HTTP headers to the
default setting so your site identically matches every other site
hosted on this server as far as IIS goes.


Pfffffffffffffft.

This server is demonstrating its inability to implement content-type
negotiation correctly. As such, it's unfit for use on the WWW in this
state.

That's my best offer based on the evidence presented and the results
as far as I can see them.
Jul 23 '05 #5

Once upon a time *The Bicycling Guitarist* wrote:
"Andreas Prilop" <nh******@rrzn-user.uni-hannover.de> wrote in message
news:Pine.GSO.4.44.0410281439180.12878-100000@s5b003...
On Thu, 28 Oct 2004, The Bicycling Guitarist wrote:
My web site has not been spidered by Googlebot since April 2003.

My host's tech guy just sent me the following. [...]
2) You are using XHTML strict.
I am curious why you chose XHTML strict rather than traditional?

"If you are using XHTML you should strive to make your pages validate as
XHTML 1.0 Transitional. The XHTML 1.0 Strict standard is a bit too
confining
for real world web sites."


This is absurd because any document that validates as [X]HTML Strict also
validates as [X]HTML Transitional. This person is clueless.


I will point this out to him.


I have a small site in XHTML 1.1 and UTF-8 and I have no problem with
Google. So, your host's tech guy doesn't know what he's talking about!

I have no indication that Google struggles with XHTML - but what's your
reason to write XHTML 1.0 rather than HTML 4.01 Strict?


For Duty and Humanity! I wish to make my pages more available to different
user-agents as they develop, and to make it easier to convert to xml in the
future.


If you want to use XHTML, why not go straight to XHTML 1.1 if you don't
need the transitional features of 1.0? It's no big difference moving from
1.0 Strict to 1.1.

In my opinion the transitional doctype should not be used in anything other
than frameset-based sites, no matter whether it's XHTML 1.0 or HTML 4. The
"frameset" doctype must be used for the file containing the frameset, and
the transitional doctype for the other pages within the site, which makes
the use of the "target" attribute valid. But if you don't use framesets,
you can go to XHTML 1.1 directly.

--
/Arne
http://w1.978.telia.com/~u97802964/
Jul 23 '05 #6

On Thu, 28 Oct 2004 14:53:30 GMT, Invalid User <us**@domain.invalid> wrote:
I have a small site in XHTML 1.1 and UTF-8 and I have no problem with
Google. So, your host's tech guy doesn't know what he's talking about!


But how do you get IE to handle it?

Or are you serving 1.1 as text/html ? Oh dear.

Steve

Jul 23 '05 #7

Invalid User <us**@domain.invalid> writes:
Once upon a time *The Bicycling Guitarist* wrote:
"Andreas Prilop" <nh******@rrzn-user.uni-hannover.de> wrote in message
I have no indication that Google struggles with XHTML - but what's your
reason to write XHTML 1.0 rather than HTML 4.01 Strict?
For Duty and Humanity! I wish to make my pages more available to
different user-agents as they develop, and to make it easier to
convert to xml in the future.


Write XHTML and use tidy or similar to convert it to HTML before
serving? At the moment there are more user agents that support HTML
than there are that support XHTML, and I don't know of any that only
support XHTML (given that most pages don't validate to any doctype,
any that do exist are probably confined to the lab).

XHTML also has problems with incremental rendering.

Oh, and all those /s in XHTML add to the filesize. ;)
If you want to use XHTML, why not go straight to XHTML 1.1 if you don't
need the transitional features of 1.0? It's no big difference moving from
1.0 Strict to 1.1.


The Appendix C fix isn't allowed with XHTML 1.1, so you either serve
it against the specifications (and why go to something as obscure as
XHTML 1.1 if not to follow the specifications), or you accept that
Internet Explorer users won't be able to view it.

--
Chris
Jul 23 '05 #8

The Bicycling Guitarist wrote:
My web site has not been spidered by Googlebot since April 2003. The site in
question is at www.TheBicyclingGuitarist.net/ I received much help from this
NG and the stylesheets NG when updating the code before then.


Seems to be sending the incorrect MIME type "text/*" instead of text/html:
[leif@localhost leif]$ telnet TheBicyclingGuitarist.net 80
Trying 216.229.101.149...
Connected to TheBicyclingGuitarist.net (216.229.101.149).
Escape character is '^]'.
GET / HTTP/1.1
Host: TheBicyclingGuitarist.net

HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Content-Location: http://TheBicyclingGuitarist.net/index.htm
Date: Thu, 28 Oct 2004 18:24:11 GMT
Content-Type: text/*;charset=utf-8
Accept-Ranges: bytes
Last-Modified: Wed, 27 Oct 2004 19:48:04 GMT
ETag: "fab99adf5dbcc41:9f9"
Content-Length: 5169

<snip>
Jul 23 '05 #9

Once upon a time *Steve Pugh* wrote:
On Thu, 28 Oct 2004 14:53:30 GMT, Invalid User <us**@domain.invalid> wrote:
I have a small site in XHTML 1.1 and UTF-8 and I have no problem with
Google. So, your host's tech guy doesn't know what he's talking about!


But how do you get IE to handle it?

Or are you serving 1.1 as text/html ? Oh dear.


Yes, I have to if I want the IE users to have access to it.
What do you mean by "Oh dear"? Nothing wrong with text/html, since it's
valid to do so, and even recommended by W3C. However, it's just a small
personal "play ground" for testing purposes.

But basically, why use HTML 4.01 Strict when XHTML is available? The
difference is not that big, but XHTML is more prepared for the future.
Therefore more XHTML-coded sites are published almost every day now.

--
/Arne
http://w1.978.telia.com/~u97802964/
Jul 23 '05 #10

Invalid User wrote:
Once upon a time *Steve Pugh* wrote:
Invalid User wrote:
I have a small site in XHTML 1.1 and UTF-8 and I have no problem
with Google.
But how do you get IE to handle it?

Or are you serving 1.1 as text/html ? Oh dear.


Yes, I have to if I want the IE users to have access to it.


That's why HTML is better.
What do you mean with "Oh dear"? Nothing wrong with text/html since
it's valid to do so, andf even recommended by W3C.
Wrong. For XHTML 1.1, the W3C says XHTML 1.1 "should not" be sent as
text/html.

http://www.w3.org/TR/xhtml-media-types/

But what the W3C says should take second seat to the current situation,
where XHTML served as text/html has real problems.

http://www.hixie.ch/advocacy/xhtml
But basically, why use HTML 4.01 Strict, when XHTML is available.
Because XHTML offers no benefits over HTML, but comes with several
distinct disadvantages. The most obvious is that MSIE cannot process XHTML.
The difference is not that big,
In terms of what each offers for authors, the difference is
non-existent. That is, XHTML offers nothing that HTML cannot do.
but XHTML is more prepared for the future.
Uh, yeah. But HTML is more prepared for now. And will still be useful in
the future. Just as useful as XHTML will be. Or do you have something
more to offer than the empty phrase "more prepared"?
Therefore more XHTML-coded sites are published almost every day now.


Mostly by people who don't understand the issues involved. Instead, like
you, they've been seduced by the "X" into thinking it's somehow cool or
modern.

--
Brian (remove "invalid" to email me)
Jul 23 '05 #11

Invalid User <us**@domain.invalid> wrote:
But basically, why use HTML 4.01 Strict, when XHTML is available.


Currently, most people use browsers (or browser-like OS components) that do
not understand XHTML. XHTML only works for these browsers if you pretend
that it is HTML (by sending it as text/html and relying on buggy parsing to
ignore the syntax differences).

If you have to pretend that XHTML is HTML to get browsers (or browser-like
OS components) to do something sensible with it, then why not just send
them HTML in the first place?

See also http://hixie.ch/advocacy/xhtml
--
Darin McGrew, mc****@stanfordalumni.org, http://www.rahul.net/mcgrew/
Web Design Group, da***@htmlhelp.com, http://www.HTMLHelp.com/

"Entering Yosemite National Park: laws of gravity strictly enforced"
Jul 23 '05 #12

On Thu, 28 Oct 2004, Invalid User wrote:
What do you mean with "Oh dear"? Nothing wrong with text/html since
it's valid to do so, ^^^^^

First of all: if it's sent as text/html, it doesn't conform to the
requirements for XHTML/1.1, therefore it's not XHTML/1.1. The W3C
have a trademark on that.

And I would have a care about using the term "valid" in such a
context. That's a specialist technical term with a rather precise
meaning.

At best it might be compatible with XHTML/1.0 Appendix C, but that
concession (compatibility-faking XHTML as text/html) stops right
there.

You should be aware that even that depends on some handwaving, and
the validators have to go around some hoops to stand a chance of
coping with these schizophrenic documents.

By rights, HTML is an "application of SGML" (as the W3C specification
says) and thus *should* be measured against SGML specifications, not
against some XML compatibility hack.
even recommended by W3C.
I must say I think you need to read those recommendations more
carefully. Even the W3C offers XHTML/1.0 Appendix C as nothing more
than an end-of-the-line compatibility hack.
However, it's just a small personal "play ground" for testing
purposes.
That's fine. No objections to that at all - best of luck to you.
But when you start claiming that you're taking the W3C specifications
seriously, then I think you at least must expect to be measured in
those terms.
But basically, why use HTML 4.01 Strict, when XHTML is available.
Because it's what the presently deployed browsers were designed to
render, I'd say.
The difference is not that big,
Fine... however, the difference between tag-soup and properly
structured markup is important. Properly structured markup is just as
possible in HTML as it is in XHTML, but unfortunately, in real-world
operational terms, a content-type of "text/html" implies to the
recipient "you better treat this as tag soup", no matter that some of
us try to take structured HTML seriously.

And HTML/4.01 and XHTML/1.0 can be converted to and from each other by
rote. So *that* difference is no big deal, and represents less than a
small step for mankind. Whereas the difference between tag-soup and
structured markup is so broad that it may never be bridged. My
suspicion is that we're going to end up with XHTML-flavoured tag soup,
thus losing any of the benefits which XML was *supposed* to bring us.
but XHTML is more prepared for the future.
Then the suggestion would be to base your internal processes on XML,
but - for now - to generate HTML as the end product.
Therefore more XHTML coded sites is published almost every day now.


That reads like a plain non-sequitur to me. Publishing more and more
XHTML pages doesn't in itself improve the ability of browsers to
render it. It might motivate browser developers to do better, but
that's not going to produce immediate results.
Jul 23 '05 #13

Once upon a time *Chris Morris* wrote:

The Appendix C fix isn't allowed with XHTML 1.1, so you either serve
it against the specifications (and why go to something as obscure as
XHTML 1.1 if not to follow the specifications), or you accept that
Internet Explorer users won't be able to view it.


The specifications and guidelines from W3C can be a bit confusing
sometimes. But at least XHTML served as text/html is valid. And on several
W3C pages content similar to this can be found:

<copy and pasted>
C.9. Character Encoding
Historically, the character encoding of an HTML document is either
specified by a web server via the charset parameter of the HTTP
Content-Type header, or via a meta element in the document itself. In an
XML document, the character encoding of the document is specified on the
XML declaration (e.g., <?xml version="1.0" encoding="EUC-JP"?>).

In order to portably present documents with specific character
encodings, the best approach is to ensure that the web server provides
the correct headers. If this is not possible, a document that wants to
set its character encoding explicitly must include both the XML
declaration an encoding declaration and a meta http-equiv statement
(e.g., <meta http-equiv="Content-type" content="text/html;
charset=EUC-JP" />).

In XHTML-conforming user agents, the value of the encoding declaration
of the XML declaration takes precedence.
</end copy and paste>
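The precedence described in that quoted section can be sketched as a simple lookup chain (an illustration only; the function name and the default are my own assumptions, not any real parser's API):

```python
# Sketch of the precedence C.9 describes for an XHTML-conforming user
# agent: the server's HTTP header charset first, then the XML
# declaration's encoding (which such agents prefer over the meta
# element), then the meta http-equiv declaration.

def pick_encoding(http_charset=None, xml_decl=None, meta_charset=None):
    """Return the first declared encoding in precedence order."""
    for declared in (http_charset, xml_decl, meta_charset):
        if declared:
            return declared.lower()
    return "utf-8"  # a reasonable fallback for XML documents

print(pick_encoding(http_charset="utf-8", meta_charset="euc-jp"))   # utf-8
print(pick_encoding(xml_decl="EUC-JP", meta_charset="iso-8859-1"))  # euc-jp
```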

I am using content="application/xhtml+xml" on a couple of pages. But
since the server doesn't provide the correct header, the page is served as
text/html and is still valid. Anyway, I'm prepared for the real XHTML
and XML when playing around with it as I do :-)

--
/Arne
http://w1.978.telia.com/~u97802964/
Jul 23 '05 #14

Once upon a time *Brian* wrote:
Therefore more XHTML-coded sites are published almost every day now.


Mostly by people who don't understand the issues involved. Instead, like
you, they've been seduced by the "X" into thinking it's somehow cool or
modern.


Where do you think I copied this from:
<?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">

No, it's not any of my pages. It's from the W3C site! Don't even the
guys at W3C understand? I know a lot of sites by people who know a lot
more about this than I do, and who use XHTML served as text/html.

--
/Arne
http://w1.978.telia.com/~u97802964/
Jul 23 '05 #15

"Leif K-Brooks" <eu*****@ecritters.biz> wrote in message
news:2u*************@uni-berlin.de...
The Bicycling Guitarist wrote:
My web site has not been spidered by Googlebot since April 2003. The site
in question is at www.TheBicyclingGuitarist.net/ [...]

Seems to be sending the incorrect MIME type "text/*" instead of text/html:


[leif@localhost leif]$ telnet TheBicyclingGuitarist.net 80
Trying 216.229.101.149...
Connected to TheBicyclingGuitarist.net (216.229.101.149).
Escape character is '^]'.
GET / HTTP/1.1
Host: TheBicyclingGuitarist.net

HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Content-Location: http://TheBicyclingGuitarist.net/index.htm
Date: Thu, 28 Oct 2004 18:24:11 GMT
Content-Type: text/*;charset=utf-8
Accept-Ranges: bytes
Last-Modified: Wed, 27 Oct 2004 19:48:04 GMT
ETag: "fab99adf5dbcc41:9f9"
Content-Length: 5169

Oh no. I just asked the tech to change from text/html to text/* on the
advice of someone in another NG. I am sorry about posting the same question
to two similar NG's. I was told "If you accept text/* you get your page. It
doesn't seem to be linked to the charset."

The problem existed for a year and a half using "text/html". The change to
"text/*" just happened today or yesterday. Should the tech change it back?
How do I get these two NG threads back together?

Chris Watson a.k.a. "The Bicycling Guitarist"
Jul 23 '05 #16

On Thu, 28 Oct 2004 19:50:17 GMT, Invalid User <us**@domain.invalid>
wrote:
[...]
Where do you think I copied this from:
<?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">


Hmm, let's quote you from an earlier post...

"On Thu, 28 Oct 2004 14:53:30 GMT, Invalid User
<us**@domain.invalid> wrote:
[...]
I have a small site in XHTML 1.1 and UTF-8 and I have no
problem with Google. So, your host's tech guy doesn't know
what he's talking about!"

You claimed "I have a small site in XHTML 1.1 and UTF-8",
but your document prolog indicates XHTML 1.0.

There's a difference specified at W3C on what MIME type to use for these
two different versions of XHTML as you may want to find out.

Hence the confusion that did develop in this thread.

--
Rex
Jul 23 '05 #17

"Alan J. Flavell" <fl*****@ph.gla.ac.uk> wrote in message
news:Pi*******************************@ppepc56.ph.gla.ac.uk...
On Thu, 28 Oct 2004, The Bicycling Guitarist wrote:

$ telnet www.thebicyclingguitarist.net 80
Trying 216.229.101.149...
Connected to www.thebicyclingguitarist.net.
Escape character is '^]'.
GET / HTTP/1.0
Host: www.thebicyclingguitarist.net
Accept: text/html,text/plain

HTTP/1.1 406 No acceptable objects were found
Server: Microsoft-IIS/5.0
Date: Thu, 28 Oct 2004 13:51:57 GMT
Content-Length: 3906
Content-Type: text/html

Thank you for your help, Alan. I notice the test date is Oct 28. On Oct 27
the tech changed the MIME type from text/html to text/* at my request. Is
that wrong?
Chris Watson a.k.a. "The Bicycling Guitarist"
Jul 23 '05 #18

The Bicycling Guitarist wrote:
Oh no. I just asked the tech to change from text/html to text/* on the
advice of someone in another NG. I am sorry about posting the same question
to two similar NG's. I was told "If you accept text/* you get your page. It
doesn't seem to be linked to the charset."


He was talking about the Accept header that the user agent (e.g.
browser) sends to the server, not the Content-Type that your server
sends to the browser. Wildcarding is acceptable for Accept headers,
but for Content-Type headers, it's ridiculous.
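That asymmetry can be illustrated with a small sketch (the function name is my own; this is not any server's real validation code): a wildcard like text/* is a legal media range in a request's Accept header, but a response's Content-Type must name one concrete type.

```python
# Sketch of the asymmetry: "text/*" is a legal media RANGE in a request's
# Accept header, but a response's Content-Type must be one concrete type.

def valid_content_type(value):
    """A server response Content-Type may not contain wildcards."""
    mediatype = value.split(";")[0].strip()          # drop parameters
    type_, _, subtype = mediatype.partition("/")
    return bool(type_) and bool(subtype) and "*" not in mediatype

print(valid_content_type("text/html;charset=utf-8"))  # True
print(valid_content_type("text/*;charset=utf-8"))     # False - the broken header
```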

By the way, your broken content-type header makes the site not work in
working browsers (like Mozilla).
Jul 23 '05 #19

"Alan J. Flavell" <fl*****@ph.gla.ac.uk> wrote in message
news:Pi*******************************@ppepc56.ph.gla.ac.uk...
On Thu, 28 Oct 2004, The Bicycling Guitarist wrote:
My host's tech guy just sent me the following.
Once more with feeling:
$ telnet www.thebicyclingguitarist.net 80
Trying 216.229.101.149...
Connected to www.thebicyclingguitarist.net.
Escape character is '^]'.
GET / HTTP/1.0
Host: www.thebicyclingguitarist.net
Accept: text/html

HTTP/1.1 406 No acceptable objects were found
Server: Microsoft-IIS/5.0
Date: Thu, 28 Oct 2004 13:53:59 GMT

You need to concentrate on why content-type negotiation is failing.


The following is what my host's tech guy just sent me. Now he says it's the
fault of specifying the charset in the server header. I thought that's what
we're supposed to do. Is he doing it wrong?

Chris,
While I agree that the charset of usf-8 [sic] is hardly "custom" the fact
that you had us enter it as a server injected http header as documented at
http://www.w3.org/International/O-HTTP-charset makes it a custom setting on
your site on IIS. Also doing a search on the M$ KB I found a list of IIS
error codes which included "406 - Client browser does not accept the MIME
type of the requested page." again leading me to believe this issue has to
do with the MIME charset specificiation. You also qouted in a previous
message "Google sends Accept: text/html,text/plain; which of course makes
good sense for a robot as it doesn't want anything else" further reinforcing
my theory.

When you telnet in and request the page manually with an accept of
"text/html" or "text/html,text/plain" the server tries to return
"text/html;charset=utf-8" which isn't what the client requested, so it kicks
back the 406 error. If you specify the content type in the accept field, it
matches what the server is offering (see below) and gives the HTTP 200
status. I also temporarily removed the charset=usf-8 setting and tested it
again sending nothing but an "Accept: text/html" and "text/html,text/plain"
and it responded correctly. If there is another way in IIS that you would
like me to specify the charset, other than outlined at w3.org, please let me
know.

$ telnet thebicyclingguitarist.net 80
Trying 216.229.101.149...
Connected to www.thebicyclingguitarist.net.
Escape character is '^]'.
GET / HTTP/1.0
Host: www.thebicyclingguitarist.net
Accept Content-type: text/html,text/plain,usf-8

HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Content-Location: http://www.thebicyclingguitarist.net/index.htm
Date: Thu, 28 Oct 2004 21:01:12 GMT
Content-Type: text/html;charset=utf-8
Accept-Ranges: bytes
Last-Modified: Wed, 27 Oct 2004 19:48:04 GMT
ETag: "fab99adf5dbcc41:9fa"
Content-Length: 5169
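One thing worth noticing in that transcript: it sends a header named "Accept Content-type:", which is not a standard request header, so the server most likely saw no Accept header at all and skipped negotiation entirely. A sketch of what a correctly formed request looks like as raw bytes (an illustrative helper of my own, not part of the thread):

```python
# Builds the raw HTTP/1.0 request the telnet session was presumably meant
# to send. Note the header is named "Accept:"; a server that sees no
# Accept header at all has nothing to negotiate against.

def build_request(host, path="/", accept="text/html,text/plain"):
    """Return the raw request text, CRLF line endings, blank line at the end."""
    lines = [
        f"GET {path} HTTP/1.0",
        f"Host: {host}",
        f"Accept: {accept}",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"  # empty line terminates the headers

print(build_request("www.thebicyclingguitarist.net"))
```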


Jul 23 '05 #20

Alan J. Flavell wrote:
My suspicion is that we're going to end up with XHTML-flavoured tag soup,
thus losing any of the benefits which XML was *supposed* to bring us.


We're seeing some of that already, in tag-soup pages in which
XHTML-style tags like <br /> are sprinkled randomly (interspersed with
their HTML-style equivalents like <br>).

--
== Dan ==
Dan's Mail Format Site: http://mailformat.dan.info/
Dan's Web Tips: http://webtips.dan.info/
Dan's Domain Site: http://domains.dan.info/
Jul 23 '05 #21

In article <nc****************@newssvr13.news.prodigy.com>,
"The Bicycling Guitarist" <Ch***@TheBicyclingGuitarist.net> writes:
The following is what my host's tech guy just sent me. Now he says it's the
fault of specifying the charset in the server header. I thought that's what
we're supposed to do. Is he doing it wrong?


Yes.

Well, I don't know IIS - it may be impossible to do it right.
But what Alan reported shows that it doesn't support HTTP, at least
as currently configured. If your host can't fix that, then they're
as fit for use on the Internet as a washing machine that only works
on 115 volts DC.

Hmmm, I recently ran an accessibility evaluation on an IIS site -
just checked it. They return "text/html; charset=UTF-8" and an HTML
page, regardless of any Accept header I send. Maybe that would be
an improvement for your host, if they lack the basic commonsense to
upgrade to Apache.

--
Nick Kew
Jul 23 '05 #22

On Thu, 28 Oct 2004 21:32:47 -0400, Daniel R. Tobias <da*@tobias.name>
wrote:
Alan J. Flavell wrote:

My suspicion is that we're going to end up with XHTML-flavoured tag
soup, thus losing any of the benefits which XML was *supposed* to bring
us.


We're seeing some of that already, in tag-soup pages in which
XHTML-style tags like <br /> are sprinkled randomly (interspersed with
their HTML-style equivalents like <br>).


Don't forget the XHTML pages with topmargin=0 added to the BODY tag...
Even systems geared to producing standards-based markup, like some blogging
tools, are not rigorously checking input and comments, so you end up with
invalid pages anyway. Not a big deal, but this means that browsers will
not be able to treat XHTML differently from HTML: it is and will stay tag
soup. There is nothing (more) wrong with sending XHTML tag soup instead of
sending HTML tag soup, from the browser's and reader's point of view.

--
Rijk van Geijtenbeek

The Web is a procrastination apparatus:
It can absorb as much time as is required to ensure that you
won't get any real work done. - J.Nielsen

Jul 23 '05 #23

"Nick Kew" <ni**@hugin.webthing.com> wrote in message
news:7e************@hugin.webthing.com...
In article <nc****************@newssvr13.news.prodigy.com>,
"The Bicycling Guitarist" <Ch***@TheBicyclingGuitarist.net> writes:

But what Alan reported shows that it doesn't support HTTP, at least
as currently configured. If your host can't fix that, then they're
as fit for use on the Internet as a washing machine that only works
on 115 volts DC.

Hmmm, I recently ran an accessibility evaluation on an IIS site - just checked it. - Nick Kew


Hi Nick. I like "fussy" mode. Your online tools have greatly assisted me
many times. I am very grateful for your assistance, and to everyone else who
has contributed to this thread. I may need another host soon.
Chris Watson a.k.a. "The Bicycling Guitarist"
Jul 23 '05 #24

In article <op**************@news.individual.net>,
"Rijk van Geijtenbeek" <ri**@operaremovethiz.com> writes:
Even systems geared to producing standards-based markup, like some blogging
tools, are not rigorously checking input and comments, so you end up with
invalid pages anyway.


Huh? Either tools generate valid markup, or they don't. If you want to
accept markup from users and guarantee it's valid, see for example
http://www.apachetutor.org/apps/annot

--
Nick Kew
Jul 23 '05 #25

On Fri, 29 Oct 2004 12:08:42 +0100, Nick Kew <ni**@hugin.webthing.com>
wrote:
In article <op**************@news.individual.net>,
"Rijk van Geijtenbeek" <ri**@operaremovethiz.com> writes:
Even systems geared to producing standards-based markup, like some
blogging tools, are not rigorously checking input and comments, so you
end up with invalid pages anyway.
Huh? Either tools generate valid markup, or they don't.


Many don't, even though they use nice standards-based, table-less,
CSS-driven markup; that's what I meant.
If you want to
accept markup from users and guarantee it's valid, see for example
http://www.apachetutor.org/apps/annot


--
Rijk van Geijtenbeek

The Web is a procrastination apparatus:
It can absorb as much time as is required to ensure that you
won't get any real work done. - J.Nielsen

Jul 23 '05 #26

"The Bicycling Guitarist" <Ch***@TheBicyclingGuitarist.net> a écrit
dans le message de news:nc****************@newssvr13.news.prodigy.com
The following is what my host's tech guy just sent me. (...)
Chris, (...)
When you telnet in and request the page manually with an accept of
"text/html" or "text/html,text/plain": the server tries to return
"text/html;charset=utf-8", which isn't what the client requested, so it
kicks back the 406 error.


There seems to be a big confusion here: the MIME type value and the charset
share the same Content-Type HTTP header. For content-type negotiation, of
course, only the MIME type is compared. The charset information is, as said
before, very important and should always be returned.
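Pierre's point can be sketched as a small, hypothetical matcher (the function name and logic are illustrative, not any server's actual code): only the media type takes part in the comparison, with parameters such as "charset=utf-8" stripped off first.

```python
# Hypothetical sketch: how an HTTP server should decide whether a
# response Content-Type satisfies a client's Accept header. Only the
# media type is compared; the charset parameter plays no part.

def acceptable(accept_header: str, content_type: str) -> bool:
    """Return True if content_type's media type matches the Accept header."""
    # Strip parameters such as "; charset=utf-8" before comparing.
    media_type = content_type.split(";")[0].strip().lower()
    for item in accept_header.split(","):
        accepted = item.split(";")[0].strip().lower()  # drop q-values etc.
        if accepted in ("*/*", media_type):
            return True
        # Handle range forms such as "text/*".
        if accepted.endswith("/*") and media_type.startswith(accepted[:-1]):
            return True
    return False

# A server returning "text/html; charset=utf-8" should NOT send a 406
# to a client that asked only for "text/html":
print(acceptable("text/html,text/plain", "text/html; charset=utf-8"))  # True
```

Under this comparison the tech's telnet test would get a 200, not a 406; the 406 only appears if the server wrongly compares the full parameterised value.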

Jul 23 '05 #27


"Pierre Goiffon" <pg******@nowhere.invalid> wrote in message
news:41***********************@news.free.fr...
"The Bicycling Guitarist" <Ch***@TheBicyclingGuitarist.net> a écrit
dans le message de news:nc****************@newssvr13.news.prodigy.com
The following is what my host's tech guy just sent me.

(...)
Chris,

(...)
When you telnet in and request the page manually with an accept of
"text/html" or "text/html,text/plain": the server tries to return
"text/html;charset=utf-8", which isn't what the client requested, so it
kicks back the 406 error.
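The manual telnet test the tech describes amounts to sending the raw request below (the host name here is a placeholder, not the actual site); building it as a Python string makes the exact wire format visible.

```python
# Sketch of the request one would type after "telnet <host> 80".
# "www.example.com" is a placeholder for the real host.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Accept: text/html,text/plain\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request)
# A correctly configured server should answer "HTTP/1.1 200 OK" with
# "Content-Type: text/html; charset=utf-8"; a "406 Not Acceptable"
# points at the broken media-type comparison described above.
```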


There seems to be a big confusion here: the MIME type value and the charset
share the same Content-Type HTTP header. For content-type negotiation, of
course, only the MIME type is compared. The charset information is, as said
before, very important and should always be returned.


Should I also forward the above paragraph to my host's tech? Will *this*
finally fix things (assuming he understands it and can implement it on IIS)?
Chris Watson a.k.a. "The Bicycling Guitarist"
Jul 23 '05 #28

"The Bicycling Guitarist" <Ch***@TheBicyclingGuitarist.net> a écrit
dans le message de news:33****************@newssvr13.news.prodigy.com
Should I also forward the above paragraph to my host's tech?


Huh. Well, no problem. You should just rephrase it, because I think my
English is far from perfect :)
The idea is that the Content-Type HTTP header contains two distinct pieces
of MIME information: the MIME content type itself and the charset. The
content negotiation must be done with the _MIME_ content type.

Jul 23 '05 #29

Invalid User <us**@domain.invalid> wrote:
Therefore more XHTML-coded sites are published almost every day now.

Every day more Lemmings are jumping off a cliff.
Mostly by people who don't understand the issues involved. Instead, like
you, they've been seduced by the "X" into thinking it's somehow cool or
modern.
Where do you think I copied this from:
<?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">

No, it's not any of my pages. It's from the W3C site! Don't even the
guys at W3C understand?


They do, it's a mistake to see their usage of xhtml served as text/html
as a recommendation. It's an aspiration that has proven to be pointless
for the foreseeable future. Even the things they do recommend should be
evaluated for value, usefulness and practicality. This evaluation is
left to the coding community and the verdict on xhtml is: don't. Look
through the archives of this group; every time it is discussed the
mythical benefits of xhtml have been dispelled by the people who've been
there and done that.
I know a lot more sites by people who know a lot
more about this than I do, and who use XHTML served as text/html


Lemmings, do you want to be one?

--
Spartanicus
Jul 23 '05 #30

On Thu, 28 Oct 2004 19:50:17 GMT, Invalid User <us**@domain.invalid> wrote:
Where do you think I copied this from:
<?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">

No, it's not any of my pages. It's from the W3C site! Don't even the
guys at W3C understand? I know a lot more sites by people who know a lot
more about this than I do, and who use XHTML served as text/html


Just because W3C publishes a recommendation and puts it into practice does
not mean it is A Good Thing all by itself.

The code cited above will screw things up on IE. I know the results page
on their validator renders tragically in Opera. You must remember, W3C
page coding is not normative, the way the recommendations can be. Errors are
made, likely not in validity but in execution, just like on many sites.

And if you give it a little thought, they do kind of "have to" use their
recommendations, don't they? Whether they're in the end good or bad, they
need to show their way is worth doing. If they were publishing a new way
to send content to browsers and they themselves did not use it, it would
be suspect. So, even if we consider each use of XHTML as a "vote" for that
way of doing things (which IMO we should not), the W3C site should not
have an equal vote to another site with no compelling reason to use it.
Jul 23 '05 #31

Invalid User wrote:
Once upon a time *Brian* wrote:
Therefore more XHTML-coded sites are published almost every day now.


Mostly by people who don't understand the issues involved. Instead,
like you, they've been seduced by the "X" into thinking it's
somehow cool or modern.


Where do you think I copied this from:
<?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">


So this is XHTML 1.0, and not 1.1? Because you were claiming that 1.1
could be sent as text/html. Anyways, none of this addresses the issue:
what is the content type that the W3C's server sends before sending the
document which you copied here?

--
Brian (remove "invalid" to email me)
Jul 23 '05 #32

Pierre Goiffon wrote:
The idea is that the Content-Type HTTP header contains two distinct pieces
of MIME information: the MIME content type itself and the charset. The
content negotiation must be done with the _MIME_ content type.


Sure, but note that you could, if you were so inclined, also offer
negotiation on encoding.
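Brian's aside could, hypothetically, look like the picker below: choose among the charsets the server has available by the q-values in the client's Accept-Charset header. The function and its simplified parsing are illustrative only; real parsing per the HTTP spec has more rules.

```python
# Hypothetical Accept-Charset negotiator (simplified q-value handling).

def pick_charset(accept_charset: str, available=("utf-8", "iso-8859-1")):
    """Return the available charset the client prefers most, or None."""
    prefs = {}
    for item in accept_charset.split(","):
        parts = item.split(";")
        name = parts[0].strip().lower()
        q = 1.0  # a listed charset with no q-value gets q=1.0
        for p in parts[1:]:
            p = p.strip()
            if p.startswith("q="):
                q = float(p[2:])
        prefs[name] = q
    # Pick the available charset with the highest client preference;
    # "*" acts as a fallback for charsets not listed explicitly.
    best = max(available, key=lambda c: prefs.get(c, prefs.get("*", 0.0)))
    return best if prefs.get(best, prefs.get("*", 0.0)) > 0 else None

print(pick_charset("iso-8859-1;q=0.8, utf-8"))  # utf-8 (implicit q=1.0 wins)
```

If nothing available has a positive q-value, the server would answer 406, exactly parallel to the media-type case discussed earlier in the thread.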

--
Brian (remove "invalid" to email me)
Jul 23 '05 #33

On Fri, 29 Oct 2004, The Bicycling Guitarist wrote:
Seems like there is a big confusion : the mime type value and the
charset share the same content type http header. For a content
type negociation of course only the mime type is compared. The
charset information is, as told before, very important and should
always be returned.
Should I also forward the above paragraph to my host's tech?


Based on your earlier reports, s/he seems to be making it up as they
go along. I've seen not the slightest evidence so far (even making
generous allowances for the fact that we're getting these reports at
second hand - no offence meant) that the tech in question either
understands content-type negotiation, or understands that is what
you're trying to achieve, or has the first inkling how to set it up on
that particular server.
Will *this* finally fix things (assuming he understands it and can
implement it on IIS)?


That's the imponderable question, or two, isn't it?

Those of us around here who have commented already, are confident that
they know how to do it with Apache, and are aware of some minor
glitches and can advise on details (and I don't for a moment mean only
me). Even if the details of Apache would really be better discussed
on a group having comp.infosystems.www.servers.* in its name. But, if
you needed to counsel your own ISP how to do it with IIS, I think I'd
have to refer you to the relevant servers group, and leave you to the
tender mercies of the MCSEs ("Must Consult Someone Else" was one of
the more polite versions I heard).

I really have no idea whether this is going to prove feasible in the
end, on IIS, and I have higher priority issues on my stack than to
want to find out, to be brutally honest. I can't recall anything
that has both "MS" and either "Internet" or "W3C" in it, that hasn't
ended in tears for one reason or another.

Good luck. (let's hope you're not paying a premium for using IIS
rather than an Apache-based server...)
Jul 23 '05 #34


"Alan J. Flavell" <fl*****@ph.gla.ac.uk> wrote in message
news:Pi*******************************@ppepc56.ph.gla.ac.uk...
On Fri, 29 Oct 2004, The Bicycling Guitarist wrote:

Based on your earlier reports, s/he seems to be making it up as they
go along.
I really have no idea whether this is going to prove feasible in the
end, on IIS, and I have higher-priority issues on my stack than to
want to find out, to be brutally honest. I can't recall anything
that has both "MS" and either "Internet" or "W3C" in it that hasn't
ended in tears for one reason or another.


Thank you, Alan. I am honored by the attention you gave already to this
matter. It just further proved what you recall about MS.
Chris Watson a.k.a. "The Bicycling Guitarist"
Jul 23 '05 #35

"Alan J. Flavell" <fl*****@ph.gla.ac.uk> writes:
1) You have a custom charset also specified in the HTTP headers at the
server level


I wouldn't exactly refer to utf-8 as "custom" !!!


Obviously you have never dealt with IIS sysadmins; any attempt to invoke
HTTP protocol operations is "custom" (as in "undocumented") and as such
must be avoided to keep the system stable.

It is usually sufficient, faster and much cheaper to just change to a
hosting plan that includes access to a web server.
--
| ) Più Cabernet,
-( meno Internet.
| ) http://bednarz.nl/
Jul 23 '05 #36

This discussion thread is closed

Replies have been disabled for this discussion.