Java script Dude wrote:
Have tried both:
<META http-equiv="Content-Type" content="text/html;
charset=western">
<META http-equiv="Content-Type" content="text/html;
charset=western/iso">
... without success.
Well, you may have more success if you stop inventing your own encodings
and use existing ones instead (I did):
Western Latin:
<meta http-equiv="Content-Type" content="text/html;
charset=iso-8859-1">
Central European:
<meta http-equiv="Content-Type" content="text/html;
charset=iso-8859-2">
UTF-8:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
Korean, for that matter:
<meta http-equiv="Content-Type" content="text/html; charset=euc-kr">
Otherwise this so-called "flaw" is just proof that if you feed the
browser wrong parameters (or omit *required* ones), its rendering
behavior may seem random. This may be an interesting game, but one can
play it with any existing browser, so why pick on IE exclusively? ;-)
Also, kill me if I understand why this particular behavior should be a
*flaw*. The mode is called *Auto-Detect*, not *Use System Default*. So
instead of simply using the system default or iso-8859-1 like the
wannabes do, IE indeed tries to *auto-detect* the encoding of the crap
you fed into it. If not a single hint is provided by the server or by
http-equiv (or http-equiv names a non-existent encoding), then IE
studies the body text for hints.
+n- is a valid UTF-7 sequence, so what is your claim? Should IE connect
to ICANN to check whether the domain belongs to an Asian registry? Or
submit the text to an online translator to see whether the UTF-7
reading can be translated into any existing language (and if not, then
use what)? Namely, what is your proposed algorithm for such a case?
"No matter what, it is not UTF-7"? And why not?
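To see why an auto-detector can plausibly guess UTF-7 from plain-looking text, here is a quick illustration (a Python sketch using its built-in utf-7 codec, not IE's actual detector): in UTF-7 a '+' starts a modified-Base64 run and '-' ends it, so pure ASCII silently changes meaning once decoded as UTF-7.

```python
# In UTF-7, '+' begins a modified-Base64 run encoding UTF-16 code
# units, and '-' terminates it. '+AE0-' encodes U+004D ('M'), so an
# all-ASCII byte sequence reads differently under a UTF-7 decoder.
raw = 'Hi +AE0-om'
decoded = raw.encode('ascii').decode('utf-7')
print(decoded)  # prints "Hi Mom"
```

So text full of +...- runs is genuinely ambiguous: treated as ASCII it is one thing, treated as UTF-7 it is another, and nothing in the bytes themselves says which was intended.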
One should never depend on server headers alone; the encoding used
should be indicated on the page itself. HTTP, XML and even sorry-a**
XHTML all have means for it.
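For the record, those three means look like this (the first line is what the server sends in the HTTP response, not something you put in the file; the other two go at the top of the document itself):

```
Content-Type: text/html; charset=utf-8

<?xml version="1.0" encoding="utf-8"?>

<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
```

Put the meta inside <head>, before any non-ASCII text, so the browser sees it before it has to guess anything.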