Bytes | Software Development & Data Engineering Community
The need for Unicode types in C++0x

Hi, I am currently learning Qt, a portable C++ framework which comes
with both commercial and GPL licenses, and which provides conversion
operations between its various types and standard C++ types.

For example, its QString type provides a toWString() that returns a
std::wstring with its Unicode contents.

So, since wstring supports the largest character set, why do we need
explicit Unicode types in C++?

I think what is needed is a "unicode" locale or, at most, some
unicode locales. I don't consider being compatible with C99 an excuse.
Oct 1 '08 #1
Correction:
Ioannis Vranos wrote:
> For example its QString type provides a toWString() that returns a
> std::wstring with its Unicode contents.

That should be toStdWString(): its QString type provides a
toStdWString() that returns a std::wstring with its Unicode contents.
Oct 1 '08 #2
REH
On Oct 1, 5:59 am, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:
> So, since wstring supports the largest character set, why do we need
> explicit Unicode types in C++?

If I understand what you are asking...

wstring in the standard defines neither the character set nor the
encoding. Given that Unicode is currently a 21-bit standard, how can
wstring support the largest character set on a system where wchar_t is
16 bits (assuming a one-character-per-element encoding)? You could
only support the BMP (which is exactly what most systems and languages
that "claim" Unicode support are really capable of).

REH
Oct 1 '08 #3
REH wrote:
> wstring in the standard defines neither the character set nor the
> encoding. Given that Unicode is currently a 21-bit standard, how can
> wstring support the largest character set on a system where wchar_t is
> 16 bits (assuming a one-character-per-element encoding)? You could
> only support the BMP.

I do not know much about encodings, only what I need, but the question
does not sound reasonable to me.

If the system supports Unicode as a system-specific type, why can't
wchar_t be made wide enough to serve as that system-specific Unicode
type, on that system?
Oct 1 '08 #4
On 2008-10-01 18:57, Ioannis Vranos wrote:
> If that system supports Unicode as a system-specific type, why can't
> wchar_t be made wide enough as that system-specific Unicode type, in
> that system?

Because it has been too narrow for 5 to 10 years, and the compiler
vendors do not want to take any chances with backward compatibility.
And since we will get Unicode types anyway, it is a good idea to keep
wchar_t for encodings that are not the same size as the Unicode types.

--
Erik Wikström
Oct 1 '08 #5
On 2008-10-01 12:57:27 -0400, Ioannis Vranos
<iv*****@no.spam.nospamfreemail.gr> said:
> If that system supports Unicode as a system-specific type, why can't
> wchar_t be made wide enough as that system-specific Unicode type, in
> that system?
It can be. But the language definition doesn't require it to be, and
with many implementations it's not. So if you want to traffic in
Unicode you have basically three options: ensure that your character
type can handle 21 bits, drop down to a subset of Unicode (as REH
mentioned, the BMP fits in 16 bit code points), or use a variable-width
encoding like UTF-8 or UTF-16.

Or you can wait for C++0x, which will provide char16_t and char32_t.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

Oct 1 '08 #6
On Oct 1, 11:59 am, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:
> Hi, I am currently learning QT, a portable C++ framework which
> comes with both a commercial and GPL license, and which
> provides conversion operations to its various types to/from
> standard C++ types.
> For example its QString type provides a toWString() that
> returns a std::wstring with its Unicode contents.

In what encoding format? And what if the "usual" encoding for
wstring isn't Unicode (the case on many Unix platforms)?

> So, since wstring supports the largest character set, why do
> we need explicit Unicode types in C++?

Because wstring doesn't guarantee Unicode, and implementers
can't change what it does guarantee in their particular
implementation.

> I think what is needed is a "unicode" locale or at the most,
> some unicode locales.

Well, to begin with, there are only two sizes of character
types; the various Unicode encoding forms come in three sizes,
so you already have a size mismatch. And since wchar_t already
has a meaning, we can't just arbitrarily change it.

> I don't consider being compatible with C99 as an excuse.

How about being compatible with C++03?

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Oct 2 '08 #7
On Oct 1, 6:28 pm, REH <spamj...@stny.rr.com> wrote:
> [...]
> wstring in the standard defines neither the character set, nor the
> encoding. Given that Unicode is currently a 21-bit standard, how can
> wstring support the largest character set on a system where wchar_t is
> 16-bits (assuming a one-character-per-element encoding)? You could
> only support the BMP (which is exactly what most systems and languages
> that "claim" Unicode support are really capable of).

No. Most systems that claim Unicode support on 16 bits use
UTF-16. Granted, it's a multi-element encoding, but if you're
doing anything serious, effectively, so is UTF-32. (In
practice, I find that UTF-8 works fine for a lot of things.)

--
James Kanze
Oct 2 '08 #8
James Kanze wrote:
>> For example its QString type provides a toWString() that
>> returns a std::wstring with its Unicode contents.
>
> In what encoding format? And what if the "usual" encoding for
> wstring isn't Unicode (the case on many Unix platforms).

<curious>
What are those implementations using for 'wchar_t'?
</curious>

Schobi
Oct 2 '08 #9
Erik Wikström wrote:
> Because it has been too narrow for 5 to 10 years and the compiler
> vendors do not want to take any chances with backward compatibility,

How will it break backward compatibility, if the size of wchar_t changes?

> and since we will get Unicode types it is a good idea to use wchar_t
> for encodings not the same size as the Unicode types.

I am talking about not needing those Unicode types since we have wchar_t
and locales.

Oct 2 '08 #10
Pete Becker wrote:
> On 2008-10-01 12:57:27 -0400, Ioannis Vranos
> <iv*****@no.spam.nospamfreemail.gr> said:
>> If that system supports Unicode as a system-specific type, why can't
>> wchar_t be made wide enough as that system-specific Unicode type, in
>> that system?
>
> It can be. But the language definition doesn't require it to be, and
> with many implementations it's not.
C++03 mentions:
"Type wchar_t is a distinct type whose values can represent distinct
codes for all members of the *largest* extended character set specified
among the supported *locales* (22.1.1). Type wchar_t shall have the same
size, signedness, and alignment requirements (3.9) as one of the other
integral types, called its underlying type".
Oct 2 '08 #11
James Kanze wrote:
>> So, since wstring supports the largest character set, why do
>> we need explicit Unicode types in C++?
>
> Because wstring doesn't guarantee Unicode, and implementers
> can't change what it does guarantee in their particular
> implementation.

Again, if the implementers want Unicode, they can add a Unicode locale
and make wchar_t large enough to match it.
In other words, C++0x could require all implementations to provide
specific Unicode locales that will work with existing facilities
(wchar_t, wstring, etc.).
Oct 2 '08 #12
REH
On Oct 2, 3:41 am, James Kanze <james.ka...@gmail.com> wrote:
> No. Most systems that claim Unicode support on 16 bits use
> UTF-16. Granted, it's a multi-element encoding, but if you're
> doing anything serious, effectively, so is UTF-32. (In
> practice, I find that UTF-8 works fine for a lot of things.)

The ones I am familiar with only support UCS-2, not UTF-16. Windows,
for example, has WCHAR_T which is not UTF-16 (although Windows does
support MBCS, but I am not sure that is truly UTF-8).

REH

Oct 2 '08 #13
REH wrote:
> The ones I am familiar with only support UCS-2, not UTF-16. Windows,
> for example, has WCHAR_T which is not UTF-16 [...].

TTBOMK, this isn't true anymore. It's UTF-16 now, not UCS-2.
Schobi
Oct 2 '08 #14
REH
On Oct 2, 9:25 am, Hendrik Schober <spamt...@gmx.de> wrote:
>> The ones I am familiar with only support UCS-2, not UTF-16. Windows,
>> for example, has WCHAR_T which is not UTF-16 [...].
>
> TTBOMK, this isn't true anymore. It's UTF-16 now, not UCS-2.
Thanks. I guess I need to update my reference material. I haven't done
Windows programming since the NT days.

REH
Oct 2 '08 #15
On Oct 2, 12:21 pm, Hendrik Schober <spamt...@gmx.de> wrote:
>> In what encoding format? And what if the "usual" encoding for
>> wstring isn't Unicode (the case on many Unix platforms).
>
> <curious>
> What are those implementations using for 'wchar_t'?
> </curious>

EUC. EUC (= Extended Unix Codes) is originally a multi-byte
code, but exists as a 32-bit code as well; see
http://docs.sun.com/app/docs/doc/802...sn?l=en&a=view.
It's apparently the standard encoding for wchar_t under Solaris
and HP/UX, and perhaps elsewhere as well. Thus, LATIN SMALL
LETTER E WITH ACUTE has the code 0x00E9 in Unicode, but
0x30000069 under Solaris. (printf("%04x\n", (unsigned int)L'é')
-- the compiler apparently recognizes my LC_CTYPE=iso_8859_1
locale for the file input.)

--
James Kanze
Oct 2 '08 #16
On Oct 2, 12:39 pm, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:
>> Because wstring doesn't guarantee Unicode, and implementers
>> can't change what it does guarantee in their particular
>> implementation.
>
> Again, if the implementers want Unicode, they can add a
> Unicode locale and make wchar_t large enough to match it.

And break their existing code base? They're not that
irresponsible (most of them, anyway). And the basic idea behind
wchar_t is that it is supposed to be locale independent, at least
for the encoding.

> In other words, C++0x could require all implementations to
> provide specific Unicode locales that will work with existing
> facilities (wchar_t, wstring, etc.).

It could. It would also be ignored by most major implementors
if it did.

--
James Kanze
Oct 2 '08 #17
James Kanze wrote:
> And break their existing code base? They're not that
> irresponsible (most of them, anyway). And the basic idea behind
> wchar_t is that it is supposed to be locale independent, at least
> for the encoding.


How would they "break" their existing code base, by adding some
additional locales and even changing the size of wchar_t?
Oct 2 '08 #18
In article <gc***********@ulysses.noc.ntua.gr>,
Ioannis Vranos <iv*****@no.spam.nospamfreemail.gr> wrote:
> If that system supports Unicode as a system-specific type, why can't
> wchar_t be made wide enough as that system-specific Unicode type, in
> that system?

There is no system that supports "Unicode". You should go to:
http://www.unicode.org/standard/WhatIsUnicode.html

Unicode is basically a catalog of glyphs and associated numeric values.
For a computer system, it only makes sense to be precise and talk about
UTF-8, UTF-16 or UTF-32.
http://www.unicode.org/faq/utf_bom.html

A "Unicode" locale makes no sense because the
locale represents much more than simply the character encoding that is
being used.
http://www.unicode.org/reports/tr35/#Locale

Oh, and finally, MS-Windows misused the word Unicode to mean UTF-16
(nowadays; in the past, they meant UCS-2).

Yannick
Oct 2 '08 #19
Yannick Tremblay wrote:
> Unicode is basically a catalog of glyphs and associated numeric values.
> For a computer system, it only makes sense to be precise and talk about
> UTF-8, UTF-16 or UTF-32.
> http://www.unicode.org/faq/utf_bom.html

I agree so far.

> A "Unicode" locale makes no sense because the
> locale represents much more than simply the character encoding that is
> being used.
> http://www.unicode.org/reports/tr35/#Locale

True, but I think Unicode locales could be implemented for characters
only, leaving the rest unchanged. For example:

locale::global(locale("english"));
wcin.imbue(locale("UTF16"));
wcout.imbue(locale("UTF16"));

would change only the character set, keeping the rest of the locale
settings as previously defined, or at their defaults.
Oct 2 '08 #20
On 2008-10-02 06:34:25 -0400, Ioannis Vranos
<iv*****@no.spam.nospamfreemail.gr> said:
> C++03 mentions:
> "Type wchar_t is a distinct type whose values can represent distinct
> codes for all members of the *largest* extended character set specified
> among the supported *locales* (22.1.1). Type wchar_t shall have the same
> size, signedness, and alignment requirements (3.9) as one of the other
> integral types, called its underlying type."
There's nothing there that requires wchar_t to be large enough to hold
Unicode code points. Certainly, if an implementation supports a Unicode
locale, wchar_t has to be large enough to handle those characters. But
the language definition doesn't require Unicode locales.

--
Pete

Oct 2 '08 #21
Pete Becker wrote:
> There's nothing there that requires wchar_t to be large enough to hold
> Unicode code points. Certainly if an implementation supports a Unicode
> locale, wchar_t has to be large enough to handle those characters. But
> the language definition doesn't require Unicode locales.

Yes, I am talking about the upcoming Unicode character types in C++0x,
in comparison with the Unicode locales alternative.

Oct 2 '08 #22
On 2008-10-02 12:26, Ioannis Vranos wrote:
>> Because it has been too narrow for 5 to 10 years and the compiler
>> vendors do not want to take any chances with backward compatibility,
>
> How will it break backward compatibility, if the size of wchar_t changes?

Because the user expects to be able to pack 5 wchar_t into a network
message of a fixed size, or read a few characters from a specific
position in a binary file. Or any number of reasons where someone has
made assumptions about the size of wchar_t.

--
Erik Wikström
Oct 2 '08 #23
On Oct 2, 6:11 pm, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:
> True, but I think Unicode locales could be implemented for characters
> only, leaving the rest unchanged. For example:
> locale::global(locale("english"));
> wcin.imbue(locale("UTF16"));
> wcout.imbue(locale("UTF16"));
> would change only the character set, keeping the rest of the locale
> settings as previously defined, or at their defaults.
That's not quite how locales work. What I think you're talking
about is a UTF-16 codecvt facet. And there are ways of
constructing a locale by copying another locale, just replacing a
single facet. Of course, the ctype facet is also affected; part
of the problem in doing this cleanly is that abstractions that
we'd like to keep separate get mixed up. (Note that this can be
a problem even within a pure Unicode environment. Something
like toupper('i') is locale dependent, and will return a
different character in a Turkish locale.)

--
James Kanze
Oct 2 '08 #24
On Oct 2, 4:27 pm, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:
>> And break their existing code base? They're not that
>> irresponsible (most of them, anyway). And the basic idea
>> behind wchar_t is that it is supposed to be locale
>> independent, at least for the encoding.
>
> How would they "break" their existing code base, by adding
> some additional locales and even changing the size of wchar_t?
Adding locales is no problem. Changing the size, or anything
involving the behavior of wchar_t breaks real code. Some of the
code is probably poorly written, but convincing your customers
that they are idiots doesn't sell many compilers.

--
James Kanze
Oct 2 '08 #25
James Kanze wrote:
>
Adding locales is no problem. Changing the size, or anything
involving the behavior of wchar_t breaks real code. Some of the
code is probably poorly written, but convincing your customers
that they are idiots doesn't sell many compilers.

OK, but if their badly-written code is broken, they will fix it.
Oct 2 '08 #26
Ioannis Vranos wrote:
> James Kanze wrote:
>> Adding locales is no problem. Changing the size, or anything
>> involving the behavior of wchar_t breaks real code. Some of the
>> code is probably poorly written, but convincing your customers
>> that they are idiots doesn't sell many compilers.
>
> OK, but if their badly-written code is broken, they will fix it.
For most of the past ten years I have written code that
had to be compiled using half a dozen compiler/std lib
combinations on as many platforms. We had the very same
code carry UTF-8 strings on some Linux versions, UTF-16
on Windows, and UTF-32 on OS X and some other Unices. We
have learned to deal with all data types being platform-
dependent and with our code needing to adapt.
Still, if your vendor does something stupid (like when VC
suddenly started to throw several 10k of useless warnings
for a 2MLoC code base that used to compile clean), you're
doomed.
And this isn't any different when you got yourself into
the trouble yourself. Even if you know that, 15 years ago,
some people (who had long left the company when you came;
the company was a very different one back then, and the
code has been bought several times over) did something
stupid, it doesn't mean that, now that you have several MLoC
relying on a specific size of some built-in type, you can
spend several man-years fixing this and take another two
releases until the dust has settled and all the bugs you
introduced doing so are fixed. While that would be nice
to do, the customers won't pay for it.

C++ has always respected the gazillions of lines of legacy
code real-world projects have. That's probably a reason
for its success.

Schobi
Oct 2 '08 #27
James Kanze wrote:
>> <curious>
>> What are those implementations using for 'wchar_t'?
>> </curious>
>
> EUC. EUC (= Extended Unix Codes) is originally a multi-byte
> code, but exists as a 32-bit code as well; see
> http://docs.sun.com/app/docs/doc/802...sn?l=en&a=view.
> It's apparently the standard encoding for wchar_t under Solaris
> and HP/UX, and perhaps elsewhere as well.
Thanks!

Schobi
Oct 2 '08 #28
On Oct 2, 9:06 pm, Ioannis Vranos <ivra...@no.spam.nospamfreemail.gr>
wrote:
>> Adding locales is no problem. Changing the size, or
>> anything involving the behavior of wchar_t breaks real code.
>> Some of the code is probably poorly written, but convincing
>> your customers that they are idiots doesn't sell many
>> compilers.
>
> OK, but if their badly-written code is broken, they will fix
> it.

I don't guess you've ever worked in industry. The authors of
the code will claim that it's the compiler which is broken, and
find one which accepts it.

And of course, some of the code that would break probably isn't
broken. If you have no real portability requirements, and you
have a guarantee that wchar_t contains EUC, what's wrong with
programming against that? And you have that guarantee.

Practically speaking, it's easy to add new features; about the
only thing adding char32_t et al. can break is code which used
those symbols as identifiers. Whereas the standard and vendor
specifications are a contract, which you really can't change
without wreaking havoc. And, if you're a vendor, losing sales.

--
James Kanze

Oct 3 '08 #29
On Oct 2, 9:43 pm, Hendrik Schober <spamt...@gmx.de> wrote:

> [...]
> C++ has always respected the gazillions of lines of legacy
> code real-world projects have. That's probably a reason
> for its success.
Were it only so. One of the reasons why there was so much
interest in Java was that it was so difficult to write
portable C++, and the language was felt to be changing
under you. We've had to rework quite a bit of code, including
reorganizing some, because of two-phase look-up, and the
differences between the classical iostream and the standard one
have caused more than a few problems as well.

--
James Kanze
Oct 3 '08 #30

This thread has been closed and replies have been disabled. Please start a new discussion.
