mr wrote:
How can I 'force' C++ to interpret "blabla" strings as Unicode strings
instead of ASCII strings? I just don't want to add 'L' before the thousands
of string literals in my projects... All my projects use Unicode,
and I don't see any reason for the C++ compiler to keep creating ASCII strings.
Is there a way?
Thanks!
I would have to agree with the other poster: adding the L before string
and character literals is the least of your problems, but it is a good
place to start.
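
If you are on Windows only, the TCHAR layer in <tchar.h> can at least save
you typing the L by hand: define _UNICODE (and UNICODE) in the project
settings and _T("...") expands to a wide literal. A minimal sketch,
assuming an MSVC/Windows build (note this only changes the literal type,
it does not make the code Unicode-correct):

// Minimal sketch, assuming a Windows/MSVC build.  With _UNICODE (and
// UNICODE) defined in the project settings, _T("...") expands to L"..."
// and _TCHAR is wchar_t; without them the same source builds as a
// narrow-character program.
#include <tchar.h>
#include <cstdio>

int main()
{
    const _TCHAR *msg = _T("blabla");   // L"blabla" under _UNICODE
    std::printf("%u\n", static_cast<unsigned>(_tcslen(msg)));
    return 0;
}
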
I have already done this for a huge project and it is a lot of work.
You need to examine every single place you use strings and determine
what the correct types are. In some places you must still use narrow
strings (such as network protocols and most file IO). In other places
the format of the wide and narrow strings will depend on other factors,
and you will need to write code that narrows and widens strings
correctly, taking those factors into account.
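
For that narrowing/widening code, a minimal, locale-dependent sketch
using the standard C conversion functions might look like this (the
names to_wide and to_narrow are just illustrative, and real code needs
a deliberate choice of encoding and better error handling):

// Minimal sketch of locale-dependent narrow<->wide conversion helpers.
// Function names are illustrative, not from any real library.
#include <cstdlib>   // std::mbstowcs, std::wcstombs, MB_CUR_MAX
#include <clocale>   // std::setlocale
#include <string>
#include <vector>

std::wstring to_wide(const std::string &narrow)
{
    std::vector<wchar_t> buf(narrow.size() + 1);
    std::size_t n = std::mbstowcs(&buf[0], narrow.c_str(), buf.size());
    if (n == static_cast<std::size_t>(-1))
        return std::wstring();              // invalid multibyte sequence
    return std::wstring(&buf[0], n);
}

std::string to_narrow(const std::wstring &wide)
{
    std::vector<char> buf(wide.size() * MB_CUR_MAX + 1);
    std::size_t n = std::wcstombs(&buf[0], wide.c_str(), buf.size());
    if (n == static_cast<std::size_t>(-1))
        return std::string();               // unrepresentable character
    return std::string(&buf[0], n);
}

int main()
{
    std::setlocale(LC_ALL, "");             // conversions use the user's locale
    std::wstring w = to_wide("blabla");
    std::string  s = to_narrow(w);
    return s == "blabla" ? 0 : 1;
}
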
Correctly supporting Unicode is more than just turning on wide-character
support, though. std::wstring by itself cannot handle Unicode strings
correctly (no better than std::string can).
How much work you need to do depends a lot on what the application does,
and this is one of the reasons why you will need to examine every use of
a string or character. Remember, for example, that on some widely used
platforms (notably Windows) wchar_t is not big enough to hold every
Unicode character - you need 32 bits per code point - and that's assuming
you can avoid any issues of canonicalisation.
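
A quick way to see the code-unit problem (this sketch assumes a C++11
compiler for the u"..." / U"..." literals): the single character U+1D11E
needs two 16-bit code units but only one 32-bit code unit, so counting
code units is not counting characters:

// U+1D11E MUSICAL SYMBOL G CLEF is outside the Basic Multilingual Plane,
// so in UTF-16 it takes a surrogate pair.
#include <iostream>
#include <string>

int main()
{
    std::u16string s16 = u"\U0001D11E";   // UTF-16: surrogate pair
    std::u32string s32 = U"\U0001D11E";   // UTF-32: one code unit

    std::cout << "UTF-16 code units: " << s16.size() << '\n';   // 2
    std::cout << "UTF-32 code units: " << s32.size() << '\n';   // 1
    std::cout << "sizeof(wchar_t) here: " << sizeof(wchar_t)
              << " bytes\n";   // 2 on Windows, 4 on most Unix systems
    return 0;
}
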
My tip is to start with all the code that performs IO and then look at
any code that makes use of third-party APIs. Thin wrappers around all of
these places are often a good way to handle this.
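
As an illustration of the kind of thin wrapper I mean, here is a minimal
sketch around a narrow third-party call; third_party_log and log_message
are hypothetical names, not a real library - the point is that the
wide-to-narrow conversion happens in exactly one place:

// Minimal sketch of a thin wrapper around a narrow third-party API.
// third_party_log stands in for any library function that takes char*.
#include <clocale>
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

// Stand-in for a real third-party function that only understands narrow strings.
void third_party_log(const char *message) { std::puts(message); }

// What the rest of the (wide-string) code calls; the encoding is decided here.
void log_message(const std::wstring &message)
{
    std::vector<char> buf(message.size() * MB_CUR_MAX + 1);
    std::size_t n = std::wcstombs(&buf[0], message.c_str(), buf.size());
    third_party_log(n == static_cast<std::size_t>(-1) ? "<unconvertible>"
                                                      : &buf[0]);
}

int main()
{
    std::setlocale(LC_ALL, "");
    log_message(L"blabla");
    return 0;
}
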