They take Unicode chars and then work out the actual byte sequence
in whatever encoding you choose. For instance, if you use ASCII or
UTF-8 and pass "hi", you get a byte[] { 0x68, 0x69 }. With
Encoding.Unicode (UTF-16), you'd get 0x68, 0x00, 0x69, 0x00.
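For example, a quick sketch of my own (not from any particular docs) of what
GetBytes gives you for "hi" with different encodings:

using System;
using System.Text;

class GetBytesDemo
{
    static void Main()
    {
        string s = "hi";

        // ASCII and UTF-8 both encode 'h' and 'i' as one byte each.
        byte[] ascii = Encoding.ASCII.GetBytes(s);   // { 0x68, 0x69 }
        byte[] utf8  = Encoding.UTF8.GetBytes(s);    // { 0x68, 0x69 }

        // Encoding.Unicode is UTF-16 little-endian: two bytes per char.
        byte[] utf16 = Encoding.Unicode.GetBytes(s); // { 0x68, 0x00, 0x69, 0x00 }

        Console.WriteLine(BitConverter.ToString(ascii)); // 68-69
        Console.WriteLine(BitConverter.ToString(utf8));  // 68-69
        Console.WriteLine(BitConverter.ToString(utf16)); // 68-00-69-00
    }
}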
The same applies in the other direction, when you get chars or a string
from bytes: it reads the bytes and works out which Unicode characters
they represent.
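Going the other way looks roughly like this (again, just a sketch):

using System;
using System.Text;

class GetStringDemo
{
    static void Main()
    {
        byte[] bytes = { 0x68, 0x69 };

        // Decode the raw bytes back into Unicode chars / a string.
        char[] chars = Encoding.ASCII.GetChars(bytes);  // { 'h', 'i' }
        string text  = Encoding.ASCII.GetString(bytes); // "hi"

        Console.WriteLine(new string(chars)); // hi
        Console.WriteLine(text);              // hi
    }
}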
Also, there is no Encoding.GetChars(string) method, since it would just
return the chars of the string, which are already in the .NET internal
representation (UTF-16 Unicode).
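If you just want the chars of a string, you don't need an Encoding at all;
something like this does it (untested sketch):

string s = "hi";
char[] chars = s.ToCharArray(); // already UTF-16 chars, no conversion needed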
-mike
MVP
"Viorel" <vm*********@moldova.cc> wrote in message
news:eP*************@TK2MSFTNGP10.phx.gbl...
It's a little bit mysterious to me how the encoding and decoding
functions work. What happens underneath when they are called?
Encoding1.GetBytes(string1); in particular ASCII.GetBytes(string1)
Encoding1.GetChars(string1);
Encoding1.GetChars(arrayofbytes1);
string1=Encoding1.GetString(arrayofbytes1);
I know (I think) that a char is based on 2 bytes (16 bits)
and that all strings in C# (.NET) are a sequence of chars.
P.S. Please explain in terms of working with bytes (I come from the C
world).
I would appreciate it.