BigMan wrote:
What is the difference between char and signed char, short and unsigned
short, int and signed int, long and unsigned long?
Well, obviously, the range of values that they represent is different.
Signed types can represent negative numbers, whereas unsigned types use
the same bit patterns to represent large positive numbers.
But of course C and C++ don't have any run-time checks to stop you from
subtracting 1 from an unsigned 0 (it just wraps around to the largest
value). So it is tempting to think that the types are not really
different at all, since in many common operations, e.g. addition and
subtraction, the bit patterns end up the same whatever the types.
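To make that concrete, here is a small sketch (my own illustration, not
from the original post) showing that subtracting 1 from a signed 0 and
from an unsigned 0 leaves identical bits in memory on a two's-complement
machine, which is every mainstream CPU today:

```c
#include <string.h>

/* Returns 1 when the stored bit patterns of signed -1 and of the
   unsigned value produced by 0u - 1u are identical. Unsigned
   wrap-around is well-defined; on a two's-complement machine the
   resulting bits match the signed representation of -1 exactly. */
int same_bits(void) {
    int si = 0;
    unsigned int ui = 0u;
    si -= 1;   /* -1 */
    ui -= 1u;  /* wraps to UINT_MAX */
    return memcmp(&si, &ui, sizeof si) == 0;
}
```

Only the interpretation of those bits differs, which is exactly where
the type matters.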
But there are plenty of places where it does matter. For example:
formatted input/output, conversions (e.g. when you assign a shorter
type to a longer one, is it sign-extended?), comparisons (is the bit
pattern 11111111 greater or less than 00000000?), and so on.
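Those two cases, conversion and comparison, can be sketched like this
(my own hypothetical helper functions, not from the original post):

```c
/* Widening conversion: a signed char sign-extends, an unsigned char
   zero-extends, so the same bit pattern 11111111 diverges. */
int widen_signed(void)   { signed char   sc = -1;   return sc; }  /* -1  */
int widen_unsigned(void) { unsigned char uc = 0xFF; return uc; }  /* 255 */

/* Comparison: whether 11111111 is less than 00000000 depends on the
   type, not on the bits. */
int signed_is_negative(void)   { signed char   sc = -1;   return sc < 0; }
int unsigned_is_negative(void) { unsigned char uc = 0xFF; return uc < 0; }
```

The first pair returns -1 and 255 from the same bits; the second pair
returns 1 and 0, because an unsigned value can never be negative.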
Using the correct types also helps the compiler give you sensible
warnings.
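For instance, a classic mixed-signedness loop (a sketch of my own;
as far as I know gcc and clang report this via -Wsign-compare, which
is enabled by -Wextra when compiling C):

```c
#include <stddef.h>

/* The loop condition compares a signed 'i' against an unsigned 'n';
   compilers can warn about this signed/unsigned comparison, which
   would silently misbehave if 'i' were ever negative. */
int sum(const int *a, size_t n) {
    int total = 0;
    for (int i = 0; i < n; ++i)   /* warning: signed/unsigned compare */
        total += a[i];
    return total;
}
```

Declaring `i` as `size_t` to match `n` makes the warning go away and
documents that the index is never negative.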
--Phil.