On Dec 12, 4:12 am, RS <rsina_no.ssppa...@comcast.net> wrote:
> Hi all,
> I was told that using unsigned int instead of int can speed up the code.
> Is this true? If so, why?
I can only come up with one instance, and it has nothing to do with
the type as such. If you know that a value should never be negative
(like the index into an array) you can use an unsigned type, and then
you don't have to check for negative values when validating input to
functions.
Perhaps if you are working on an embedded processor of some sort there
might be a difference between signed and unsigned, but on most
general-purpose machines there is not.
> Are there any other rules one should follow to optimize the code for
> speed (i.e. using float instead of double, short instead of int,
> unsigned short instead of short, etc.)?
A good rule: use int for integers and double for reals. If you need
lots of reals (millions) you might want to consider float to reduce
memory usage; the same goes for short instead of int. Performance-wise
most computers work about as fast on floats as on doubles, and int is
always a good bet as the fastest integer type.
If you really need the fastest type to work with and have a C99-
compliant compiler/library, include the file <stdint.h> (it will be in
the next C++ standard as <cstdint>) and use the 'int_fastX_t' (or
'uint_fastX_t' for unsigned) types, where X is the least number of bits
you need; i.e. int_fast32_t will be the fastest integer type of at
least 32 bits. On most machines this is the normal int.
--
Erik Wikström