The *usual unary conversions* are applied in turn to each operand in an expression. They serve to make the job of the compiler's expression evaluator easier by limiting the number of types that must be supported. One of these rules is:

> An unsigned type of rank less than `int`, all of whose values can be represented by type `int`, is converted to `int`.
The *usual binary conversions* are applied when an expression is evaluated. The usual binary conversions occur after the usual unary conversions and serve to convert all operands to a single common type, which is typically also the type of the result. One of these rules is:

> If both operands have signed types, then both are converted to the signed type with the greater rank.
In your case, the usual unary conversions convert both `x` and `y` into `int`s. The usual binary conversions are satisfied to leave them as `int`s, and they determine that the result will also be an `int`. Thus, your logical expression compares the `unsigned int` `i` to the signed `int` result of the multiplication. The usual conversions have rules to handle this operation, but it is not uncommon to get a compiler warning alerting you that a signed and an unsigned type, whose ranges do not coincide, are being compared.
By the way, this is why you don't always get major improvements in execution time by using the smallest possible types. Your variables might be stored in `char`s or `short`s, but your arithmetic still takes place with `int`s.