Bytes | Software Development & Data Engineering Community
Question re. integral promotion, signed->unsigned

Hello,

I've been trying to clear up a confusion about
integer promotions during expression evaluation.
I've checked the C FAQ and C++ FAQ (they are
different languages, but I was hoping one would
clear up the confusion), as well as googling
groups and the web.

The confusion concerns binary operators where
both operands are of integral type. If either
operand is unsigned long, the other operand is
converted to unsigned long. The confusing thing
is that this has a higher priority than checking
for longs. Similarly, unsigned ints are checked
for before ints. So it is likely that an int is
converted to an unsigned int, or a long is
converted to an unsigned long. I'm getting this
from Schildt's "The Complete Reference C++",
3rd ed. I realize his book is not exactly lauded
here, but I also checked an old draft of the C++
standard:
http://www.csci.csusb.edu/dick/c++std/cd2/conv.html,
and it says something similar (but not the same).
In item 1 of section 4.5, it also says that if
something can't fit into an int, it is interpreted
as an unsigned int.

Why is it OK to reinterpret a signed integral type as
its unsigned counterpart? If it's a negative number,
then all of a sudden, the reinterpretation yields a
large positive number. I'm assuming that the signed
number is modelled as a 2's complement number, even
though it may be implemented differently in hardware.
Likewise, the unsigned is modelled as a straight binary
coded decimal (BCD). Does the sign bit of the 2's
complement number become the MSB of the unsigned
number, with all other bits remaining unchanged?

I've TA'd digital electronics before, so I know that
the mechanics of adding a 2's complement negative
number is the same as adding the BCD interpretation of
those same bits (as was pointed out in one of my google
groups searches). It seems (to me, and perhaps
wrongly) that the rule of converting signed integral
types to their unsigned counterparts is due to lack of
a better way to handle it. Perhaps the reasoning was
that the resulting bits would still be accurate if one
operand was negative while the other was positive.

I was wondering if anyone could confirm or correct my
understanding of this, and maybe offer some insight as
to why this order of promotion is desirable?

Fred

P.S. An interesting thing is that Schildt points out an
exception. If one operand is a long while the other is
an unsigned int whose value can't fit into a long (e.g.
on a platform where both have the same number of bits),
then both operands convert to unsigned longs. Again, this
is a conversion from a signed integral type to its
unsigned counterpart.
--
Fred Ma
Dept. of Electronics, Carleton University
1125 Colonel By Drive, Ottawa, Ontario
Canada, K1S 5B6
Nov 14 '05 #1
In <40***************@doe.carleton.ca> Fred Ma <fm*@doe.carleton.ca> writes:
The confusion is that for a binary operator,
where both operands are of integral type.
If either operand is unsigned long, the other
operand is converted to unsigned long. The
confusing thing is that this has a higher
priority than checking for longs. Similarly,
unsigned ints are checked for before ints.
So it is likely that an int is converted to
an unsigned int, or a long is converted to an
unsigned long.
This is correct.
Why is it OK to reinterpret a signed integral type as
its unsigned counterpart? If it's a negative number,
then all of a sudden, the reinterpretation yields a
large positive number.
Consider the alternative: the unsigned might have to be converted to signed,
but such a conversion isn't well defined if the value of the unsigned
cannot be represented by the signed. In such cases, the result is
usually a negative value, and this isn't any better than the actual
scenario you have described above.

The moral of the story: the programmer MUST know what he's doing when
combining signed and unsigned operands.
I'm assuming that the signed
number is modelled as a 2's complement number, even
though it may be implemented differently in hardware.
No need for such an assumption: the result of the conversion is well
defined, regardless of the representation of negative values.
Likewise, the unsigned is modelled as a straight binary
coded decimal (BCD).
The standard does not allow this. The unsigned must be using a pure
binary encoding.
Does the sign bit of the 2's
complement number become the MSB of the unsigned
number, with all other bits remaining unchanged?


If the signed value is negative, the maximum value that
can be represented by the unsigned type, plus one, is added
to it. This is true regardless of representation, but, if the
representation is two's complement, no operation needs to be
actually performed: the bit pattern of the signed value is
merely reinterpreted as unsigned.

While this conversion is well defined and yields the same result,
regardless of implementation, converting an unsigned value that cannot
be represented by a signed type yields an implementation-defined result
(or, in C99, may even raise a signal). So, the standard has chosen the
well defined conversion for this case, which is a good thing.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #2
Dan Pop wrote:
Why is it OK to reinterpret a signed integral type as
its unsigned counterpart? If it's a negative number,
then all of a sudden, the reinterpretation yields a
large positive number.


Consider the alternative: the unsigned might have to be converted to signed,
but such a conversion isn't well defined if the value of the unsigned
cannot be represented by the signed. In such cases, the result is
usually a negative value, and this isn't any better than the actual
scenario you have described above.

The moral of the story: the programmer MUST know what he's doing when
combining signed and unsigned operands.


Yes, it certainly seems so.
I'm assuming that the signed
number is modelled as a 2's complement number, even
though it may be implemented differently in hardware.


No need for such an assumption: the result of the conversion is well
defined, regardless of the representation of negative values.


I use it as a conceptual aid, though I realize that it may
not reflect actual implementation.
Likewise, the unsigned is modelled as a straight binary
coded decimal (BCD).


The standard does not allow this. The unsigned must be using a pure
binary encoding.


Sorry, getting my terminology mixed up. I meant binary encoding.
Does the sign bit of the 2's
complement number become the MSB of the unsigned
number, with all other bits remaining unchanged?


If the signed value is negative, the maximum value that
can be represented by the unsigned type, plus one, is added
to it. This is true regardless of representation, but, if the
representation is two's complement, no operation needs to be
actually performed: the bit pattern of the signed value is
merely reinterpreted as unsigned.

While this conversion is well defined and yields the same result,
regardless of implementation, converting an unsigned value that cannot
be represented by a signed type yields an implementation-defined result
(or, in C99, may even raise a signal). So, the standard has chosen the
well defined conversion for this case, which is a good thing.


Yes, and if the standard had defined it the other way (so that
converting unsigned->signed yields a reinterpretation of the
potentially fictitious 2's complement representation), then that would
be well defined too. It seems like an arbitrary choice. Thanks
for confirming how it works.

Fred
--
Fred Ma
Dept. of Electronics, Carleton University
1125 Colonel By Drive, Ottawa, Ontario
Canada, K1S 5B6
Nov 14 '05 #3
In <40***************@doe.carleton.ca> Fred Ma <fm*@doe.carleton.ca> writes:
Dan Pop wrote:

If the signed value is negative, the maximum value that
can be represented by the unsigned type, plus one, is added
to it. This is true regardless of representation, but, if the
representation is two's complement, no operation needs to be
actually performed: the bit pattern of the signed value is
merely reinterpreted as unsigned.

While this conversion is well defined and yields the same result,
regardless of implementation, converting an unsigned value that cannot
be represented by a signed type yields an implementation-defined result
(or, in C99, may even raise a signal). So, the standard has chosen the
well defined conversion for this case, which is a good thing.
Yes, and if the standard had defined it the other way (so that
converting unsigned->signed yields a reinterpretation of the
potentially fictitious 2's complement representation), then that would
be well defined too.


That would not be possible, given that the standard doesn't require
two's complement representation and other allowed representations (one's
complement and sign-magnitude) don't have a range as wide as two's
complement.
It seems like an arbitrary choice.


It's less arbitrary than it seems to you. OTOH, the way integral
promotions (not to be confused with the usual arithmetic conversions)
work is based on an arbitrary and suboptimal choice (value preserving
instead of the more natural signedness preserving).

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #4
Dan Pop wrote:

In <40***************@doe.carleton.ca> Fred Ma <fm*@doe.carleton.ca> writes:
Dan Pop wrote:

If the signed value is negative, the maximum value that
can be represented by the unsigned type, plus one, is added
to it. This is true regardless of representation, but, if the
representation is two's complement, no operation needs to be
actually performed: the bit pattern of the signed value is
merely reinterpreted as unsigned.

While this conversion is well defined and yields the same result,
regardless of implementation, converting an unsigned value that cannot
be represented by a signed type yields an implementation-defined result
(or, in C99, may even raise a signal). So, the standard has chosen the
well defined conversion for this case, which is a good thing.


Yes, and if the standard had defined it the other way (so that
converting unsigned->signed yields a reinterpretation of the
potentially fictitious 2's complement representation), then that would
be well defined too.


That would not be possible, given that the standard doesn't require
two's complement representation and other allowed representations (one's
complement and sign-magnitude) don't have a range as wide as two's
complement.


Well, as I said, I'm using two's complement more as a pictorial
guide. Couldn't they just as easily define unsigned->signed
conversion as subtraction of the maximum number representable
by the unsigned type?
It seems like an arbitrary choice.


It's less arbitrary than it seems to you. OTOH, the way integral
promotions (not to be confused with the usual arithmetic conversions)
work is based on an arbitrary and suboptimal choice (value preserving
instead of the more natural signedness preserving).


Actually, the way they do it seems to make sense to me. There
is no way to avoid losing information, so why bother pretending
to try? Why not just make the operations work properly for a small,
anticipatable subset of cases? The cases they target are
probably motivated by simple hardware visualization, e.g. if someone
were using C integers to emulate hardware. This is done in DSP
that is eventually meant to be implemented as hardware. OTOH,
trying to preserve information (even signedness) may give a softer
failure, or a less gross error, but the argument against that is
similar to the argument for wanting a programming bug to be easy
to find. You want a bug to have clear and overt symptoms. The
more dramatic the error and its outward signs, the better.

Sort of makes sense, eh?

Fred
--
Fred Ma
Dept. of Electronics, Carleton University
1125 Colonel By Drive, Ottawa, Ontario
Canada, K1S 5B6
Nov 14 '05 #5
In <40***************@doe.carleton.ca> Fred Ma <fm*@doe.carleton.ca> writes:
Dan Pop wrote:

It's less arbitrary than it seems to you. OTOH, the way integral
promotions (not to be confused with the usual arithmetic conversions)
work is based on an arbitrary and suboptimal choice (value preserving
instead of the more natural signedness preserving).


Actually, the way they do it seems to make sense to me. There
is no way to avoid losing information, so why bother pretending
to try?


You don't know what you're talking about: no loss of information is
possible in the integral promotions.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #6
> >The confusion is that for a binary operator,
where both operands are of integral type.
If either operand is unsigned long, the other
operand is converted to unsigned long.
This is correct.


I don't think it's correct for the binary operator ">>", when
the right-hand operand is an unsigned long
(either that, or my compiler is buggy).
The moral of the story: the programmer MUST know what he's doing when
combining signed and unsigned operands.


A subset of the moral: the programmer MUST know what he's doing
when using C.
Nov 14 '05 #7
Dan Pop wrote:

In <40***************@doe.carleton.ca> Fred Ma <fm*@doe.carleton.ca> writes:
Dan Pop wrote:

It's less arbitrary than it seems to you. OTOH, the way integral
promotions (not to be confused with the usual arithmetic conversions)
work is based on an arbitrary and suboptimal choice (value preserving
instead of the more natural signedness preserving).


Actually, the way they do it seems to make sense to me. There
is no way to avoid losing information, so why bother pretending
to try?


You don't know what you're talking about: no loss of information is
possible in the integral promotions.

Dan


I don't see why you say no information is lost. You basically
lost the original value of the number by adding the maximum
value representable by the unsigned integer. Sure, you can get
it back, but that entails that you keep track of which operands
in an expression were subjected to this reinterpretation of the
bits. That is extra information you need, which is just another
manifestation of lost information.

Fred
--
Fred Ma
Dept. of Electronics, Carleton University
1125 Colonel By Drive, Ottawa, Ontario
Canada, K1S 5B6
Nov 14 '05 #8
Old Wolf <ol*****@inspire.net.nz> spoke thus:
The moral of the story: the programmer MUST know what he's doing when
combining signed and unsigned operands.
A subset of the moral: the programmer MUST know what he's doing
when using C.


Sounds like a superset to me.

--
Christopher Benson-Manica | I *should* know what I'm talking about - if I
ataru(at)cyberspace.org | don't, I need to know. Flames welcome.
Nov 14 '05 #9
On 30 Jan 2004 14:01:30 GMT, Da*****@cern.ch (Dan Pop) wrote:
In <40***************@doe.carleton.ca> Fred Ma <fm*@doe.carleton.ca> writes:
The confusion is that for a binary operator,
where both operands are of integral type.
(Binary arithmetic, comparison or bitwise operator; the "usual
arithmetic conversions" do not apply for shifts and && and || . Or
(noncompound) assignment or comma, which are syntactically binary
although not what people usually think of as binary operations.)
If either operand is unsigned long, the other
operand is converted to unsigned long. The
confusing thing is that this has a higher
priority than checking for longs. Similarly,
unsigned ints are checked for before ints.
So it is likely that an int is converted to
an unsigned int, or a long is converted to an
unsigned long.


This is correct.

Nit: correct for C89, where ulong is the highest type; in C99 only if
the other operand does not have rank higher than ulong.

Otherwise concur.
- David.Thompson1 at worldnet.att.net
Nov 14 '05 #10
