Bytes IT Community

signed int overflow

You know how the saying goes that *unsigned* overflow is...
well.. defined. That means that if you add 1 to its maximum
value, then you know exactly what value it will have
afterward on all implementations.

But then you have signed integers. Let's say a signed
integer is set to its maximum positive value. If you add 1
to it, what happens?:

A) It's implementation defined what value it will
represent, eg. it could roll back around to 0, or it could
roll back around to the maximum negative number.

B) Undefined behaviour.
Please say A!
For instance:

int main()
{
//on a 32-Bit machine
signed int i = 2147483647;

++i;
}

Is that just plain old undefined behaviour, eg. the machine
can blow up and spit nitric acid in your face if it wants
to...

or is it simply benignly just implementation specific what
value "i" will represent after the incrementation?

If B is the case, it looks like I'm off to write "class
signed_dof_int", where dof = defined overflow. I'll use a
template for it, so you can do it with all of the integral
types. What's the best way to figure out the maximum value
of a particular type? I believe that the Standard Library
contains some global const variables, stuff like MAX_INT,
but I'd prefer a method I could use within a template.

-JKop
Jul 22 '05 #1
44 Replies



"JKop" <NU**@NULL.NULL> wrote in message
You know how the saying goes that *unsigned* overflow is...
well.. defined. That means that if you add 1 to its maximum
value, then you know exactly what value it will have
afterward on all implementations.

But then you have signed integers. Let's say a signed
integer is set to its maximum positive value. If you add 1
to it, what happens?:

A) It's implementation defined what value it will
represent, eg. it could roll back around to 0, or it could
roll back around to the maximum negative number.

B) Undefined behaviour.
Please say A!


Nah..IIRC, it's B.

Sharad

Jul 22 '05 #2


"Sharad Kala" <no******************@yahoo.com> wrote in message
news:2r*************@uni-berlin.de...

"JKop" <NU**@NULL.NULL> wrote in message
You know how the saying goes that *unsigned* overflow is...
well.. defined. That means that if you add 1 to its maximum
value, then you know exactly what value it will have
afterward on all implementations.

But then you have signed integers. Let's say a signed
integer is set to its maximum positive value. If you add 1
to it, what happens?:

A) It's implementation defined what value it will
represent, eg. it could roll back around to 0, or it could
roll back around to the maximum negative number.

B) Undefined behaviour.
Please say A!


Nah..IIRC, it's B.

Sharad


Is unsigned overflow defined in all cases? Where in the C++ standard does it
say that?

The situation with signed overflow is ridiculous. The standard should say
overflow is defined as if twos complement arithmetic was being performed.

john
Jul 22 '05 #3

> Please say A!

On a machine architecture that implements two's complement arithmetic on
integer values, you can reinterpret the value as unsigned, do the increment
there, and reinterpret it again to make a good guess at what the result
will be. Compilers are not required to implement this, however logical it
may sound.

I repeat: even if you know how your platform works, down to the machine
instructions the architecture provides, the language does not care. If it
did, every platform would have to care, and if that involved a sequence of
instructions that couldn't be implemented efficiently, it would hurt
everyone working on those platforms. Hence, undefined behaviour.

But feel free to be more specific about which 32-bit architecture you have
in mind, and we can treat the rest of the thread as off-topic for comp.lang.c++ ;-)
Jul 22 '05 #4

JKop wrote:
You know how the saying goes that *unsigned* overflow is...
well.. defined. That means that if you add 1 to its maximum
value, then you know exactly what value it will have
afterward on all implementations.
Actually, the overflow behavior is always undefined, in theory even for
unsigned integers. But overflow can never happen for them, because the
standard says that unsigned integers don't overflow at all. The wrap-around
is just part of the normal unsigned integer behavior and not seen as overflow:
(2^n here of course means 2 raised to the power of n)

3.9.1 Fundamental types

....

4 Unsigned integers, declared /unsigned/, shall obey the laws of arithmetic
modulo 2^n where n is the number of bits in the value representation of
that particular size of integer. 41)

....
41) This implies that unsigned arithmetic does not overflow because a result
that cannot be represented by the resulting unsigned integer type is
reduced modulo the number that is one greater than the largest value that
can be represented by the resulting unsigned integer type.
But then you have signed integers. Let's say a signed
integer is set to its maximum positive value. If you add 1
to it, what happens?:

A) It's implementation defined what value it will
represent, eg. it could roll back around to 0, or it could
roll back around to the maximum negative number.

B) Undefined behaviour.
Please say A!
"I'm sorry Dave, I'm afraid I can't do that". The answer is B.
From the C++ standard:

If during the evaluation of an expression, the result is not mathematically
defined or not in the range of representable values for its type, the
behavior is undefined, unless such an expression is a constant expression,
in which case the program is ill-formed.
For instance:

int main()
{
//on a 32-Bit machine
signed int i = 2147483647;

++i;
}

Is that just plain old undefined behaviour, eg. the machine
can blow up and spit nitric acid in your face if it wants
to...

or is it simply benignly just implementation specific what
value "i" will represent after the incrementation?

If B is the case, it looks like I'm off to write "class
signed_dof_int", where dof = defined overflow. I'll use a
template for it, so you can do it with all of the integral
types. What's the best way to figure out the maximum value
of a particular type? I believe that the Standard Library
contains some global const variables, stuff like MAX_INT,
but I'd prefer a method I could use within a template.

std::numeric_limits<thetype>::max() from the <limits> header.

Jul 22 '05 #5

> Hence, undefined behaviour.
What I'm trying to establish is whether signed integer
overflow is

A) Just plain old undefined behaviour. The program is well
within its rights to crash.

B) It's just simply implementation specific what value the
signed integer variable will represent after the
incrementation.

I'm currently writing a program that works with signed
integers. I want to know if my program will crash, or if it
will just give different values on different
implementations should an overflow occur.
-JKop
Jul 22 '05 #6


Okay let's say that Standard C++... allows... a program to crash should you
cause a signed int to overflow.

Well... what the hell kind of implementation would allow this?! Even if one
does exist, it would have been abandoned 17 years 6 months and 2 days ago.

Imagine it, boot up WinXP. Open a few documents, play minesweeper, CRASH
(Oops sorry, this computer is shit, it crashes if signed integers overflow).

I'm open to further discussion on this... but at the moment it looks like
I'm going to ignore the directive that signed int overflow is undefined
behaviour and thus that the program may crash. Come on, it's bullshit!
-JKop
Jul 22 '05 #7


"John Harrison" <jo*************@hotmail.com> wrote in message

"Sharad Kala" <no******************@yahoo.com> wrote in message
news:2r*************@uni-berlin.de...

Is unsigned overflow defined in all cases? Where in the C++ standard does
it say that?


Overflow or underflow doesn't occur for unsigned integral types. If an
out-of-range value is assigned to them, it is interpreted modulo TYPE_MAX +
1. You may want to check Section 4.7 of the Standard.

Sharad


Jul 22 '05 #8

JKop wrote:

Okay let's say that Standard C++... allows... a program to crash should
you cause a signed int to overflow.

Well... what the hell kind of implementation would allow this?! Even if
one does exist, it would have been abandoned 17 years 6 months and 2 days
ago.
Why do you think a CPU that silently ignores overflows would be better than
one that signals such an error condition?
Anyway, a crash is not the only instance of undefined behavior. Another
could be an exception being thrown. However, if you don't catch that
exception, the result is similar to a crash - your program gets terminated.
AFAIK, there are implementations that throw an exception on a
division-by-zero, and I could imagine that there could be implementations
that throw on integer overflow.
Imagine it, boot up WinXP. Open a few documents, play minesweeper, CRASH
(Oops sorry, this computer is shit, it crashes if signed integers
overflow).
Then you could also say minesweeper is shit because it invokes undefined
behavior.
I'm open to further discussion on this... but at the moment it looks like
I'm going to ignore the directive that signed int overflow is undefined
behaviour and thus that the program may crash. Come on, it's bullshit!


I don't really get it. You want to overflow an integer and don't care for
the resulting value as long as you don't get a crash? What is the purpose
of that integer if the value doesn't matter?
Jul 22 '05 #9

> I don't really get it. You want to overflow an integer and don't care
for the resulting value as long as you don't get a crash? What is the
purpose of that integer if the value doesn't matter?

The user enters a year:

2004
1582
1906
-6000 (6000 BCE)

I'm writing a program at the moment that deals with dates. I'm not going to
bother putting in safe-guards for signed integer overflow, it's not worth
the effort. As such, if the user enters the following year:

2147483645

I don't care if they get inaccurate information, just so long as the machine
doesn't freeze or whatever.

(This program will be portable)

It seems though that according to the C++ Standard, it's well within its
rights to crash...
-JKop
Jul 22 '05 #10

In message <2r*************@uni-berlin.de>, John Harrison
<jo*************@hotmail.com> writes


Is unsigned overflow defined in all cases? Where in the C++ standard does it
say that?

The situation with signed overflow is ridiculous. The standard should say
overflow is defined as if twos complement arithmetic was being performed.

That would be expensive on hardware that doesn't use 2's complement.
Which goes against the "don't pay for what you don't use" philosophy.

--
Richard Herring
Jul 22 '05 #11


"Richard Herring" <ju**@[127.0.0.1]> wrote in message
news:tD**************@baesystems.com...
In message <2r*************@uni-berlin.de>, John Harrison
<jo*************@hotmail.com> writes

The situation with signed overflow is ridiculous. The standard should say
overflow is defined as if twos complement arithmetic was being performed.

That would be expensive on hardware that doesn't use 2's complement.
Which goes against the "don't pay for what you don't use" philosophy.


How much hardware like that is there? Couldn't a compiler for such hardware
provide a 'don't detect overflow' switch for the tiny number of users who
are running such hardware and care about the small expense. That way the
cost of unusual hardware is paid only by the people who have unusual
hardware, and all they have to do is to remember to use a compiler switch to
get the behaviour they want. At the moment everyone pays for the undefined
behaviour when only a very tiny minority of people would be inconvenienced
by a standard that enforces 2's complement and defined overflow behaviour.

John
Jul 22 '05 #12

In message <2r*************@uni-berlin.de>, John Harrison
<jo*************@hotmail.com> writes

"Richard Herring" <ju**@[127.0.0.1]> wrote in message
news:tD**************@baesystems.com...
In message <2r*************@uni-berlin.de>, John Harrison
<jo*************@hotmail.com> writes
>
>The situation with signed overflow is ridiculous. The standard should say
>overflow is defined as if twos complement arithmetic was being performed.
> That would be expensive on hardware that doesn't use 2's complement.
Which goes against the "don't pay for what you don't use" philosophy.


How much hardware like that is there?


I don't know, but the Standard implicitly provides for it.
Couldn't a compiler for such hardware
provide a 'don't detect overflow' switch for the tiny number of users
You don't know it's tiny.
who
are running such hardware and care about the small expense.
You don't know it's small.
That way the
cost of unusual hardware is paid only by the people who have unusual
hardware, and all they have to do is to remember to use a compiler switch to
get the behaviour they want.
Well, I suppose that would take care of the 1's complement and
signed-magnitude hardware. Now, how are you going to deal with the
hardware that generates an exception on overflow?
At the moment everyone pays for the undefined
behaviour when only a very tiny minority of people would be inconvenienced
by a standard that enforces 2's complement and defined overflow behaviour.


--
Richard Herring
Jul 22 '05 #13

John Harrison wrote in news:2r*************@uni-berlin.de in
comp.lang.c++:

"Richard Herring" <ju**@[127.0.0.1]> wrote in message
news:tD**************@baesystems.com...
In message <2r*************@uni-berlin.de>, John Harrison
<jo*************@hotmail.com> writes
>
>The situation with signed overflow is ridiculous. The standard
>should say overflow is defined as if twos complement arithmetic was
>being performed.
>

That would be expensive on hardware that doesn't use 2's complement.
Which goes against the "don't pay for what you don't use" philosophy.


How much hardware like that is there? Couldn't a compiler for such
hardware provide a 'don't detect overflow' switch for the tiny number
of users who are running such hardware and care about the small
expense. That way the cost of unusual hardware is paid only by the
people who have unusual hardware, and all they have to do is to
remember to use a compiler switch to get the behaviour they want. At
the moment everyone pays for the undefined behaviour when only a very
tiny minority of people would be inconvenienced by a standard that
enforces 2's complement and defined overflow behaviour.


I suspect there is a much better solution than this, which is
to define a 2's-complement signed type. Then people who need
deterministic overflow can have it, and people who don't care just
use the most optimal type provided by the hardware.

Here is my unsigned_int<> type:

http://www.victim-prime.dsl.pipex.co...int/index.html

The reason I haven't done signed_int<> yet is that I had some problems
deciding the best way to do it; inheritance worked fine with MSVC 7.1
(it emulated the signed-to-unsigned promotion almost perfectly), but
gcc 3.2 wasn't having it.

Rob.
--
http://www.victim-prime.dsl.pipex.com/
Jul 22 '05 #14


"Richard Herring" <ju**@[127.0.0.1]> wrote in message
news:hu**************@baesystems.com...
In message <2r*************@uni-berlin.de>, John Harrison
<jo*************@hotmail.com> writes

"Richard Herring" <ju**@[127.0.0.1]> wrote in message
news:tD**************@baesystems.com...
In message <2r*************@uni-berlin.de>, John Harrison
<jo*************@hotmail.com> writes
>
>The situation with signed overflow is ridiculous. The standard should say
>overflow is defined as if twos complement arithmetic was being performed.
That would be expensive on hardware that doesn't use 2's complement.
Which goes against the "don't pay for what you don't use" philosophy.


How much hardware like that is there?


I don't know, but the Standard implicitly provides for it.
Couldn't a compiler for such hardware
provide a 'don't detect overflow' switch for the tiny number of users


You don't know it's tiny.
who
are running such hardware and care about the small expense.


You don't know it's small.


I've never come across such hardware. I've heard of a few very old machines
that used ones complement. I'd be interested to hear of any machines still
in use that use anything other than twos complement.
That way the
cost of unusual hardware is paid only by the people who have unusual
hardware, and all they have to do is to remember to use a compiler switch to get the behaviour they want.


Well, I suppose that would take care of the 1's complement and
signed-magnitude hardware. Now, how are you going to deal with the
hardware that generates an exception on overflow?


Well such hardware would already have to deal an exception on unsigned
overflow. Or are you suggesting hardware that generates an exception on
signed overflow only? In any case I don't see any great problem.

john
Jul 22 '05 #15

> It seems though that according to the C++ Standard, it's well within its
rights to crash...


It only means that depending on what architecture the arithmetic is done on,
the result is DIFFERENT. Therefore, the standard cannot enforce a specific
rule for what the result should be, because that would force every
non-conforming implementation to add a lot of instructions for verifying
and fixing the "situation".

Hence, undefined behaviour is called in to help! Since the results are not
predictable (they vary from one architecture to another), the language washes
its hands of the issue: it doesn't care! As long as it doesn't care, crashing is
a perfectly valid result of signed integer addition overflow! This doesn't
require that a crash occurs, but if one did, the standard would be perfectly
happy because... you should get it by now... it doesn't care!

If you know your architecture and your compiler, and what instruction sequences
it generates, you are not invoking undefined behaviour. Adding two signed
integers is a WELL DEFINED operation in IA32 assembly language. Now all you
need to know is what your compiler does and you're ALL SET! It's
PERFECTLY LEGAL! But when you look at it from a pure C++ standard's point of
view, CRASHING IS ALSO PERFECTLY LEGAL! (The point is that on selected
platforms such an operation is NOT undefined!)

You just have to know the context and, voila', you can get the job done (at
cost of the resulting code being LESS portable, which is a relative metric
anyway). Strange, lately a lot of implementation specific issues have
cropped up, what's going on!? :)
Jul 22 '05 #16

assaarpa wrote:
It seems though that according to the C++ Standard, it's well within its
rights to crash...
It only means that depending on what architecture the arithmetic is done on,
the result is DIFFERENT. Therefore, the standard cannot enforce a specific
rule, what the result should be, because that would force every
non-conforming implementation to add a lot of instructions for
verification and fixing the "situation".


However, the result could still be implementation-defined or unspecified.
Hence, undefined behaviour is called to help! Since the results are not
predictable (vary from one architecture to another)


But on one specific architecture, they are usually predictable.

Jul 22 '05 #17

* JKop:
You know how the saying goes that *unsigned* overflow is...
well.. defined. That means that if you add 1 to its maximum
value, then you know exactly what value it will have
afterward on all implementations.

But then you have signed integers. Let's say a signed
integer is set to its maximum positive value. If you add 1
to it, what happens?:

A) It's implementation defined what value it will
represent, eg. it could roll back around to 0, or it could
roll back around to the maximum negative number.

B) Undefined behaviour.

Please say A!
Sorry, it's B.

You may want to check out the recent thread "int overflow gives UB"
in [comp.std.c++] where this is discussed to death.

Summary: there doesn't seem to be a strong enough need for this to
be defined behavior to outweight the cost of changing the standard
(and motivate someone to do the work); on the other hand, there does
not now seem to be any valid technical arguments against standardizing
the behavior, and that includes the issue of hardware support.
For instance:

int main()
{
//on a 32-Bit machine
signed int i = 2147483647;

++i;
}

Is that just plain old undefined behaviour, eg. the machine
can blow up and spit nitric acid in your face if it wants
to...

or is it simply benignly just implementation specific what
value "i" will represent after the incrementation?

If B is the case, it looks like I'm off to write "class
signed_dof_int", where dof = defined overflow.


You don't have to because there's not one single existing C++
implementation (that I know of) where the result isn't two's
complement wrapping -- and ironically and paradoxically that
extreme case of existing practice is part of the reason why it's
not going to be standardized...

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jul 22 '05 #18

On Mon, 20 Sep 2004 13:22:49 GMT, al***@start.no (Alf P.
Steinbach) wrote:
* JKop:

I'm off to write "class
signed_dof_int", where dof = defined overflow.


You don't have to because there's not one single existing C++
implementation (that I know of) where the result isn't two's
complement wrapping -- and ironically and paradoxically that
extreme case of existing practice is part of the reason why it's
not going to be standardized...


Floating point is a different case, though. Some compilers use
some "NaN" or "Inf" special values to represent errors.

--
Andre Heinen
My address, rot13-encoded: n qbg urvara ng rhebcrnayvax qbg pbz
Jul 22 '05 #19

"John Harrison" <jo*************@hotmail.com> wrote in message
news:2r*************@uni-berlin.de...
Is unsigned overflow defined in all cases? Where in the C++ standard does
it say that?
Subclause 3.9.1, paragraph 4:
Unsigned integers, declared unsigned, shall obey the laws of arithmetic
modulo 2^n where n is the number of bits in the value representation of that
particular size of integer. [Footnote: This implies that unsigned arithmetic
does not overflow because a result that cannot be represented by the
resulting unsigned integer type is reduced modulo the number that is one
greater than the largest value that can be represented by the resulting
unsigned integer type.]
The situation with signed overflow is ridiculous. The standard should say
overflow is defined as if twos complement arithmetic was being performed.


Why? That would prohibit implementations from treating integer overflow as
an error.
Jul 22 '05 #20


"Andrew Koenig" <ar*@acm.org> wrote in message
news:ZZ*********************@bgtnsc05-news.ops.worldnet.att.net...
"John Harrison" <jo*************@hotmail.com> wrote in message
news:2r*************@uni-berlin.de...
Why? That would prohibit implementations from treating integer overflow as an error.


They already are prevented from treating unsigned integer overflow as an
error.

john
Jul 22 '05 #21

"JKop" <NU**@NULL.NULL> wrote in message
news:Xh*******************@news.indigo.ie...
Okay let's say that Standard C++... allows... a program to crash should
you cause a signed int to overflow.
It does.
Well... what the hell kind of implementation would allow this?! Even if
one does exist, it would have been abandoned 17 years 6 months and 2 days ago.
The kind of implementation that would allow it is the kind of implementation
that would try to detect programmer errors rather than ignoring them.
Imagine it, boot up WinXP. Open a few documents, play minesweeper, CRASH
(Oops sorry, this computer is shit, it crashes if signed integers
overflow).
This kind of argument is often referred to as a "straw man." You take the
claims of the viewpoint you oppose, distort them to make them sound
ridiculous, then claim that they can't be right because they sound
ridiculous.

In practice, it is often far from clear whether it is worse for a program to
crash or to quietly give incorrect or nonsensical results.
I'm open to further discussion on this... but at the moment it looks like
I'm going to ignore the directive that signed int overflow is undefined
behaviour and thus that the program may crash. Come on, it's bullshit!


I don't think you're open to further discussion. At least you're not
behaving like you are.
Jul 22 '05 #22

"John Harrison" <jo*************@hotmail.com> wrote in message
news:2r*************@uni-berlin.de...
Why? That would prohibit implementations from treating integer overflow
as an error.
They already are prevented from treating unsigned integer overflow as an
error.


I don't understand your point. Are you suggesting that because unsigned
arithmetic is defined as being modulo 2^n, signed arithmetic should be
defined that way as well? If so, you will have to make a case for that
claim if you wish to convince me.
Jul 22 '05 #23

* Andrew Koenig:
* John Harrison:

The situation with signed overflow is ridiculous. The standard should say
overflow is defined as if twos complement arithmetic was being performed.


Why? That would prohibit implementations from treating integer overflow as
an error.


The current UB prevents programs from checking for error and from utilizing
the overflow behavior, in a formally portable manner.

That's far worse and far more concrete than the extremely marginal and
hypothetical possibility of some error-detecting implementation, which
AFAIK does not exist (if it had some utility, presumably it would exist).

So the argument about allowing an implementation to treat overflow as
error is a fallacy. :-)

On the other hand, the thread in [comp.std.c++] showed that the arguments
in favor of standardization are not especially forceful.

And there the matter rests, I think (personally I agree with John, but
nice-to-have is not need-to-have, and standardizing existing practice would be
a break with the tradition in C++ standardization, wouldn't it?).
Cheers,

- Alf

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jul 22 '05 #24

Andrew Koenig wrote:
"John Harrison" <jo*************@hotmail.com> wrote in message
news:2r*************@uni-berlin.de...
Why? That would prohibit implementations from treating integer overflow
as an error.

They already are prevented from treating unsigned integer overflow as an
error.


I don't understand your point. Are you suggesting that because unsigned
arithmetic is defined as being modulo 2^n, signed arithmetic should be
defined that way as well? If so, you will have to make a case for that
claim if you wish to convince me.


What about consistency? Why treat them so differently? Can you actually name
any platform that simply does the modulo on unsigned integers and something
completely different and unreproducible on signed ones? I'd say they should
either be both undefined or both not undefined. Actually, I think they
should both be implementation-defined.

Jul 22 '05 #25

> What about consistency? Why treat them so differently? Can you actually name
> any platform that simply does the modulo on unsigned integers and something
> completely different and unreproducible on signed ones? I'd say they should
> either be both undefined or both not undefined. Actually, I think they
> should both be implementation-defined.


Speaking of which, does anyone know if someone has ever compiled a chart of
some kind (doesn't matter if it's out of date by 10 years :) where some
implementation-specific things are mapped out? -> loading
www.google.com ... okay, let's re-phrase: does anyone know of any GOOD maps of
this sort? :)
Jul 22 '05 #26


"Rolf Magnus" <ra******@t-online.de> wrote in message news:ci*************@news.t-online.com...

What about consistency? Why treat them so differently? Can you actually name
any platform that simply does the modulo on unsigned integers and something
completely different and unreproducible on signed ones? I'd say they should
either be both undefined or both not undefined. Actually, I think they
should both be implementation-defined.


Certainly, I can. Any one's complement machine most likely does a non-modulo
answer on integer roll over. Further, I have worked on machines (Gould SEL) that
trapped integer overflows.
Jul 22 '05 #27

* Ron Natalie:

"Rolf Magnus" <ra******@t-online.de> wrote in message news:ci*************@news.t-online.com...

What about consistency? Why treat them so differently? Can you actually name
any platform that simply does the modulo on unsigned integers and something
completely different and unreproducible on signed ones? I'd say they should
either be both undefined or both not undefined. Actually, I think they
should both be implementation-defined.

Certainly, I can. Any one's complement machine most likely does a non-modulo
answer on integer roll over.


Have you an example of a one's complement machine in use, for which a C++
implementation exists?

Further, I have worked on machines (Gould SEL) that trapped integer overflows.


I just Googled that and found e.g. <url: http://213.130.56.75/gould.html>
<quote>
Real-time programs should not crash. Halting hardware and man-in-the-loop
simulations damages equipment and hurts people. Gould-SEL run-time systems are
therefore tolerant of division by zero (It is assumed that the result can be
ignored)
</quote>

Given that Gould SEL runtime systems tolerate even division by zero it seems
amazing that they do not tolerate signed integer overflow; comment?

Anyway, to be conforming a C++ compiler for Gould SEL cannot trap overflow on
unsigned ints, and it's no hassle to implement signed ints in terms of
unsigned ints (addition and subtraction map directly to unsigned operations
with no extra processing, multiplication and division require a small check).

So was your experience with a C++ compiler (if so which), or another language?

Remember that behavior in a given language is not necessarily a constraint
imposed by the machine, and as mentioned above, even with silly constraints
imposed by the machine a language implementation can easily work around it.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jul 22 '05 #28

Ron Natalie wrote:
"Rolf Magnus" <ra******@t-online.de> wrote in message
news:ci*************@news.t-online.com...

What about consistency? Why treat them so differently? Can you actually
name any platform that simply does the modulo on unsigned integers and
something completely different and unreproducible on signed ones? I'd say
they should either be both undefined or both not undefined. Actually, I
think they should both be implementation-defined.

Certainly, I can.


Why don't you do it then? :-)
Any one's complement machine most likely does a non-modulo answer on
integer roll over. Further, I have worked on machines (Gould
SEL) that trapped integer overflows.


And it did so only for signed ones and not unsigned?

Jul 22 '05 #29

In article <%N*******************@news.indigo.ie>,
JKop <NU**@NULL.NULL> wrote:
>> I don't really get it. You want to overflow an integer and don't care
>> for the resulting value as long as you don't get a crash? What is the
>> purpose of that integer if the value doesn't matter?
>
> The user enters a year:
>
> 2004
> 1582
> 1906
> -6000 (6000 BCE)
>
> I'm writing a program at the moment that deals with dates. I'm not going
> to bother putting in safeguards for signed integer overflow; it's not
> worth the effort. As such, if the user enters the following year:
>
> 2147483645
>
> I don't care if they get inaccurate information, just so long as the
> machine doesn't freeze or whatever.
>
> (This program will be portable)
>
> It seems though that according to the C++ Standard, it's well within its
> rights to crash...

Indeed it is.

Playing devil's advocate:
BTW, would you care if instead of dates, you were programming
the systems at the bank where your bank account is, and those
values were your paycheck? Or even sticking with dates,
the dates your pension should kick in and direct deposit
your account? This program will be portable (sic) :)
(I'm not necessarily presenting a solution here (yet),
mostly just raising more situations.)
--
Greg Comeau / Comeau C++ 4.3.3, for C++03 core language support
Comeau C/C++ ONLINE ==> http://www.comeaucomputing.com/tryitout
World Class Compilers: Breathtaking C++, Amazing C99, Fabulous C90.
Comeau C/C++ with Dinkumware's Libraries... Have you tried it?
Jul 22 '05 #30

In article <41****************@news.individual.net>,
Alf P. Steinbach <al***@start.no> wrote:
> * Greg Comeau -> JKop:
>> Playing devil's advocate:
>> BTW, would you care if instead of dates, you were programming
>> the systems at the bank where your bank account is, and those
>> values were your paycheck? Or even sticking with dates,
>> the dates your pension should kick in and direct deposit
>> your account? This program will be portable (sic) :)
>> (I'm not necessarily presenting a solution here (yet),
>> mostly just raising more situations.)
>
> The question you raise is important (it's the same question raised by
> Andrew Koenig, and I suspect the same misconception that UB is in some
> way a Good Thing in this context).

Well, it certainly allows for flexibility.

> With UB there's no portable and efficient way to do error checking.

This may be so for UB, but...

> With defined behavior it's trivial (well, uhm, trivial and trivial,
> a few folks including me, ahem, managed to get it wrong, but at least
> it's trivial for most cases, and when you know general solution... ;-) ).

... that does not mean the error checking can't be done through
other means, if necessary.
--
Greg Comeau / Comeau C++ 4.3.3, for C++03 core language support
Comeau C/C++ ONLINE ==> http://www.comeaucomputing.com/tryitout
World Class Compilers: Breathtaking C++, Amazing C99, Fabulous C90.
Comeau C/C++ with Dinkumware's Libraries... Have you tried it?
Jul 22 '05 #31

Alf P. Steinbach wrote:
* Greg Comeau -> JKop:

Playing devil advocate:
BTW, would you care if instead of dates, you were programming
the systems at the bank where your bank account is, and those
values were your paycheck? Or even sticking with dates,
the dates your pension should kick it and direct deposit
your account? This program will be portable (sic) :)
(I'm not necessarily presenting a solution here (yet),
mostly just raising more situations.)
The question you raise is important (it's the same question raised by
Andrew Koenig, and I suspect the same misconception that UB is in some
way a Good Thing in this context).

With UB there's no portable and efficient way to do error checking.


There is no portable and efficient way anyway, since lots of systems don't
support that in hardware.
With defined behavior it's trivial (well, uhm, trivial and trivial,
a few folks including me, ahem, managed to get it wrong, but at least
it's trivial for most cases, and when you know general solution... ;-) ).


It is? Could you give an example?

Jul 22 '05 #32

> There is no portable and efficient way anyway, since lots of systems
> don't support that in hardware.


Sure there is! Hijack the operators. If you see that it's
about to go past its limit, send it back to zero, or just
freeze it at its maximum value.

Simple stuff, if you're bothered writing it...
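For what it's worth, the "hijack the operators" idea might look something like the following. This is purely an illustrative sketch of saturating (freeze-at-maximum) behaviour; the class name and interface are this example's own invention:

```cpp
#include <cassert>
#include <climits>

// Illustrative sketch of a saturating integer wrapper: operator+
// clamps at INT_MAX / INT_MIN instead of overflowing.
class sat_int {
public:
    explicit sat_int(int v) : value_(v) {}
    int get() const { return value_; }

    sat_int operator+(sat_int rhs) const {
        int a = value_, b = rhs.value_;
        // Pre-check with comparisons only, so no signed overflow occurs.
        if (b > 0 && a > INT_MAX - b) return sat_int(INT_MAX); // clamp high
        if (b < 0 && a < INT_MIN - b) return sat_int(INT_MIN); // clamp low
        return sat_int(a + b);
    }

private:
    int value_;
};
```

A "send it back to zero" policy would just return `sat_int(0)` from the clamping branches instead.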
-JKop
Jul 22 '05 #33

* Rolf Magnus:
Alf P. Steinbach wrote:
* Greg Comeau -> JKop:

Playing devil advocate:
BTW, would you care if instead of dates, you were programming
the systems at the bank where your bank account is, and those
values were your paycheck? Or even sticking with dates,
the dates your pension should kick it and direct deposit
your account? This program will be portable (sic) :)
(I'm not necessarily presenting a solution here (yet),
mostly just raising more situations.)


The question you raise is important (it's the same question raised by
Andrew Koenig, and I suspect the same misconception that UB is in some
way a Good Thing in this context).

With UB there's no portable and efficient way to do error checking.


There is no portable and efficient way anyway, since lots of systems don't
support that in hardware.


That is incorrect.

All systems that support arithmetic and conditionals (which they must to
be Turing complete) support error checking.

With defined behavior it's trivial (well, uhm, trivial and trivial,
a few folks including me, ahem, managed to get it wrong, but at least
it's trivial for most cases, and when you know general solution... ;-) ).


It is? Could you give an example?


The simplest example is the addition of two non-negative numbers. With 2's
complement signed integers you have overflow if and only if the result is
negative. The rest was discussed in the [comp.std.c++] thread about this.
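The check described above can be sketched without invoking UB by doing the addition in unsigned (where wraparound is defined) and testing whether the wrapped value would be negative, i.e. whether it exceeds INT_MAX. The function name here is illustrative, not from the thread:

```cpp
#include <cassert>
#include <climits>

// Sketch of the claim above: for two non-negative ints, the sum
// overflows exactly when the 2's complement wrapped result would be
// negative. The add is done in unsigned so the wrap is well defined;
// "sum > INT_MAX" is the sign-bit test in disguise.
bool add_overflows_nonneg(int a, int b)
{
    assert(a >= 0 && b >= 0); // precondition from the post
    unsigned int sum = static_cast<unsigned int>(a)
                     + static_cast<unsigned int>(b);
    return sum > static_cast<unsigned int>(INT_MAX);
}
```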

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jul 22 '05 #34


"JKop" <NU**@NULL.NULL> wrote in message news:5z*******************@news.indigo.ie...
> Simple stuff, if you're bothered writing it...

Nobody said it was hard... the issue is how inefficient do you want to make
the language.
Jul 22 '05 #35

On Wed, 22 Sep 2004 11:07:30 GMT, al***@start.no (Alf P. Steinbach)
wrote:
* Rolf Magnus:
Alf P. Steinbach wrote:
> * Greg Comeau -> JKop:
>>
>> Playing devil advocate:
>> BTW, would you care if instead of dates, you were programming
>> the systems at the bank where your bank account is, and those
>> values were your paycheck? Or even sticking with dates,
>> the dates your pension should kick it and direct deposit
>> your account? This program will be portable (sic) :)
>> (I'm not necessarily presenting a solution here (yet),
>> mostly just raising more situations.)
>
> The question you raise is important (it's the same question raised by
> Andrew Koenig, and I suspect the same misconception that UB is in some
> way a Good Thing in this context).
>
> With UB there's no portable and efficient way to do error checking.


There is no portable and efficient way anyway, since lots of systems don't
support that in hardware.


That is incorrect.

All systems that support arithmetic and conditionals (which they must to
be Turing complete) support error checking.


But what if the CPU traps when it detects signed integer overflow (gcc has
an option for this; I can certainly imagine CPUs designed for mission
critical applications doing this)? On such a machine, you'd have to do
the arithmetic in 64 bits and then check for 32-bit overflow.
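The widen-then-check workaround described above might be sketched as follows. The function name is this example's own, and `int64_t` is assumed available (in 2004 C++ that typically meant a compiler extension such as `long long`):

```cpp
#include <cassert>
#include <climits>
#include <stdint.h>

// Sketch: do the add in 64 bits (which cannot overflow for 32-bit
// operands), then range-check before narrowing back to 32 bits. No
// 32-bit signed overflow ever happens, so nothing can trap.
bool checked_add32(int32_t a, int32_t b, int32_t* out)
{
    int64_t wide = static_cast<int64_t>(a) + static_cast<int64_t>(b);
    if (wide > INT32_MAX || wide < INT32_MIN)
        return false;                       // would have overflowed
    *out = static_cast<int32_t>(wide);
    return true;
}
```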
> With defined behavior it's trivial (well, uhm, trivial and trivial,
> a few folks including me, ahem, managed to get it wrong, but at least
> it's trivial for most cases, and when you know general solution... ;-) ).


It is? Could you give an example?


The simplest example is the addition of two non-negative numbers. With 2's
complement signed integer you have overflow if and only if the result is
negative. The rest was discussed in the [comp.std.c++] thread about this.


If your system documents that it does this, and you only intend to
port to other machines that also do this, then what's the problem? The
code isn't 100% portable, but it is fine for your needs.

Tom
Jul 22 '05 #36

* Tom Widmer:
On Wed, 22 Sep 2004 11:07:30 GMT, al***@start.no (Alf P. Steinbach)
wrote:
All systems that support arithmetic and conditionals (which they must to
be Turing complete) support error checking.
But if the CPU traps when it detects signed integer overflow (gcc has
an option for this, I can certainly imagine CPUs designed for mission
critical applications doing this)? On such a machine, you'd have to do
the arithmetic in 64-bits and then check for 32-bit overflow.


No.

> With defined behavior it's trivial (well, uhm, trivial and trivial,
> a few folks including me, ahem, managed to get it wrong, but at least
> it's trivial for most cases, and when you know general solution... ;-) ).

It is? Could you give an example?


The simplest example is the addition of two non-negative numbers. With 2's
complement signed integer you have overflow if and only if the result is
negative. The rest was discussed in the [comp.std.c++] thread about this.


> If your system documents that it does this, and you only intend to
> port to other machines that also do this,

hypothetical

> then what's the problem?

UB

> The code isn't 100% portable, but it is fine for your needs.

no

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jul 22 '05 #37

On Wed, 22 Sep 2004 13:11:35 GMT, al***@start.no (Alf P. Steinbach)
wrote:
* Tom Widmer:
On Wed, 22 Sep 2004 11:07:30 GMT, al***@start.no (Alf P. Steinbach)
wrote:
>All systems that support arithmetic and conditionals (which they must to
>be Turing complete) support error checking.


But if the CPU traps when it detects signed integer overflow (gcc has
an option for this, I can certainly imagine CPUs designed for mission
critical applications doing this)? On such a machine, you'd have to do
the arithmetic in 64-bits and then check for 32-bit overflow.


No.


You have two values,
int a = blah;
int b = blah2;

How do you portably, efficiently and simply check whether "a + b"
would overflow or not? (This is a genuine question - I don't know
whether it is possible or not.) I can think of inefficient techniques,
for sure.
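One answer, for what it's worth: the overflow can be tested *before* the addition, using only comparisons against the limits from `<climits>`. No signed arithmetic in the check can itself overflow, so there is no UB on any conforming implementation. The function name is this example's own:

```cpp
#include <cassert>
#include <climits>

// Portable pre-check for signed addition overflow, handling both
// signs of b. Every subtraction here is provably in range, so the
// check itself never overflows.
bool add_would_overflow(int a, int b)
{
    if (b > 0)
        return a > INT_MAX - b;   // a + b would exceed INT_MAX
    if (b < 0)
        return a < INT_MIN - b;   // a + b would fall below INT_MIN
    return false;                 // b == 0 never overflows
}
```

Whether this counts as "efficient" is the open question: it costs a branch and a subtraction per addition, which is exactly the overhead the thread is arguing about.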
>> > With defined behavior it's trivial (well, uhm, trivial and trivial,
>> > a few folks including me, ahem, managed to get it wrong, but at least
>> > it's trivial for most cases, and when you know general solution... ;-) ).
>>
>> It is? Could you give an example?
>
>The simplest example is the addition of two non-negative numbers. With 2's
>complement signed integer you have overflow if and only if the result is
>negative. The rest was discussed in the [comp.std.c++] thread about this.


If your system documents that it does this, and you only intend to
port to other machines that also do this,

hypothetical

then what's the problem?


UB


And indeed it is UB. On many platforms, you get 2s complement
wraparound, but on some you get a CPU trap, and on some you get 1s
complement results. Requiring anything else in the language makes
*all* integer arithmetic inefficient, at least on platforms which
don't naturally do the mandated behaviour (whatever you want that to
be).
The code isn't 100% portable, but it is fine for your needs.


no


What, precisely, do you want the standard to say then, if not UB? In
what way isn't it sufficient for your needs?

Tom
Jul 22 '05 #38

* Tom Widmer:
And indeed it is UB. On many platforms, you get 2s complement
wraparound, but on some you get a CPU trap, and on some you get 1s
complement results. Requiring anything else in the language makes
*all* integer arithmetic inefficient, at least on platforms which
don't naturally do the mandated behaviour (whatever you want that to
be).


AFAIK the above is incorrect.

No-one has been able to come up with examples of C++ compilers that do
anything but 2's complement.

If you have even a single such example please give it.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jul 22 '05 #39


"Alf P. Steinbach" <al***@start.no> wrote in message news:41****************@news.individual.net...
No-one has been able to come up with examples of C++ compilers that do
anything but 2's complement.

If you have even a single such example please give it.


I can't say I've used C++ on it, but I've certainly used C on the UNISYS 1100.
Not only is it a 1's complement machine, it's also got a 36-bit word size and
some very odd pointer issues. However, I'm fairly sure there is a C++
compiler for it as well.

If for no other reason than that the C standard explicitly permits such
architectures, they must be propagated into C++. If we were free to divorce
ourselves from certain C stupidities, C++ could be a cleaner language.
Jul 22 '05 #40

* Ron Natalie:

"Alf P. Steinbach" <al***@start.no> wrote in message news:41****************@news.individual.net...
No-one has been able to come up with examples of C++ compilers that do
anything but 2's complement.

If you have even a single such example please give it.
I can't say I've used C++ on it, but I've certainly used C on UNISYS 1100.
Not only is it a 1's complement machine, it's also got a 36 bit wordsize and
some very odd pointer issues. However, I'm fairly sure there is a C++
compiler for it as well.


Perhaps you're fairly sure because with so many people saying there are
such compilers, surely there must be one? Well, drink blood. Billions
upon billions of gnats insist it is a lovely drink.

Now even you, who have worked on a 1's complement machine, don't know of
a C++ compiler that actually uses that approach.

Perhaps there is some pre-standard C++ compiler for that machine, but I'd
dismiss that.

If for no other reason than the C standard permits such architectures explicitly
they must be propagated in C++. If we were free to divorce ourselves from
certain C stupidities, C++ could be a cleaner language.


Agree with the latter but not the former sentence. ;-)

Say that someone (not quite of their right mind) marketed a one's complement
micro-controller with a C++ compiler that also used one's complement (these
are two different things). What are the chances that compiler would be a
conforming one? About zilch, nada, nix; so why should we insist on providing
a path to conformance in one particular small area where the impact on all
_real_ and actual language implementations is negative?

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jul 22 '05 #41

On Thu, 23 Sep 2004 11:59:12 GMT, al***@start.no (Alf P. Steinbach)
wrote:
* Tom Widmer:
And indeed it is UB. On many platforms, you get 2s complement
wraparound, but on some you get a CPU trap, and on some you get 1s
complement results. Requiring anything else in the language makes
*all* integer arithmetic inefficient, at least on platforms which
don't naturally do the mandated behaviour (whatever you want that to
be).


AFAIK the above is incorrect.

No-one has been able to come up with examples of C++ compilers that do
anything but 2's complement.

If you have even a single such example please give it.


GCC on all platforms can be configured not to do wraparound overflow:
http://gcc.gnu.org/onlinedocs/gcc-3....de-Gen-Options
See -ftrapv

I suspect there are programs that are relying on this "defined
undefined" behaviour, and such features shouldn't really make the mode
non-conforming IMHO.

As for other signed integer formats, I don't know of any current
machines using them, but you could try asking in an embedded
programming group; it may be that there are no current systems that
don't use 2s complement at all, but there may be some DSPs and similar
that do.

Tom
Jul 22 '05 #42

* Tom Widmer:
On Thu, 23 Sep 2004 11:59:12 GMT, al***@start.no (Alf P. Steinbach)
wrote:
* Tom Widmer:
And indeed it is UB. On many platforms, you get 2s complement
wraparound, but on some you get a CPU trap, and on some you get 1s
complement results. Requiring anything else in the language makes
*all* integer arithmetic inefficient, at least on platforms which
don't naturally do the mandated behaviour (whatever you want that to
be).
AFAIK the above is incorrect.

No-one has been able to come up with examples of C++ compilers that do
anything but 2's complement.

If you have even a single such example please give it.


GCC on all platforms can be configured not to do wraparound overflow:
http://gcc.gnu.org/onlinedocs/gcc-3....de-Gen-Options
See -ftrapv

I suspect there are programs that are relying on this "defined
undefined" behaviour,


I suspect not... If so an example should have popped up long ago.

and such features shouldn't really make the mode non-conforming IMHO.
That concerns the wording of the standard.

For example, one could allow GCC's option by requiring a symbol along the
lines of NDEBUG, say _TRAPV_INT. Any old program relying on the
GCC compiler option (of which none are known) would then be conforming
just by pretending it had been compiled with this symbol defined. And
actual, real programs could then get standard-mandated trapping --
with implementation-defined effect unless "trap" is also defined -- by
using the symbol.

Best of both worlds, that. :-)

As for other signed integer formats, I don't know of any current
machines using them, but you could try asking in an embedded
programming group; it may be that there are no current systems that
don't use 2s complement at all, but there may be some DSPs and similar
that do.


Since this has been discussed many times in many forums with many experts in
this area, I rather doubt it; anyway, the question is about C++ compilers.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jul 22 '05 #43

On Thu, 23 Sep 2004 16:54:08 GMT, al***@start.no (Alf P. Steinbach)
wrote:
* Tom Widmer:
On Thu, 23 Sep 2004 11:59:12 GMT, al***@start.no (Alf P. Steinbach)
wrote:
>* Tom Widmer:
>> And indeed it is UB. On many platforms, you get 2s complement
>> wraparound, but on some you get a CPU trap, and on some you get 1s
>> complement results. Requiring anything else in the language makes
>> *all* integer arithmetic inefficient, at least on platforms which
>> don't naturally do the mandated behaviour (whatever you want that to
>> be).
>
>AFAIK the above is incorrect.
>
>No-one has been able to come up with examples of C++ compilers that do
>anything but 2's complement.
>
>If you have even a single such example please give it.


GCC on all platforms can be configured not to do wraparound overflow:
http://gcc.gnu.org/onlinedocs/gcc-3....de-Gen-Options
See -ftrapv

I suspect there are programs that are relying on this "defined
undefined" behaviour,


I suspect not... If so an example should have popped up long ago.


This newsgroup and newsgroups in general are only read by a small
proportion of the C++ community.
and such features shouldn't really make the mode non-conforming IMHO.


That concerns the wording of the standard.

For example, one could allow GCC's option by requiring a symbol like
NDEBUG, for example, _TRAPV_INT. Any old program relying on the
GCC compiler option (of which none are known) would then be conforming
just by pretending it had been compiled with this symbol defined. And
actual, real programs could then get standard-mandated trapping --
with implementation effect unless "trap" is also defined -- by using
the symbol.

Best of both worlds, that. :-)


Well, the standard would have to define what a "trap" is. A particular
signal? Not very portable to Windows I think...

I suppose: "If _SIGNED_OVERFLOW_UNDEFINED is defined, the effects of
signed integer overflow are undefined, otherwise they are defined as
follows:"... Then implementations on funny (or future) processors or
with code generated for traps could just define that symbol.
As for other signed integer formats, I don't know of any current
machines using them, but you could try asking in an embedded
programming group; it may be that there are no current systems that
don't use 2s complement at all, but there may be some DSPs and similar
that do.


Since this has been discussed many times in many forums with many experts in
this area, I rather doubt it; anyway the question is C++ compilers.


If Greg Comeau doesn't know of any current systems, then I doubt there
are any - I imagine Comeau C++ has been ported to a number of embedded
platforms. Has Greg ever posted on this issue?

Tom
Jul 22 '05 #44

In article <gt********************************@4ax.com>,
Tom Widmer <to********@hotmail.com> wrote:
If Greg Comeau doesn't know of any current systems, then I doubt there
are any - I imagine Comeau C++ has been ported to a number of embedded
platforms. Has Greg ever posted on this issue?


I have not. Partly because I can't remember (one would think
I would, but well, everything is always important on each port,
so lots gets blurry after a while), and partly because in the
cases where I can remember, I'm not allowed to say.
--
Greg Comeau / Comeau C++ 4.3.3, for C++03 core language support
Comeau C/C++ ONLINE ==> http://www.comeaucomputing.com/tryitout
World Class Compilers: Breathtaking C++, Amazing C99, Fabulous C90.
Comeau C/C++ with Dinkumware's Libraries... Have you tried it?
Jul 22 '05 #45
