
# detecting integer overflow

Is there any way by which the overflow during addition of two integers
may be detected?

e.g.

Suppose we have three unsigned integers: a, b, c.
We are doing a check like

if ((a + b) > c)
    do something;
else
    do something else.

If the addition of a and b generates overflow, the check may fail even
if a + b is larger than c.
How can we avoid such failures?

Thanks in advance for any help.

Dec 5 '05 #1
ju**********@yahoo.co.in wrote:
Is there any way by which the overflow during addition of two integers
may be detected?

By the standard, I would personally guess that the state of an
overflowed integer is undefined. On most PCs, though, I would imagine
that the result will always be less than the lower of the two values
added -- which would be a way to test.

if ((c = (a + b)) > (a < b ? a : b)) {
....
}
else { /* overflowed! */
....
}

I'm just saying this off the top of my head, and you shouldn't trust it
until you've tested thoroughly, though.

Dec 5 '05 #2
ju**********@yahoo.co.in wrote:
Is there any way by which the overflow during addition of two integers
may be detected?

e.g.

Suppose we have three unsigned integers: a, b, c.
We are doing a check like

if ((a + b) > c)
    do something;
else
    do something else.

If the addition of a and b generates overflow, the check may fail even
if a + b is larger than c.
How can we avoid such failures?

If the intention is just to avoid a situation like the one mentioned, a
quick approach could be to promote the type of the variables from int
to double or float just inside the comparison by using an explicit
typecast.

if (((double)a + b) > c)
    do something;
else
    do something else;

Of course a floating-point type can overflow or lose precision too, but
a double can represent the sum of two 32-bit integers exactly.

Dec 5 '05 #3
ju**********@yahoo.co.in wrote:
Is there any way by which the overflow during addition of two integers
may be detected?

e.g.

Suppose we have three unsigned integers: a, b, c.
We are doing a check like

if ((a + b) > c)
    do something;
else
    do something else.

If the addition of a and b generates overflow, the check may fail even
if a + b is larger than c.
How can we avoid such failures?

Thanks in advance for any help.

Most of the PCs I've worked with wrap the integer (so you could test
for wraparound), but the standard clearly states that signed integer
overflow is UB, so it could result in a trap (or "nasal demons").

So to check in a conforming way, you could do the following,
assuming type int (it can be extended to any type):

/* Returns 0     -> no overflow when adding a and b
   Returns non-0 -> overflow!

   Checks *only* for overflow past INT_MAX,
   _so any negative sum returns 0._
   Overflow: a + b > INT_MAX (for all a > 0 and b > 0).
   Since we can't safely compute (a + b) (it might overflow),
   we rearrange the inequality as: b > INT_MAX - a
*/

/* needs <limits.h> for INT_MAX */
inline int isSumOverFlow(int a, int b)
{
    return a > 0 && b > 0 && b > (INT_MAX - a);
}
--
(Welcome) http://www.ungerhu.com/jxh/clc.welcome.txt
(clc FAQ) http://www.eskimo.com/~scs/C-faq/top.html
Dec 5 '05 #4
On 4 Dec 2005 23:41:03 -0800, "ju**********@yahoo.co.in"
<ju**********@yahoo.co.in> wrote in comp.lang.c:
Is there any way by which the overflow during addition of two integers
may be detected?
According to the C standard, overflow during operations with any of
the signed integer types causes undefined behavior. Once you invoke
undefined behavior, the C language no longer places any requirements
on the results. So for signed integer types, there is no standard way
to detect overflow after the fact.
e.g.

Suppose we have three unsigned integers: a, b, c.
We are doing a check like

if ((a + b) > c)
    do something;
else
    do something else.
The unsigned integer types, on the other hand, can't actually overflow
or underflow. The behavior is well defined. If the result of an
operation on unsigned types is outside the range of the type, the
value is adjusted by adding or subtracting (maximum value of the type
+ 1) until the result is within range.
If the addition of a and b generates overflow, the check may fail even
if a + b is larger than c.
How can we avoid such failures?

Thanks in advance for any help.

The best method is to perform the check before performing the
operation. The constants defined in <limits.h> can help.

Consider the signed integer types first. If the two values have
opposite sign, no overflow is possible with addition. The only
possibility is if both are positive or both negative. That doesn't
help much for unsigned types, which are always positive or 0, and
never negative.

#include <stdbool.h>
#include <limits.h>

bool will_overflow(unsigned a, unsigned b)
{
    /* a + b wraps exactly when b exceeds the headroom above a */
    if (b > (UINT_MAX - a))
    {
        return true;
    }
    else
    {
        return false;
    }
}

Similar logic can be performed for other types. UINT_MAX and related
macros are defined in <limits.h>.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Dec 5 '05 #5
Chris Williams wrote:

ju**********@yahoo.co.in wrote:
Is there any way by which the overflow
during addition of two integers may be detected ?

By the standard, I would personally guess that the state of an
overflown integer is undefined.
On most PCs though I would imagine that
the result will always be less than the lower of the two values
added--which would be a way to test.

if ((c = (a + b)) > (a < b ? a : b)) {
...
}
else { /* overflowed! */
...
}

I'm just saying this off the top of my head,
and you shouldn't trust it
until you've tested thoroughly, though.

Tested or not, code like that
prevents a C program from being defined.

--
pete
Dec 5 '05 #6
pete wrote:

Chris Williams wrote:

ju**********@yahoo.co.in wrote:
Is there any way by which the overflow
during addition of two integers may be detected ?

By the standard, I would personally guess that the state of an
overflown integer is undefined.
On most PCs though I would imagine that
the result will always be less than the lower of the two values
added--which would be a way to test.

if ((c = (a + b)) > (a < b ? a : b)) {
...
}
else { /* overflowed! */
...
}

I'm just saying this off the top of my head,
and you shouldn't trust it
until you've tested thoroughly, though.

Tested or not, code like that
prevents a C program from being defined.

I made a mistake.
I missed that a, b and c are unsigned.

--
pete
Dec 5 '05 #7
ju**********@yahoo.co.in wrote:

Is there any way by which the overflow during addition of two integers
may be detected?

e.g.

Suppose we have three unsigned integers: a, b, c.
We are doing a check like

if ((a + b) > c)
    do something;
else
    do something else.

If the addition of a and b generates overflow, the check may fail even
if a + b is larger than c.
How can we avoid such failures?

For unsigned types, if (a + b > a),
then that means that (a + b) didn't wrap around.

--
pete
Dec 5 '05 #8
pete wrote:

ju**********@yahoo.co.in wrote:

Is there any way by which the overflow during addition of two integers
may
be detected ?

eg.

suppose we have three unsigned integers, a ,b, c.
we are doing a check like
if ((a +b) > c)
do something;
else
do something else.

If addition of a and b genearates overflow,
the check may fail even if
a + b is larger than c.
How can we avoid such conditions from failure ?

For unsigned types, if (a + b > a),
then that means that (a + b) didn't wrap around.

Better make that: if (a + b >= a),
then that means that (a + b) didn't wrap around.

--
pete
Dec 5 '05 #9
Anand wrote:
ju**********@yahoo.co.in wrote:
Is there any way by which the overflow during addition of two integers
may be detected?

e.g.

Suppose we have three unsigned integers: a, b, c.
We are doing a check like

if ((a + b) > c)
    do something;
else
    do something else.

If the addition of a and b generates overflow, the check may fail even
if a + b is larger than c.
How can we avoid such failures?

Thanks in advance for any help.

Most of the PCs I've worked with wrap the integer (so you could test
for wraparound), but the standard clearly states that signed integer
overflow is UB, so it could result in a trap (or "nasal demons").

Sorry, I missed the "unsigned int" in the OP.
Here's a copy and paste from Annex H:
| C's unsigned integer types are "modulo" in the language-independent
| arithmetic (LIA-1) sense in that overflows or out-of-bounds results
| silently wrap. An implementation that defines signed integer types as
| also being modulo need not detect integer overflow, in which case,
| only integer divide-by-zero need be detected.
Dec 5 '05 #10
ju**********@yahoo.co.in wrote:
Is there any way by which the overflow during addition of two integers
may be detected?

e.g.

Suppose we have three unsigned integers: a, b, c.
We are doing a check like

if ((a + b) > c)
    do something;
else
    do something else.

If the addition of a and b generates overflow, the check may fail even
if a + b is larger than c.
How can we avoid such failures?

Thanks in advance for any help.

As many posters have replied here, you *can* test for overflow
yourself. What bothers me is that the language does not enforce this,
even though such overflow is undefined behavior, as Jack Klein has noted.

The lcc-win32 compiler tests for overflow with a command line
switch. This wasn't very difficult to do, and the impact
in the run time is minimal, in any case not even measurable
for a normal application in a PC environment.

OF COURSE there are other environments where each microsecond counts
and where the quality of the results doesn't matter so much...

For *those* environments the language standard *could* make an
exception, for instance

#pragma STDC integeroverflow(off)

or similar.

But in a normal case, integer overflow is a serious error,
an error that is very difficult to catch if not done at the
compiler level. You think you could have an overflow *here* or
*there* but actually you got an overflow in a completely unexpected
place!

The code generated is not especially clever. Here is the code for this
program:
int main(void)
{
    int a = 0x7fffffff, b = 0x7fffffff;
    int c = a*b;
}

For the relevant line we get this:
; 4     int c = a*b;
.line 4
movl  -4(%ebp),%edi      ; put a in edi
imull -8(%ebp),%edi      ; multiply by b
jno   _$L2               ; if no overflow, continue at L2
pusha                    ; OVERFLOW: save all regs
pushl $4                 ; push the line number
pushl $main__labelname   ; push the name of the function
call  __overflow         ; call the exception handler
addl  $8,%esp            ; restore stack
popa                     ; restore all regs
_$L2:                    ; go on

The overflow handler is prototyped as:

void __overflow(char *functionname, int line);

Other mechanisms could be considered, and the generated code *could*
be better. I have found that in PC environments, where lcc-win32 runs,
this is absolutely not measurable for normal applications...

jacob
Dec 5 '05 #11
jacob navia wrote:
ju**********@yahoo.co.in wrote:
Is there any way by which the overflow during addition of two integers
may be detected?

e.g.

Suppose we have three unsigned integers: a, b, c.
We are doing a check like

if ((a + b) > c)
    do something;
else
    do something else.

If the addition of a and b generates overflow, the check may fail even
if a + b is larger than c.
How can we avoid such failures?

Thanks in advance for any help.
As many posters have replied here, you *can* test for overflow
yourself. What bothers me is that the language does not enforce this,
even though such overflow is undefined behavior, as Jack Klein has noted.

It does not do this for the same reason that all other undefined behavior is
not necessarily diagnosed: to avoid constraining implementations.

If an implementation generates a hardware trap on overflow, for example, and
there is nothing a C program or compiler can do to modify this behavior in
the slightest, there is little to gain from making provisions in the
standard about it.
If defined behavior is necessary on such platforms, two integer calculations
have to be performed for every single one to be checked. It would obviously
be ill-advised for the standard to mandate this.
The lcc-win32 compiler tests for overflow with a command line
switch. This wasn't very difficult to do, and the impact
in the run time is minimal, in any case not even measurable
for a normal application in a PC environment.

OF COURSE there are other environments where each microsecond counts
and where the quality of the results doesn't matter so much...
Or where the C compiler is not expected to guarantee the quality of the
results, but other methods are employed.
For *those* environments the language standard *could* make an
exception, for instance

#pragma STDC integeroverflow(off)

or similar.
This same could be done for many language features that trigger UB, giving
well-defined behavior for constructs unless a pragma was defined. The only
fly in the ointment is that some checks would be very cheap on most
platforms (integer overflow) while others could be impossible to check
without degrading performance to nonexistent levels on some platforms
(making sure the aliasing rules are not broken).

In the end, that's just not what C is about.
But in a normal case, integer overflow is a serious error,
an error that is very difficult to catch if not done at the
compiler level. You think you could have an overflow *here* or
*there* but actually you got an overflow in a completely unexpected
place!
And the same is true for accessing null pointers, uninitialized variables,
defining duplicate external symbols, accessing freed memory and a myriad
other things that the standard could mandate well-defined behavior for, but
deliberately doesn't. Many languages safer than C have been created in
response to this, but C persists.

In C, it is always the programmer's responsibility to make sure
platform-defined limits are not exceeded, and the programmer's
responsibility to explicitly check for them if it cannot be guaranteed. The
programmer may even choose to exploit their platform's known behavior
despite it being left undefined by the standard, and sacrifice portability
for performance. The merits of this are debatable, of course.

[snip x86 code]
Other mechanisms could be considered, and the generated code *could*
be better. I have found that in PC environments, where lcc-win32 runs,
this is absolutely not measurable for normal applications...

And I'm sure the lcc overflow detection is a valuable help to Win32
programmers. But from a practical and philosophical standpoint, the standard
can't include it. You could argue that it's so cheap and portable that it
ought to be done on every platform, but I don't know if that's true, and I'm
not on the committee.

S.
Dec 5 '05 #12
Skarmander <in*****@dontmailme.com> writes:
jacob navia wrote:

[...]
As many posters have replied here, you *can* test for overflow
yourself. What bothers me is that the language does not enforce this,
even though such overflow is undefined behavior, as Jack Klein has noted.

It does not do this for the same reason that all other undefined
behavior is not necessarily diagnosed: to avoid constraining
implementations.

If an implementation generates a hardware trap on overflow, for
example, and there is nothing a C program or compiler can do to modify
this behavior in the slightest, there is little to gain from making
provisions in the standard about it.

If defined behavior is necessary on such platforms, two integer
calculations have to be performed for every single one to be
checked. It would obviously be ill-advised for the standard to mandate
this.

The problem, though, is that C doesn't *let* you do efficient overflow
checks.

On many (most?) platforms, an overflow check can be done fairly
efficiently, typically by checking a status flag after the operation.
It's likely to be more expensive than performing the operation without
checking for overflow, but cheaper and cleaner than doing the check in
pure C (by pre-checking the operands before performing the operation).
Furthermore, if overflow checking were built into the language, the
compiler could in many cases determine that a check isn't necessary
because it knows something about the possible values of the operands.
A programmer doing the checks manually isn't likely to be able to do
this reliably.

Another problem, of course, is deciding what to do if the check fails.

(The counterargument to this is, "If you want Ada, you know where to
find it.")

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 5 '05 #13
Keith Thompson wrote:
Skarmander <in*****@dontmailme.com> writes:
jacob navia wrote: [...]
As many posters have replied here, you *can* test for overflow
yourself. What bothers me is that the language does not enforce this,
even though such overflow is undefined behavior, as Jack Klein has noted.

It does not do this for the same reason that all other undefined
behavior is not necessarily diagnosed: to avoid constraining
implementations.

If an implementation generates a hardware trap on overflow, for
example, and there is nothing a C program or compiler can do to modify
this behavior in the slightest, there is little to gain from making
provisions in the standard about it.

If defined behavior is necessary on such platforms, two integer
calculations have to be performed for every single one to be
checked. It would obviously be ill-advised for the standard to mandate
this.

The problem, though, is that C doesn't *let* you do efficient overflow
checks.

True. There would be merit to a feature that allowed you to check whether an
expression will overflow, without having to explicitly write a subtraction
or division (which is less clear to boot, and you may even get it wrong).

On many (most?) platforms, an overflow check can be done fairly
efficiently, typically by checking a status flag after the operation.
It's likely to be more expensive than performing the operation without
checking for overflow, but cheaper and cleaner than doing the check in
pure C (by pre-checking the operands before performing the operation).
Furthermore, if overflow checking were built into the language, the
compiler could in many cases determine that a check isn't necessary
because it knows something about the possible values of the operands.
A programmer doing the checks manually isn't likely to be able to do
this reliably.

That's true, but note that if the compiler does know about the values of the
operands, it can often optimize the conditional of a generic overflow check
away as well. Obviously this is harder to do than optimizing away an
explicit overflow check, but still.

Another problem, of course, is deciding what to do if the check fails.

That's easy; throw an exception. Except that C doesn't have those.

You could simply define a magic macro OVERFLOW_EVAL(x) or suchlike that
evaluates x, and then evaluates to a boolean value indicating whether the
expression overflowed. Programmers who want overflow checks can then wrap
their calculations in this macro, and deal with them explicitly.

Another possibility is to provide overflow-safe integer types. The obvious
objection against that is again that there is no explicit way to handle the
overflow condition. You could kludge it by defining a HAS_OVERFLOW(x) macro
that evaluates whether a given overflow-safe integer overflowed (similar to
NaN and Inf for floating-point) but then compilers have to jump through
hoops to get the overflow information to where it's needed.

Third and finally, C could provide a new block structure specifically for
dealing with overflow exceptions, but this is very unlikely to be
implemented before general exceptions -- which are in turn unlikely to be
implemented.
(The counterargument to this is, "If you want Ada, you know where to
find it.")

Not necessarily. I'm reminded of the carry bit. Most implementations have a
carry bit, allowing you to implement greater precision arithmetic and take
advantage of several calculation tricks you simply cannot express in C.

For obvious reasons it would not pay off for C to provide access to such a
bit; if you want assembler, you know where to get it. Overflow is a
different matter, though: it is always possible to define a check for it,
some platforms may allow very efficient versions of it, and no platforms
will have a check that's slower than what the programmer already can come up
with.

Compare the intN_least_t types.

S.
Dec 5 '05 #14
Skarmander wrote:
<snip>
Compare the intN_least_t types.

Better yet, compare the int_leastN_t types.

S.
Dec 5 '05 #15
Skarmander wrote:
Skarmander wrote:
<snip>
Compare the intN_least_t types.

Better yet, compare the int_leastN_t types.

Or better *still*, compare the int_fastN_t types!

Excuse me, I'm suffering from caffeine deficiency. I'll remedy this with a
pot of tea.

S.
Dec 5 '05 #16
Keith Thompson wrote:
Skarmander <in*****@dontmailme.com> writes:

On many (most?) platforms, an overflow check can be done fairly
efficiently, typically by checking a status flag after the operation.

Yes. We are in 2005, almost 2006. Please let's be realistic.
Even embedded CPUs today have of course an overflow flag reporting
after addition and multiplication... I would be surprised if we
would find ONE cpu that doesn't have it.
A programmer doing the checks manually isn't likely to be able to do
this reliably.

This is the crucial point. You just can't do it. If each multiplication
and addition has to be tested you just can't write numerical software.
Another problem, of course, is deciding what to do if the check fails.

The lcc-win32 solution is quite simple: just call a function, providing
the source code coordinates. This function by default exits the program,
but the user can override it by defining their own.

jacob
Dec 6 '05 #17

jacob navia wrote:
Yes. We are in 2005, almost 2006. Please let's be realistic.
Even embedded CPUs today have of course an overflow flag reporting
after addition and multiplication... I would be surprised if we
would find ONE cpu that doesn't have it.

That's a rather stronger statement than is justified.

Alpha allows (optional) traps on integer add/subtract/multiply
overflows, but has no overflow flag. MIPS allows traps on adds and
subtracts (but no flag). ARM has an (optional) overflow flag, but it's
not set by the multiply instructions. S/360 (zSeries) and PPC can set
an overflow condition code on adds and subtracts, or take a trap, but
not on multiplications. PA-RISC can trap on addition and subtraction,
and can trap on multiplication with a special multi-instruction
sequence. SPARC has the traditional condition code for add and
subtract, but not for multiply. IPF and PIC provide neither traps nor
flags for integer arithmetic.

Most vector ISA extensions have very limited, if any, support for
integer overflows during vector operations.

I'm guessing you're surprised?

Dec 6 '05 #18
Keith Thompson wrote:
On many (most?) platforms, an overflow check can be done fairly
efficiently, typically by checking a status flag after the operation.

jacob navia <ja***@jacob.remcomp.fr> wrote:
Yes. We are in 2005, almost 2006. Please let's be realistic.
Even embedded CPUs today have of course an overflow flag reporting
after addition and multiplication... I would be surprised if we
would find ONE cpu that doesn't have it.

MIPS CPUs do not have overflow flags. In fact, there are no
flags at all. To compute whether register A is less than
register B, you use the "slt" -- Set Less Than -- instruction
to put the result of the comparison in a third register. Thus,
the straightforward (and correct) C code to implement wide
arithmetic:

/* "mw" = "multiword" (64 or more bit integers, unsigned) */
/* use "inline" in C99 */
void add_mw(struct mw *result, struct mw *a, struct mw *b) {
    result->lo = a->lo + b->lo;
    result->hi = a->hi + b->hi + (result->lo < a->lo);
}

compiles to (e.g., and commenting for the obvious loads and stores):

# get inputs into t0/a2 and t1/a3 (lo/hi)
# outputs in v0/v1 (lo/hi)
slt t1,t0,t1
# store result at 0(a0) and 4(a0)

if I have the argument order right (I have no idea whether I do,
I have not used MIPS assembly since the late 1980s). This is not
merely an almost-literal translation of the C code,
but also about the best you can do (on the R2000 and R3000 anyway;
the R4000 onward are 64-bit). (I cannot even remember how many
registers are assigned to the "t"s vs "v"s, just that there were
four "a"s for up to four arguments.)

(The CPU has separate instructions, e.g., "addi" vs "addiu" to
add an immediate value, either signed -- which traps on overflow
-- or unsigned. See http://en.wikipedia.org/wiki/MIPS_architecture .)

Overflow flags (or instructions) do not solve the problem for
narrow and bitfield types, of course: adding two 4-bit fields
may produce a 5-bit result. Of course, we can code for that
easily, too:

result = a + b;
if (result & ~0xf) ... overflow ...

Again, all this works much more nicely in Ada. There are plenty
of Ada compilers out there, including a GNU version, for everyone
who wants to write in Ada.
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Dec 6 '05 #19
jacob navia wrote:
Yes. We are in 2005, almost 2006. Please let's be realistic.

Here it comes, wait for it...
Even embedded CPUs today have of course an overflow flag reporting
after addition and multiplication... I would be surprised if we
would find ONE cpu that doesn't have it.

This is going to hurt.
--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Dec 6 '05 #20
On 5 Dec 2005 23:45:01 -0800, in comp.lang.c ,
"ro***********@yahoo.com" <ro***********@yahoo.com> wrote:

jacob navia wrote:
I would be surprised if we
would find ONE cpu that doesn't have it.

Alpha allows (optional) traps on integer add/subtract/multiply
overflows, but has no overflow flag. MIPS allows traps on adds and
subtracts (but no flag). ARM has an (optional) overflow flag, but it's
not set by the multiply instructions. S/360 (zSeries) and PPC can set
an overflow condition code on adds and subtracts, or take a trap, but
not on multiplications. PA-RISC can trap on addition and subtraction,
and can trap on multiplication with a special multi-instruction
sequence. SPARC has the traditional condition code for add and
subtract, but not for multiply. IPF and PIC provide neither traps nor
flags for integer arithmetic.

Most vector ISA extensions have very limited, if any, support for
integer overflows during vector operations.

I'm guessing you're surprised?

I'm not. Neither am I surprised that Jacob posted what he did - he
apparently has very little experience outside the wintel world, but
doesn't seem to realise this.

Dec 6 '05 #21
ro***********@yahoo.com wrote:
jacob navia wrote:
Yes. We are in 2005, almost 2006. Please let's be realistic.
Even embedded CPUs today have of course an overflow flag reporting
after addition and multiplication... I would be surprised if we
would find ONE cpu that doesn't have it.

That's a rather stronger statement than is justified.

I'm guessing you're surprised?

Yes, how many brain dead machines...

In any case this is OK, people using those machines can always do

#pragma STDC integeroverflow(off)

or the compiler can state in the documentation that overflow catching
does not work.

What is important is that we stop levelling down to the lowest common
feature, and try to improve the language definition while still
allowing machines that lack the feature to run.

If the C standard allows for a very fine definition of the floating
point flags environment, I do not see why integers would not be allowed
the same properties!

What happens in the FPU that has no flags???

Well, the compiler can issue bogus functions or always return
zero, etc. Nonetheless the FPU flags are in the standard.

jacob
Dec 6 '05 #22
Randy Howard wrote:
jacob navia wrote

Yes. We are in 2005, almost 2006. Please let's be realistic.

Here it comes, wait for it...

Even embedded CPUs today have of course an overflow flag reporting
after addition and multiplication... I would be surprised if we
would find ONE cpu that doesn't have it.

This is going to hurt.

Well, apparently it hurted so much that you forgot
the rest of the message ???

:-)

jacob
Dec 6 '05 #23
Mark McIntyre wrote:
On 5 Dec 2005 23:45:01 -0800, in comp.lang.c ,
"ro***********@yahoo.com" <ro***********@yahoo.com> wrote:

jacob navia wrote:
I would be surprised if we
would find ONE cpu that doesn't have it.

Alpha allows (optional) traps on integer add/subtract/multiply
overflows, but has no overflow flag. MIPS allows traps on adds and
subtracts (but no flag). ARM has an (optional) overflow flag, but it's
not set by the multiply instructions. S/360 (zSeries) and PPC can set
an overflow condition code on adds and subtracts, or take a trap, but
not on multiplications. PA-RISC can trap on addition and subtraction,
and can trap on multiplication with a special multi-instruction
sequence. SPARC has the traditional condition code for add and
subtract, but not for multiply. IPF and PIC provide niether traps or
flags for integer arithmetic.

Most vector ISA extensions have very limited, if any, support for
integer overflows during vector operations.

I'm guessing you're surprised?

I'm not. Neither am I surprised that Jacob posted what he did - he
apparently has very little experience outside the wintel world, but
doesn't seem to realise this.

Maybe. In the last year I have done only some DSP assembly,
and my 68000 experience is very rusty.

Who cares? I am not "the guru", and when I stand corrected
I feel no shame to acknowledge that I was wrong.

What's the point in discussing if you want to be always right?

jacob
Dec 6 '05 #24
jacob navia wrote:
Randy Howard wrote:
jacob navia wrote:

Yes. We are in 2005, almost 2006. Please let's be realistic.

Here it comes, wait for it...

Even embedded CPUs today have of course an overflow flag reporting
after addition and multiplication... I would be surprised if we
would find ONE cpu that doesn't have it.

This is going to hurt.

Well, apparently it hurted so much that you forgot
the rest of the message ???

What's a "hurted"?

At any rate, the other responses to this thread have already
demonstrated the point sufficiently. Are you now "surprised"?

--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Dec 6 '05 #25
On Mon, 05 Dec 2005 18:56:39 +0100, jacob navia
<ja***@jacob.remcomp.fr> wrote:
ju**********@yahoo.co.in wrote:
Is there any way by which the overflow during addition of two integers
may be detected?

e.g.

Suppose we have three unsigned integers: a, b, c.
We are doing a check like

if ((a + b) > c)
    do something;
else
    do something else.

If the addition of a and b generates overflow, the check may fail even
if a + b is larger than c.
How can we avoid such failures?

Thanks in advance for any help.
As many posters have replied here, you *can* test for overflow
yourself. What bothers me is that the language does not enforce this,
even though such overflow is undefined behavior, as Jack Klein has noted.

The lcc-win32 compiler tests for overflow with a command line
switch. This wasn't very difficult to do, and the impact
in the run time is minimal, in any case not even measurable
for a normal application in a PC environment.

OF COURSE there are other environments where each microsecond counts
and where the quality of the results doesn't matter so much...

For *those* environments the language standard *could* make an
exception, for instance

#pragma STDC integeroverflow(off)

or similar.

But in a normal case, integer overflow is a serious error,
an error that is very difficult to catch if not done at the
compiler level. You think you could have an overflow *here* or
*there* but actually you got an overflow in a completely unexpected
place!

The code generated is not especially clever. Here is the code for this
program:
int main(void)
{
    int a = 0x7fffffff, b = 0x7fffffff;
    int c = a*b;
}

For the relevant line we get this:
; 4     int c = a*b;
.line 4
movl  -4(%ebp),%edi      ; put a in edi
imull -8(%ebp),%edi      ; multiply by b
jno   _$L2               ; if no overflow, continue at L2
pusha                    ; OVERFLOW: save all regs
pushl $4                 ; push the line number
pushl $main__labelname   ; push the name of the function
call  __overflow         ; call the exception handler
addl  $8,%esp            ; restore stack
popa                     ; restore all regs
_$L2:                    ; go on

The overflow handler is prototyped as:

void __overflow(char *functionname, int line);
I like that. I like to imagine, too... :)
It would be even more useful if the programmer could select, in the
compiler, the errors to check for in the running program (e.g. overflow
errors, memory leaks, out-of-bounds array writes, etc.), with some
reserved memory (e.g. 100 KB), *structured as a queue*, where the
handler writes each error it finds and where it occurred.

Then, at the end of the program, that memory is written out to a file:
error_programm_name.txt

So I read error_programm_name.txt to see if all is OK;
otherwise you don't know what is "going wrong" and "where".
Other mechanisms could be considered, and the generated code *could*
be better. I have found that in PC environments, where lcc-win32 runs,
this is absolutely not measurable for normal applications...

jacob

Merry Christmas to everyone
Dec 18 '05 #26
