
# Direct computation of integer limits in K&R2?

Hello all,

In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.

Mar 11 '08 #1
santosh wrote:
Hello all,

In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.
Isn't it possible to calculate this based on the unsigned types of the
same size?

--
Ian Collins.
Mar 11 '08 #2
Ian Collins wrote:
santosh wrote:
>Hello all,

In K&R2 one exercise asks the reader to compute and print the limits
for the basic integer types. This is trivial for unsigned types. But
is it possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.
Isn't it possible to calculate this based on the unsigned types of the
same size?
Won't this require knowledge of the encoding used, whether twos
complement or sign and magnitude etc?

Mar 11 '08 #3
On Wed, 12 Mar 2008 03:07:48 +0530, santosh wrote:
Hello all,

In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour triggered
by overflow? Remember that the constants in limits.h cannot be used.
#include <stdio.h>
int main(void) {
    unsigned u = -1;
    int i;
    while ((i = u) < 0 || i != u)
        u = u >> 1;
    printf("INT_MAX == %u\n", u);
}

This is not guaranteed to work in C99, where the conversion of an out-of-
range integer may raise a signal, but it's valid C90, since the result of
the conversion must be a valid int, and therefore between INT_MIN and
INT_MAX.
Mar 11 '08 #4
santosh wrote:
Ian Collins wrote:
>santosh wrote:
>>Hello all,

In K&R2 one exercise asks the reader to compute and print the limits
for the basic integer types. This is trivial for unsigned types. But
is it possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.
Isn't it possible to calculate this based on the unsigned types of the
same size?

Won't this require knowledge of the encoding used, whether twos
complement or sign and magnitude etc?
I think so, I should have added that.

--
Ian Collins.
Mar 11 '08 #5
santosh <santosh....@gmail.com> wrote:
print the limits for the basic integer types. This is
trivial for unsigned types. But is it possible for
signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants
in limits.h cannot be used.
Yes. Unlike C99, unsigned to signed integer conversion
is implementation defined without the possibility of
raising a signal. So...

INT_MIN isn't computed per se, rather it's derived by
determining the representation for negative ints. [I
know pete posted some very simple constant expressions,
though it was some time ago.]

--
Peter
Mar 11 '08 #6
Harald van Dijk wrote:
On Wed, 12 Mar 2008 03:07:48 +0530, santosh wrote:
>Hello all,

In K&R2 one exercise asks the reader to compute and print the limits
for the basic integer types. This is trivial for unsigned types. But
is it possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.

#include <stdio.h>
int main(void) {
unsigned u = -1;
int i;
while ((i = u) < 0 || i != u)
u = u >> 1;
printf("INT_MAX == %u\n", u);
}

This is not guaranteed to work in C99, where the conversion of an
out-of- range integer may raise a signal, but it's valid C90, since
the result of the conversion must be a valid int, and therefore
between INT_MIN and INT_MAX.

Mar 11 '08 #7
Peter Nilsson wrote:
santosh <santosh....@gmail.com> wrote:
print the limits for the basic integer types. This is
trivial for unsigned types. But is it possible for
signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants
in limits.h cannot be used.

Yes. Unlike C99, unsigned to signed integer conversion
is implementation defined without the possibility of
raising a signal. So...

INT_MIN isn't computed per se, rather it's derived by
determining the representation for negative ints. [I
know pete posted some very simple constant expressions,
though it was some time ago.]
Would you say that this exercise is overly complex for that point in
K&R2?

Mar 11 '08 #8
On Wed, 12 Mar 2008 03:29:53 +0530, santosh wrote:
Harald van Dijk wrote:
>On Wed, 12 Mar 2008 03:07:48 +0530, santosh wrote:
>>Hello all,

In K&R2 one exercise asks the reader to compute and print the limits
for the basic integer types. This is trivial for unsigned types. But
is it possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.

#include <stdio.h>
int main(void) {
unsigned u = -1;
int i;
while ((i = u) < 0 || i != u)
u = u >> 1;
printf("INT_MAX == %u\n", u);
}

This is not guaranteed to work in C99, where the conversion of an
out-of- range integer may raise a signal, but it's valid C90, since the
result of the conversion must be a valid int, and therefore between
INT_MIN and INT_MAX.

Up to INT_MIN, you can use this same idea, except start from LONG_MIN
instead of UINT_MAX. For LONG_MIN, I would cheat with
strtol("-999999999", 0, 0)
adding 9s until a range error is returned. :-)
Mar 11 '08 #9
Harald van Dijk wrote:
On Wed, 12 Mar 2008 03:29:53 +0530, santosh wrote:
>Harald van Dijk wrote:
>>On Wed, 12 Mar 2008 03:07:48 +0530, santosh wrote:
Hello all,

In K&R2 one exercise asks the reader to compute and print the
limits for the basic integer types. This is trivial for unsigned
types. But is it possible for signed types without invoking
undefined behaviour triggered by overflow? Remember that the
constants in limits.h cannot be used.

#include <stdio.h>
int main(void) {
unsigned u = -1;
int i;
while ((i = u) < 0 || i != u)
u = u >> 1;
printf("INT_MAX == %u\n", u);
}

This is not guaranteed to work in C99, where the conversion of an
out-of- range integer may raise a signal, but it's valid C90, since
the result of the conversion must be a valid int, and therefore
between INT_MIN and INT_MAX.

Up to INT_MIN, you can use this same idea, except start from LONG_MIN
instead of UINT_MAX. For LONG_MIN, I would cheat with
strtol("-999999999", 0, 0)
adding 9s until a range error is returned. :-)
Okay. I for one am glad that limits.h exists. :-)

Mar 11 '08 #10
Ian Collins wrote, On 11/03/08 21:54:
santosh wrote:
>Ian Collins wrote:
>>santosh wrote:
Hello all,

In K&R2 one exercise asks the reader to compute and print the limits
for the basic integer types. This is trivial for unsigned types. But
is it possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.

Isn't it possible to calculate this based on the unsigned types of the
same size?
Won't this require knowledge of the encoding used, whether twos
complement or sign and magnitude etc?
I think so, I should have added that.
Even if you know it is 2s complement you still can't do it. You need to
know whether sign bit = 1 and all value bits = 0 is a trap or not since
it is allowed to be a trap representation.
--
Flash Gordon
Mar 11 '08 #11
Flash Gordon <sp**@flash-gordon.me.uk> writes:
Ian Collins wrote, On 11/03/08 21:54:
>santosh wrote:
>>Ian Collins wrote:

santosh wrote:
Hello all,
>
In K&R2 one exercise asks the reader to compute and print the limits
for the basic integer types. This is trivial for unsigned types. But
is it possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.
>
Isn't it possible to calculate this based on the unsigned types of the
same size?
Won't this require knowledge of the encoding used, whether twos
complement or sign and magnitude etc?
I think so, I should have added that.

Even if you know it is 2s complement you still can't do it. You need
to know whether sign bit = 1 and all value bits = 0 is a trap or not
since it is allowed to be a trap representation.
It's only allowed to be a trap representation on _non_ two's
complement representations. sign bit = 1 and all value bits = 0 (and
padding bits at non-trap values) would necessarily be the minimum
representable value.

--
Micah J. Cowan
Programmer, musician, typesetting enthusiast, gamer...
http://micah.cowan.name/
Mar 12 '08 #12
On Mar 11, 2:37 pm, santosh <santosh....@gmail.com> wrote:
Hello all,

In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.
/*
The standard header <limits.h> was introduced on the same page (36)
as the exercise.
We are told to compute the values by standard headers and by direct
computation.
We are also told to determine the ranges of the various floating point
types.

The only hard part I see is the signed integer min and max values
without using <limits.h> because I do not see how you can do it
portably. We can probably deduce the hardware type, but I am not sure
about what guarantees we have as to internal representation. I guess
also we will need separate routines for 2's complement, 1's
complement, sign magnitude, and whatever other types are allowed (e.g.
is decimal storage allowed? I know of CPUs that had BCD instructions
in hardware).

Anyway, here are all the trivial answers:

*/
#include <stdio.h>
#include <limits.h>
#include <float.h>

void floating_limits(void)
{
puts("\nFloating point limits:");
printf("DBL_DIG %u\n", (unsigned) DBL_DIG);
printf("DBL_EPSILON %*.*g\n", DBL_DIG + 3, DBL_DIG,
DBL_EPSILON);
printf("DBL_MANT_DIG %u\n", (unsigned) DBL_MANT_DIG);
printf("DBL_MAX %*.*g\n", DBL_DIG + 3, DBL_DIG, DBL_MAX);
printf("DBL_MAX_10_EXP %u\n", (unsigned) DBL_MAX_10_EXP);
printf("DBL_MAX_EXP %u\n", (unsigned) DBL_MAX_EXP);
printf("DBL_MIN %*.*g\n", DBL_DIG + 3, DBL_DIG, DBL_MIN);
printf("DBL_MIN_10_EXP %d\n", DBL_MIN_10_EXP);
printf("DBL_MIN_EXP %d\n", DBL_MIN_EXP);
#ifdef DBL_ROUNDS
printf("DBL_ROUNDS %u\n", (unsigned) DBL_ROUNDS);
#endif
printf("FLT_DIG %u\n", (unsigned) FLT_DIG);
printf("FLT_EPSILON %*.*g\n", FLT_DIG + 3, FLT_DIG,
FLT_EPSILON);
#ifdef FLT_GUARD
printf("FLT_GUARD %u\n", (unsigned) FLT_GUARD);
#endif
printf("FLT_MANT_DIG %u\n", (unsigned) FLT_MANT_DIG);
printf("FLT_MAX %*.*g\n", FLT_DIG + 3, FLT_DIG, FLT_MAX);
printf("FLT_MAX_10_EXP %u\n", (unsigned) FLT_MAX_10_EXP);
printf("FLT_MAX_EXP %u\n", (unsigned) FLT_MAX_EXP);
printf("FLT_MIN %*.*g\n", FLT_DIG + 3, FLT_DIG, FLT_MIN);
printf("FLT_MIN_10_EXP %d\n", FLT_MIN_10_EXP);
printf("FLT_MIN_EXP %d\n", FLT_MIN_EXP);
printf("LDBL_DIG %u\n", (unsigned) LDBL_DIG);
printf("LDBL_EPSILON %*.*Lg\n", LDBL_DIG + 3, LDBL_DIG, (long
double) LDBL_EPSILON);
printf("LDBL_MANT_DIG %u\n", (unsigned) LDBL_MANT_DIG);
printf("LDBL_MAX %*.*Lg\n", LDBL_DIG + 3, LDBL_DIG, (long
double) LDBL_MAX);
printf("LDBL_MAX_10_EXP %u\n", (unsigned) LDBL_MAX_10_EXP);
printf("LDBL_MAX_EXP %u\n", (unsigned) LDBL_MAX_EXP);
printf("LDBL_MIN %*.*Lg\n", LDBL_DIG + 3, LDBL_DIG, (long
double) LDBL_MIN);
printf("LDBL_MIN_10_EXP %d\n", LDBL_MIN_10_EXP);
printf("LDBL_MIN_EXP %d\n", LDBL_MIN_EXP);
#ifdef LDBL_ROUNDS
printf("LDBL_ROUNDS %u\n", (unsigned) LDBL_ROUNDS);
#endif
}

void signed_limits_guarantee(void)
{
static const short shrt_min_est = -32767;
static const short shrt_max_est = +32767;
static const int int_min_est = -32767;
static const int int_max_est = +32767;
static const long long_min_est = -2147483647L;
static const long long_max_est = +2147483647L;
static const long long llong_min_est = -9223372036854775807LL;
static const long long llong_max_est = +9223372036854775807LL;
puts("\nSigned limits guaranteed by the standard to be at least:");
printf("Signed short min %d\n", shrt_min_est);
printf("Signed short max %d\n", shrt_max_est);
printf("Signed int min %d\n", int_min_est);
printf("Signed int max %d\n", int_max_est);
printf("Signed long min %ld\n", long_min_est);
printf("Signed long max %ld\n", long_max_est);
printf("Signed long long min %lld\n", llong_min_est);
printf("Signed long long max %lld\n", llong_max_est);

}

void limits_lookup(void)
{
puts("\nLookup from limits.h:");
printf("Width of Char %d\n", CHAR_BIT);
printf("Signed Char max %d\n", CHAR_MAX);
printf("Signed Char min %d\n", CHAR_MIN);
printf("Unsigned Char max %d\n", UCHAR_MAX);
printf("Signed short min %d\n", SHRT_MIN);
printf("Signed short max %d\n", SHRT_MAX);
printf("Unsigned short max %u\n", USHRT_MAX);
printf("Signed int min %d\n", INT_MIN);
printf("Signed int max %d\n", INT_MAX);
printf("Unsigned int max %u\n", UINT_MAX);
printf("Signed long min %ld\n", LONG_MIN);
printf("Signed long max %ld\n", LONG_MAX);
printf("Unsigned long max %lu\n", ULONG_MAX);
printf("Signed long long min %lld\n", LLONG_MIN);
printf("Signed long long max %lld\n", LLONG_MAX);
printf("Unsigned long long max %llu\n", ULLONG_MAX);
}

void compute_unsigned_max(void)
{
unsigned long long ullm = -1;
unsigned um = -1;
unsigned long ulm = -1;
unsigned short usm = -1;
unsigned char ucm = -1;
puts("\nSimple computation of unsigned maximums:");
printf("Unsigned Char max %d\n", ucm);
printf("Unsigned short max %u\n", usm);
printf("Unsigned int max %u\n", um);
printf("Unsigned long max %lu\n", ulm);
printf("Unsigned long long max %llu\n", ullm);
}

int main(void)
{
limits_lookup();
compute_unsigned_max();
signed_limits_guarantee();
floating_limits();
return 0;
}
Mar 12 '08 #13
On Mar 11, 2:54 pm, Harald van Dijk <true...@gmail.com> wrote:
On Wed, 12 Mar 2008 03:07:48 +0530, santosh wrote:
Hello all,
In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour triggered
by overflow? Remember that the constants in limits.h cannot be used.

#include <stdio.h>
int main(void) {
unsigned u = -1;
int i;
while ((i = u) < 0 || i != u)
u = u >> 1;
printf("INT_MAX == %u\n", u);

}

This is not guaranteed to work in C99, where the conversion of an out-of-
range integer may raise a signal, but it's valid C90, since the result of
the conversion must be a valid int, and therefore between INT_MIN and
INT_MAX.
What happens if INT_MAX is larger than UINT_MAX? I see no guarantees
that this is not possible.
Mar 12 '08 #14
On Mar 11, 7:30 pm, Micah Cowan <mi...@cowan.name> wrote:
Flash Gordon <s...@flash-gordon.me.uk> writes:
Ian Collins wrote, On 11/03/08 21:54:
santosh wrote:
Ian Collins wrote:
>>santosh wrote:
Hello all,
>>>In K&R2 one exercise asks the reader to compute and print the limits
for the basic integer types. This is trivial for unsigned types. But
is it possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.
>>Isn't it possible to calculate this based on the unsigned types of the
same size?
Won't this require knowledge of the encoding used, whether twos
complement or sign and magnitude etc?
I think so, I should have added that.
Even if you know it is 2s complement you still can't do it. You need
to know whether sign bit = 1 and all value bits = 0 is a trap or not
since it is allowed to be a trap representation.

It's only allowed to be a trap representation on _non_ two's
complement representations. sign bit = 1 and all value bits = 0 (and
padding bits at non-trap values) would necessarily be the minimum
representable value.
6.2.6.2p2 says ("the first two" below are sign-and-magnitude and
two's complement):

"Which of these applies is implementation-defined, as is whether the
value with sign bit 1 and all value bits zero (for the first two),
or with sign bit and all value bits 1 (for ones' complement), is a
trap representation or a normal value."
Mar 12 '08 #15
On Mar 11, 10:15 pm, user923005 <dcor...@connx.com> wrote:
On Mar 11, 2:54 pm, Harald van Dijk <true...@gmail.com> wrote:
On Wed, 12 Mar 2008 03:07:48 +0530, santosh wrote:
Hello all,
In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour triggered
by overflow? Remember that the constants in limits.h cannot be used.
#include <stdio.h>
int main(void) {
unsigned u = -1;
int i;
while ((i = u) < 0 || i != u)
u = u >> 1;
printf("INT_MAX == %u\n", u);
}
This is not guaranteed to work in C99, where the conversion of an out-of-
range integer may raise a signal, but it's valid C90, since the result of
the conversion must be a valid int, and therefore between INT_MIN and
INT_MAX.

What happens if INT_MAX is larger than UINT_MAX? I see no guarantees
that this is not possible.
6.2.6.2p1-2 say that: INT_MAX = 2**M - 1, UINT_MAX = 2**N - 1,
and M <= N, where M is the number of value bits in int, N is
the number of value bits in unsigned int.
I wonder if there was an implementation where INT_MAX was
equal to UINT_MAX.

Yevgen
Mar 12 '08 #16
mu*****@gmail.com writes:
On Mar 11, 7:30 pm, Micah Cowan <mi...@cowan.name> wrote:
>Flash Gordon <s...@flash-gordon.me.uk> writes:
Even if you know it is 2s complement you still can't do it. You need
to know whether sign bit = 1 and all value bits = 0 is a trap or not
since it is allowed to be a trap representation.

It's only allowed to be a trap representation on _non_ two's
complement representations. sign bit = 1 and all value bits = 0 (and
padding bits at non-trap values) would necessarily be the minimum
representable value.

6.2.6.2p2 says ("the first two" below are sign-and-magnitude and
two's complement):

"Which of these applies is implementation-defined, as is whether the
value with sign bit 1 and all value bits zero (for the first two),
or with sign bit and all value bits 1 (for ones' complement), is a
trap representation or a normal value."
Huh. I managed to forget that somehow. My bad, Flash.

--
Micah J. Cowan
Programmer, musician, typesetting enthusiast, gamer...
http://micah.cowan.name/
Mar 12 '08 #17
On Mar 11, 3:01 pm, santosh <santosh....@gmail.com> wrote:
Peter Nilsson wrote:
santosh <santosh....@gmail.com> wrote:
print the limits for the basic integer types. This is
trivial for unsigned types. But is it possible for
signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants
in limits.h cannot be used.
Yes. Unlike C99, unsigned to signed integer conversion
is implementation defined without the possibility of
raising a signal. So...
INT_MIN isn't computed per se, rather it's derived by
determining the representation for negative ints. [I
know pete posted some very simple constant expressions,
though it was some time ago.]

Would you say that this exercise is overly complex for that point in
K&R2?
I will be pretty amazed to see anyone write a portable solution that
does it all (floating point is also requested).
I guess that signed integer <TYPE>_MIN values will be hard to come up
with.

Will computation of DBL_MAX signal a floating point exception?

I guess that it is the hardest exercise in the whole book, by far.
Mar 12 '08 #18
Micah Cowan wrote, On 12/03/08 00:30:
Flash Gordon <sp**@flash-gordon.me.uk> writes:
>Ian Collins wrote, On 11/03/08 21:54:
>>santosh wrote:
Ian Collins wrote:

santosh wrote:
>Hello all,
>>
>In K&R2 one exercise asks the reader to compute and print the limits
>for the basic integer types. This is trivial for unsigned types. But
>is it possible for signed types without invoking undefined behaviour
>triggered by overflow? Remember that the constants in limits.h cannot
>be used.
>>
Isn't it possible to calculate this based on the unsigned types of the
same size?
Won't this require knowledge of the encoding used, whether twos
complement or sign and magnitude etc?

I think so, I should have added that.
Even if you know it is 2s complement you still can't do it. You need
to know whether sign bit = 1 and all value bits = 0 is a trap or not
since it is allowed to be a trap representation.

It's only allowed to be a trap representation on _non_ two's
complement representations. sign bit = 1 and all value bits = 0 (and
padding bits at non-trap values) would necessarily be the minimum
representable value.
Wrong. The C standard explicitly allows for it to be a trap
representation on two's complement representations. Quoting from N1256...

| ... If the sign bit is one, the value shall be modified in one of
| the following ways:
| — the corresponding value with sign bit 0 is negated (sign and
| magnitude);
| — the sign bit has the value −(2^N) (two's complement);
| — the sign bit has the value −(2^N − 1) (ones' complement).
| Which of these applies is implementation-defined, as is whether the
| value with sign bit 1 and all value bits zero (for the first two), or
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| with sign bit and all value bits 1 (for ones' complement), is a trap
| representation or a normal value. In the case of sign and magnitude
| and ones' complement, if this representation is a normal value it is
| called a negative zero.

two's complement is one of the first two.

The above is from section 6.2.6.2 para 2.
--
Flash Gordon
Mar 12 '08 #19
Micah Cowan wrote, On 12/03/08 05:55:
mu*****@gmail.com writes:
>On Mar 11, 7:30 pm, Micah Cowan <mi...@cowan.name> wrote:
>>Flash Gordon <s...@flash-gordon.me.uk> writes:
<snip trap representation in 2s complement>
Huh. I managed to forget that somehow. My bad, Flash.
It's easy to forget. I'm not actually aware of any implementations which
make use of this freedom.
--
Flash Gordon
Mar 12 '08 #20
On Mar 11, 2:37 pm, santosh <santosh....@gmail.com> wrote:
Hello all,

In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.
You can use shifting to determine how many bits there are in the given
signed integral type. Start with 1 and keep shifting it left until it
drops off. With that information, you can construct the greatest
possible positive integer value: which is all 1's except for the sign
bit, which is zero. The greatest possible negative value is either the
additive inverse of that value, or, in the case of two's complement,
that value less one. You can detect whether two's complement is in
effect by applying a simple test to the value -1:

switch (-1 & 3) {
case 1: /* ...01: sign magnitude */
break;
case 2: /* ...10: one's complement */
break;
case 3: /* ...11: two's complement */
break;
}

That's the general approach I'd take to the exercise.

Mar 12 '08 #21
On Mar 12, 2:35 pm, Kaz Kylheku <kkylh...@gmail.com> wrote:
On Mar 11, 2:37 pm, santosh <santosh....@gmail.com> wrote:
Hello all,
In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.

You can use shifting to determine how many bits there are in the given
signed integral type. Start with 1 and keep shifting it left until it
drops off.
That's UB, no?
With that information, you can construct the greatest
possible positive integer value: which is all 1's except for the sign
bit, which is zero. The greatest possible negative value is either the
additive inverse of that value, or, in the case of two's complement,
that value less one.
And this may be a trap representation.

Yevgen
Mar 12 '08 #22

"santosh" <sa*********@gmail.com> wrote in message
news:fr**********@registered.motzarella.org...
Hello all,

In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.
I don't think there's a perfect answer.

However this is the closest I could get.

double x = 0;
int testme;

do
{
x++;
testme = (int) x;
} while((double) testme == x);

printf("Biggest integer %g\n", x - 1);

It will fail if not all ints are exactly representable by a double, which
is the case on a machine with 64-bit ints.
(Wail, gnash.)

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm
Mar 12 '08 #23
On Mar 12, 1:23 pm, ymunt...@gmail.com wrote:
On Mar 12, 2:35 pm, Kaz Kylheku <kkylh...@gmail.com> wrote:
You can use shifting to determine how many bits there are in the given
signed integral type. Start with 1 and keep shifting it left until it
drops off.

That's UB, no?
Unfortunately it is. Shifting a bit into the sign is UB. Only a
positive value whose double is representable may be shifted left by
one bit.

This means that the sign bit is quite impervious to bit manipulation.

Mar 12 '08 #24
"Malcolm McLean" <re*******@btinternet.com> writes:
"santosh" <sa*********@gmail.com> wrote in message
news:fr**********@registered.motzarella.org...
>Hello all,

In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.
I don't think there's a perfect answer.

However this is the closest I could get.

double x = 0;
int testme;

do
{
x++;
testme = (int) x;
} while((double) testme == x);

printf("Biggest integer %g\n", x - 1);
I don't think you need to be so cautious -- ints must use binary, so
you could start at 1 and repeatedly double x and try to convert x-1.
Even so, you have not gained anything -- the conversion to int, when
it is out of range, is still undefined.

--
Ben.
Mar 12 '08 #25
Kaz Kylheku <kk******@gmail.com> writes:
On Mar 12, 1:23 pm, ymunt...@gmail.com wrote:
>On Mar 12, 2:35 pm, Kaz Kylheku <kkylh...@gmail.com> wrote:
You can use shifting to determine how many bits there are in the given
signed integral type. Start with 1 and keep shifting it left until it
drops off.

That's UB, no?

Unfortunately it is. Shifting a bit into the sign is UB. Only a
positive value whose double is representable may be shifted left by
one bit.

This means that the sign bit is quite impervious to bit
manipulation.
It must participate in other bit operations, though, like ~, &, | and
^. Even so, I can't see any way to avoid UB when trying to calculate
the range of int. Equally, I don't have a persuasive argument that it
*can't* be done, either.

--
Ben.
Mar 13 '08 #26
On Mar 12, 5:30 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
Kaz Kylheku <kkylh...@gmail.com> writes:
On Mar 12, 1:23 pm, ymunt...@gmail.com wrote:
On Mar 12, 2:35 pm, Kaz Kylheku <kkylh...@gmail.com> wrote:
You can use shifting to determine how many bits there are in the given
signed integral type. Start with 1 and keep shifting it left until it
drops off.
That's UB, no?
Unfortunately it is. Shifting a bit into the sign is UB. Only a
positive value whose double is representable may be shifted left by
one bit.
This means that the sign bit is quite impervious to bit
manipulation.

It must participate in other bit operations, though, like ~, &, | and
^. Even so, I can't see any way to avoid UB when trying to calculate
the range of int. Equally, I don't have a persuasive argument that it
*can't* be done, either.
To compound things, imagine a C implementation where all integral
types were 64 bits (including char).
Even the undefined behavior hacks I posted will fail on those.
In short, I think it is a really difficult problem to solve.
If someone can define a sensible solution, I would be very pleased to
see it.
It might be interesting to see what DMR has to say about it.
Mar 13 '08 #27
santosh wrote:
Hello all,

In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.

C95:

#include <stdio.h>
int main()
{
unsigned x= -1;

int INTMAX=x /2;

int INTMIN= -INTMAX -1;

printf("INTMIN: %d\t", INTMIN);

printf("INTMAX: %d\n", INTMAX);

return 0;
}
Mar 13 '08 #28
Ioannis Vranos <ivra...@nospam.no.spamfreemail.gr> wrote:
#include <stdio.h>

int main()
{
    unsigned x= -1;
    int INTMAX=x /2;
What if UINT_MAX == INT_MAX, or UINT_MAX == 4*INT_MAX+3?
    int INTMIN= -INTMAX -1;
What if INT_MIN == -INT_MAX?
    printf("INTMIN: %d\t", INTMIN);
    printf("INTMAX: %d\n", INTMAX);
    return 0;
}
--
Peter
Mar 13 '08 #29
Ioannis Vranos said:

<snip>
>Since sizeof(N)= sizeof(signed N)= sizeof(unsigned N)

where N can be char, short, int, long

and as you mentioned they use the same amount of storage, how can
INT_MIN be equal to -INT_MAX since the range of values is the same.
Sign-and-magnitude representation.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Mar 13 '08 #30
In article <8f************@news.flash-gordon.me.uk> Flash Gordon <sp**@flash-gordon.me.uk> writes:
Micah Cowan wrote, On 12/03/08 05:55:
mu*****@gmail.com writes:
On Mar 11, 7:30 pm, Micah Cowan <mi...@cowan.name> wrote:
Flash Gordon <s...@flash-gordon.me.uk> writes:

<snip trap representation in 2s complement>
Huh. I managed to forget that somehow. My bad, Flash.

It's easy to forget. I'm not actually aware of any implementations which
make use of this freedom.
(Sign bit 1, other bits 0, is a trap representation.) Some Gould machines
did it, or was it the Modcomps? I disremember and do not have the manuals
here, but it was one of the two.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Mar 13 '08 #31
Richard Heathfield wrote:
>
Peter Nilsson said:
Ioannis Vranos <ivra...@nospam.no.spamfreemail.gr> wrote:
#include <stdio.h>

int main()
{
unsigned x= -1;
int INTMAX=x /2;
What if UINT_MAX == INT_MAX,

I don't think it can.
It can.

sizeof(int) == 2
sizeof(unsigned) == 2
CHAR_BIT == 16
INT_MAX == 0xffff
UINT_MAX == 0xffff

--
pete
Mar 13 '08 #32
pete said:
Richard Heathfield wrote:
>>
Peter Nilsson said:
Ioannis Vranos <ivra...@nospam.no.spamfreemail.gr> wrote:
#include <stdio.h>

int main()
{
unsigned x= -1;
int INTMAX=x /2;

What if UINT_MAX == INT_MAX,

I don't think it can.

It can.

sizeof(int) == 2
sizeof(unsigned) == 2
CHAR_BIT == 16
INT_MAX == 0xffff
UINT_MAX == 0xffff
"The range of nonnegative values of a signed integer type is a subrange of
the corresponding unsigned integer type, and the representation of the
same value in each type is the same."

Since, in your example, int has 16 value bits, and since there must also be
a sign bit, that makes 17 bits altogether that contribute to the value. I
could be wrong, of course, but doesn't that mean that unsigned int must
also have 17 bits that contribute to the value?

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Mar 13 '08 #33
Richard Heathfield wrote:
>
pete said:
Richard Heathfield wrote:
>
Peter Nilsson said:

Ioannis Vranos <ivra...@nospam.no.spamfreemail.gr> wrote:
#include <stdio.h>

int main()
{
unsigned x= -1;
int INTMAX=x /2;

What if UINT_MAX == INT_MAX,

I don't think it can.
It can.

sizeof(int) == 2
sizeof(unsigned) == 2
CHAR_BIT == 16
INT_MAX == 0xffff
UINT_MAX == 0xffff

"The range of nonnegative values
of a signed integer type is a subrange of
the corresponding unsigned integer type,
and the representation of the
same value in each type is the same."
That's exactly what I've shown.
Since, in your example, int has 16 value bits,
and since there must also be a sign bit,
that makes 17 bits altogether that contribute to the value. I
could be wrong, of course,
but doesn't that mean that unsigned int must
also have 17 bits that contribute to the value?
You're mixing terms.

"value bits" != "bits that contribute to the value"

N869
6.2.6.2 Integer types
[#2] For signed integer types, the bits of the object
representation shall be divided into three groups: value
bits, padding bits, and the sign bit. There need not be any
padding bits; there shall be exactly one sign bit. Each bit
that is a value bit shall have the same value as the same
bit in the object representation of the corresponding
unsigned type (if there are M value bits in the signed type
and N in the unsigned type, then M<=N).
Your claim implies that you believe that M can't equal N.

--
pete
Mar 13 '08 #34
pete said:

<snip>
(if there are M value bits in the signed type
and N in the unsigned type, then M<=N).
Your claim implies that you believe that M can't equal N.
I sit corrected. Thank you.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Mar 13 '08 #35
pete wrote:
>
You're mixing terms.

"value bits" != "bits that contribute to the value"

N869
6.2.6.2 Integer types
[#2] For signed integer types, the bits of the object
representation shall be divided into three groups: value
bits, padding bits, and the sign bit. There need not be any
padding bits; there shall be exactly one sign bit. Each bit
that is a value bit shall have the same value as the same
bit in the object representation of the corresponding
unsigned type (if there are M value bits in the signed type
and N in the unsigned type, then M<=N).
Your claim implies that you believe that M can't equal N.

What is N869? My answer was C95 based. Actually, since it is an exercise
of K&R2, it is a C90 question.

Mar 13 '08 #36
santosh wrote:
Hello all,

In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.

Can you mention the chapter of K&R2 where this exercise is?

Mar 13 '08 #37
On Mar 13, 1:53 pm, Ioannis Vranos <ivra...@nospam.no.spamfreemail.gr>
wrote:
pete wrote:
You're mixing terms.
"value bits" != "bits that contribute to the value"
N869
6.2.6.2 Integer types
[#2] For signed integer types, the bits of the object
representation shall be divided into three groups: value
bits, padding bits, and the sign bit. There need not be any
padding bits; there shall be exactly one sign bit. Each bit
that is a value bit shall have the same value as the same
bit in the object representation of the corresponding
unsigned type (if there are M value bits in the signed type
and N in the unsigned type, then M<=N).
Your claim implies that you believe that M can't equal N.

What is N869? My answer was C95 based. Actually, since it is an exercise
of K&R2, it is a C90 question.
Does C90 guarantee absence of padding bits? I didn't find
anything like that (though I didn't try hard, because this is
permitted by the C standard).

The quoted text is from C99, N869 is a C99 draft, and
the C90 text (quoted by Richard) is still true in C99.

Yevgen
Mar 13 '08 #38
On Mar 13, 12:24 pm, Ioannis Vranos
<ivra...@nospam.no.spamfreemail.gr> wrote:
santosh wrote:
Hello all,
In K&R2 one exercise asks the reader to compute and print the limits for
the basic integer types. This is trivial for unsigned types. But is it
possible for signed types without invoking undefined behaviour
triggered by overflow? Remember that the constants in limits.h cannot
be used.

Can you mention the chapter of K&R2 where this exercise is?
Exercise 2.1, page 36
Mar 13 '08 #39
ym******@gmail.com wrote:

<snip>
Does C90 guarantee absence of padding bits? I didn't find
anything like that (though I didn't try hard, because this is
permitted by the C standard).
C90 doesn't seem to mention padding bits. But is that equivalent to not
allowing them?
The quoted text is from C99, N869 is a C99 draft, and
the C90 text (quoted by Richard) is still true in C99.
Yes. A draft of C89 is available in the clc-wiki site, formerly hosted
by Dan Pop.

Mar 13 '08 #40
santosh <sa*********@gmail.com> writes:
ym******@gmail.com wrote:

<snip>
>Does C90 guarantee absence of padding bits? I didn't find
anything like that (though I didn't try hard, because this is
permitted by the C standard).

C90 doesn't seem to mention padding bits. But is that equivalent to not
allowing them?
>The quoted text is from C99, N869 is a C99 draft, and
the C90 text (quoted by Richard) is still true in C99.

Yes. A draft of C89 is available in the clc-wiki site, formerly hosted
by Dan Pop.
Take a point for gratuitous name-dropping there, Santosh ;-)
Mar 13 '08 #41
user923005 wrote:
>Can you mention the chapter of K&R2 where this exercise is?

Exercise 2.1, page 36
I do not have the English text, only a local-language translation of the
book, so I translate the exercise into English here (if anyone can post
the English version of this K&R2 exercise, that would be great):
Translated:

Exercise 2-1. Write a program to determine the value ranges of the
variable types char, short, int, and long, both for signed and the
unsigned ones, displaying the appropriate values from the standard
headers, and by direct calculation. And something more difficult, if you
attempt to calculate it: determine the value ranges of the various
floating-point types.
Mar 13 '08 #42
Ioannis Vranos said:
user923005 wrote:
>>Can you mention the chapter of K&R2 where this exercise is?

Exercise 2.1, page 36

I do not have the English text, only a local-language translation of the
book, so I translate the exercise into English here (if anyone can post
the English version of this K&R2 exercise, that would be great):
Translated:

Exercise 2-1. Write a program to determine the value ranges of the
variable types char, short, int, and long, both for signed and the
unsigned ones, displaying the appropriate values from the standard
headers, and by direct calculation. And something more difficult, if you
attempt to calculate it: determine the value ranges of the various
floating-point types.
Original:

Exercise 2-1. Write a program to determine the ranges of char, short, int,
and long, both signed and unsigned, by printing appropriate values from
standard headers and by direct computation. Harder if you compute them:
determine the ranges of the various floating-point types.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Mar 13 '08 #43
Ioannis Vranos wrote:
>
Translated:

Exercise 2-1. Write a program to determine the value ranges of the
variable types char, short, int, and long, both for signed and the
unsigned ones, displaying the appropriate values from the standard
headers, and by direct calculation. And something more difficult, if you
attempt to calculate it: determine the value ranges of the various
floating-point types.

I think, as it is phrased above, we get some flexibility. By
displaying the integer min and max values from limits.h, we can use
that information to determine how these types are implemented.

What do you think?
Mar 13 '08 #44
Ioannis Vranos wrote:
Ioannis Vranos wrote:
>>
Translated:

Exercise 2-1. Write a program to determine the value ranges of the
variable types char, short, int, and long, both for signed and the
unsigned ones, displaying the appropriate values from the standard
headers, and by direct calculation. And something more difficult, if
you attempt to calculate it: determine the value ranges of the
various floating-point types.

I think as it is spelled above we can get some flexibility. I think by
displaying the integer min and max values from limits.h, we can use
this information to determine how these types are implemented.

What do you think?
Perhaps, but that's a bit of a cheat, IMO.

In any case how do you propose to use the min. and max. values in
limits.h to work out the integer representation?

Mar 13 '08 #45
Harald van Dijk wrote:
santosh wrote:
>In K&R2 one exercise asks the reader to compute and print the
limits for the basic integer types. This is trivial for unsigned
types. But is it possible for signed types without invoking
undefined behaviour triggered by overflow? Remember that the
constants in limits.h cannot be used.

#include <stdio.h>
int main(void) {
unsigned u = -1;
int i;
while ((i = u) < 0 || i != u)
u = u >> 1;
printf("INT_MAX == %u\n", u);
}

This is not guaranteed to work in C99, where the conversion of
an out-of- range integer may raise a signal, but it's valid C90,
since the result of the conversion must be a valid int, and
therefore between INT_MIN and INT_MAX.
This CAN'T work everywhere. The u = -1 statement is legal, and
results in the value UINT_MAX. However, the first i = u statement
always overruns the INT_MAX value for i, unless the system has
INT_MAX defined to be equal to UINT_MAX. Very rare. So the result
of that statement is implementation-defined.

--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com

Mar 13 '08 #46
Ben Bacarisse wrote:
Kaz Kylheku <kk******@gmail.com> writes:
.... snip ...
>
>This means that the sign bit is quite impervious to bit
manipulation.

It must participate in other bit operations, though, like ~, &,
| and ^. Even so, I can't see any way to avoid UB when trying to
calculate the range of int. Equally, I don't have a persuasive
argument that it *can't* be done, either.
Totally unnecessary. All those integral max values are specified
in <limits.h>. That's why.

--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com

Mar 13 '08 #47
Ark Khasin wrote:
Peter Nilsson wrote:
.... snip ...
>>
Unfortunately, many implementations are somewhat inconsistent.
Consider...

#include <limits.h>
#include <stdio.h>

int main(void) {
printf("ULONG_MAX = %lu\n", ULONG_MAX);

#if -1 == -1ul
puts("-1 == -1ul [pre]");
#endif

if (-1 == -1ul)
puts("-1 == -1ul");

#if 4294967295 == -1ul
puts("4294967295 == -1ul [pre]");
#endif

if (4294967295 == -1ul)
puts("4294967295 == -1ul");
return 0;
}

The output for me using delorie gcc 4.2.1 with -ansi
-pedantic is...

ULONG_MAX = 4294967295
-1 == -1ul [pre]
-1 == -1ul
4294967295 == -1ul

As you can see, there is a discrepancy between the way
that preprocessor arithmetic is evaluated. Fact is,
gcc is not the only compiler to show problems.
What's wrong with that, remembering that (for gcc, on x86) a
long is defined to be identical to an int?

--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com

Mar 13 '08 #48
Kaz Kylheku wrote:
santosh <santosh....@gmail.com> wrote:
>>
In K&R2 one exercise asks the reader to compute and print the
limits for the basic integer types. This is trivial for unsigned
types. But is it possible for signed types without invoking
undefined behaviour triggered by overflow? Remember that the
constants in limits.h cannot be used.

You can use shifting to determine how many bits there are in the
left until it drops off. With that information, you can construct
....

No, because the moment it 'drops off' you have run into
implementation (or undefined) behaviour. You can't write portable
code to do this. You can possibly write code that executes on YOUR
machinery.

--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com

Mar 13 '08 #49
CBFalconer said:
Ben Bacarisse wrote:
>Kaz Kylheku <kk******@gmail.com> writes:
... snip ...
>>
>>This means that the sign bit is quite impervious to bit
manipulation.

It must participate in other bit operations, though, like ~, &,
| and ^. Even so,I can't see any way to avoid UB when trying to
calculate the range of int. Equally, I don't have a persuasive
argument that it *can't* be done, either.

Totally unnecessary. All those integral max values are specified
in <limits.h>. That's why.
Is it, then, your claim that doing K&R exercises is a waste of time?

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@