Bytes IT Community

I don't get how the computer arrives at 2^31

The question is related to the following lines of code:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);

printf("The value of a is: %u\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

return 0;
}

The output is:

$./pw
The value of a is: 2147483648
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8
The value of 'a' (on my machine) is 2147483648 or 2^31. Is this in any
way related to the fact that an int in this case is 32 bits?

Thanks in advance.
Chad

Dec 13 '05 #1
78 Replies


Chad wrote:
The question is related to the following lines of code:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);

printf("The value of a is: %u\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

return 0;
}

The output is:

$./pw
The value of a is: 2147483648
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8
The value of 'a' (on my machine) is 2147483648 or 2^31. Is this in any
way related to the fact that an int in this case is 32 bits?


Try unsigned int. Unless this is simple test code, for numbers of the
size you are handling I'd recommend long long.

Dec 13 '05 #2


sl*******@yahoo.com wrote:
Chad wrote:
The question is related to the following lines of code:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);

printf("The value of a is: %u\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

return 0;
}

The output is:

$./pw
The value of a is: 2147483648
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8
The value of 'a' (on my machine) is 2147483648 or 2^31. Is this in any
way related to the fact that an int in this case is 32 bits?


Try unsigned int. Unless this is a simple test code, for the sizes the
numbers you are handling I'd recommend long long.


Okay, I tried this. I also forgot to use %lu vs %u.

#include <stdio.h>
#include <math.h>

int main(void) {

unsigned int a = (unsigned int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);

printf("The value of a is: %lu\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

return 0;
}

$./pw
The value of a is: 0
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8

Now 'a' is zero! Ahhhhhhhhhhhhh........ I'm even more confused.

Chad

Dec 13 '05 #3

It is happening because the MSB of 'a' is used for the sign bit, so
"4294967296" will be stored only in the remaining bits of 'a'. Hence
the truncation to "2147483648"; i.e. right-shifting 4294967296 once
gives "2147483648".

Dec 13 '05 #4

On 2005-12-13, Chad <cd*****@gmail.com> wrote:

sl*******@yahoo.com wrote:
Chad wrote:
> The question is related to the following lines of code:
>
> #include <stdio.h>
> #include <math.h>
>
> int main(void) {
>
> int a = (int)pow(2.0 ,32.0);
> double b = pow(2.0 , 32.0);
>
> printf("The value of a is: %u\n",a);
> printf("The value of b is: %0.0f\n",b);
> printf("The value of int is: %d\n", sizeof(int));
> printf("The value of double is: %d\n", sizeof(double));
>
> return 0;
> }
>
> The output is:
>
> $./pw
> The value of a is: 2147483648
> The value of b is: 4294967296
> The value of int is: 4
> The value of double is: 8
>
>
> The value of 'a' (on my machine) is 2147483648 or 2^31. Is this in any
> way related to the fact that an int in this case is 32 bits?
>
Try unsigned int. Unless this is a simple test code, for the sizes the
numbers you are handling I'd recommend long long.


Okay, I tried this. I also forgot to use %lu vs %u.


%lu is for an unsigned long, not for an unsigned int - it should be %u
Now 'a' is zero! Ahhhhhhhhhhhhh........ I'm even more confused.


it's an overflow - neither is big enough to contain 2^32. The value,
when it was an int, was actually -2^31.
Dec 13 '05 #5

The value you got for a is (2^32)/2.
This is because 1 int = 4 bytes = 4*8 = 32 bits.

2^32 = 4294967296. Since a is declared as a signed integer, the range
is split into two halves (negative and positive),
i.e. 4294967296/2, or (2^32)/2 = 2^31.

Dec 13 '05 #6

This is because you used %lu whereas you declared a as only unsigned
int, not unsigned long int.

Dec 13 '05 #7

Chad wrote:
sl*******@yahoo.com wrote:
Chad wrote:
The question is related to the following lines of code:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);

printf("The value of a is: %u\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

return 0;
}

The output is:

$./pw
The value of a is: 2147483648
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8
The value of 'a' (on my machine) is 2147483648 or 2^31. Is this in any
way related to the fact that an int in this case is 32 bits?


Try unsigned int. Unless this is a simple test code, for the sizes the
numbers you are handling I'd recommend long long.


Okay, I tried this. I also forgot to use %lu vs %u.

#include <stdio.h>
#include <math.h>

int main(void) {

unsigned int a = (unsigned int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);

printf("The value of a is: %lu\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

return 0;
}

$./pw
The value of a is: 0
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8

Now 'a' is zero! Ahhhhhhhhhhhhh........ I'm even more confused.


Ahh... that just means that on your platform an int is 32 bits:

4294967296 = (binary) 100000000000000000000000000000000
which is 33 bits. So when your computer does its calculation, it will
only store 32 of those 33 bits, resulting in:
(binary, first 1 chopped off) 00000000000000000000000000000000
which is zero.

The largest 32bit number is (2^32)-1. Of course, this number can't be
calculated on a 32bit CPU (unless using long long of course) since the
2^32 part will simply result in a zero.

Another interesting experiment to see if your CPU uses 2s complement
arithmetic:

unsigned int a = (int) -1;
printf("The value of a is: %u\n",a);

Anyway, all this is probably a little OT. And my code above is not
strictly portable. The best advice if you really want to be handling
large numbers is to use long long which will give you at least 64 bits.

Dec 13 '05 #8

sl*******@yahoo.com wrote:
Chad wrote:
sl*******@yahoo.com wrote:
Chad wrote:
> The question is related to the following lines of code:
> <snip>
> The output is:
>
> $./pw
> The value of a is: 2147483648
> The value of b is: 4294967296
> The value of int is: 4
> The value of double is: 8
>
> The value of 'a' (on my machine) is 2147483648 or 2^31. Is this in any
> way related to the fact that an int in this case is 32 bits?

Try unsigned int. Unless this is a simple test code, for the sizes the
numbers you are handling I'd recommend long long.


Okay, I tried this. I also forgot to use %lu vs %u.
<snip>
$./pw
The value of a is: 0
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8

Now 'a' is zero! Ahhhhhhhhhhhhh........ I'm even more confused.


Ahh... that just means that on your platform an int is 32 bits:

4294967296 = (binary) 100000000000000000000000000000000
which is 33 bits. So when your computer does its calculation, it will
only store 32 of those 33 bits, resulting in:
(binary, first 1 chopped off) 00000000000000000000000000000000
which is zero.

The largest 32bit number is (2^32)-1. Of course, this number can't be
calculated on a 32bit CPU (unless using long long of course) since the
2^32 part will simply result in a zero.

Another interesting experiment to see if your CPU uses 2s complement
arithmetic:

unsigned int a = (int) -1;
printf("The value of a is: %u\n",a);


Forgot to mention, if you get (2^32)-1 which is 4294967295 then your
CPU uses 2's complement math. If instead you get 2^31 + 1, which is
2147483649, then your CPU uses a simple sign-and-magnitude integer.

Dec 13 '05 #9

> Ahh... that just means that on your platform an int is 32 bits:

4294967296 = (binary) 100000000000000000000000000000000
which is 33 bits. So when your computer does its calculation, it will
only store 32 of those 33 bits, resulting in:
(binary, first 1 chopped off) 00000000000000000000000000000000
which is zero.

The largest 32bit number is (2^32)-1. Of course, this number can't be
calculated on a 32bit CPU (unless using long long of course) since the
2^32 part will simply result in a zero.

Another interesting experiment to see if your CPU uses 2s complement
arithmetic:

unsigned int a = (int) -1;
printf("The value of a is: %u\n",a);

Anyway, all this is probably a little OT. And my code above is not
strictly portable. The best advice if you really want to be handling
large numbers is to use long long which will give you at least 64 bits.


Speaking of long long, I tried this:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);
long long c = 4294967296;

printf("The value of a is: %u\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

printf("The value of c is: %llu\n",c>>1);
}

The output is:
$gcc pw.c -o pw -lm
pw.c: In function `main':
pw.c:8: warning: integer constant is too large for "long" type
$./pw
The value of a is: 2147483648
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8
The value of c is: 2147483648
$
Interesting.

Chad

Dec 13 '05 #10


Chad wrote:
Ahh... that just means that on your platform an int is 32 bits:

4294967296 = (binary) 100000000000000000000000000000000
which is 33 bits. So when your computer does its calculation, it will
only store 32 of those 33 bits, resulting in:
(binary, first 1 chopped off) 00000000000000000000000000000000
which is zero.

The largest 32bit number is (2^32)-1. Of course, this number can't be
calculated on a 32bit CPU (unless using long long of course) since the
2^32 part will simply result in a zero.

Another interesting experiment to see if your CPU uses 2s complement
arithmetic:

unsigned int a = (int) -1;
printf("The value of a is: %u\n",a);

Anyway, all this is probably a little OT. And my code above is not
strictly portable. The best advice if you really want to be handling
large numbers is to use long long which will give you at least 64 bits.


Speaking of long long, I tried this:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);
long long c = 4294967296;

printf("The value of a is: %u\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

printf("The value of c is: %llu\n",c>>1);
}

The output is:
$gcc pw.c -o pw -lm
pw.c: In function `main':
pw.c:8: warning: integer constant is too large for "long" type
$./pw
The value of a is: 2147483648
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8
The value of c is: 2147483648
$
Interesting.

Chad


Before I forget: in regards to using %lu vs %u, I need a day or two to
let it sink in.

Chad

Dec 13 '05 #11

Chad wrote:
The question is related to the following lines of code:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
Undefined behaviour -- the return value from pow() is greater
than INT_MAX .
printf("The value of a is: %u\n",a);
Undefined behaviour -- %u is for unsigned ints.
Also, %llu or %Lu is for unsigned long longs
(in another message you used it for a signed long long).
The value of a is: 2147483648


That isn't even a valid int value.

Dec 13 '05 #12

On 2005-12-13, Old Wolf <ol*****@inspire.net.nz> wrote:
int a /* = INT_MIN */;
printf("The value of a is: %u\n",a);
Undefined behaviour -- %u is for unsigned ints.


it's not clear that it's undefined - an unsigned int and a signed int
have the same size, and it's not clear that int can have any valid
representations that are trap representations for unsigned.
Also, %llu
yes.
or %Lu
no.
is for unsigned long longs

Dec 13 '05 #13

Jordan Abel wrote:

On 2005-12-13, Old Wolf <ol*****@inspire.net.nz> wrote:
int a /* = INT_MIN */;
printf("The value of a is: %u\n",a);


Undefined behaviour -- %u is for unsigned ints.


it's not clear that it's undefined -


It's undefined because the standard says that
%u is for unsigned ints.
You don't need to be able to think of a failure mechanism
for undefined code to be undefined.
Undefined behavior makes learning the language simpler.

For the case of
i = 0;
i = i++;
it makes sense to me that the final value of i
could be either 1 or 2,
but that code is not unspecified, it's undefined.

--
pete
Dec 14 '05 #14

Jordan Abel wrote:
On 2005-12-13, Old Wolf <ol*****@inspire.net.nz> wrote:
int a /* = INT_MIN */;
printf("The value of a is: %u\n",a);

Undefined behaviour -- %u is for unsigned ints.


it's not clear that it's undefined - an unsigned int and a signed int
have the same size, and it's not clear that int can have any valid
representations that are trap representations for unsigned.

No, this is pretty clear. The type of the argument required for a "%u"
format specifier is "unsigned int". From 7.9.16.1:

"If any argument is not the correct type for the corresponding conversion
specification, the behavior is undefined."

While, for the reasons you mention, most if not all platforms will treat
this as expected, the standard does not explicitly allow it. "int" is not
the correct type.

S.
Dec 14 '05 #15

On 2005-12-13, pete <pf*****@mindspring.com> wrote:
Jordan Abel wrote:

On 2005-12-13, Old Wolf <ol*****@inspire.net.nz> wrote:
int a /* = INT_MIN */;
printf("The value of a is: %u\n",a);

Undefined behaviour -- %u is for unsigned ints.
it's not clear that it's undefined -


It's undefined because the standard says that %u is for unsigned ints.


Things are only undefined because the standard says that they are
undefined, not for any other reason. As it happens, this is in fact the
case, 7.19.6.1p9 "If any argument is not the correct type for the
corresponding conversion specification, the behavior is undefined".

However, if we assume that printf is a variadic function implemented
along the lines of the stdarg.h macros, we see 7.15.1.1p2 "...except for
the following cases: - one type is a signed integer type, the other type
is the corresponding unsigned integer type, and the value is
representable in both types" - In this case, the value is not, but "%u
is for unsigned ints" as a blanket statement would seem to be incorrect.
comp.std.c added - do the signed/unsigned exception, and the char*/void*
one, to va_arg type rules also apply to printf?

Incidentally, my copy of some c89 draft says that %u takes an int and
converts it to unsigned. c99 changes this to take an unsigned int. There
are several possibilities: Perhaps they screwed up (but the c99
rationale does not comment on this), or they thought the difference was
insignificant enough not to matter.
You don't need to be able to think of a failure mechanism for
undefined code to be undefined. Undefined behavior makes learning the
language simpler.

Dec 14 '05 #16

Jordan Abel <jm****@purdue.edu> writes:
Things are only undefined because the standard says that they are
undefined, not for any other reason.


That is not true. Here is the definition of undefined behavior:

1 undefined behavior
behavior, upon use of a nonportable or erroneous program
construct or of erroneous data, for which this International
Standard imposes no requirements

Anything that the Standard does not define is undefined.
--
Bite me! said C.
Dec 14 '05 #17

On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:
Jordan Abel <jm****@purdue.edu> writes:
Things are only undefined because the standard says that they are
undefined, not for any other reason.


That is not true. Here is the definition of undefined behavior:

1 undefined behavior
behavior, upon use of a nonportable or erroneous program
construct or of erroneous data, for which this International
Standard imposes no requirements

Anything that the Standard does not define is undefined.


Then why does it go to such pains to declare things to be "undefined
behavior"? the word undefined appears 182 times in c99, and there are
191 points in J.2 [yes, i counted them. by hand. these things should
really be numbered - at least they contain pointers to the sections they
refer to.]

Name one instance of undefined behavior that is not explicitly declared
undefined by the standard?

Alternative hypothesis: the definition you quoted is meant to explain
that when they [explicitly] say something is undefined, that _means_ no
requirements can be inferred from other sections to apply to that
behavior.
Dec 14 '05 #18

Jordan Abel <jm****@purdue.edu> writes:
On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:
Anything that the Standard does not define is undefined.
Then why does it go to such pains to declare things to be "undefined
behavior"? the word undefined appears 182 times in c99, and there are
191 points in J.2 [yes, i counted them. by hand. these things should
really be numbered - at least they contain pointers to the sections they
refer to.]


In my opinion, it is important that we have clearly delineated
areas of doubt and uncertainty, where possible.
Name one instance of undefined behavior that is not explicitly declared
undefined by the standard?

Alternative hypothesis: the definition you quoted is meant to explain
that when they [explicitly] say something is undefined, that _means_ no
requirements can be inferred from other sections to apply to that
behavior.


There is an easy answer to this question. There are committee
members in comp.std.c. They can answer "yea" or "nay", should
they deign. Based on historical discussion in comp.lang.c, I
have one opinion. You have another.
--
"What is appropriate for the master is not appropriate for the novice.
You must understand the Tao before transcending structure."
--The Tao of Programming
Dec 14 '05 #19

Jordan Abel wrote:
Incidentally, my copy of some c89 draft says that %u takes an int and
converts it to unsigned. c99 changes this to take an unsigned int.


ISO/IEC 9899: 1990

7.9.6.1 The fprintf function

o, u, x, X The unsigned int argument is converted to

--
pete
Dec 14 '05 #20

"Jordan Abel" <jm****@purdue.edu> wrote in message
news:sl*******************@random.yi.org...
On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:

Anything that the Standard does not define is undefined.


Then why does it go to such pains to declare things to be "undefined
behavior"? the word undefined appears 182 times in c99, and there are
191 points in J.2 [yes, i counted them. by hand. these things should
really be numbered - at least they contain pointers to the sections they
refer to.]


4#2 If a "shall" or "shall not" requirement that appears outside of a
constraint is violated, the behavior is undefined. Undefined behavior is
otherwise indicated in this International Standard by the words "undefined
behavior" or by the omission of any explicit definition of behavior. There
is no difference in emphasis among these three; they all describe "behavior
that is undefined".
Dec 14 '05 #21

On 2005-12-14, Wojtek Lerch <Wo******@yahoo.ca> wrote:
"Jordan Abel" <jm****@purdue.edu> wrote in message
news:sl*******************@random.yi.org...
On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:

Anything that the Standard does not define is undefined.


Then why does it go to such pains to declare things to be "undefined
behavior"? the word undefined appears 182 times in c99, and there are
191 points in J.2 [yes, i counted them. by hand. these things should
really be numbered - at least they contain pointers to the sections they
refer to.]


4#2 If a "shall" or "shall not" requirement that appears outside of a
constraint is violated, the behavior is undefined. Undefined behavior is
otherwise indicated in this International Standard by the words "undefined
behavior" or by the omission of any explicit definition of behavior. There
is no difference in emphasis among these three; they all describe "behavior
that is undefined".


I thought the claim i was disputing was that there could be undefined
behavior without the standard making any explicit statement, not that
the explicit statement could be worded some particular other way.

The standard doesn't define the effects of the phase of the moon on the
program - does that mean running a program while the moon is full is
undefined? how about the first quarter?
Dec 14 '05 #22

Jordan Abel <jm****@purdue.edu> writes:
On 2005-12-14, Wojtek Lerch <Wo******@yahoo.ca> wrote:
4#2 If a "shall" or "shall not" requirement that appears outside of a
constraint is violated, the behavior is undefined. Undefined behavior is
otherwise indicated in this International Standard by the words "undefined
behavior" or by the omission of any explicit definition of behavior. There
is no difference in emphasis among these three; they all describe "behavior
that is undefined".

Ah, there's the paragraph I was thinking of, but couldn't locate.
I thought the claim i was disputing was that there could be undefined
behavior without the standard making any explicit statement, not that
the explicit statement could be worded some particular other way.


"...or by the omission of any explicit definition of behavior"
does not say that anything not defined is undefined behavior? If
not, then I believe that we are at an impasse--I interpret that
phrase one way, and you do another.
--
"The fact that there is a holy war doesn't mean that one of the sides
doesn't suck - usually both do..."
--Alexander Viro
Dec 14 '05 #23

On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:
Jordan Abel <jm****@purdue.edu> writes:
On 2005-12-14, Wojtek Lerch <Wo******@yahoo.ca> wrote:
4#2 If a "shall" or "shall not" requirement that appears outside of a
constraint is violated, the behavior is undefined. Undefined behavior is
otherwise indicated in this International Standard by the words "undefined
behavior" or by the omission of any explicit definition of behavior. There
is no difference in emphasis among these three; they all describe "behavior
that is undefined".

Ah, there's the paragraph I was thinking of, but couldn't locate.
I thought the claim i was disputing was that there could be undefined
behavior without the standard making any explicit statement, not that
the explicit statement could be worded some particular other way.


"...or by the omission of any explicit definition of behavior"
does not say that anything not defined is undefined behavior?


there's no explicit definition of what effect the phase of the moon has
on programs [which you not only did not reply to, but snipped.]

As it happens, a positive signed int is permitted in general for
variadic functions that take an unsigned int [same for an unsigned <
INT_MAX for a signed] - The reason i added comp.std.c, was for the
question of whether this same exception would apply to printf.
If not, then I believe that we are at an impasse--I interpret that
phrase one way, and you do another.

Dec 14 '05 #24


"Jordan Abel" <jm****@purdue.edu> wrote in message
news:sl*******************@random.yi.org...
On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:
"...or by the omission of any explicit definition of behavior"
does not say that anything not defined is undefined behavior?


there's no explicit definition of what effect the phase of the moon has
on programs [which you not only did not reply to, but snipped.]


If the behaviour of a program or construct is explicitly defined by the
standard, then there's no omission of an explicit definition of behaviour,
even if the definition doesn't mention the phase of the moon.
Dec 14 '05 #25

Jordan Abel <jm****@purdue.edu> writes:
On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:
Jordan Abel <jm****@purdue.edu> writes:
"...or by the omission of any explicit definition of behavior"
does not say that anything not defined is undefined behavior?


there's no explicit definition of what effect the phase of the moon has
on programs [which you not only did not reply to, but snipped.]


The standard isn't defining the moon. The behavior of the moon
is indeed undefined by the standard. That doesn't mean that the
behavior of an implementation is dependent on the phase of the
moon.
--
"I should killfile you where you stand, worthless human." --Kaz
Dec 14 '05 #26

On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:
Jordan Abel <jm****@purdue.edu> writes:
On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:
Jordan Abel <jm****@purdue.edu> writes:
"...or by the omission of any explicit definition of behavior"
does not say that anything not defined is undefined behavior?


there's no explicit definition of what effect the phase of the moon has
on programs [which you not only did not reply to, but snipped.]


The standard isn't defining the moon. The behavior of the moon
is indeed undefined by the standard. That doesn't mean that the
behavior of an implementation is dependent on the phase of the
moon.


But it's not not permitted to be.

regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?
Dec 14 '05 #27

Ben Pfaff wrote:
.... snip ...
In my opinion, it is important that we have clearly delineated
areas of doubt and uncertainty, where possible.


That belongs in someone's sig file. You omitted fear.

--
Read about the Sony stealthware that is a security leak, phones
home, and is generally illegal in most parts of the world. Also
the apparent connivance of the various security software firms.
http://www.schneier.com/blog/archive...drm_rootk.html
Dec 14 '05 #28

Chuck F. said:
Ben Pfaff wrote:

... snip ...

In my opinion, it is important that we have clearly delineated
areas of doubt and uncertainty, where possible.


That belongs in someone's sig file. You omitted fear.


The original is "rigidly defined areas of doubt and uncertainty", and is
from the "Hitch-hikers' Guide to the Galaxy", which was first broadcast in
1976 IIRC.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Dec 14 '05 #29

Richard Heathfield <in*****@invalid.invalid> writes:
Chuck F. said:
Ben Pfaff wrote:
In my opinion, it is important that we have clearly delineated
areas of doubt and uncertainty, where possible.


That belongs in someone's sig file. You omitted fear.


The original is "rigidly defined areas of doubt and uncertainty", and is
from the "Hitch-hikers' Guide to the Galaxy", which was first broadcast in
1976 IIRC.


I'm glad that *someone* is paying attention.
--
int main(void){char p[]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuv wxyz.\
\n",*q="kl BIcNBFr.NKEzjwCIxNJC";int i=sizeof p/2;char *strchr();int putchar(\
);while(*q){i+=strchr(p,*q++)-p;if(i>=(int)sizeof p)i-=sizeof p-1;putchar(p[i]\
);}return 0;}
Dec 14 '05 #30

"Old Wolf" <ol*****@inspire.net.nz> writes:
Chad wrote:
The question is related to the following lines of code:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);


Undefined behaviour -- the return value from pow() is greater
than INT_MAX .


You mean implementation defined, not undefined. ("Implementation
defined" could mean raising an implementation-defined signal in
this case, but it is still implementation defined.)

Dec 14 '05 #31

Skarmander <in*****@dontmailme.com> writes:
Jordan Abel wrote:
On 2005-12-13, Old Wolf <ol*****@inspire.net.nz> wrote:
int a /* = INT_MIN */;
printf("The value of a is: %u\n",a);
Undefined behaviour -- %u is for unsigned ints.


it's not clear that it's undefined - an unsigned int and a signed int
have the same size, and it's not clear that int can have any valid
representations that are trap representations for unsigned.

No, this is pretty clear. The type of the argument required for a "%u"
format specifier is "unsigned int". From 7.9.16.1:

"If any argument is not the correct type for the corresponding conversion
specification, the behavior is undefined."

While, for the reasons you mention, most if not all platforms will treat
this as expected, the standard does not explicitly allow it. "int" is not
the correct type.


Presumably you meant 7.19.6.1.

Reading the rules for va_arg in 7.15.1.1, it seems clear that the
Standard intends that an int argument should work for an unsigned int
specifier, if the argument value is representable as an unsigned int.
The way the va_arg rules work make an int argument be "the correct type"
in this case (again, assuming the value is representable as an unsigned
int).

Dec 14 '05 #32

Okay, maybe I'm going a bit off topic here, but I think I'm missing
it. When I do something like:

#include <stdio.h>
#include <math.h>

int main(void) {
int i = 0;
double sum = 0;

for (i = 0; i <= 30; i++) {
sum = pow(2.0, i) + sum;
}

printf("The value of c is: %0.0f\n",sum);

return 0;

}

The output is:
$./pw
The value of c is: 2147483647 (not 2147483648).

The way I understood this was that for 32 bits, pow(2.0, 31.0) would
look something like the following:

1111 1111 1111 1111 1111 1111 1111 1111

The first bit would be signed. This means that the value should be the
sum of:
1*2^0 + 1*2^1..... 1*3^30

Why is the value off by one?

Chad

Dec 14 '05 #33


Chad wrote:
Okay, maybe I'm going a bit off topic here, but I think I'm missing
it. When I do something like:

#include <stdio.h>
#include <math.h>

int main(void) {
int i = 0;
double sum = 0;

for (i = 0; i <= 30; i++) {
sum = pow(2.0, i) + sum;
}

printf("The value of c is: %0.0f\n",sum);

return 0;

}

The output is:
$./pw
The value of c is: 2147483647 (not 2147483648).

The way I understood this was that for 32 bits, pow(2.0, 31.0) would
look something like the following:

1111 1111 1111 1111 1111 1111 1111 1111

The first bit would be signed. This means that the value should be the
sum of:
1*2^0 + 1*2^1..... 1*3^30

Why is the value off by one?

Chad


I mean sum of 1*2^0 + 1*2^1..... 1*2^30.

Chad

Dec 14 '05 #34


P: n/a
Jordan Abel wrote:
On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:
Jordan Abel <jm****@purdue.edu> writes:
On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:
Jordan Abel <jm****@purdue.edu> writes:
"...or by the omission of any explicit definition of behavior"
does not say that anything not defined is undefined behavior?
there's no explicit definition of what effect the phase of the moon has
on programs [which you not only did not reply to, but snipped.]
The standard isn't defining the moon. The behavior of the moon
is indeed undefined by the standard. That doesn't mean that the
behavior of an implementation is dependent on the phase of the
moon.


But it's not not permitted to be.


It can indeed depend on the phase of the moon where the standard has not
otherwise defined the behaviour. For example, undefined behaviour may
only make daemons fly out of your nose if the moon is not full, and make
werewolves attack you instead on the full moon.

Equally, all calls to printf could fail on the new moon, because the
standard does not disallow it.

Equally it could change the order of evaluation of parameters depending
on the phase of the moon.

However, the number of bits in a char can't change depending on the
phase of the moon, nor can sizeof(int) nor any of the other items
explicitly defined by the standard or which the standard requires that
an implementation document.
regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?


I would argue that it is undefined because in the specific case of
printf it is explicitly stated that if the type is different it is
undefined. After all, printf might not be implemented in C and so might
make assumptions that code written in C is not allowed to make. Although
I can't think how it could manage to break this.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Dec 14 '05 #36

P: n/a
On Tue, 13 Dec 2005 23:42:18 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
Richard Heathfield <in*****@invalid.invalid> writes:
Chuck F. said:
Ben Pfaff wrote:
In my opinion, it is important that we have clearly delineated
areas of doubt and uncertainty, where possible.

That belongs in someones sig files. You omitted fear.


The original is "rigidly defined areas of doubt and uncertainty", and is
from the "Hitch-hikers' Guide to the Galaxy", which was first broadcast in
1976 IIRC.


I'm glad that *someone* is paying attention.


Though fear would add to our weapons... I'll come in again.

Dec 14 '05 #37

P: n/a
Jordan Abel wrote:
....
Things are only undefined because the standard says that they are
undefined, not for any other reason. ...
As it happens, the standard explicitly states that when there is no
explicit definition of the behavior, it is undefined. There is no part
of the standard anywhere that specifies the behavior of the printf()
family of functions when there is a mis-match between the format code
and the actual promoted type of the arguments.
... As it happens, this is in fact the
case, 7.19.6.1p9 "If any argument is not the correct type for the
corresponding conversion specification, the behavior is undefined".

However, if we assume that printf is a variadic function implemented
along the lines of the stdarg.h macros, we see 7.15.1.1p2 "...except for
the following cases: - one type is a signed integer type, the other type
is the corresponding unsigned integer type, and the value is
representable in both types" - In this case, the value is not, but "%u
is for unsigned ints" as a blanket statement would seem to be incorrect.
comp.std.c added - do the signed/unsigned exception, and the char*/void*
one, to va_arg type rules also apply to printf?
The standard doesn't require the use of the <stdarg.h> macros. Whatever
method is used must be interface compatible with those macros,
otherwise suitably cast function pointers couldn't be used to invoke
printf(). However, since the standard specifies that the behavior for
printf() is undefined in this case, that allows it do something other
than, or in addition to, using stdarg.h macros. For instance, the
implementation might provide as an extension some way of querying what
the actual type an argument was, even though the standard provides no
method of doing so. If printf() is implemented using that extension, it
can identify the type mis-match, and because the behavior is explicitly
undefined when there is such a mismatch, it's permitted to do whatever
the implementors want it to do; the most plausible choice would be a
run-time diagnostic sent to stderr; though assert() and abort() are
other reasonable options.
Incidentally, my copy of some c89 draft says that %u takes an int and
converts it to unsigned. c99 changes this to take an unsigned int. There
are several possibilities: Perhaps they screwed up (but the c99
rationale does not comment on this), or they thought the difference was
insignificant enough not to matter.


I think that it was felt that the difference was important, and
desirable, but not worthy of an explicit mention in the Rationale. Do
you think the C99 specification is undesirable?

Dec 14 '05 #38

P: n/a
Jordan Abel wrote:
On 2005-12-14, Wojtek Lerch <Wo******@yahoo.ca> wrote:
....
4#2 If a "shall" or "shall not" requirement that appears outside of a
constraint is violated, the behavior is undefined. Undefined behavior is
otherwise indicated in this International Standard by the words "undefined
behavior" or by the omission of any explicit definition of behavior. There
is no difference in emphasis among these three; they all describe "behavior
that is undefined".


I thought the claim i was disputing was that there could be undefined
behavior without the standard making any explicit statement, not that
the explicit statement could be worded some particular other way.


Well, the "particular other way" being referred to in this case is "by
the omission of any explicit definition of behavior". That seems to
fit "without the standard making any explicit statement", as far as I
can see.
The standard doesn't define the effects of the phase of the moon on the
program - does that mean running a program while the moon is full is
undefined? how about the first quarter?


The behavior of some C programs is defined by the standard, regardless
of the phase of the moon, so they must have that behavior. However, it
is indeed permitted for the phase of the moon to affect any aspect of
the behavior that is NOT specified by the standard. For instance, it's
permitted to affect the speed with which computations are carried out.
Whenever the behavior is implementation-defined, it is permissible for
the implementation to define the behavior as depending upon the phase
of the moon.

The behavior of printf() is defined only for those cases where the
types match. The standard nowhere defines what they do when there's a
mismatch.

Dec 14 '05 #39

P: n/a
"Jordan Abel" <jm****@purdue.edu> wrote in message
news:sl*******************@random.yi.org...
regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?


There was a discussion about that here in comp.std.c a while ago. The
bottom line is, I think, that it's pretty clear that the intention was to
allow it, but there's no agreement about whether the normative text actually
says it:

http://groups.google.com/group/comp....6d6f482bc2a6fa
Dec 14 '05 #40

P: n/a

Jordan Abel wrote:
On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:
Jordan Abel <jm****@purdue.edu> writes:
On 2005-12-14, Ben Pfaff <bl*@cs.stanford.edu> wrote:
Jordan Abel <jm****@purdue.edu> writes:
"...or by the omission of any explicit definition of behavior"
does not say that anything not defined is undefined behavior?

there's no explicit definition of what effect the phase of the moon has
on programs [which you not only did not reply to, but snipped.]
The standard isn't defining the moon. The behavior of the moon
is indeed undefined by the standard. That doesn't mean that the
behavior of an implementation is dependent on the phase of the
moon.


But it's not not permitted to be.


Actually, it is. The standard nowhere specifies how fast a program must
execute; therefore it's permissible for a program's processing speed to
depend upon the phase of the moon. The order of evaluation of f() and
g() in the expression f()+g() is unspecified; therefore an implementation
is allowed to use a different order depending upon the phase of the moon.
regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?


No. printf() isn't required to use va_arg(). It must use something that
is binary-compatible with it, to permit separate compilation of code
that accesses printf() only indirectly, through a pointer. However,
whatever method it uses might have additional capabilities beyond those
defined by the standard for <stdarg.h>, capabilities not available to
strictly conforming user code.

Dec 14 '05 #41

P: n/a
Tim Rentsch wrote:
Skarmander <in*****@dontmailme.com> writes:
Jordan Abel wrote:
On 2005-12-13, Old Wolf <ol*****@inspire.net.nz> wrote:
int a /* = INT_MIN */;
> printf("The value of a is: %u\n",a);
Undefined behaviour -- %u is for unsigned ints.
it's not clear that it's undefined - an unsigned int and a signed int
have the same size, and it's not clear that int can have any valid
representations that are trap representations for unsigned.
No, this is pretty clear. The type of the argument required for a "%u"
format specifier is "unsigned int". From 7.9.16.1:

"If any argument is not the correct type for the corresponding conversion
specification, the behavior is undefined."

While, for the reasons you mention, most if not all platforms will treat
this as expected, the standard does not explicitly allow it. "int" is not
the correct type.


Presumably you meant 7.19.6.1.

Yes. Neat little shift on the 1 there.
Reading the rules for va_arg in 7.15.1.1, it seems clear that the
Standard intends that an int argument should work for an unsigned int
specifier, if the argument value is representable as an unsigned int.
The way the va_arg rules work make an int argument be "the correct type"
in this case (again, assuming the value is representable as an unsigned
int).

That's a very reasonable interpretation, though the standard should arguably
be clarified at this point with a footnote if the intent is to treat
printf() as "just another va_arg-using function" in this regard.

S.
Dec 14 '05 #42

P: n/a
Jordan Abel <jm****@purdue.edu> writes:

[snip]

regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?


I believe that's the most sensible interpretation, but to get
an authoritative answer rather than just statements of opinion
probably the best thing to do is submit a Defect Report.
Dec 14 '05 #43

P: n/a
Flash Gordon <sp**@flash-gordon.me.uk> writes:
Jordan Abel wrote:

[snip]
regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?


I would argue that it is undefined because in the specific case of
printf it is explicitly stated that if the type is different it is
undefined. After all, printf might not be implemented in C and so might
make assumptions that code written in C is not allowed to make. Although
I can't think how it could manage to break this.


The Standard doesn't say that if the type is different then it's
undefined; what it does say is that if the argument is not of the
correct type then it's undefined. Absent an explicit indication to
the contrary, the most sensible interpretation of "the correct type"
would (IMO) be "the correct type after taking into account the rules
for function argument transmission". Of course, other interpretations
are possible; I just don't find any evidence to support the theory
that any other interpretation is what the Standard intends.

And, as I said in another response, the best way to get an
authoritative statement on the matter is to submit a Defect
Report.
Dec 14 '05 #44

P: n/a
Skarmander <in*****@dontmailme.com> writes:
Tim Rentsch wrote:
Skarmander <in*****@dontmailme.com> writes:
Jordan Abel wrote:
On 2005-12-13, Old Wolf <ol*****@inspire.net.nz> wrote:
int a /* = INT_MIN */;
>> printf("The value of a is: %u\n",a);
> Undefined behaviour -- %u is for unsigned ints.
it's not clear that it's undefined - an unsigned int and a signed int
have the same size, and it's not clear that int can have any valid
representations that are trap representations for unsigned.

No, this is pretty clear. The type of the argument required for a "%u"
format specifier is "unsigned int". From 7.9.16.1:

"If any argument is not the correct type for the corresponding conversion
specification, the behavior is undefined."

While, for the reasons you mention, most if not all platforms will treat
this as expected, the standard does not explicitly allow it. "int" is not
the correct type.


Presumably you meant 7.19.6.1.

Yes. Neat little shift on the 1 there.
Reading the rules for va_arg in 7.15.1.1, it seems clear that the
Standard intends that an int argument should work for an unsigned int
specifier, if the argument value is representable as an unsigned int.
The way the va_arg rules work make an int argument be "the correct type"
in this case (again, assuming the value is representable as an unsigned
int).

That's a very reasonable interpretation, though the standard should arguably
be clarified at this point with a footnote if the intent is to treat
printf() as "just another va_arg-using function" in this regard.


Yes, I agree the wording in the Standard needs clarifying here.
Dec 14 '05 #45

P: n/a
ku****@wizard.net writes:

[snip]
The behavior of printf() is defined only for those cases where the
types match. The standard nowhere defines what they do when there's a
mismatch.


It doesn't say "where the types match", it says when an argument is
not of the correct type. Since the phrase "of the correct type" isn't
given any specific definition, the most sensible interpretation is
"the correct type after taking into account other rules for function
argument transmission". There isn't any evidence to support your
theory that the Standard intends anything else here. There is,
however, evidence to support the theory that it intends int's to
be usable as unsigned int's (obviously provided that the argument
values are suitable).

Dec 14 '05 #46

P: n/a
In news:sl*******************@random.yi.org, Jordan Abel va escriure:
191 points in J.2 [yes, i counted them. by hand. these things should
really be numbered]


What a great idea! This way, every time a technical corrigendum introduces a
newly explicit case of undefined behaviour, since it would presumably have to
be inserted in the proper place in the J.2 list, the technical corrigendum
would have to re-list the whole rest of annex J.2 with the new numbers.
As a result, those numbers would instantly become useless (since it would be
a great pain to track changing numbers).

If you need a numerical scheme to designate them, just use the
subclause & paragraph numbers (chapters and verses), as everybody does.
If it is just to help you count them, and you are not proficient in
the use of sed/awk over Nxxxx.txt, just ask Larry; he is.
Antoine

Dec 14 '05 #47

P: n/a
Chad wrote:
Speaking of long long, I tried this:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);
long long c = 4294967296;

printf("The value of a is: %u\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

printf("The value of c is: %llu\n",c>>1);
}

The output is:
$gcc pw.c -o pw -lm
pw.c: In function `main':
pw.c:8: warning: integer constant is too large for "long" type
$./pw
The value of a is: 2147483648
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8
The value of c is: 2147483648
$


try long long c = 4294967296LL;

Dec 14 '05 #48

P: n/a
Tim Rentsch <tx*@alumnus.caltech.edu> writes:
"Old Wolf" <ol*****@inspire.net.nz> writes:
Chad wrote:
> The question is related to the following lines of code:
>
> #include <stdio.h>
> #include <math.h>
>
> int main(void) {
>
> int a = (int)pow(2.0 ,32.0);


Undefined behaviour -- the return value from pow() is greater
than INT_MAX .


You mean implementation defined, not undefined. ("Implementation
defined" could mean raising an implementation defined signal in
this case, but still implmentation defined.)


No, it's undefined.

C99 6.3.1.4p1:

When a finite value of real floating type is converted to an
integer type other than _Bool, the fractional part is discarded
(i.e., the value is truncated toward zero). If the value of the
integral part cannot be represented by the integer type, the
behavior is undefined.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 14 '05 #49

P: n/a
"Chad" <cd*****@gmail.com> writes:
Okay, maybe I'm going a bit off topic here, but, I think I'm missing
it. When I go something like:

#include <stdio.h>
#include <math.h>

int main(void) {
int i = 0;
double sum = 0;

for (i = 0; i <= 30; i++) {
sum = pow(2.0, i) + sum;
}

printf("The value of c is: %0.0f\n",sum);

return 0;

}

The output is:
$./pw
The value of c is: 2147483647 (not 2147483648).

The way I understood this was that for 32 bits, pow(2.0, 31.0) would
look something like the following:

1111 1111 1111 1111 1111 1111 1111 1111
No. The pow() function returns a result of type double; specifically,
it's 2147483648.0. In the code above, nothing is converted to any
32-bit integer type; it's all double, so it doesn't make much sense to
talk about the binary representation.
The first bit would be signed. This means that the value should be the
sum of:
1*2^0 + 1*2^1..... 1*3^30

Why is the value off by one?


The sign bit doesn't enter into this. Floating-point types do
typically have a sign bit, but all the values you're dealing with here
are representable, so all the computed values will match the
mathematical results.

You're computing

1.0 + 2.0 + 4.0 + 8.0 + ... + 1073741824.0

The result is 2147483647.0.

This might be a little clearer if you use a "%0.1f" format. The
"%0.0f" format makes the numbers look like integers; "%0.1f" makes it
clear that they're floating-point.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 14 '05 #50
