
I don't get how the computer arrives at 2^31

The question is related to the following lines of code:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);

printf("The value of a is: %u\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

return 0;
}

The output is:

$./pw
The value of a is: 2147483648
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8
The value of 'a' (on my machine) is 2147483648, or 2^31. Is this in any way
related to the fact that an int in this case is 32 bits?

Thanks in advance.
Chad

Dec 13 '05
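For reference, a minimal sketch that computes 2^32 exactly with integer
arithmetic instead of going through pow() and a cast; it also sidesteps
the sizeof/%d mismatch in the original by casting (C99, which guarantees
unsigned long long has at least 64 bits):

#include <stdio.h>

int main(void)
{
    /* 2^32 computed exactly as an integer; no double-to-int
     * conversion takes place, so no overflow issue arises. */
    unsigned long long p = 1ULL << 32;
    printf("2^32 = %llu\n", p);

    /* sizeof yields a size_t, so %d is not the matching conversion;
     * cast to a known type (or use %zu in C99). */
    printf("sizeof(int) = %u\n", (unsigned)sizeof(int));
    return 0;
}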
Tim Rentsch wrote:
Flash Gordon <sp**@flash-gordon.me.uk> writes:
Jordan Abel wrote: [snip]
regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?

I would argue that it is undefined because in the specific case of
printf it is explicitly stated that if the type is different it is
undefined. After all, printf might not be implemented in C and so might
make assumptions that code written in C is not allowed to make. Although
I can't think how it could manage to break this.


The Standard doesn't say that if the type is different then it's
undefined; what it does say is that if the argument is not of the
correct type then it's undefined. Absent an explicit indication to
the contrary, the most sensible interpretation of "the correct type"
would (IMO) be "the correct type after taking into account the rules
for function argument transmission".


I agree that is a reasonable position.
Of course, other interpretations
are possible; I just don't find any evidence to support the theory
that any other interpretation is what the Standard intends.
I'm just an awkward sod sometimes ;-)
And, as I said in another response, the best way to get an
authoritative statement on the matter is to submit a Defect
Report.


I'm not bothered enough by this one.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Dec 14 '05 #51
In comp.std.c Tim Rentsch <tx*@alumnus.caltech.edu> wrote:

The Standard doesn't say that if the type is different then it's
undefined; what it does say is that if the argument is not of the
correct type then it's undefined. Absent an explicit indication to
the contrary, the most sensible interpretation of "the correct type"
would (IMO) be "the correct type after taking into account the rules
for function argument transmission".


Indeed. The difficulty of specifying the rules precisely is why the
committee weaseled out and used the fuzzy term "correct" instead of more
explicit language.

-Larry Jones

At times like these, all Mom can think of is how long she was in
labor with me. -- Calvin
Dec 14 '05 #52
On 2005-12-14, ku****@wizard.net <ku****@wizard.net> wrote:
No. printf() isn't required to use va_arg(). It must use something that
is binary-compatible with it, to permit separate compilation of code
that accesses printf() only indirectly, through a pointer. However,
whatever method it uses might have additional capabilities beyond those
defined by the standard for <stdarg.h>, capabilities not available to
strictly conforming user code.


However, it would be reasonable to think that the compatibility between
signed and unsigned integers where they have the same value is a
required part of the binary interface of variadic functions.
Dec 14 '05 #53
Chad wrote:

Okay, maybe I'm going a bit off topic here, but, I think I'm
missing it. When I go something like:

#include <stdio.h>
#include <math.h>

int main(void) {
int i = 0;
double sum = 0;
Why initialize these, when the initial values will never be used?

for (i = 0; i <= 30; i++) {
sum = pow(2.0, i) + sum;
}
printf("The value of c is: %0.0f\n",sum);
return 0;
}

The output is:
$./pw
The value of c is: 2147483647 (not 2147483648).

The way I understood this was that for 32 bits, pow(2.0, 31.0)
would look something like the following:

1111 1111 1111 1111 1111 1111 1111 1111

The first bit would be signed. This means that the value should
be the sum of:
1*2^0 + 1*2^1 + ... + 1*2^30

Why is the value off by one?


Because a double is not an integral object. It expresses an
approximation to a value, and the printf format has truncated the
value. Just change the format to "%f\n" to see the difference.

--
Read about the Sony stealthware that is a security leak, phones
home, and is generally illegal in most parts of the world. Also
the apparent connivance of the various security software firms.
http://www.schneier.com/blog/archive...drm_rootk.html
Dec 15 '05 #54
"Chuck F. " <cb********@yahoo.com> writes:
Chad wrote:
>
Okay, maybe I'm going a bit off topic here, but, I think I'm
missing it. When I go something like:
#include <stdio.h>
#include <math.h>
int main(void) {
int i = 0;
double sum = 0;


Why initialize these, when the initial values will never be used?
for (i = 0; i <= 30; i++) {
sum = pow(2.0, i) + sum;
}
printf("The value of c is: %0.0f\n",sum);
return 0;
}


These? The initial value of i isn't used; the initial value of sum is.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 15 '05 #55
"Jordan Abel" <jm****@purdue.edu> wrote in message
news:sl*******************@random.yi.org...
However, it would be reasonable to think that the compatibility between
signed and unsigned integers where they have the same value is a
required part of the binary interface of variadic functions.


It would be unreasonable to think that that wasn't the intention, because
footnote 31 clearly says it was; but would it be reasonable to deny that the
normative text lacks clear words that state that requirement, either
directly or by mentioning va_arg(), either in the description of the
function call operator or in the description of fprintf()? Is it really
reasonable to believe that it's clear enough that a requirement explicitly
stated in the description of one interface (the argument to printf() must
have the correct type) is overridden by a promise in the description of a
different interface (it's OK for the argument to va_arg() to have a slightly
different type), even though the two descriptions don't have any references
to each other?

Think about a subset of C defined by what remains from C99 after removing
footnote 31, all the contents of <stdarg.h>, and the few functions from
<stdio.h> that take a va_list argument. This would significantly reduce the
usefulness of variadic functions defined in programs; but would it change
the semantics of printf()? Do you think it would still be reasonable to
believe that this modified C required printf() to tolerate mixing signed
with unsigned?
Dec 15 '05 #56
Keith Thompson <ks***@mib.org> writes:
Tim Rentsch <tx*@alumnus.caltech.edu> writes:
"Old Wolf" <ol*****@inspire.net.nz> writes:
Chad wrote:
> The question is related to the following lines of code:
>
> #include <stdio.h>
> #include <math.h>
>
> int main(void) {
>
> int a = (int)pow(2.0 ,32.0);

Undefined behaviour -- the return value from pow() is greater
than INT_MAX.


You mean implementation defined, not undefined. ("Implementation
defined" could mean raising an implementation defined signal in
this case, but still implementation defined.)


No, it's undefined.

C99 6.3.1.4p1:

When a finite value of real floating type is converted to an
integer type other than _Bool, the fractional part is discarded
(i.e., the value is truncated toward zero). If the value of the
integral part cannot be represented by the integer type, the
behavior is undefined.


You're absolutely right. Thank you for the correction.
Dec 15 '05 #57
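A minimal sketch of the kind of range check that avoids that undefined
conversion (checked_to_int is a made-up helper; it assumes double can
represent INT_MIN and INT_MAX exactly, as it can with IEEE doubles and
a 32-bit int):

#include <limits.h>
#include <math.h>
#include <stdio.h>

/* Convert a double to int only when the truncated value is in range;
 * otherwise report failure instead of invoking undefined behaviour. */
static int checked_to_int(double d, int *out)
{
    if (d >= (double)INT_MIN && d <= (double)INT_MAX) {
        *out = (int)d;      /* fractional part discarded, 6.3.1.4p1 */
        return 1;
    }
    return 0;               /* out of range: the cast would be UB */
}

int main(void)
{
    int a;
    if (checked_to_int(pow(2.0, 32.0), &a))
        printf("a = %d\n", a);
    else
        puts("pow(2.0, 32.0) does not fit in an int");
    return 0;
}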
Chuck F. said:
Chad wrote:

int main(void) {
int i = 0;
double sum = 0;


Why initialize these, when the initial values will never be used?


I have read Keith's comment, but I'll address the question as if I had not
noticed it. I, personally, give objects a known, determinate initial value
when defining them because I think it makes a program easier to debug.
Twice now I've let indeterminate values screw up a production environment
under conditions that didn't occur in testing (which is a good indication
that neither the programming nor the testing were up to scratch). Twice is
twice too many. I'm not going to let that happen again.

And now add in Keith's comment. Since the value of sum given above /was/
used, to remove it arbitrarily (as some people may well have been tempted
to do if maintaining the code) after a brief perusal of the code had
*apparently* indicated that it was not used would have introduced a bug
that may well not have been spotted in testing.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Dec 15 '05 #58
Keith Thompson wrote:
"Chuck F. " <cb********@yahoo.com> writes:
Chad wrote:

Okay, maybe I'm going a bit off topic here, but, I think I'm
missing it. When I go something like:
#include <stdio.h>
#include <math.h>
int main(void) {
int i = 0;
double sum = 0;


Why initialize these, when the initial values will never be used?
for (i = 0; i <= 30; i++) {
sum = pow(2.0, i) + sum;
}
printf("The value of c is: %0.0f\n",sum);
return 0;
}


These? The initial value of i isn't used; the initial value of
sum is.


Mea Culpa. However I consider the proper initialization point to be
in the for statement, i.e. "for (i = 0, sum = 0; ...)". In other
words, as close as possible to the point of use.

--
Read about the Sony stealthware that is a security leak, phones
home, and is generally illegal in most parts of the world. Also
the apparent connivance of the various security software firms.
http://www.schneier.com/blog/archive...drm_rootk.html
Dec 15 '05 #59
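Spelled out as a complete program, that style puts both initializations
in the for statement itself:

#include <math.h>
#include <stdio.h>

int main(void)
{
    int i;
    double sum;

    /* Both loop variables initialized in the for statement,
     * at the point of use. */
    for (i = 0, sum = 0.0; i <= 30; i++)
        sum += pow(2.0, i);

    printf("The value of c is: %0.0f\n", sum);
    return 0;
}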
Chuck F. wrote:
Chad wrote:
>
Okay, maybe I'm going a bit off topic here, but, I think I'm
missing it. When I go something like:

#include <stdio.h>
#include <math.h>

int main(void) {
int i = 0;
double sum = 0;


Why initialize these, when the initial values will never be used?


The initial value of 'i' isn't used, but the initial value of 'sum'
certainly is.

I personally wouldn't initialize 'i', but some people argue that doing
so is a safety measure. In my personal experience, this "safety"
measure frequently prevents the symptoms of incorrectly written code
from being serious enough to be noticed, which in my opinion is a bad
thing. If my code uses the value of an object before that object has
been given the value it was supposed to have at that point, I'd greatly
prefer it if the value it uses is one that's likely to make my program
fail in an easily noticeable way. 0 is often not such a value. An
indeterminate one is more likely to produce noticeable symptoms. A
well-chosen specific initializer could be even better, except for the
fact that it gives the incorrect impression that the initializing value
was intended to be used. This can be fixed by adding in a comment:

int i = INT_MAX; /* intended to guarantee problems if the program
                    incorrectly uses this value */

But I prefer the simplicity of:

int i;
for (i = 0; i <= 30; i++) {
sum = pow(2.0, i) + sum;
}
printf("The value of c is: %0.0f\n",sum);
return 0;
}

The output is:
$./pw
The value of c is: 2147483647 (not 2147483648).

The way I understood this was that for 32 bits, pow(2.0, 31.0)
would look something like the following:

1111 1111 1111 1111 1111 1111 1111 1111

The first bit would be signed. This means that the value should
be the sum of:
1*2^0 + 1*2^1 + ... + 1*2^30

Why is the value off by one?


Because a double is not an integral object. It expresses an
approximation to a value, and the printf format has truncated the
value. Just change the format to "%f\n" to see the difference.


Did you try that? I think you'll be surprised by the results. Just to
make things clearer, you might try using long double, and a LOT of
extra digits after the decimal point. If you've got a fully conforming
C99 implementation, it would be even clearer if you write it out in
hexadecimal floating point format.
Hint: it's not the program that's giving the wrong value for the sum of
this series.

Dec 15 '05 #60
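The hint is easy to check with a C99 sketch (%a prints hexadecimal
floating point, so exactness is visible directly). 2^31 - 1 needs only
31 significand bits, well within a typical 53-bit double, so the
program prints the true sum, not a truncated approximation:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    for (int i = 0; i <= 30; i++)
        sum += pow(2.0, i);

    printf("%f\n", sum);   /* 2147483647.000000 -- exact */
    printf("%a\n", sum);   /* e.g. 0x1.fffffffcp+30 */
    return 0;
}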
"Chuck F. " <cb********@yahoo.com> writes:
Keith Thompson wrote:

[snip]
These? The initial value of i isn't used; the initial value of
sum is.


Mea Culpa. However I consider the proper initialization point is in
the for statement, i.e. "for (i = 0, sum = 0; ...)". In other words
as close as possible to the point of use.


I don't know that I'd want to squeeze the initializations of both i
and sum into the for loop, but it's certainly an option.

In C99, you can declare i at its point of first use:

for (int i = 0; i <= 30; i ++) { ... }

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 15 '05 #61
Jordan Abel wrote:
On 2005-12-14, Wojtek Lerch <Wo******@yahoo.ca> wrote:
4#2 If a "shall" or "shall not" requirement that appears outside of a
constraint is violated, the behavior is undefined. Undefined behavior is
otherwise indicated in this International Standard by the words "undefined
behavior" or by the omission of any explicit definition of behavior. There
is no difference in emphasis among these three; they all describe "behavior
that is undefined".

I thought the claim I was disputing was that there could be undefined
behavior without the standard making any explicit statement, not that
the explicit statement could be worded some particular other way.


The words that Wojtek just cited clearly state that one form
of undefined behavior occurs when the standard says nothing
at all about the behavior.

We tried to explicitly call out cases of undefined behavior
that are important to know about.
The standard doesn't define the effects of the phase of the moon on the
program - does that mean running a program while the moon is full is
undefined? How about the first quarter?


The behavior of the moon is undefined according to the C
standard. Fortunately it is not something that a C
programmer or implementor needs to know.
Dec 15 '05 #62
Jordan Abel wrote:
As it happens, a positive signed int is permitted in general for
variadic functions that take an unsigned int [same for an unsigned <
INT_MAX for a signed] - The reason I added comp.std.c was for the
question of whether this same exception would apply to printf.


Yes, the interface semantics for printf are the same as for
any other variadic function.
Dec 15 '05 #63
"Douglas A. Gwyn" <DA****@null.net> wrote in message
news:43***************@null.net...
Jordan Abel wrote:
As it happens, a positive signed int is permitted in general for
variadic functions that take an unsigned int [same for an unsigned <
INT_MAX for a signed] - The reason I added comp.std.c was for the
question of whether this same exception would apply to printf.


Yes, the interface semantics for printf are the same as for
any other variadic function.


No, the semantics for any call to printf() are defined by the specifications
of the printf() function and the function call operator. The specification
of printf() refers to the promoted values and types of the arguments of the
function call. It does not talk about the type named in any invocation of
the va_arg() macro or refer to va_arg() in any other way. Even if the
entire contents of <stdarg.h> were removed from the standard, the
description of printf() would still make sense and be useful. And I don't
see any reason to doubt that it would define the same semantics.

The semantics for any non-standard variadic function are defined by various
parts of the standard, depending on details of the C code that defines the
function. If a variadic function completely ignores its variadic arguments,
programs are free to use any types of arguments in calls to that function;
but that doesn't imply that the same freedom applies to calls to printf(),
does it?

If some other variadic function uses va_arg() to fetch the values of the
arguments, it's the program's responsibility to ensure that the requirements
of va_arg() are satisfied. In particular, va_arg() requires that the type
named by its second argument must be compatible with the promoted type of
the corresponding argument of the function call, or one must be the signed
version of the other, or one must be a pointer to void and the other a
pointer to a character type. That's a *restriction* that the standard
places on programs that use va_arg(). It specifically refers to va_arg().
It is *not* a general promise that it's always OK to mix signed with
unsigned or void pointers with char pointers where the standard says that
some two types must be compatible, such as in the description of printf().

Dec 15 '05 #64
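For illustration, a sketch of a user-defined variadic function relying
on the va_arg() latitude being discussed (sum_u is a made-up example):

#include <stdarg.h>
#include <stdio.h>

/* Sums n variadic arguments, fetching each as unsigned int.  Under
 * 7.15.1.1p2, passing a signed int instead is permitted as long as
 * its value is representable in both types (i.e. non-negative). */
static unsigned sum_u(int n, ...)
{
    va_list ap;
    unsigned total = 0;

    va_start(ap, n);
    while (n-- > 0)
        total += va_arg(ap, unsigned);
    va_end(ap);
    return total;
}

int main(void)
{
    int pos = 42;                       /* signed, but non-negative */
    printf("%u\n", sum_u(2, 7u, pos)); /* prints 49 */
    /* sum_u(1, -1) would be undefined: -1 is not representable as
     * an unsigned int value. */
    return 0;
}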
Wojtek Lerch wrote:
... It does not talk about the type named in any invocation of
the va_arg() macro or refer to va_arg() in any other way.
I didn't say that it did.
Signed and unsigned varieties of an integer type are punnable
(for nonnegative values) in the context of function arguments,
and a variadic function cannot determine which type was actually
used in the invocation.
... If a variadic function completely ignores its variadic arguments,
programs are free to use any types of arguments in calls to that function;
but that doesn't imply that the same freedom applies to calls to printf(),
does it?
Yes, printf can be supplied with unused arguments, and it
often is (not always intentionally).
It is *not* a general promise that it's always OK to mix signed with
unsigned or void pointers with char pointers where the standard says that
some two types must be compatible, such as in the description of printf().


I don't recall the fprintf spec requiring compatible types.
Dec 16 '05 #65
"Douglas A. Gwyn" <DA****@null.net> wrote in message
news:43***************@null.net...
Wojtek Lerch wrote:
... It does not talk about the type named in any invocation of
the va_arg() macro or refer to va_arg() in any other way.
I didn't say that it did.
Signed and unsigned varieties of an integer type are punnable
(for nonnegative values) in the context of function arguments,


They're intended to, according to footnote 31; but AFAIK, there's no
normative text that actually makes that promise; is there?
and a variadic function cannot determine which type was actually
used in the invocation.


But of course it can, on any implementation that has a suitable extension to
allow it. C doesn't require printf() to be implemented as strictly
conforming C, does it?
... If a variadic function completely ignores its variadic arguments,
programs are free to use any types of arguments in calls to that
function;
but that doesn't imply that the same freedom applies to calls to
printf(),
does it?


Yes, printf can be supplied with unused arguments, and it
often is (not always intentionally).


I was talking about the *complete* freedom to pass whatever you want to the
variadic arguments, regardless of the values you pass to the non-variadic
arguments. Some variadic functions allow that, but printf() does not.
It is *not* a general promise that it's always OK to mix signed with
unsigned or void pointers with char pointers where the standard says that
some two types must be compatible, such as in the description of
printf().


I don't recall the fprintf spec requiring compatible types.


No, it requires "the correct type" (7.19.6.1#9); my mistake (I copied
"compatible" from va_arg()). That sounds even more restrictive, doesn't it?
Or do you mean that it's clear enough that "the correct type" is just a
shorter way of saying "one of the correct types, as explained in the
description of va_arg(), assuming that the type specified by 7.19.6.1#7 and
8 is used as the second argument to va_arg()"?
Dec 16 '05 #66
Douglas A. Gwyn wrote:
Wojtek Lerch wrote:
... It does not talk about the type named in any invocation of
the va_arg() macro or refer to va_arg() in any other way.


I didn't say that it did.
Signed and unsigned varieties of an integer type are punnable
(for nonnegative values) in the context of function arguments,
and a variadic function cannot determine which type was actually
used in the invocation.


It can't determine the type by using only those mechanisms which are
defined by the standard; however, a conforming implementation can
provide as an extension additional functionality that does allow such
determination, and it's legal for printf() to be implemented in a way
that makes use of such an extension.

Until such time as the committee chooses to replace "are meant to
imply" with "guarantees", and moves the text into the normative portion
of the standard, a conforming implementation can have different types
that are not interchangeable, despite the fact that they are required
to (and do) have the same representation and same alignment. Such an
implementation would be contrary to the explicitly expressed intent of
the committee, but that doesn't prevent it from being conforming, since
the actual requirements of the standard don't implement that intent.

Dec 16 '05 #67
Wojtek Lerch wrote:
... C doesn't require printf() to be implemented as strictly
conforming C, does it?
However, the specification is based on C function semantics,
and the prototype with the ,...) notation is even part of the
spec, so we know what the linkage interface has to be like.
I was talking about the *complete* freedom to pass whatever you want to the
variadic arguments, regardless of the values you pass to the non-variadic
arguments. Some variadic functions allow that, but printf() does not.
That doesn't seem relevant. Obviously any specific function
has some restriction on its arguments, based on the definition
for the function. In the case of printf the arguments have to
match up well enough with the format so that the proper values
can be fetched for formatting.
No, it requires "the correct type" ...
That sounds even more restrictive, doesn't it?


What is "correct" has to be determined in other ways.
Dec 17 '05 #68
ku****@wizard.net wrote:
... Such an implementation would be contrary to the explicitly
expressed intent of the committee, but that doesn't prevent it
from being conforming, since the actual requirements of the
standard don't implement that intent.


That's a spuriously legalistic notion. The C Standard uses a
variety of methods to convey the intent, including examples and
explanatory footnotes. It is evident from this thread that the
actual requirement is not certain (for some readers) from the
normative text alone, but can be clarified by referring to the
footnote that explains what the normative text means.
Dec 17 '05 #69
"Douglas A. Gwyn" <DA****@null.net> wrote in message
news:43***************@null.net...
Wojtek Lerch wrote:
... C doesn't require printf() to be implemented as strictly
conforming C, does it?
However, the specification is based on C function semantics,
and the prototype with the ,...) notation is even part of the
spec, so we know what the linkage interface has to be like.


The "linkage interface"? What is that, in standardese? What exactly does
the standard says about it? How would it be affected if <stdarg.h> were
removed from the standard? Is it really something that the standard
requires to exist, or is it merely a mechanism that compilers commonly use
to implement the required semantics?

Neither the specification of printf() nor the description of function
semantics has references to a "linkage interface". The standard defines
semantics of printf() in terms of the promoted type of the argument. It
says, for instance, that the %u format requires an argument with type
unsigned int, and that if the argument doesn't have the "correct" type, the
behaviour is undefined. There's no hint anywhere that there might actually
be two different correct types for the %u format. There's no hint anywhere
that it's the description of va_arg() that defines what the set of "correct"
types is. There's no hint anywhere that removing <stdarg.h> from the
language could possibly affect the set of "correct" argument types for %u.

Or are all those hints actually there, and I just managed to miss them?
I was talking about the *complete* freedom to pass whatever you want to
the
variadic arguments, regardless of the values you pass to the non-variadic
arguments. Some variadic functions allow that, but printf() does not.


That doesn't seem relevant. Obviously any specific function
has some restriction on its arguments, based on the definition
for the function.


It was just an illustration of the simple fact that the restrictions on
the arguments depend on how the semantics of the function are defined.
If the function doesn't use va_arg() or anything else to get the values,
there's no restriction whatsoever. If the function uses
va_arg(ap,unsigned), the restriction is as described for va_arg(): the
argument must be an unsigned int or a non-negative signed int, or else the
behaviour is undefined. If the function is printf() and the format
specifier is %u, the restriction is as described for printf(): the argument
should be an unsigned int and if it doesn't have the correct type, the
behaviour is undefined.
In the case of printf the arguments have to
match up well enough with the format so that the proper values
can be fetched for formatting.


Yes, but how well is well enough? The standard doesn't say that they're
fetched via va_arg() (or "as if" via va_arg()), only that they must have
"the correct type" (notice the singular -- it doesn't say "one of the
correct types"), and names one type for each format specifier. It doesn't
say anything like "a type that has the same representation and alignment
requirements as the specified type", either.
No, it requires "the correct type" ...
That sounds even more restrictive, doesn't it?


What is "correct" has to be determined in other ways.


Other than by looking it up in the description of printf()?
Dec 17 '05 #70
"Douglas A. Gwyn" <DA****@null.net> wrote in message
news:43***************@null.net...
ku****@wizard.net wrote:
... Such an implementation would be contrary to the explicitly
expressed intent of the committee, but that doesn't prevent it
from being conforming, since the actual requirements of the
standard don't implement that intent.


That's a spuriously legalistic notion. The C Standard uses a
variety of methods to convey the intent, including examples and
explanatory footnotes. It is evident from this thread that the
actual requirement is not certain (for some readers) from the
normative text alone, but can be clarified by referring to the
footnote that explains what the normative text means.


I know of two places in the normative text that describe situations where a
signed type and the corresponding unsigned type are interchangeable as
arguments to functions, and those two places are quite clear already:
6.5.2.2p6 (calls to a function defined without a prototype) and 7.15.1.1p2
(va_arg()). If there are supposed to be more such situations, then I'm
afraid the footnote itself needs to be clarified. In particular, if the
only difference between two function types T1 and T2 is in the signedness of
parameters, was the intent that the two types are compatible, despite of
what 6.7.5.3p15 says? If not, which ones of the following were intended to
apply, if any:

- it's OK to use an expression with type T1 to call a function that was
defined as T2, even though 6.5.2.2p6 says it's undefined behaviour?

- it's OK to declare the function as T1 in one translation unit and define
as T2 in another translation unit, even though 6.2.7p1 says it's undefined
behaviour?

- it's OK to define the function as T1 and then as T2 in *the same*
translation unit, even though 6.7p4 says it's a constraint violation?

What about interchangeability as return values from a function? I haven't
found any normative text that implies this kind of interchangeability; which
of the above three situations are meant to apply if T1 and T2 have different
return types?
Dec 17 '05 #71
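To make the second of those situations concrete, a sketch of two
translation units whose function types differ only in the signedness of
a parameter (file names hypothetical):

/* tu1.c: defines f with an unsigned parameter */
void f(unsigned u) { (void)u; }

/* tu2.c: declares f with the signed variant.  6.2.7p1 makes the call
 * undefined behaviour, even though footnote 31 suggests the
 * non-negative argument 42 was meant to be transmittable either way. */
void f(int);

int main(void)
{
    f(42);
    return 0;
}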
Chad wrote:
Now 'a' is zero! Ahhhhhhhhhhhhh........ I'm even more confused.


You seem to be having a hard time matching the types and the formatters...

#include <stdio.h>
#include <math.h>

int main(void)
{

unsigned long a = (unsigned long) pow (2 , 32);
double b = pow(2 , 32);

printf("The value of a is: %lu\n", a);
printf("The value of b is: %0.0f\n", b);
printf("The value of int is: %u\n", (unsigned) sizeof(int));
printf("The value of double is: %u\n", (unsigned) sizeof(double));

return 0;
}

(Windows XP/Mingw)
The value of a is: 4294967295
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8

--
A+

Emmanuel Delahaye
Dec 17 '05 #72
Another thing I just realized is not quite clear: the exact consequences
of the fact that even though the standard guarantees that every signed
representation of a non-negative value is a valid representation of the
same value in the corresponding unsigned type, it doesn't work the other
way around. There may exist other unsigned representations of the same
value that do not represent the same value in the signed type but instead
either are trap representations or represent a negative value. Except for
va_arg(), are there any other situations where the standard guarantees
that storing a value through an unsigned type and then reading it back
through the corresponding signed type produces the original value?
Dec 18 '05 #73
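Concretely, the question is whether a round trip like this sketch is
guaranteed to work:

#include <stdio.h>

int main(void)
{
    /* Store through the unsigned member, read back through the signed
     * one.  6.2.6.2p5 guarantees the reverse direction (a non-negative
     * signed representation is a valid unsigned one); this direction
     * is the one questioned above. */
    union { unsigned u; int i; } pun;

    pun.u = 123;
    printf("%d\n", pun.i);  /* common in practice, but is it required? */
    return 0;
}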
"Wojtek Lerch" <Wo******@yahoo.ca> wrote in message
news:40*************@individual.net...
Another thing I just realized is not quite clear: the exact consequences
of the fact that even though the standard guarantees that every signed
representation of a non-negative value is a valid representation of the
same value in the corresponding unsigned type, it doesn't work the other
way around. There may exist other unsigned representations of the same
value that do not represent the same value in the signed type but instead
either are trap representations or represent a negative value.
To continue this conversation with myself, the above is what 6.2.6.2p5 seems
to imply; on the other hand, 6.2.5p9 simply states that "the representation
of the same value in each type is the same", and has the footnote attached
to it that explains that that's meant to imply interchangeability. I don't
suppose that's meant to override the implication of the much more specific
6.2.6.2p5 and guarantee that the rule works both ways, is it?
Except for va_arg(), are there any other situations where the standard
guarantees that storing a value through an unsigned type and then reading
it back through the corresponding signed type produces the original
value?

Dec 18 '05 #74
Wojtek Lerch wrote:

"Wojtek Lerch" <Wo******@yahoo.ca> wrote in message
news:40*************@individual.net...
Another thing I just realized is not quite clear: the exact consequences
of the fact that even though the standard guarantees that every signed
representation of a non-negative value is a valid representation of the
same value in the corresponding unsigned type, it doesn't work the other
way around. There may exist other unsigned representations of the same
value that do not represent the same value in the signed type but instead
either are trap representations or represent a negative value.


To continue this conversation with myself, the above is what 6.2.6.2p5
seems to imply; on the other hand, 6.2.5p9 simply states that "the
representation of the same value in each type is the same", and has the
footnote attached to it that explains that that's meant to imply
interchangeability.


That would have to put constraints on the values of padding bits then,
wouldn't it?

In a case where the type unsigned had no padding bits and also two more
value bits than type int,

CHAR_BIT == 17
UINT_MAX == 131071
INT_MAX == 32767

reading an object of type int with a %u specifier would interpret the
padding bit of the int type object as an unsigned value bit.

In a case where int and unsigned had the same number of value bits,
reading an object of type unsigned with a %d specifier would interpret
a padding bit as the sign bit.

--
pete
Dec 18 '05 #75
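Whether a given implementation has such padding bits can be checked
with a short sketch that counts value bits:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Count the value bits of unsigned int by halving its maximum;
     * any difference from CHAR_BIT * sizeof(unsigned) is padding. */
    unsigned max = UINT_MAX;
    int value_bits = 0;

    while (max != 0) {
        value_bits++;
        max >>= 1;
    }
    printf("unsigned int: %d value bits, %u storage bits\n",
           value_bits, (unsigned)(CHAR_BIT * sizeof(unsigned)));
    return 0;
}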
On Tue, 13 Dec 2005 23:20:53 +0000 (UTC), Jordan Abel
<jm****@purdue.edu> wrote:
On 2005-12-13, Old Wolf <ol*****@inspire.net.nz> wrote:
int a /* = INT_MIN */;
printf("The value of a is: %u\n",a);


Undefined behaviour -- %u is for unsigned ints.


it's not clear that it's undefined - an unsigned int and a signed int
have the same size, and it's not clear that int can have any valid
representations that are trap representations for unsigned.

All magnitude bits in signed must be the same, but the sign bit need
not be an additional (high) magnitude bit for unsigned; it can be a
padding bit, and that padding bit set may be a trap representation.

This is why the unprototyped and vararg rules allow for corresponding
signed and unsigned integer types if (and only if) "the value is
representable in both types". (Plus technically it isn't clear the
vararg rules apply to the variadic routines in the standard library;
nothing explicitly says they use va_* or as-if, although presumably
that's the sensible thing for an implementor to do.)

- David.Thompson1 at worldnet.att.net
Dec 19 '05 #76
"pete" <pf*****@mindspring.com> wrote in message
news:43**********@mindspring.com...
Wojtek Lerch wrote:
To continue this conversation with myself,
the above is what 6.2.6.2p5 seems
to imply; on the other hand,
6.2.5p9 simply states that "the representation
of the same value in each type is the same",
and has the footnote attached
to it that explains that that's meant to imply interchangeability.
That would have to put constraints on the values
of padding bits then, wouldn't it?


Well yes, 6.2.6.2p5 says that very clearly. Any signed representation of a
non-negative value must represent the same value in the corresponding
unsigned type. If the signed type has padding bits that correspond to value
bits in the unsigned type, those padding bits must be set to zero in all
valid representations of non-negative values.

[...] In a case where int and unsigned had the same number
of value bits,
reading an object of type unsigned with a %d specifier
would interpret a padding bit as the sign bit.


That's the situation I'm concerned about. From 6.2.6.2p5 it seems that it's
OK for the padding bit to be ignored. But 6.2.5p9 may be interpreted as
implying that the representation with the padding bit set to one must be
treated as a trap representation, because otherwise you would have a bit
pattern that represents a value in the range of both types when read through
the unsigned type, but a negative value when read through the signed type.
Dec 19 '05 #77
On 2005-12-17, Wojtek Lerch <Wo******@yahoo.ca> wrote:
The "linkage interface"? What is that, in standardese?


Stage 8 of translation.
Dec 20 '05 #78
"Jordan Abel" <jm****@purdue.edu> wrote in message
news:sl******************@random.yi.org...
On 2005-12-17, Wojtek Lerch <Wo******@yahoo.ca> wrote:
The "linkage interface"? What is that, in standardese?


Stage 8 of translation.


A stage of translation is an interface that implies how printf() must be
implemented? Somehow I doubt that's what he meant; but could you elaborate?
Dec 20 '05 #79
