(Others have already answered pointing out other problems.)
In article <f6**********@naig.caltech.edu>
David Mathog <ma****@caltech.edu> wrote:
[casting &intvar to (long *) produces the message:
warning: dereferencing type-punned pointer will break strict-aliasing rules
]
>but ONLY if the gcc compiler is at -O2 or -O3.  I don't see any
>reason why optimization should change things much in this piece
>of code ...
Optimizers love to make assumptions. The assumptions they are
allowed to make are those defined by the language [%]. In this
case, at "higher" optimization levels, gcc wants to make use of
the rule that an object may only be accessed through:
- its name, or
- a pointer to its own (or a compatible) type, or
- a pointer to "individual bytes" (e.g., char *).
In particular, for instance, suppose we have the following code
fragment:
    float x;
    int *p;

    if (sizeof(int) != sizeof(float)) {
        printf("this program assumes sizeof(int) == sizeof(float)\n");
        exit(EXIT_FAILURE);
    }
    p = (int *)&x;
    x = 42.0;
    <do lots of work with x that leaves it nonzero>
    *p = 0;
    if (x == 0.0)
        printf("integer zero seems to be floating point zero too\n");
    else
        printf("verrry interesting! x = %g\n", x);
Now, on some machines (like the x86 for instance), it is very
helpful, for speed reasons, to keep "x" in something other than
ordinary RAM. If x can live in the FPU stack, for instance, the
compiler can use shorter and faster instructions to work with it
(in the <do lots of work> section).
But if the compiler *does* do this, then "p" points only to the
"ordinary RAM copy of x" (as opposed to the "live, useful copy of
x" inside the FPU stack). Changing *p changes only the non-live,
non-useful copy of x. When examining x after assigning to *p, the
compiler should use the live copy of x (in the FPU stack), which
-- assuming the <do lots of work> section really does leave x
nonzero -- will not be equal to 0.0, and the code fragment will
claim that the x86 makes all-zero-bits a non-zero floating point
number (which is in fact false).
The compiler is certainly *allowed* to do this, because modifying
an int (*p) is not supposed to change a float (x).
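(If you really do need to poke at the representation of a float through
an integer, the byte-copy route is the one the rules sanction.  Here is
a minimal sketch -- not part of the fragment above, and the variable
names are made up -- using memcpy, which is defined to access its
operands as plain bytes, so the optimizer cannot assume x is untouched:)

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        float x = 42.0f;
        unsigned int bits;

        if (sizeof bits != sizeof x) {
            printf("this sketch assumes sizeof(unsigned int) == sizeof(float)\n");
            return 0;
        }
        memcpy(&bits, &x, sizeof bits);  /* read x's bytes into an integer */
        bits = 0;                        /* fiddle with the integer image  */
        memcpy(&x, &bits, sizeof x);     /* write the bytes back into x    */
        printf("x is now %g\n", x);      /* 0 on machines where all-zero-bits
                                            is floating point zero */
        return 0;
    }

(gcc raises no aliasing complaint here, and in practice compiles the
memcpy calls down to ordinary moves anyway.)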
GCC's complaint:
warning: dereferencing type-punned pointer will break
strict-aliasing rules
is supposed to come out in those cases where it can detect, at
compile time, that some sort of source-level chicanery (such as
the above) could cause later optimization passes to make assumptions
that, while allowed by the Standard, will surprise some programmers.
The detector is probably imperfect, and is clearly only run at
higher optimization levels. (This is true of many of gcc's useful
warnings. In at least some cases, this is because the data structures
that allow the compiler to detect the problem it will warn about are
only built during optimization.)
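(To illustrate -- the file name here is made up, and the exact shape of
gcc's diagnostic varies by version -- compiling a file containing the
fragment above produces no complaint without optimization, but does
with it:)

    % gcc -O0 -Wall -c punned.c
    (no warning)
    % gcc -O2 -Wall -c punned.c
    punned.c:12: warning: dereferencing type-punned pointer will break
        strict-aliasing rules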
[% In some cases, compilers may have extra optimization flags that
allow them to make assumptions *not* guaranteed by the language.
In other words, the programmer can, on the compilation flags line,
"write checks that the language can't cash", to borrow a phrase.
If you are such a programmer, make sure your code, at least, *can*
cash them. :-) ]
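(A concrete pair of examples, assuming gcc and a made-up file name:
-ffast-math is exactly such a check, while -fno-strict-aliasing goes the
other direction and takes an assumption away:)

    % gcc -O2 -ffast-math prog.c
    (gcc may now rearrange floating point math in ways the Standard
    does not promise)
    % gcc -O2 -fno-strict-aliasing prog.c
    (gcc gives up the aliasing assumption described above, at some
    cost in optimization)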
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it
http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.