claude uq wrote:
#include <cstdlib>
#include <iostream>
using namespace std;

int main()
{
    long nx = 200L;
    long ny = 300L;
    long gridSize = 0L;
    long var = 0L;
    gridSize = nx*ny;
    var = gridSize*gridSize;
    cout<<gridSize<<" "<<var<<endl;
    system("pause");
    return 0;
}
When executed, the above streams out 60000 for gridSize and -694967296 for
var !!!
Tried on a few machines, still the same. What's going on here?
Hit <Logo+R>calc<Enter> to raise the MS Windows Calculator. Select
View->Scientific. Enter 60000. Hit the square button (x^2). The result
(formatted for clarity) is 3 600 000 000.
Now switch to Hex mode with the radio button "Hex". Enter 7fffffff - which
is the maximum value of a signed 32-bit long integer. Switch the mode back
to Dec. The number converts to 2 147 483 647. That's less than 3.6 billion.
Blaming MS is fun, but you overflowed the long. The system ran out of bits
to store the entire number, and simply kept the lowest 32 in a register.
The highest bit was set, so the number appears negative.
C++ code runs very lean and efficient because it passes the question of what
to do at integer overflow back to the programmer. Most of the time the
answer is "nothing", because we either count natural numbers of whole things
in user-comprehensible units, or we use 'double' to store the huge numbers.
--
Phlip