small numerical differences in floating point result between wintel and Sun/SPARC

JS
We have the same floating point intensive C++ program that runs on
Windows on Intel chip and on Sun Solaris on SPARC chips. The program
reads exactly the same input files on the two platforms. However,
they generate slightly different results for floating point numbers.

Are they really supposed to generate exactly the same results? I
guess so because both platforms are supposed to be IEEE floating point
standard (754?) compliant. I have turned on the Visual C++ compile
flags which will make sure the Windows produce standard compliant code
(the /Op flags). However, they still produce different results. I
suspect that this may be due to a commercial mathematical library that
we use which can't be compiled using /Op option. If I had recompiled
everything using /Op option, the two should have produced the same
results.

Am I right?

Thanks a lot.
Jul 22 '05 #1
"JS" <so*****@somewhere.com> wrote...
We have the same floating point intensive C++ program that runs on
Windows on Intel chip and on Sun Solaris on SPARC chips. The program
reads exactly the same input files on the two platforms. However,
they generate slightly different results for floating point numbers.

Are they really supposed to generate exactly the same results?
Not really.
I
guess so because both platforms are supposed to be IEEE floating point
standard (754?) compliant.
Yes, but they go about making calculations differently.
I have turned on the Visual C++ compile
flags which will make sure the Windows produce standard compliant code
(the /Op flags). However, they still produce different results. I
suspect that this may be due to a commercial mathematical library that
we use which can't be compiled using /Op option. If I had recompiled
everything using /Op option, the two should have produced the same
results.

Am I right?


Hard to say. However, it is well known that floating point numbers
and calculations involving those are different on different hardware
platforms. And, yes, as a result, math libraries and math functions
in the standard library can differ slightly.

V
Jul 22 '05 #2

"JS" <so*****@somewhere.com> wrote in message
news:st********************************@4ax.com...
We have the same floating point intensive C++ program that runs on
Windows on Intel chip and on Sun Solaris on SPARC chips. The program
reads exactly the same input files on the two platforms. However,
they generate slightly different results for floating point numbers.
I'm not surprised.

Are they really supposed to generate exactly the same results?
Not necessarily.
I
guess so because both platforms are supposed to be IEEE floating point
standard (754?) compliant. I have turned on the Visual C++ compile
flags which will make sure the Windows produce standard compliant code
(the /Op flags).
However, they still produce different results. I
suspect that this may be due to a commercial mathematical library that
we use which can't be compiled using /Op option. If I had recompiled
everything using /Op option, the two should have produced the same
results.

Am I right?


http://docs.sun.com/source/806-3568/ncg_goldberg.html

-Mike
Jul 22 '05 #3
In article <st********************************@4ax.com>,
JS <so*****@somewhere.com> writes:
We have the same floating point intensive C++ program that runs on
Windows on Intel chip and on Sun Solaris on SPARC chips. The program
reads exactly the same input files on the two platforms. However,
they generate slightly different results for floating point numbers.


What does this have to do with Fortran (or C)? Don't
cross-post off-topic threads.

PS: Google on "floating goldberg"

--
Steve
Jul 22 '05 #4
JS wrote:
We have the same floating point intensive C++ program that runs on
Windows on Intel chip and on Sun Solaris on SPARC chips. The program
reads exactly the same input files on the two platforms. However,
they generate slightly different results for floating point numbers.

Are they really supposed to generate exactly the same results? I
guess so because both platforms are supposed to be IEEE floating point
standard (754?) compliant. I have turned on the Visual C++ compile
flags which will make sure the Windows produce standard compliant code
(the /Op flags). However, they still produce different results. I
suspect that this may be due to a commercial mathematical library that
we use which can't be compiled using /Op option. If I had recompiled
everything using /Op option, the two should have produced the same
results.

Am I right?


Yes and no. As I understand it, the IEEE floating point standard places
reasonably-tight constraints on how *atomic* floating-point operations
are undertaken. For instance, given the bit patterns making up two
floating point numbers a and b, the standard says how to find the bit
pattern making up their product a*b.

However, a typical computer program is not a single atomic operation; it
is a whole sequence of them. Take for example this assignment:

d = a + b + c

What order should the sum be undertaken in? Should it be

d = (a + b) + c,

or

d = a + (b + c)?

Mathematically, these are identical. But in a computer program the "+"
operation does not represent true mathematical addition, but rather a
floating-point approximation of it. Even if this approximation conforms
to the IEEE standard, the results of the two assignments above will
differ in many situations. Consider, for instance, when:

a = 1.e10
b = -1.e10
c = 1.

Assuming floating-point math with a precision less than ten decimal
significant digits, the first expression above will give d = 1,
but the second expression will give d = 0.
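
A minimal C++ sketch of this, using float (whose roughly seven
significant decimal digits are fewer than the ten the example needs):

#include <iostream>

int main()
{
    float a = 1e10f, b = -1e10f, c = 1.0f;

    std::cout << (a + b) + c << '\n';  // a and b cancel exactly; prints 1
    std::cout << a + (b + c) << '\n';  // c is lost when added to b; prints 0
}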

Therefore, the result of the *original* assignment above (the one
without the parentheses) depends on how the compiler decides to join the
two atomic addition operations. Even though these operations might
individually conform to the IEEE standard, their result will vary
depending on the order in which the compiler decides to perform them.
This is nothing to do with the IEEE standard per se, but a fundamental
limitation of finite-precision floating-point math.

Hopefully, this should underline the idiocy in rubbishing one compiler
because it produces slightly different results to another.

cheers,

Rich

--
Dr Richard H D Townsend
Bartol Research Institute
University of Delaware

[ Delete VOID for valid email address ]
Jul 22 '05 #5
In article <st********************************@4ax.com> JS <so*****@somewhere.com> writes:
....
Am I right?


No, you are wrong. The slight differences are there because the Intel
chips calculate expressions with 80 bits of precision, while the SPARC
uses 64 bits of precision. Both are allowed by the IEEE standard.
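
One quick way to see which extended format a given compiler exposes is
to check the width of long double (a sketch; the result is
implementation-defined):

#include <iostream>

int main()
{
    // Many x86 compilers expose the 80-bit extended format (often padded
    // to 12 or 16 bytes); elsewhere long double may be 8 or 16 bytes.
    std::cout << "sizeof(double)      = " << sizeof(double) << '\n';
    std::cout << "sizeof(long double) = " << sizeof(long double) << '\n';
}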
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Jul 22 '05 #6
I have a customer who runs a simulation on both a PC and a UNIX box,
results are written to an ASCII data file. He has used
TFuzSerf, a fuzzy number file comparison utility, to verify similar results.

Demonstration versions and contact information are available
at the Complite File Comparison Family web page, at ...

http://world.std.com/~jdveale

This fuzzy number comparison utility allows you to specify both absolute
and relative tolerances (ranges) for numbers. ASCII numbers are
recognized and treated as true numbers, not just character strings.
As a result 2-digit and 3-digit exponents are handled automatically.
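
The general idea of such a comparison, as a rough C++ sketch (not
TFuzSerf's actual algorithm):

#include <cmath>
#include <algorithm>

// Accept a and b as equal when they agree within an absolute
// tolerance or within a relative tolerance of their magnitudes.
bool nearly_equal(double a, double b, double abs_tol, double rel_tol)
{
    double diff = std::fabs(a - b);
    return diff <= abs_tol
        || diff <= rel_tol * std::max(std::fabs(a), std::fabs(b));
}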

Please feel free to contact me if I can answer any questions.

Jim Veale

JS <so*****@somewhere.com> writes:
We have the same floating point intensive C++ program that runs on
Windows on Intel chip and on Sun Solaris on SPARC chips. The program
reads exactly the same input files on the two platforms. However,
they generate slightly different results for floating point numbers.

Are they really supposed to generate exactly the same results? I
guess so because both platforms are supposed to be IEEE floating point
standard (754?) compliant. I have turned on the Visual C++ compile
flags which will make sure the Windows produce standard compliant code
(the /Op flags). However, they still produce different results. I
suspect that this may be due to a commercial mathematical library that
we use which can't be compiled using /Op option. If I had recompiled
everything using /Op option, the two should have produced the same
results.

Am I right? Thanks a lot.

Jul 22 '05 #7
>We have the same floating point intensive C++ program that runs on
Windows on Intel chip and on Sun Solaris on SPARC chips. The program
reads exactly the same input files on the two platforms. However,
they generate slightly different results for floating point numbers.

Are they really supposed to generate exactly the same results?
I don't believe the == operator applied to calculated floating-point
results is ever required to return 1. Nor does it have to be consistent
about it. An implementation like that probably won't sell well, but
ANSI C allows it.

#include <stdio.h>

int main(void)
{
    int i;
    double d1 = 0.1, d2 = 0.2;  /* example values; any will do */

    for (i = 0; (d1 + d2) == (d2 + d1); i++)
        /* empty */ ;
    printf("Iterations: %d\n", i);
    return 0;
}

There's nothing wrong with the code printing "Iterations: 13" here.
guess so because both platforms are supposed to be IEEE floating point
standard (754?) compliant.
Just because the hardware is IEEE floating point doesn't mean
the compiler has to keep the intermediate values in 80-bit long
double or has to lop off the extra precision consistently.

I have turned on the Visual C++ compile
flags which will make sure the Windows produce standard compliant code
(the /Op flags). However, they still produce different results. I
suspect that this may be due to a commercial mathematical library that
we use which can't be compiled using /Op option. If I had recompiled
everything using /Op option, the two should have produced the same
results.
C offers no guarantee that an Intel platform will produce the
same results as an Intel platform with the same CPU serial number
and the same compiler serial number.
Am I right?


No.

Gordon L. Burditt
Jul 22 '05 #8
On Mon, 13 Dec 2004 22:35:42 -0500, Rich Townsend wrote:

....

I have turned on the Visual C++ compile
flags which will make sure the Windows produce standard compliant code
(the /Op flags). However, they still produce different results. I
suspect that this may be due to a commercial mathematical library that
we use which can't be compiled using /Op option. If I had recompiled
everything using /Op option, the two should have produced the same
results.

Am I right?

Yes and no. As I understand it, the IEEE floating point standard places
reasonably-tight constraints on how *atomic* floating-point operations
are undertaken. For instance, given the bit patterns making up two
floating point numbers a and b, the standard says how to find the bit
pattern making up their product a*b.
It also depends on the language. C, and presumably C++, allows
intermediate results to be held at higher precision than indicated by the
type. IEEE supports a number of precisions but doesn't mandate which
should be used, it just says what will happen given a particular
precision.
However, a typical computer program is not a single atomic operation; it
is a whole sequence of them. Take for example this assignment:

d = a + b + c

What order should the sum be undertaken in? Should it be

d = (a + b) + c,

or

d = a + (b + c)?

That depends on the language. In C (and again presumably C++) d = a + b + c
is equivalent to d = (a + b) + c. A C optimiser cannot rearrange this
unless it can prove the result is consistent with this for all possible
input values. That's rarely possible in floating point.

....
Hopefully, this should underline the idiocy in rubbishing one compiler
because it produces slightly different results to another.


Nevertheless many if not most C compilers for x86 platforms violate the C
standard (not the IEEE standard) when it comes to floating point. C
requires that values held in objects and the results of casts be held at
the correct precision for the type. However on x86 this requires the
value to be stored to and reloaded from memory/cache which is horrendously
inefficient compared to keeping the value in a register. Compiler writers
often take the view that generating faster code that keeps values
represented at a higher precision is the lesser of evils. Not everybody
agrees in all circumstances. I will leave the readers of comp.lang.c++ and
comp.lang.fortran to comment on those languages.
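
A small C++ sketch of the effect (whether the first line prints 1e300
or inf depends on how the compiler uses the x87 registers):

#include <iostream>

int main()
{
    double x = 1e300;

    // With x87 excess precision, x * x may be held in an 80-bit
    // register, so (x * x) / x can come out as 1e300 instead of inf.
    std::cout << (x * x) / x << '\n';

    // Forcing the intermediate through a 64-bit memory object (in
    // effect what /Op-style options do) makes the overflow stick.
    volatile double t = x * x;   // overflows to +inf as a double
    std::cout << t / x << '\n';  // inf
}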

Lawrence
Jul 22 '05 #9

Gordon Burditt wrote:
We have the same floating point intensive C++ program that runs on
Windows on Intel chip and on Sun Solaris on SPARC chips. The program
reads exactly the same input files on the two platforms. However,
they generate slightly different results for floating point numbers.
Are they really supposed to generate exactly the same results?


I don't believe the == operator applied to calculated floating-point
results is ever required to return 1.


In fact, it's not required to return true (followups set to clc++) for
integers either (!)

Regards,
Michiel Salters

Jul 22 '05 #10
First, we are assuming that the program in question here is not displaying
or writing results past the precision of the corresponding variables.

How many "places" (mantissa bits) are you talking about here?
If the results differ by only the last mantissa bit or two I would not
worry about this. If results differ by more than the least significant
bit or two, there could be reason to suspect compiler options, like /OP
(or, for example, XLF under AIX the compiler option -qfloat=nomaf).

For the case where results differ by more than a mantissa bit or two,
it could also be possible that the code itself is not completely
numerically stable. One of the ways to determine if a model is stable
is in fact to perturb the input (or platform) slightly and
compare results.

Skip

On Mon, 13 Dec 2004 22:00:03 -0500, JS <so*****@somewhere.com> wrote:

-|We have the same floating point intensive C++ program that runs on
-|Windows on Intel chip and on Sun Solaris on SPARC chips. The program
-|reads exactly the same input files on the two platforms. However,
-|they generate slightly different results for floating point numbers.
-|
-|Are they really supposed to generate exactly the same results? I
-|guess so because both platforms are supposed to be IEEE floating point
-|standard (754?) compliant. I have turned on the Visual C++ compile
-|flags which will make sure the Windows produce standard compliant code
-|(the /Op flags). However, they still produce different results. I
-|suspect that this may be due to a commercial mathematical library that
-|we use which can't be compiled using /Op option. If I had recompiled
-|everything using /Op option, the two should have produced the same
-|results.
-|
-|Am I right?
-|
-|Thanks a lot.
Herman D. (Skip) Knoble, Research Associate
(a computing professional for 38 years)
Email: SkipKnobleLESS at SPAMpsu dot edu
Web: http://www.personal.psu.edu/hdk
Penn State Information Technology Services
Academic Services and Emerging Technologies
Graduate Education and Research Services
Penn State University
214C Computer Building
University Park, PA 16802-21013
Phone:+1 814 865-0818 Fax:+1 814 863-7049
Jul 22 '05 #11
Sun and Intel should produce identical results for the
negation, +, -, *, /, and sqrt() operations, because this is required for
IEEE 754 compliance, and both implement double and float on the same sized
floating point values (older x86 compilers used to use 80-bit for all FP
temporaries, which could definitely be a source of differences, but VC++
doesn't do this anymore). You should also expect identical output for
things like fprem(), ceil(), floor(), modf(), and so on. All of these
operations have well known and finite ways of being calculated to the
best and closest possible result, which is what the IEEE 754 specifies.
Results can be different if you put the processors into different
rounding modes -- I thought that the ANSI C specification was supposed
to specify a consistent rounding mode, so there shouldn't be any
differences because of that, but I could be wrong.

The problem is everything else. sin(), cos(), log(), exp(), atanh()
etc, ... these guys are not even consistent between Athlons and
Pentiums. The reason is that there is no known finite algorithm for
computing these guys to the correctly rounded result. There
are plenty of ways to compute them within an accuracy of "1 ulp"
(i.e., off by at most one unit in the last place.)
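
If you need to see exactly how far apart two platforms' libraries are,
comparing bit patterns is more telling than printed decimals. A quick
C++ sketch:

#include <cstdio>
#include <cmath>
#include <cstring>

int main()
{
    // Print a library result to full precision together with its raw
    // bit pattern, so outputs from two platforms can be diffed exactly.
    double r = std::sin(1.0);
    unsigned long long bits = 0;
    std::memcpy(&bits, &r, sizeof r);
    std::printf("sin(1.0) = %.17g (bits %016llx)\n", r, bits);
}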

But I am not the foremost expert on this stuff (and it looks like the
other posters here know even less than me.) This question is more
appropriate for a group like comp.arch.arithmetic where there are
plenty of posters there who are expert on this.
--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/

Jul 22 '05 #12

"JS" <so*****@somewhere.com> wrote in message
news:st********************************@4ax.com...
We have the same floating point intensive C++ program that runs on
Windows on Intel chip and on Sun Solaris on SPARC chips. The program
reads exactly the same input files on the two platforms. However,
they generate slightly different results for floating point numbers.

Are they really supposed to generate exactly the same results?
For fp intensive programs, whatever the language, no.
I
guess so because both platforms are supposed to be IEEE floating point
standard (754?) compliant. I have turned on the Visual C++ compile
flags which will make sure the Windows produce standard compliant code
(the /Op flags). However, they still produce different results. I
suspect that this may be due to a commercial mathematical library that
we use which can't be compiled using /Op option. If I had recompiled
everything using /Op option, the two should have produced the same
results.

Am I right?


In all probability, no, :-(.

For an update to the over-hackneyed Goldberg variations, see Parker,
Pierce, and Eggert, "Monte Carlo Arithmetic: exploiting randomness in
floating-point arithmetic," Comp. Sci. & Eng., pp. 58-68, July 2000.
> Thanks a lot.

--
You're Welcome,
Gerry T.
______
"Competent engineers rightly distrust all numerical computations and seek
corroboration from alternative numerical methods, from scale models, from
prototypes, from experience... ." -- William V. Kahan.

Jul 22 '05 #13
JS


On Tue, 14 Dec 2004 08:20:41 -0500, Herman D. Knoble
<Sk************@SPAMpsu.DOT.edu> wrote:
First, we are assuming that the program in question here is not displaying
or writing results past the precision of the corresponding variables.

How many "places" (mantissa bits) are you talking about here?
All the calculations are done in double. The output result has
precision (in the format of 999999999.666666) that is well within the
precision limit of double. So this is not due to incorrect precision
used for displaying or outputting the result.
If the results differ by only the last mantissa bit or two I would not
worry about this. If results differ by more than the least significant
bit or two, there could be reason to suspect compiler options, like /OP
(or, for example, XLF under AIX the compiler option -qfloat=nomaf).

Some posters don't seem to know about the /Op option. From Microsoft doc:

"By default, the compiler uses the coprocessor’s 80-bit registers to
hold the intermediate results of floating-point calculations. This
increases program speed and decreases program size. However, because
the calculation involves floating-point data types that are
represented in memory by less than 80 bits, carrying the extra bits of
precision (80 bits minus the number of bits in a smaller
floating-point type) through a lengthy calculation can produce
inconsistent results.

With /Op, the compiler loads data from memory prior to each
floating-point operation and, if assignment occurs, writes the results
back to memory upon completion. Loading the data prior to each
operation guarantees that the data does not retain any significance
greater than the capacity of its type."

Thus the difference I got doesn't seem to come from the 80-bit vs 64-bit
issue mentioned by some posters.
For the case where results differ by more than a mantissa bit or two,
it could also be possible that the code itself is not completely
numerically stable. One of the ways to determine if a model is stable
is in fact to perturb the input (or platform) slightly and
compare results.

Skip

On Mon, 13 Dec 2004 22:00:03 -0500, JS <so*****@somewhere.com> wrote:

-|We have the same floating point intensive C++ program that runs on
-|Windows on Intel chip and on Sun Solaris on SPARC chips. The program
-|reads exactly the same input files on the two platforms. However,
-|they generate slightly different results for floating point numbers.
-|
-|Are they really supposed to generate exactly the same results? I
-|guess so because both platforms are supposed to be IEEE floating point
-|standard (754?) compliant. I have turned on the Visual C++ compile
-|flags which will make sure the Windows produce standard compliant code
-|(the /Op flags). However, they still produce different results. I
-|suspect that this may be due to a commercial mathematical library that
-|we use which can't be compiled using /Op option. If I had recompiled
-|everything using /Op option, the two should have produced the same
-|results.
-|
-|Am I right?
-|
-|Thanks a lot.
Herman D. (Skip) Knoble, Research Associate
(a computing professional for 38 years)
Email: SkipKnobleLESS at SPAMpsu dot edu
Web: http://www.personal.psu.edu/hdk
Penn State Information Technology Services
Academic Services and Emerging Technologies
Graduate Education and Research Services
Penn State University
214C Computer Building
University Park, PA 16802-21013
Phone:+1 814 865-0818 Fax:+1 814 863-7049


Jul 22 '05 #14
go****@hammy.burditt.org (Gordon Burditt) writes:
I don't believe the == operator applied to calculated floating-point
results is ever required to return 1. Nor does it have to be consistent
about it. An implementation like that probably won't sell well, but
ANSI C allows it.


I'd argue that the same thing applies to Fortran (ok, except that
I'd talk about == returning .true. instead of 1). There are probably
people who would disagree with me, but that would be my interpretation
of the standard.

The comment about such a compiler not selling well would also
apply. I'd likely not get too much argument on that part. :-)

--
Richard Maine
email: my last name at domain
domain: summertriangle dot net
Jul 22 '05 #15

"JS" <so*****@somewhere.com> wrote in message
news:nk********************************@4ax.com...

[Stuff from the ever-helpful Skip Knoble elided without prejudice]

Some posters don't seem to know about the /Op option.
Not me, and a bunch besides!

FYI, most clf's posters are rabidly anti-Microsoft and could give a toss
what it has to say about anything, least of all in respect to its own
products, :-).
From Microsoft doc: (sic DOC!)

"By default, the compiler uses the coprocessor's 80-bit registers to
hold the intermediate results of floating-point calculations. This
increases program speed and decreases program size. However, because
the calculation involves floating-point data types that are
represented in memory by less than 80 bits, carrying the extra bits of
precision (80 bits minus the number of bits in a smaller
floating-point type) through a lengthy calculation can produce
inconsistent results.

With /Op, the compiler loads data from memory prior to each
floating-point operation and, if assignment occurs, writes the results
back to memory upon completion. Loading the data prior to each
operation guarantees that the data does not retain any significance
greater than the capacity of its type."

IIRC (and I'm sure I do) this is a verbatim quote from Microsoft's Fortran
PowerStation documentation (sic DOC!). This product commands the least of
respect in these environs. (Word to the wise, look into VC++ ~2 for what
you seek!)
Thus the difference I got doesn't seem to come from the 80-bit vs 64
bit issue mentioned by some posters.


How come? I've no idea why you're beginning to remind me of Ozymandias but
I'm wondering if it's that Alexander the Wimp movie I endured on the
weekend.

--
You're Welcome,
Gerry T.
______
"Things are not what they seem; or, to be more accurate, they are not only
what they seem, but very much else besides." -- Aldous Huxley.

Jul 22 '05 #16
Richard Maine wrote:
go****@hammy.burditt.org (Gordon Burditt) writes:
I don't believe the == operator applied to calculated floating-point
results is ever required to return 1. Nor does it have to be consistent
about it. An implementation like that probably won't sell well, but
ANSI C allows it.


I'd argue that the same thing applies to Fortran (ok, except that
I'd talk about == returning .true. instead of 1). There are probably
people who would disagree with me, but that would be my interpretation
of the standard.


I'd argue that any language that claims IEEE compliance must
get a true result under some predictable circumstances. Not as
many such circumstances as originally intended by the IEEE
committee! But, *some* such circumstances exist. For example,
I think that 0.1 == 0.1 must be true, since the IEEE standard
places minimum requirements on the precision of decimal
to binary conversion. Of course, whether the value of a literal
constitutes a "calculated" result is maybe a matter of opinion.

--
J. Giles

"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare
Jul 22 '05 #17
>>> I don't believe the == operator applied to calculated floating-point
results is ever required to return 1. Nor does it have to be consistent
about it. An implementation like that probably won't sell well, but
ANSI C allows it.
I'd argue that the same thing applies to Fortran (ok, except that
I'd talk about == returning .true. instead of 1). There are probably
people who would disagree with me, but that would be my interpretation
of the standard.


I'd argue that any language that claims IEEE compliance must
get a true result under some predictable circumstances. Not as
many such circumstances as originally intended by the IEEE
committee! But, *some* such circumstances exist. For example,


For example, d == d where d is not a NaN.
I think that 0.1 == 0.1 must be true, since the IEEE standard
places minimum requirements on the precision of decimal
to binary conversion.
But does it place MAXIMUM requirements? 0.1 is an infinitely repeating
fraction in binary. Any extra precision on one side but not the other
is likely to cause a mismatch.
Of course, whether the value of a literal
constitutes a "calculated" result is maybe a matter of opinion.


Ok, I didn't define "calculated result", but the intent was something
like "the result of a floating-point binary operator such as +, -, *, /".
Although multiplication by 0.0 should come out exact.

0.1 == 0.1 has a much better chance than, say, (0.1 + 0.0) == (0.1 + 0.0) .

Gordon L. Burditt
Jul 22 '05 #18
Gordon Burditt wrote:
....
I'd argue that any language that claims IEEE compliance must
get a true result under some predictable circumstances. Not as
many such circumstances as originally intended by the IEEE
committee! But, *some* such circumstances exist. For example, .... I think that 0.1 == 0.1 must be true, since the IEEE standard
places minimum requirements on the precision of decimal
to binary conversion.


But does it place MAXIMUM requirements? 0.1 is an infinitely repeating
fraction in binary. Any extra precision on one side but not the other
is likely to cause a mismatch.


The IEEE standard requires that, for numbers in this range, the
result must be correctly rounded to the target precision. Since they're
both the same type and KIND (hence, the same target precision),
and since there is but one value that's closest to one tenth that's
representable as an IEEE single precision number, they must
give the same result.

--
J. Giles

"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare
Jul 22 '05 #19

"James Giles" <ja********@worldnet.att.net> wrote in message
news:24*********************@bgtnsc04-news.ops.worldnet.att.net...

[...]

The IEEE standard requires that, for numbers in this range, the
result must be correctly rounded to the target precision. Since they're
both the same type and KIND (hence, the same target precision),
and since there is but one value that's closest to one tenth that's
representable as an IEEE single precision number, they must
give the same result.


How profound!

--
You're Welcome,
Gerry T.
______
"Facts are meaningless. You could use facts to prove anything that's even
remotely true." -- Homer Simpson.
Jul 22 '05 #20
In article <nk********************************@4ax.com>,
JS <so*****@somewhere.com> wrote:
On Tue, 14 Dec 2004 08:20:41 -0500, Herman D. Knoble
<Sk************@SPAMpsu.DOT.edu> wrote:
First, we are assuming that the program in question here is not displaying
or writing results past the precision of the corresponding variables.

How many "places" (mantissa bits) are you talking about here?


All the calculations are done in double. The output result has
precision (in the format of 999999999.666666) that is well within the
precision limit of double. So this is not due to incorrect precision
used for displaying or outputting the result.
If the results differ by only the last mantissa bit or two I would not
worry about this. If results differ by more than the least significant
bit or two, there could be reason to suspect compiler options, like /OP
(or, for example, XLF under AIX the compiler option -qfloat=nomaf).


Some posters don't seem to know about /Op option. From Microsoft doc:

"By default, the compiler uses the coprocessor¹s 80-bit registers to
hold the intermediate results of floating-point calculations. This
increases program speed and decreases program size. However, because
the calculation involves floating-point data types that are
represented in memory by less than 80 bits, carrying the extra bits of
precision (80 bits minus the number of bits in a smaller
floating-point type) through a lengthy calculation can produce
inconsistent results.

With /Op, the compiler loads data from memory prior to each
floating-point operation and, if assignment occurs, writes the results
back to memory upon completion. Loading the data prior to each
operation guarantees that the data does not retain any significance
greater than the capacity of its type."


That isn't enough. First, the x86 FPU must be switched to 64-bit mode,
otherwise you will occasionally get different results due to double
rounding (about a 1 in 2000 chance for a single random operation). The
load/store will make sure that overflows and underflows are reproduced
(let x = 1e300: (x * x) / x should be Inf / x = Inf, but without storing
and reloading the result of x * x, x * x = 1e600 and (x * x) / x = 1e300).
You still run into trouble because underflow in a multiplication and
division can give different results due to double rounding. You can
always switch to Java + strict FP mode to get reproducible results.
(Which will be damned slow, but it will get everything right; that is why
nonstrict FP mode had to be introduced: strict mode is slow, and nonstrict
mode sometimes gives different results.)
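
For reference, on Windows the x87 precision-control field can be set
through _controlfp() from <float.h>. A minimal sketch (the names are
from Microsoft's CRT, so verify against your compiler's documentation):

#include <float.h>

int main()
{
    // Round every x87 result to a 53-bit significand, i.e. treat
    // intermediates as 64-bit doubles.
    unsigned int old = _controlfp(_PC_53, _MCW_PC);

    // ... floating-point computation goes here ...

    _controlfp(old & _MCW_PC, _MCW_PC);  // restore the previous setting
    return 0;
}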
Jul 22 '05 #21

"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...
In article <nk********************************@4ax.com>,
JS <so*****@somewhere.com> wrote:

Some posters don't seem to know about /Op option. From Microsoft doc:

"By default, the compiler uses the coprocessor¹s 80-bit registers to
hold the intermediate results of floating-point calculations. This
increases program speed and decreases program size. However, because
the calculation involves floating-point data types that are
represented in memory by less than 80 bits, carrying the extra bits of
precision (80 bits minus the number of bits in a smaller
floating-point type) through a lengthy calculation can produce
inconsistent results.

With /Op, the compiler loads data from memory prior to each
floating-point operation and, if assignment occurs, writes the results
back to memory upon completion. Loading the data prior to each
operation guarantees that the data does not retain any significance
greater than the capacity of its type."


That isn't enough. First, the x86 FPU must be switched to 64-bit mode,
otherwise you will occasionally get different results due to double
rounding (about a 1 in 2000 chance for a single random operation). The
load/store will make sure that overflows and underflows are reproduced
(let x = 1e300: (x * x) / x should be Inf / x = Inf, but without storing
and reloading the result of x * x, x * x = 1e600 and (x * x) / x = 1e300).
You still run into trouble because underflow in a multiplication and
division can give different results due to double rounding.

That Microsoft doc is misleading, given that they always (at least since
VS6) set the FPU to 53-bit precision, which I assume is what you mean by
"64-bit mode." I couldn't find any reference in the thread as to whether
SSE code generation was invoked (no such option in VS6, but presumably
preferred in later versions).
Jul 22 '05 #22
On Tue, 14 Dec 2004 20:34:14 -0500, JS wrote:


On Tue, 14 Dec 2004 08:20:41 -0500, Herman D. Knoble
<Sk************@SPAMpsu.DOT.edu> wrote:
First, we are assuming that the program in question here is not displaying
or writing results past the precision of the corresponding variables.

How many "places" (mantissa bits) are you talking about here?
All the calculations are done in double. The output result has
precision (in the format of 999999999.666666) that is well within the
precision limit of double. So this is not due to incorrect precision
used for displaying or outputting the result.


Can you create a small program that demonstrates the problem? Perhaps a
little summation loop or something. With a particular example it would be
possible to pin down the source of any inconsistency.

....
Some posters don't seem to know about the /Op option. From Microsoft doc:

"By default, the compiler uses the coprocessor's 80-bit registers to
hold the intermediate results of floating-point calculations. This
increases program speed and decreases program size. However, because
the calculation involves floating-point data types that are
represented in memory by less than 80 bits, carrying the extra bits of
precision (80 bits minus the number of bits in a smaller
floating-point type) through a lengthy calculation can produce
inconsistent results.


I think somebody else pointed out that recent Microsoft compilers set the
FPU to 53-bit precision anyway. Another thing to consider is rounding
mode. They may well be the same but this needs to be checked.

Also I have an inherent mistrust of options like /Op. gcc certainly
doesn't get everything right with its version of that option, maybe other
compilers do but it can be hard on x86 especially and I wouldn't trust
it without some verification.

Lawrence
Jul 22 '05 #23


James Giles wrote:
Gordon Burditt wrote:
...
I'd argue that any language that claims IEEE compliance must
get a true result under some predictable circumstances. Not as
many such circumstances as originally intended by the IEEE
committee! But, *some* such circumstances exist. For example,
...
I think that 0.1 == 0.1 must be true, since the IEEE standard
places minimum requirements on the precision of decimal
to binary conversion.


But does it place MAXIMUM requirements? 0.1 is an infinite repeating
decimal in binary. Any extra precision on one side but not the other
is likely to cause a mismatch.

The IEEE standard requires that, for numbers in this range, the
result must be correctly rounded to the target precision. Since they're
both the same type and KIND (hence, the same target precision),
and since there is but one value that's closest to one tenth that's
representable as an IEEE single precision number, they must
give the same result.

Fortran is a little more restrictive in this case. It
requires that all constants written in the same form
have the same value. So 0.1 always has the same value
within a program. The non-IEEE parts of Fortran don't
specify how accurate the representation must be.

Dick Hendrickson

Jul 22 '05 #24
In article <pa****************************@netactive.co.uk> Lawrence Kirby <lk****@netactive.co.uk> writes:
....
Also I have an inherent mistrust of options like /Op. gcc certainly
doesn't get everything right with its version of that option, maybe other
compilers do but it can be hard on x86 especially and I wouldn't trust
it without some verification.


Indeed. Is an expression like a * b + c really calculated with a 53-bit
mantissa only?
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Jul 22 '05 #25
JS
How can you switch the FPU to 64-bit mode? Thanks.

On Wed, 15 Dec 2004 08:54:36 +0000, Christian Bau
<ch***********@cbau.freeserve.co.uk> wrote:

That isn't enough. First, the x86 FPU must be switched to 64-bit mode,
otherwise you will occasionally get different results due to double
rounding (about a 1 in 2000 chance for a single random operation). The
load/store will make sure that overflows and underflows are reproduced
(let x = 1e300: (x * x) / x should be Inf / x = Inf, but without storing
and reloading the result of x * x, x * x = 1e600 and (x * x) / x = 1e300).
You still run into trouble because underflow in a multiplication and
division can give different results due to double rounding. You can
always switch to Java + strict FP mode to get reproducible results.
(Which will be damned slow, but it will get everything right; that is why
nonstrict FP mode had to be introduced: strict mode is slow, and nonstrict
mode sometimes gives different results.)


Jul 22 '05 #26
JS
Sorry, I haven't been able to create a small program that demonstrates
the problem.

Can you create a small program that demonstrates the problem? Perhaps a
little summation loop or something. With a particular example it would be
possible to pin down the source of any inconsistency.

...


Jul 22 '05 #27

"Dik T. Winter" <Di********@cwi.nl> wrote in message
news:I8********@cwi.nl...
In article <pa****************************@netactive.co.uk> Lawrence Kirby <lk****@netactive.co.uk> writes: ...
> Also I have an inherent mistrust of options like /Op. gcc certainly
> doesn't get everything right with its version of that option, maybe other
> compilers do but it can be hard on x86 especially and I wouldn't trust
> it without some verification.


Indeed. Is an expression like a * b + c indeed calculated with 53 bits
mantissa only?

Yes, because 53-bit precision mode is set, not on account of /Op. Operations
on floats will also be done with effective promotion of intermediates to
double, unless SSE is selected.
Jul 22 '05 #28

"JS" <so*****@somewhere.com> wrote in message
news:e8********************************@4ax.com...
Sorry, I haven't been able to create a small program that demonstrates
the problem.

Can you create a small program that demonstrates the problem? Perhaps a
little summation loop or something. With a particular example it would be
possible to pin down the source of any inconsistency.


Does the Standard have anything to say about machine epsilon? MPJ
Jul 22 '05 #29
Merrill & Michele wrote:
"JS" <so*****@somewhere.com> wrote in message
news:e8********************************@4ax.com...
Sorry, I hasn't been able to create a small program that demonstrates
the problem.

Can you create a small program that demonstrates the problem? Perhaps a
little summation loop or something. With a particular example it would be
possible to pin down the source of any inconsistency.

Does the Standard have anything to say about machine epsilon? MPJ


Which Standard do you mean to ask about? You cross-posted to three
different language newsgroups. The C++ Standard says that the member
'epsilon' of 'std::numeric_limits' returns the difference between 1
and the least value greater than 1 that is representable. It does not
say specify the actual value returned from 'epsilon'. The C Standard
does give the maximum acceptable values of 'FLT_EPSILON', 'DBL_EPSILON'
and 'LDBL_EPSILON', but your implementation is free to provide its own,
smaller values. And I have no idea about the Fortran Standard.
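
A quick illustration of both (illustrative only; the printed values
depend on the implementation):

#include <iostream>
#include <limits>
#include <cfloat>

int main()
{
    // C++: the gap between 1 and the next representable value above 1
    std::cout << std::numeric_limits<float>::epsilon() << '\n';
    std::cout << std::numeric_limits<double>::epsilon() << '\n';

    // C: the corresponding <float.h> macros
    std::cout << FLT_EPSILON << '\n' << DBL_EPSILON << '\n';
}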

V
Jul 22 '05 #30
Victor Bazarov wrote:
Merrill & Michele wrote:

....
Does the Standard have anything to say about machine epsilon? MPJ


Which Standard do you mean to ask about? You cross-posted to three
different language newsgroups. The C++ Standard says that the member
'epsilon' of 'std::numeric_limits' returns the difference between 1
and the least value greater than 1 that is representable. It does not
say specify the actual value returned from 'epsilon'. The C Standard
does give the maximum acceptable values of 'FLT_EPSILON', 'DBL_EPSILON'
and 'LDBL_EPSILON', but your implementation is free to provide its own,
smaller values. And I have no idea about the Fortran Standard.

Fortran has a generic intrinsic function called EPSILON. The
following code prints the values of the machine epsilon for single
and double precision:

Print *, epsilon(1.0), epsilon(1.0d0)
End

The return value is b^(1-p), where b is the base of the floating point
numbers (usually 2) and p is the number of base-b digits in the significand
(including any hidden normalization digit). For IEEE single, the result
is 2^(-23). For IEEE double, the result is 2^(-52). Note that the result
doesn't depend on the argument's value, only its type attributes, so
EPSILON(0.0) returns the same result as EPSILON(1.0), since both
arguments are default precision reals. If the implementation were to
support additional real precisions (like quad, or double-extended)
the same inquiry function could be applied to them. Similarly, if
an implementation were to support the proposed IEEE decimal
floating point, the same inquiry function could be applied to those
as well. The result is in the same floating-point precision (and
base) as the argument.

--
J. Giles

"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare
Jul 22 '05 #31
Merrill & Michele wrote:
.... snip ...
Does the Standard have anything to say about machine epsilon? MPJ


c:\dnld>grep -n epsilon \stds\n869.txt
28465: epsilon FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON

Now is that so hard you couldn't do it for yourself?

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!

Jul 22 '05 #32
