
small numerical differences in floating point result between Wintel and Sun/SPARC

JS
We have the same floating-point-intensive C++ program that runs on
Windows on Intel chips and on Sun Solaris on SPARC chips. The program
reads exactly the same input files on the two platforms. However,
they generate slightly different results for floating point numbers.

Are they really supposed to generate exactly the same results? I
guess so because both platforms are supposed to be IEEE floating
point standard (754?) compliant. I have turned on the Visual C++
compiler flag that makes sure Windows produces standard-compliant
code (the /Op flag). However, they still produce different results.
I suspect that this may be due to a commercial mathematical library
we use that can't be compiled with the /Op option. If I had
recompiled everything with /Op, the two should have produced the
same results.

Am I right?

Thanks a lot.
Jul 22 '05 #1
"JS" <so*****@somewh ere.com> wrote...
> We have the same floating-point-intensive C++ program that runs on
> Windows on Intel chips and on Sun Solaris on SPARC chips. The program
> reads exactly the same input files on the two platforms. However,
> they generate slightly different results for floating point numbers.

> Are they really supposed to generate exactly the same results?
Not really.
> I guess so because both platforms are supposed to be IEEE floating
> point standard (754?) compliant.
Yes, but they go about making calculations differently.
> I have turned on the Visual C++ compiler flag that makes sure
> Windows produces standard-compliant code (the /Op flag). However,
> they still produce different results. I suspect that this may be
> due to a commercial mathematical library we use that can't be
> compiled with the /Op option. If I had recompiled everything with
> /Op, the two should have produced the same results.

> Am I right?


Hard to say. However, it is well known that floating point numbers
and calculations involving those are different on different hardware
platforms. And, yes, as a result, math libraries and math functions
in the standard library can differ slightly.

V
Jul 22 '05 #2

"JS" <so*****@somewh ere.com> wrote in message
news:st******** *************** *********@4ax.c om...
> We have the same floating-point-intensive C++ program that runs on
> Windows on Intel chips and on Sun Solaris on SPARC chips. The program
> reads exactly the same input files on the two platforms. However,
> they generate slightly different results for floating point numbers.
I'm not surprised.

> Are they really supposed to generate exactly the same results?
Not necessarily.
> I guess so because both platforms are supposed to be IEEE floating
> point standard (754?) compliant. I have turned on the Visual C++
> compiler flag that makes sure Windows produces standard-compliant
> code (the /Op flag). However, they still produce different results.
> I suspect that this may be due to a commercial mathematical library
> we use that can't be compiled with the /Op option. If I had
> recompiled everything with /Op, the two should have produced the
> same results.

> Am I right?


http://docs.sun.com/source/806-3568/ncg_goldberg.html

-Mike
Jul 22 '05 #3
In article <st********************************@4ax.com>,
JS <so*****@somewhere.com> writes:
> We have the same floating-point-intensive C++ program that runs on
> Windows on Intel chips and on Sun Solaris on SPARC chips. The program
> reads exactly the same input files on the two platforms. However,
> they generate slightly different results for floating point numbers.


What does this have to do with Fortran (or C)? Don't
cross post off-topic threads.

PS: Google on "floating goldberg"

--
Steve
Jul 22 '05 #4
JS wrote:
> We have the same floating-point-intensive C++ program that runs on
> Windows on Intel chips and on Sun Solaris on SPARC chips. The program
> reads exactly the same input files on the two platforms. However,
> they generate slightly different results for floating point numbers.

> Are they really supposed to generate exactly the same results? I
> guess so because both platforms are supposed to be IEEE floating
> point standard (754?) compliant. I have turned on the Visual C++
> compiler flag that makes sure Windows produces standard-compliant
> code (the /Op flag). However, they still produce different results.
> I suspect that this may be due to a commercial mathematical library
> we use that can't be compiled with the /Op option. If I had
> recompiled everything with /Op, the two should have produced the
> same results.

> Am I right?


Yes and no. As I understand it, the IEEE floating point standard places
reasonably tight constraints on how *atomic* floating-point operations
are undertaken. For instance, given the bit patterns making up two
floating point numbers a and b, the standard says how to find the bit
pattern making up their product a*b.

However, a typical computer program is not a single atomic operation; it
is a whole sequence of them. Take for example this assignment:

d = a + b + c

What order should the sum be undertaken in? Should it be

d = (a + b) + c,

or

d = a + (b + c)?

Mathematically, these are identical. But in a computer program the "+"
operation does not represent true mathematical addition, but rather a
floating-point approximation of it. Even if this approximation conforms
to the IEEE standard, the results of the two assignments above will
differ in many situations. Consider, for instance, when:

a = 1.e10
b = -1.e10
c = 1.

Assuming floating-point math with a precision of fewer than ten
significant decimal digits, the first expression above will give
d = 1, but the second expression will give d = 0.

Therefore, the result of the *original* assignment above (the one
without the parentheses) depends on how the compiler decides to join the
two atomic addition operations. Even though these operations might
individually conform to the IEEE standard, their result will vary
depending on the order in which the compiler decides to perform them.
This is nothing to do with the IEEE standard per se, but a fundamental
limitation of finite-precision floating-point math.
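
For instance, here is a minimal sketch using single-precision float,
whose roughly seven significant digits fall short of the ten needed
(assuming strict single-precision arithmetic, e.g. SSE code
generation; an x87 build that keeps intermediates in extended
precision may print 1 for both lines, which only reinforces the
point):

#include <iostream>

int main() {
    float a = 1.0e10f;
    float b = -1.0e10f;
    float c = 1.0f;

    // a + b is exactly zero, so the 1 survives:
    std::cout << "(a + b) + c = " << (a + b) + c << '\n';   // prints 1

    // b + c rounds back to b; the 1 is below float's precision at 1e10:
    std::cout << "a + (b + c) = " << a + (b + c) << '\n';   // prints 0
}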

Hopefully, this should underline the idiocy in rubbishing one compiler
because it produces slightly different results to another.

cheers,

Rich

--
Dr Richard H D Townsend
Bartol Research Institute
University of Delaware

[ Delete VOID for valid email address ]
Jul 22 '05 #5
In article <st********************************@4ax.com> JS <so*****@somewhere.com> writes:
....
> Am I right?


No, you are wrong. The slight differences are there because the Intel
chips calculate expressions with 80 bits of precision, while the SPARC
uses 64 bits of precision. Both are allowed by the IEEE standard.
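
A minimal sketch of that difference (assuming an x86 target where
long double maps to the 80-bit extended format):

#include <cstdio>

int main() {
    // 1.0e16 + 1 needs 54 significand bits: one more than double's 53,
    // but well within the 80-bit extended format's 64.
    double a = 1.0e16, b = 1.0, c = -1.0e16;

    double      d64 = (a + b) + c;              // 0 if intermediates round to 64 bits
    long double d80 = ((long double)a + b) + c; // 1 where long double is 80-bit

    std::printf("64-bit intermediates: %g\n",  d64);
    std::printf("80-bit intermediates: %Lg\n", d80);
    return 0;
}

With x87 code generation even the first result can be 1, because the
compiler may keep a + b in an 80-bit register: exactly the kind of
cross-platform discrepancy being discussed.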
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Jul 22 '05 #6
I have a customer who runs a simulation on both a PC and a UNIX box;
the results are written to an ASCII data file. He has used TFuzSerf,
a fuzzy number file comparison tool, to verify that the results are
similar.

Demonstration versions and contact information are available
at the Complite File Comparison Family web page, at ...

http://world.std.com/~jdveale

This fuzzy number comparison utility allows you to specify both absolute
and relative tolerances (ranges) for numbers. ASCII numbers are
recognized and treated as true numbers, not just character strings.
As a result, 2-digit and 3-digit exponents are handled automatically.
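
The core test such a tool applies is roughly the following (a
hypothetical sketch, not TFuzSerf's actual code): two numbers match
if they agree within an absolute tolerance or a relative one.

#include <algorithm>
#include <cmath>

// Accept a and b as "equal" if they differ by no more than abs_tol,
// or by no more than rel_tol times the larger magnitude.
bool fuzzy_equal(double a, double b, double abs_tol, double rel_tol) {
    const double diff = std::fabs(a - b);
    if (diff <= abs_tol)
        return true;
    return diff <= rel_tol * std::max(std::fabs(a), std::fabs(b));
}

For example, fuzzy_equal(1.0000001, 1.0000002, 0.0, 1e-6) reports a
match, while an exact == comparison would not.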

Please feel free to contact me if I can answer any questions.

Jim Veale

JS <so*****@somewhere.com> writes:
> We have the same floating-point-intensive C++ program that runs on
> Windows on Intel chips and on Sun Solaris on SPARC chips. The program
> reads exactly the same input files on the two platforms. However,
> they generate slightly different results for floating point numbers.
>
> Are they really supposed to generate exactly the same results? I
> guess so because both platforms are supposed to be IEEE floating
> point standard (754?) compliant. I have turned on the Visual C++
> compiler flag that makes sure Windows produces standard-compliant
> code (the /Op flag). However, they still produce different results.
> I suspect that this may be due to a commercial mathematical library
> we use that can't be compiled with the /Op option. If I had
> recompiled everything with /Op, the two should have produced the
> same results.
>
> Am I right? Thanks a lot.

Jul 22 '05 #7
> We have the same floating-point-intensive C++ program that runs on
> Windows on Intel chips and on Sun Solaris on SPARC chips. The program
> reads exactly the same input files on the two platforms. However,
> they generate slightly different results for floating point numbers.
>
> Are they really supposed to generate exactly the same results?
I don't believe the == operator applied to calculated floating-point
results is ever required to return 1. Nor does it have to be consistent
about it. An implementation like that probably won't sell well, but
ANSI C allows it.

int i;
double d1, d2;
... put a value in d1 and d2 ...

for (i = 0; (d1 + d2) == (d2 + d1); i++)
    /* empty */ ;
printf("Iterations: %d\n", i);

There's nothing wrong with the code printing "Iterations: 13" here.
> I guess so because both platforms are supposed to be IEEE floating
> point standard (754?) compliant.
Just because the hardware is IEEE floating point doesn't mean
the compiler has to keep the intermediate values in 80-bit long
double or has to lop off the extra precision consistently.

> I have turned on the Visual C++ compiler flag that makes sure
> Windows produces standard-compliant code (the /Op flag). However,
> they still produce different results. I suspect that this may be
> due to a commercial mathematical library we use that can't be
> compiled with the /Op option. If I had recompiled everything with
> /Op, the two should have produced the same results.
C offers no guarantee that an Intel platform will produce the
same results as an Intel platform with the same CPU serial number
and the same compiler serial number.
> Am I right?


No.

Gordon L. Burditt
Jul 22 '05 #8
On Mon, 13 Dec 2004 22:35:42 -0500, Rich Townsend wrote:

....

> > I have turned on the Visual C++ compiler flag that makes sure
> > Windows produces standard-compliant code (the /Op flag). However,
> > they still produce different results. I suspect that this may be
> > due to a commercial mathematical library we use that can't be
> > compiled with the /Op option. If I had recompiled everything with
> > /Op, the two should have produced the same results.
> >
> > Am I right?

> Yes and no. As I understand it, the IEEE floating point standard
> places reasonably tight constraints on how *atomic* floating-point
> operations are undertaken. For instance, given the bit patterns
> making up two floating point numbers a and b, the standard says how
> to find the bit pattern making up their product a*b.
It also depends on the language. C, and presumably C++, allows
intermediate results to be held at higher precision than indicated by
the type. IEEE supports a number of precisions but doesn't mandate
which should be used; it just says what will happen given a particular
precision.
> However, a typical computer program is not a single atomic operation;
> it is a whole sequence of them. Take for example this assignment:
>
> d = a + b + c
>
> What order should the sum be undertaken in? Should it be
>
> d = (a + b) + c,
>
> or
>
> d = a + (b + c)?

That depends on the language. In C (and again presumably C++),
d = a + b + c is equivalent to d = (a + b) + c. A C optimiser cannot
rearrange this unless it can prove the result is consistent with that
for all possible input values, which is rarely possible in floating
point.

....
> Hopefully, this should underline the idiocy in rubbishing one compiler
> because it produces slightly different results to another.


Nevertheless, many if not most C compilers for x86 platforms violate
the C standard (not the IEEE standard) when it comes to floating
point. C requires that values held in objects, and the results of
casts, be held at the correct precision for the type. However, on x86
this requires the value to be stored to and reloaded from memory or
cache, which is horrendously inefficient compared to keeping the value
in a register. Compiler writers often take the view that generating
faster code that keeps values represented at a higher precision is the
lesser of evils. Not everybody agrees in all circumstances. I will
leave the readers of comp.lang.c++ and comp.lang.fortran to comment on
those languages.
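
A minimal sketch of that store-and-round issue (hypothetical, and
heavily compiler-dependent):

#include <cstdio>

int main() {
    volatile double a = 1.0e16, b = 1.0, c = -1.0e16;

    // Under strict C semantics the cast must narrow a + b to double,
    // so t == 1.0e16 and t + c == 0.  A compiler that leaves a + b in
    // an x87 register at extended precision may print 1 instead.
    double t = (double)(a + b);
    std::printf("%g\n", t + c);
    return 0;
}

Built with SSE2 code generation, or with consistency options like the
/Op flag discussed above, this prints 0; a lax x87 build can print 1.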

Lawrence
Jul 22 '05 #9

Gordon Burditt wrote:
> > We have the same floating-point-intensive C++ program that runs on
> > Windows on Intel chips and on Sun Solaris on SPARC chips. The
> > program reads exactly the same input files on the two platforms.
> > However, they generate slightly different results for floating
> > point numbers. Are they really supposed to generate exactly the
> > same results?
>
> I don't believe the == operator applied to calculated floating-point
> results is ever required to return 1.


In fact, it's not required to return true for integers either (!)
(followups set to clc++)

Regards,
Michiel Salters

Jul 22 '05 #10


