Two implementations of simple math equation yield different results

Avi
I need to implement the following calculation:
f = (a*b) + (c*d)
where a,b,c,d are given double values and f is a double variable for
the result

I found out that using two different implementations gives two
different results:
1) In the first implementation I compute the entire equation in memory
2) In the second implementation I store intermediate results in variables, and
then load them to compute the final result.
The results differ by a tiny amount (1E-15). I suspect that the difference
comes from round-off errors introduced by the compiler.
Nonetheless, I need to know the reason for the difference as I need to
have an absolute reference for optimization work.

The two ways are as follows:

1)
Calculating two double values and then summing them to a single double
value
double f1 = (a*b) + (c*d);

2)
Calculating two double values and storing them to two double variables
Summing the two double variables to a single double value
double e1 = (a*b);
double e2 = (c*d);
double f2 = e1 + e2;

The difference is calculated as follows:
double diff = f2-f1;

I expect the difference to be exactly 0 but instead, I get a tiny
value.
Could someone explain the reason for differences in the result?
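
For reference, here is the whole thing as a self-contained program (the
values of a, b, c, d below are just placeholders; the real inputs come
from elsewhere in my application):

#include <cstdio>

int main()
{
    // placeholder inputs; replace with the real values
    double a = 0.1, b = 0.3, c = 0.7, d = 0.9;

    double f1 = (a*b) + (c*d);   // everything in one expression

    double e1 = (a*b);           // intermediates stored to doubles first
    double e2 = (c*d);
    double f2 = e1 + e2;

    double diff = f2 - f1;
    std::printf("diff = %g\n", diff);
    return 0;
}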

I’m compiling with gcc

Thanks
Avi
Jun 27 '08 #1
Avi <av**************@shaw.ca> wrote in news:d948d5e6-307b-4a17-a20e-
57**********@n1g2000prb.googlegroups.com:
I need to implement the following calculation:
f = (a*b) + (c*d)
where a,b,c,d are given double values and f is a double variable for
the result

I found out that using two different implementations gives two
different results:
1) In the first implementation I compute the entire equation in memory
2) In the second implementation I store intermediate results in variables, and
then load them to compute the final result.
The results differ by a tiny amount (1E-15). I suspect that the difference
comes from round-off errors introduced by the compiler.
Nonetheless, I need to know the reason for the difference as I need to
have an absolute reference for optimization work.
The reason for the difference is most probably that the processor's
internal floating-point registers are 80-bit (common on x86), while the doubles
in memory are only 64-bit. So there will be small differences depending on
whether the intermediate results are stored to memory along the way or not.

gcc has an option for effectively forcing 64-bit rounding even though the
values pass through the 80-bit registers, but this will slow the program
down, as you would guess. A better approach is to make your algorithm and
tests robust, in the sense that such small changes would not cause large
changes in the final results (like failing a test).
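
If I remember correctly, the option in question is -ffloat-store, which
tells gcc not to keep floating-point variables in registers, e.g.:

g++ -O2 -ffloat-store prog.cpp

Note that it only rounds values that are actually assigned to variables;
intermediates inside a single expression can still carry extended
precision.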

hth
Paavo
Jun 27 '08 #2
On May 13, 7:16 am, Avi <avner-moshkov...@shaw.ca> wrote:
I need to implement the following calculation:
f = (a*b) + (c*d)
where a,b,c,d are given double values and f is a double
variable for the result
I found out that using two different implementations gives two
different results:
1) In the first implementation I compute the entire equation in memory
You mean in CPU and registers. Variables are in memory.
2) In the second implementation I store intermediate results in
variables, and then load them to compute the final result.
The results differ by a tiny amount (1E-15). I suspect that the
difference comes from round-off errors introduced by the
compiler.
The C++ standard allows a compiler to use extended precision for
intermediate results, only "forcing" the result to the final
precision when it is assigned to a variable.
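One quick way to see what your implementation actually does is to look at
FLT_EVAL_METHOD (a C99 macro from <float.h>; this sketch assumes your gcc
also makes it visible in C++ via <cfloat>). A value of 2 means
intermediates are evaluated in long double precision, 0 means each
operation rounds to its own type:

#include <cstdio>
#include <cfloat>

int main()
{
    // 0: each operation rounds to its own type
    // 2: float/double intermediates are kept in long double
    std::printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    return 0;
}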
Nonetheless, I need to know the reason for the difference as I
need to have an absolute reference for optimization work.
The two ways are as follows:
1)
Calculating two double values and then summing them to a
single double value
double f1 = (a*b) + (c*d);
All calculations may be done in extended precision.
2)
Calculating two double values and storing them to two double variables
Summing the two double variables to a single double value
double e1 = (a*b);
This forces the result to double.
double e2 = (c*d);
As does this.
double f2 = e1 + e2;
The difference is calculated as follows:
double diff = f2-f1;
I expect the difference to be exactly 0 but instead, I get a
tiny value. Could someone explain the reason for differences
in the result?
It's very implementation dependent.
I'm compiling with gcc
More to the point, you're compiling on a machine which uses
extended precision in its floating point unit, and its floating
point registers. (An Intel, perhaps.)

The reason for this freedom is speed. Forcing the results of
each operation to be rounded to double can be done, but would
slow things down considerably (at least on Intel). And for the
most part, increased precision is not considered a defect.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #3
On May 13, 2:42 am, James Kanze <james.ka...@gmail.com> wrote:
On May 13, 7:16 am, Avi <avner-moshkov...@shaw.ca> wrote:
double f2 = e1 + e2;
The difference is calculated as follows:
double diff = f2-f1;
I expect the difference to be exactly 0 but instead, I get a
tiny value. Could someone explain the reason for differences
in the result?

It's very implementation dependent.
I'm compiling with gcc

More to the point, you're compiling on a machine which uses
extended precision in its floating point unit, and its floating
point registers. (An Intel, perhaps.)

The reason for this freedom is speed. Forcing the results of
each operation to be rounded to double can be done, but would
slow things down considerably (at least on Intel). And for the
most part, increased precision is not considered a defect.
But the extended floating point precision is a defect if - as in this
case - consistent numeric results are required. Fortunately, Intel
x86 chips since the Pentium 4 support double-precision (64-bit)
floating point in their SSE2 units. So, the solution would be to have
the compiler generate SSE floating point instructions ("-msse2 -
mfpmath=sse") instead of 387 instructions. According to the gcc
documentation, this strategy has other benefits as well:

"The resulting code should be considerably faster in the majority of
cases and avoid the numerical instability problems of 387 code, but
may break some existing code that expects temporaries to be 80-bit."
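
In other words, something along these lines (just a sketch; the rest of
the flags depend on your build):

g++ -O2 -msse2 -mfpmath=sse prog.cpp

With that, both f1 and f2 above are computed in 64-bit SSE registers, so
storing e1 and e2 no longer changes the rounding, and diff should come
out as exactly 0.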

Greg
Jun 27 '08 #4
Avi wrote:
One way to identify and ignore small differences is to set a small
threshold, say:
double epsilon = 1E-6;
and compare the difference against the threshold as follows:

if (std::fabs(diff) < epsilon)
{
    // difference is considered as noise
}
else
{
    // difference is significant
}

[..]
Is there a systematic threshold that can be used to define when the
differences are noise and when they are significant?
Or should I just pick a number which is small enough in my mind (say
epsilon = 1 E-6)?
It depends on your problem domain. Some compare absolute differences
(like you're suggesting here) and are happy. Some make the diff value
relative to the range (by dividing the diff by the larger of the two
values). If all the numbers are from the same range you're going to
be fine if you just pick the value that "fits" your range. The numeric
epsilon (std::numeric_limits<double>::epsilon()) is defined relative to
1.0, IIRC, so by itself it only makes sense as a relative tolerance.
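
A small sketch of both variants (the function names are just for
illustration):

#include <algorithm>
#include <cmath>

// absolute tolerance: fine when all values live in a known range
bool nearlyEqualAbs(double x, double y, double eps)
{
    return std::fabs(x - y) < eps;
}

// relative tolerance: scale eps by the larger magnitude
bool nearlyEqualRel(double x, double y, double relEps)
{
    double scale = std::max(std::fabs(x), std::fabs(y));
    return std::fabs(x - y) <= relEps * scale;
}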

V
--
Please remove capital 'A's when replying by e-mail
I do not respond to top-posted replies, please don't ask
Jun 27 '08 #5
Avi

Thanks for your answers. It clarified the reasons for the differences
and helped me set a mechanism to distinguish between round off noise
and significant differences.

Avi
Jun 27 '08 #6
On May 13, 12:15 pm, Greg Herlihy <gre...@mac.com> wrote:
On May 13, 2:42 am, James Kanze <james.ka...@gmail.com> wrote:
On May 13, 7:16 am, Avi <avner-moshkov...@shaw.ca> wrote:
double f2 = e1 + e2;
The difference is calculated as follows:
double diff = f2-f1;
I expect the difference to be exactly 0 but instead, I get a
tiny value. Could someone explain the reason for differences
in the result?
It's very implementation dependent.
I'm compiling with gcc
More to the point, you're compiling on a machine which uses
extended precision in its floating point unit, and its floating
point registers. (An Intel, perhaps.)
The reason for this freedom is speed. Forcing the results of
each operation to be rounded to double can be done, but would
slow things down considerably (at least on Intel). And for the
most part, increased precision is not considered a defect.
But the extended floating point precision is a defect if - as
in this case - consistent numeric results are required.
By defect, I presume you mean with regards to what is needed,
not according to the standard. I'm not that good in numerics to
argue one way or the other. Intuitively, I would agree with
you, but I know that opinions about this differ amongst the
specialists.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #7
