Bytes | Developer Community
Floating Point comparison problem

Next month I will start work on a C++-based software package named CAT++,
which will provide FORTRAN-like arrays in C++ for use in the scientific
community, and hence will depend heavily on numerical computation. I was
reading sections 29.16 and 29.17 of the FAQ:

http://www.parashift.com/c++-faq-lit...html#faq-29.16

As a test, I tried this program on Linux with GCC 4.2.2, and two variants
of it gave me two different results:
#include <iostream>

int main()
{
    const double x = 0.05;
    const double y = 0.07;
    const double z = x + y;
    const double key_num = x + y;

    if( key_num == z )
    {
        std::cout << "==\n";
    }
    else
    {
        std::cout << "!=\n";
    }

    return 0;
}

========== OUTPUT ============

/home/arnuld/programs $ g++ -ansi -pedantic -Wall -Wextra test2.cpp
/home/arnuld/programs $ ./a.out
==
/home/arnuld/programs $
It always outputs ==. Now I changed key_num to this:

const double key_num = 0.12;

and this program now always outputs !=

/home/arnuld/programs $ g++ -ansi -pedantic -Wall -Wextra test2.cpp
/home/arnuld/programs $ ./a.out
!=
/home/arnuld/programs $
As the FAQ explains, floating point is inaccurate; but my project is
heavily oriented towards number-crunching, so I was thinking of stopping
development of that software in C++ and simply using FORTRAN for this
specific application in a special domain.

--
http://lispmachine.wordpress.com/

Feb 15 '08 #1
On Thu, 14 Feb 2008 22:07:38 -0800, red floyd wrote:
Did you remember to #include <limits>?

That's where std::numeric_limits lives.
That does not make any difference. Even after including <limits>, the
error remains the same:

/home/arnuld/programs $ g++ -ansi -pedantic -Wall -Wextra float-compare.cpp
float-compare.cpp: In function 'bool float_equality(const T&, const T&, const T&) [with T = double]':
float-compare.cpp:28:   instantiated from here
float-compare.cpp:13: error: invalid operands of types 'double ()()throw ()' and 'const double' to binary 'operator*'
float-compare.cpp:14: error: invalid operands of types 'double ()()throw ()' and 'const double' to binary 'operator*'
float-compare.cpp:16: error: call of overloaded 'abs(double)' is ambiguous
/usr/include/stdlib.h:691: note: candidates are: int abs(int)
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.2.2/../../../../include/c++/4.2.2/cstdlib:143: note: long int std::abs(long int)
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.2.2/../../../../include/c++/4.2.2/cstdlib:174: note: long long int __gnu_cxx::abs(long long int)
/home/arnuld/programs $

--
http://lispmachine.wordpress.com/

Feb 15 '08 #2
On Feb 15, 5:58 pm, arnuld <da...@kallu.com> wrote:

[quoted post #2 snipped -- the same compiler errors as above]
You should use

    std::numeric_limits<T>::epsilon()

in place of

    std::numeric_limits<T>::epsilon

since epsilon is a static member function, not a value. It will work now.

Regards,
Reetesh Mukul
Feb 15 '08 #3
On Fri, 15 Feb 2008 00:41:13 -0700, Jerry Coffin wrote:
I.e. std::numeric_limits<T>::epsilon is a function, and we need to use
its result. I've since come up with a method I like somewhat better
though:

#include <math.h>

bool isEqual(double a, double b, double prec = 1e-6) {
    int mag1, mag2;

    frexp(a, &mag1);
    frexp(b, &mag2);

    if (mag1 != mag2)
        return false;

    double epsilon = ldexp(prec, mag1);

    double difference = fabs(a - b);   // fabs, not abs: abs(double) is ambiguous here

    return difference <= epsilon;
}
It is getting much more complex; I think the FAQ's version is much
simpler to work with.

--
http://lispmachine.wordpress.com/

Feb 15 '08 #4
Referring to the "isEqual()" function of the FAQ version:
http://www.parashift.com/c++-faq-lit...html#faq-29.17

Here is something strange that happens on my system (Arch Linux, AMD64,
GCC 4.2.2):

int main()
{
    double a = 1.0 / 10.0;
    double b = a * 10.0;

    if( float_equality( a, b ) )   // float_equality() as given in the FAQ
    {
        std::cout << "EPSILON at job\n";
    }
    else
    {
        std::cout << "OOPS!" << std::endl;
    }

    return 0;
}

Opposite to what the FAQ says, this code always outputs: OOPS!
int main()
{
    double a = 1.0 / 10.0;
    double b = a * 10.0;
    const double c = 1.0;

    if( b != c )
    {
        std::cout << "Surprise ! \n" << std::endl;
    }
    else
    {
        std::cout << "== \n";
    }

    return 0;
}

It always outputs ==.
What is wrong here?
-- arnuld
http://lispmachine.wordpress.com/

Feb 15 '08 #5
arnuld wrote:
here is some strange thing that happens on my system, Archlinux, AMD64
GCC 4.2.2:

[first program from the previous post snipped]
opposite to what FAq says this code always outputs: OOPS!
Is it really that strange that 0.1 != 1.0?
[second program from the previous post snipped]
it always OUTPUTs -- ==
what is wrong here ?
nothing: 1. == 1. sounds reasonable. I guess you need a break :)

-- ralph
Feb 15 '08 #6
arnuld wrote:
[post #1 snipped -- the program comparing key_num == z, where both are x + y]

it always outputs ==. Now I changed key_num to this:

const double key_num = 0.12;

and this program now always outputs !=
Which seems normal to me. I certainly wouldn't expect anything
else.
As the FAQ explains, floating point is inaccurate; but my project is
heavily oriented towards number-crunching, so I was thinking
of stopping development of that software in C++ and simply using FORTRAN
for this specific application in a special domain.
It's not so much a question of floating point being inaccurate
per se, but rather that it follows somewhat different rules than
the real numbers. You can use them to simulate real numbers,
but you have to know what you are doing. (The term "inaccurate"
here generally should be interpreted to mean that they are an
inaccurate simulation of the real numbers.)

Changing to Fortran won't change anything, of course, because
the problem isn't with the language; it's with the way floating
point is implemented in hardware.

Until you've read and understood "What Every Computer Scientist
Should Know About Floating-Point Arithmetic"
(http://docs.sun.com/source/806-3568/ncg_goldberg.html), you
shouldn't even try to do anything with float or double. (But
note that that article is only an introduction. If you want to
write complex numeric applications, you'll need to know a lot
more.)

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Feb 15 '08 #7
Just some comments on Jerry's original code. I missed it when
he posted it.

On Feb 16, 7:24 am, Jerry Coffin <jcof...@taeus.com> wrote:
In article <pan.2008.02.15.13.51.10.918...@kallu.com>, da...@kallu.com
says...
#include <math.h>
bool isEqual(double a, double b, double prec = 1e-6) {
Whatever you do, don't call this isEqual. It doesn't test for
equality, and it doesn't establish an equivalence relationship.
Something like isApproximatelyEqual may require a bit more
typing, but it won't confuse the reader.
int mag1, mag2;
frexp(a, &mag1);
frexp(b, &mag2);
if ( mag1 != mag2)
return false;
And the above condition is almost certainly wrong. On my Intel
based machine, it means that your function will return false for
the values 1.0 and 0.9999999999999998, regardless of the
precision requested.

You really do have to calculate the epsilon as a function of the
two values involved, something like:

if ( sign( a ) != sign( b ) ) {   // not sure, but maybe...
    return false ;
}
double tolerance = fabs( a + b ) * prec / 2.0 ;
return fabs( a - b ) <= tolerance ;

Except, of course, that still has problems with very big values
(for which a + b might overflow) and very small values (for
which a - b might underflow, and result in zero). And I'm not
really sure what you should do for values which are near the
minimum representable: Do they compare approximately equal to
zero? Do they compare equal even if their signs are not the
same?

The correct answers to the above questions, of course, depend on
what is being done with the numbers. As Pete and I have said,
there isn't any real solution except understanding machine
floating point, and using that understanding intelligently.
double epsilon = ldexp(prec, mag1);
double difference = abs(a-b);
return difference <= epsilon;
}
It is going much more complex, I think the version of FAQ is much more
simple to work with.
This is still pretty trivial to work with -- you normally just use it
as:

if (isEqual(x,y))
whatever
else
whatever_else
And this is a perfect example of why the function shouldn't be
called isEqual. Any reader would expect that:
isEqual(x,y) && isEqual(y,z) ==> isEqual(x,z)
A function where that doesn't hold (modulo "singular values"
like NaN's) shouldn't be called isEqual. Ever.

[...]
I'll openly admit that _internally_, this is more complex than the code
in the FAQ.
I'm not sure what the FAQ says, but your code isn't really
correct (although I'm not sure that there is an absolute
definition of "correct" in this case).

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique oriente objet/
Beratung in objektorientierter Datenverarbeitung
9 place Smard, 78210 St.-Cyr-l'cole, France, +33 (0)1 30 23 00 34
Feb 16 '08 #8
In article <154dedac-ee33-44d1-bb89-
b1**********@d5g2000hsc.googlegroups.com>, ja*********@gmail.com says...

[ ... ]
What ever you do, don't call this isEqual. It doesn't test for
equality, and it doesn't establish an equivalence relationship.
Something like isApproximatelyEqual may require a bit more
typing, but it won't confuse the reader.
You have a good point -- this was originally posted as a follow-up to a
post that used the name isEqual, so I used the same even though it's
clearly not accurate. I believe a comment to that effect was made at the
time, but it should have been incorporated into the code.
>
int mag1, mag2;
frexp(a, &mag1);
frexp(b, &mag2);
if ( mag1 != mag2)
return false;

And the above condition is almost certainly wrong. On my Intel
based machine, it means that your function will return false for
the values 1.0 and 0.9999999999999998, regardless of the
precision requested.
Good point.
You really do have to calculate the epsilon as a function of the
two values involved, something like:
That's what I was using frexp and ldexp to do. I didn't take all the
corner cases into account, but the idea was definitely to adjust the
magnitude of the precision to fit the magnitudes of the numbers.
Unfortunately, both you and I took roughly the same route, of adjusting
the precision specification to fit the magnitude of the numbers, which
makes some problems such as possible underflow and overflow difficult to
avoid.

After some thought, I realized that it would be better to reverse that:
adjust the magnitude of the numbers to fit the precision specification.
This makes most of the problems disappear completely. I've included the
code at the end of this post.
Except, of course, that still has problems with very big values
(for which a + b might overflow) and very small values (for
which a - b might underflow, and result in zero). And I'm not
really sure what you should do for values which are near the
minimum representable: Do they compare approximately equal to
zero? Do they compare equal even if their signs are not the
same?
See below -- I'm pretty sure my new code avoids problems with either
underflow or overflow.

I'm not sure the problems with numbers extremely close to zero are
entirely real. The question being asked is whether two numbers are equal
to a specified precision. The answer to that is a simple yes/no. That
question may not always be the appropriate one for every possible
calculation, but that doesn't mean there's anything wrong with the code
as it stands.
I'm not sure what the FAQ says, but your code isn't really
correct (although I'm not sure that there is an absolute
definition of "correct" in this case).
This is a basic question of what should be specified, and whether that
specification fits all possible purposes. The code in the FAQ might fit
some specification, but only a very limited one (i.e. that all the
numbers involved must be quite close to 1.0 or -1.0).

I wouldn't attempt to claim that it fits every possible situation, but I
think the following code is (at least starting to get close to) correct.
The specification I'm following is that it is supposed to determine
whether two numbers are equal within a specified relative difference.
Here's the code, along with a few test cases:

#include <math.h>
#include <stdlib.h>   // for abs(int)

bool isNearlyEqual(double a, double b, double prec = 1e-6) {
    int mag1, mag2;

    double num1 = frexp(a, &mag1);
    double num2 = frexp(b, &mag2);

    if (abs(mag1 - mag2) > 1)
        return false;

    num2 = ldexp(num2, mag2 - mag1);

    return fabs(num2 - num1) < prec;
}

#ifdef TEST
#include <iostream>

int main() {

    std::cout.setf(std::ios_base::boolalpha);

    // these should yield false -- they differ after 5 digits
    std::cout << isNearlyEqual(1e-20, 1.00001e-20) << std::endl;
    std::cout << isNearlyEqual(1e200, 1.00001e200) << std::endl;
    std::cout << isNearlyEqual(-1e10, -1.0001e10) << std::endl;

    // these should yield true; all are equal to at least 6 digits
    std::cout << isNearlyEqual(1e-20, 1.000001e-20) << std::endl;
    std::cout << isNearlyEqual(1e200, 1.000001e200) << std::endl;
    std::cout << isNearlyEqual(1.0, 0.99999998) << std::endl;
    std::cout << isNearlyEqual(-1e10, -1.000001e10) << std::endl;
    return 0;
}

#endif

I'd certainly welcome any comments on this code.

As an aside, I'd note that I don't really like the 'if (abs(mag2-mag1) > 1)'
part, but I can't see anything a lot better at the moment. In
theory, I'd like it to take the tolerance into account -- e.g. if
somebody specifies a tolerance of 100, it should allow numbers that are
different by a factor of 100 to yield true. This would add some
complexity, however, and I can't think of any real uses to justify it.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Feb 16 '08 #9
On 2008-02-17 08:05:09, James Kanze wrote:
(Is "target value" the correct English word here: "valeur de consigne" in
French, "Sollwert" in German?)
"Setpoint" is usually used for this.

Gerhard
Feb 17 '08 #10
On Feb 17, 3:58 pm, Gerhard Fiedler <geli...@gmail.com> wrote:
On 2008-02-17 08:05:09, James Kanze wrote:
(Is "target value" the correct English word here: "valeur de
consigne" in French, "Sollwert" in German?)
"Setpoint" is usually used for this.
Thanks. I'm not familiar with the term, but then, I've never
worked in an English speaking country.

I'll admit that I particularly like the German in this case:
Istwert and Sollwert---literally "is-value" and
"should-be-value". Although my library was originally developed
in French, and has since been converted to English, the
parameters of the verification functions in the test suites have
always been "ist" and "soll". Short, simple, direct and to the
point.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Feb 18 '08 #11
Would this work for you: http://www.cygnus-software.com/paper...ringfloats.htm

Feb 18 '08 #12
On 2008-02-18 06:12:54, James Kanze wrote:
>>(Is "target value" the correct English word here: "valeur de consigne"
in French, "Sollwert" in German?)
>"Setpoint" is usually used for this.

Thanks. I'm not familiar with the term, but then, I've never
worked in an English speaking country.
See <http://en.wikipedia.org/wiki/Setpoint_%28control_system%29> (only a
stub, but still -- you also find it easily in user manuals of controllers
etc.)
I'll admit that I particularly like the German in this case: Istwert and
Sollwert
Yup. Too German though for general use :)

Gerhard
Feb 18 '08 #13
On Feb 18, 2:17 pm, Pete Becker <p...@versatilecoding.com> wrote:
[snip]
Perhaps badly worded, I read this as "Floating point math on everyday
PC's is not exactly the same as it is on paper". :-)

That's a generous reading.
Maybe but it isn't written as a peer reviewable paper...

It presents
as evidence the example of converting the text "0.2" to floating-point,
which has nothing to do with floating-point math.
Er, no it doesn't say that... He's saying that a declaration such as,
float f = 0.2f;
will give an underlying value, which is very close to but not
precisely 0.2.

Yes, that's a conversion from text (the 0.2f in the source code) to
floating-point. It has nothing to do with floating-point math, which
operates on the result of that conversion.
Ah, I thought you were talking about converting "string" values, e.g.
from a file.

<tongueFirmlyInCheck>

Again, I may be being 'generous' but I don't find the example of 0.2
as an introduction, to be /that/ bad. If you want to get some
surprising math then insert a simple addition, e.g.

f = f + 0.0;

</tongueFirmlyInCheck>

The OP is developing something that will "heavily depend on Numerical-
Computations" - he may well need the exactness...

If so, then this entire thread, about approximate equality, is misguided.
Yes, that may be true but only OP knows...
Feb 18 '08 #14
On 2008-02-18 09:30:02 -0500, Martin <ma**********@btopenworld.com> said:
On Feb 18, 2:17 pm, Pete Becker <p...@versatilecoding.com> wrote:
[snip]

<tongueFirmlyInCheck>

Again, I may be being 'generous' but I don't find the example of 0.2
as an introduction, to be /that/ bad. If you want to get some
surprising math then insert a simple addition, e.g.

f = f + 0.0;

</tongueFirmlyInCheck>
The point is that nobody claims that integer math is "not exact"
despite the fact that 0.2 cannot be represented exactly as an int
value. The difference is that programmers understand (mostly) how
integer math works on computers, and they don't understand how
floating-point math works. The problem is not floating-point math, but
ignorance. The solution is not hacked approximations, but understanding.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

Feb 18 '08 #15
In message
<8f**********************************@d5g2000hsc.googlegroups.com>,
Martin <ma**********@btopenworld.com> writes
>
<tongueFirmlyInCheck>

Again, I may be being 'generous' but I don't find the example of 0.2
as an introduction, to be /that/ bad. If you want to get some
surprising math then insert a simple addition, e.g.

f = f + 0.0;
Go on, surprise me.
></tongueFirmlyInCheck>
--
Richard Herring
Feb 18 '08 #16