
C++ doubt

Hello,
I have two doubts on C++.
1. float i=1.1;
double d=1.1;
if(i==d)
printf("equal\n");
else
printf("not-equal\n");
What is the output?
I am getting "not-equal" as output.
Why is it so?
Jul 23 '05 #1


This is a precision problem.
For two float numbers you should use the '>=' or '<=' operator, not the
'==' operator, to judge whether they are equal.

Jul 23 '05 #2


"santosh" <sa*************@in.bosch.com> wrote in message
news:d6**********@ns2.fe.internet.bosch.com...
Hello,
I have two doubts on C++.
1. float i=1.1;
double d=1.1;
if(i==d)
printf("equal\n");
else
printf("not-equal\n");
What is the output?
I am getting "not-equal" as output.
Why is it so?

float and double are imprecise types and neither of them can accurately
represent 1.1. When the float and the double get compared, the float is
first type-promoted to double. In this process, the difference between the
accuracy of the two types manifests itself: the two values are different.

/Peter
Jul 23 '05 #3

santosh wrote:
Hello,
I have two doubts on C++.
1. float i=1.1;
double d=1.1;
if(i==d)
printf("equal\n");
else
printf("not-equal\n");
What is the output?
I am getting "not-equal" as output.
Why is it so?


Both 'C' and 'C++' may output "not-equal" for this.

It has to do with how an infinite range of possible
values is represented in the limited bits available
to store the values. Most floating point values cannot
be stored precisely using the binary representation
used in computers. Since 'float' and 'double' may be
different sizes (i.e. use a differing number of binary
bits to hold their value), they each store a slightly
different value that approximates 1.1.

See the FAQ sections 29.16 and 29.17 for more details:

http://www.parashift.com/c++-faq-lit...html#faq-29.16

Regards,
Larry

--
Anti-spam address, change each 'X' to '.' to reply directly.
Jul 23 '05 #4

You do not have two doubles, you have a float and a double.
Comparing two floats with each other or two doubles with each other is
tricky but comparing a float with a double the way you do is asking for
problems.

The following happens when you do if(i==d):
i is implicitly expanded to double size. The problem is that only
the first 8 decimals of i have a value (1.1000000 should be about
right); a double has a precision of (at least) 16 decimals, so you end up
with a number 1.1000000???????? where the question marks can be anything
between 00000000 and 49999999, because 1.1 does not fit perfectly in the
double (or float) binary format, so the compiler will start some guesswork
on what the last 8 digits would have been.

The easiest way to see this in action is copy your float into a double
and then print out this double and the original float.

Jul 23 '05 #5

ve*********@hotmail.com wrote:

The following happens when you do if(i==d):
i is implicitly expanded to double size. The problem is that only
the first 8 decimals of i have a value (1.1000000 should be about
right); a double has a precision of (at least) 16 decimals, so you end up
with a number 1.1000000???????? where the question marks can be anything
between 00000000 and 49999999, because 1.1 does not fit perfectly in the
double (or float) binary format, so the compiler will start some guesswork
on what the last 8 digits would have been.


NO, THE COMPILER DOES NOT GUESS WHAT THE LAST DIGITS WOULD HAVE BEEN.
Phew, sorry to shout, but this is a fundamental misconception. The
compiler generates a double that has the same value as the float. No
guessing involved; usually it's just padding with zeros. The reason the
two original values are different is that the "extra" bits in the double
value were discarded when 1.1 was stored as a float. You can't get them
back. When a float is promoted to double it usually has a different
value from a double initialized with the same initializer.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #6

Then where do the 8 decimals come from, other than taking
a wild stab in the dark and guessing the number(s)?

Jul 23 '05 #7


"Teddy" <du********@hotmail.com> wrote in message
news:11**********************@f14g2000cwb.googlegroups.com...
This is a precision problem
For 2 float numbers you should use '>=' or '<=' operator, not '=='
operator to judge if they equal


How are <= and >= going to tell you if the values are equal?

(And unless you happen to know what occurs when you create a float and a
double from a numeric literal, you won't even know which one you'd *expect*
to be larger or smaller than the other.)

For comparing real numbers, if you *really* need to test equality, your only
choice is to test if the two numbers are "close enough" to be considered
equal. But in general, it's better to try to avoid making such a test at
all.

-Howard

Jul 23 '05 #8

santosh wrote:
Hello,
I have two doubts on C++.
1. float i=1.1;
double d=1.1;
if(i==d)
printf("equal\n");
else
printf("not-equal\n");
What is the output?
I am getting "not-equal" as output.
Why is it so?


You should read (at least) the floating point articles here:
http://www.petebecker.com/journeymansshop.html

Jul 23 '05 #9

ve*********@hotmail.com wrote:

(Context brought back; Pete Becker wrote:
NO, THE COMPILER DOES NOT GUESS WHAT THE LAST DIGITS WOULD HAVE BEEN.
Phew, sorry to shout, but this is a fundamental misconception. The
compiler generates a double that has the same value as the float. No
guessing involved; usually it's just padding with zeros. The reason
the two original values are different is that the "extra" bits in the
double value were discarded when 1.1 was stored as a float. You can't
get them back. When a float is promoted to double it usually has a
different value from a double initialized with the same initializer.
)
Then where do the 8 decimals come from, other than taking
a wild stab in the dark and guessing the number(s)?


Nowhere: the computer is using base 2, not base 10.
Binary:
0.1 = 1/2
0.01 = 1/2^2
etc.

Binary    Decimal
0.1       0.5
0.01      0.25
0.001     0.125
0.0001    0.0625
0.00001   0.03125
0.00011   0.09375

It can't represent 1.1 exactly in a finite number of bits; as you see
above, truncating the sequence gives a series of numbers almost,
but not exactly, equal to 0.1 (decimal). The links given by others on
this thread almost certainly cover this in much more detail.

--
imalone
Jul 23 '05 #10

ve*********@hotmail.com wrote:
Then where comes the 8 decimals of numbers come from other then taking
a wild stab in the dark and guessing number(s)?


I'm not completely sure what numbers you're referring to, but as a
guess, if you try to display more digits than a floating point type
holds, you'll get strange-looking results. Not because anyone is
guessing, but because binary numbers that end with all zeros don't
necessarily translate into decimal numbers that end with all zeros.
Not to mention that often values are rounded for display, so what you
see isn't necessarily what you've got.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #11


"Pete Becker" <pe********@acm.org> wrote in message
news:_9********************@giganews.com...
Howard wrote:

For comparing real numbers, if you *really* need to test equality, your
only choice is to test if the two numbers are "close enough" to be
considered equal.


If you *really* need to test equality you test equality. If you need to
test "close enough" you test "close enough." Exact equality of floating
point values is sometimes meaningful.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)


Quite right. I probably worded that poorly. I meant that if you can't
avoid doing *some* kind of equality comparison, then comparing for "close
enough" is the only reasonable way to accomplish the task (in the general
case), because there is no guarantee that an algebraic equality will
correspond to a physical equality (i.e., where == will return true) when
dealing with floating-point numbers on a finite-precision computer.

-Howard

Jul 23 '05 #12

> You misunderstood what I said before
Of course, it should be like this: if(fabs(f1-f2) <= 1.0E-20)
I'm sorry that I didn't give an example

Nobody writes if(f1<=f2) to test if f1 and f2 are equal
It is terrible


NP! Prolly standard to assume such things I'm just picky cuz I'm "new" :)
Jul 23 '05 #13

Howard wrote:

Quite right. I probably worded that poorly. I meant that if you can't
avoid doing *some* kind of equality comparison, then comparing for "close
enough" is the only reasonable way to accomplish the task (in the general
case), because there is no guarantee that an algebraic equality will
correspond to a physical equality (i.e., where == will return true) when
dealing with floating-point numbers on a finite-precision computer.


Again: if equality is what you need, that's what you test for. It's well
defined, exact, and reproducible. In some cases it might be appropriate
to test for some relaxed notion of equality, but that should be by
design, not by application of some general rule. Most people apply
general rules like this one to floating point computations because they
don't understand floating point well enough to know what they're doing.
Substituting "close enough" doesn't make up for that deficiency, so it
only gives the illusion of accuracy.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #14


"santosh" <sa*************@in.bosch.com> wrote in message
news:d6**********@ns2.fe.internet.bosch.com...
Hello,
I have two doubts on C++.
1. float i=1.1;
double d=1.1;
if(i==d)
printf("equal\n");
else
printf("not-equal\n");
What is the output?
I am getting "not-equal" as output.
Why is it so?


That's by design. You can't compare for equality with doubles or floats. Or
rather, comparing them won't provide the expected behaviour because of the
nature of these types.
Essentially, 1.999999 and 2.0 are both valid representations of the same
number (remember the topic of significant digits?). Also, comparing doubles
in any way is an intensive CPU operation.

Jul 23 '05 #15

Peter Julian wrote:
Essentially, 1.999999 and 2.0 are both valid representations of the same
number (remember the topic of significant digits?).
No, they're two distinct numbers.
Also, comparing doubles
in any way is an intensive CPU operation.


Not on any architecture I'm aware of. Two doubles are equal if they have
the same bit pattern.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #16

Pete Becker <pe********@acm.org> wrote in
news:_9********************@giganews.com:

If you *really* need to test equality you test equality. If you need
to test "close enough" you test "close enough." Exact equality of
floating point values is sometimes meaningful.

That's fine, but dangerous. You are highly likely to get compiler
differences. Not all of them obey the Banker's standard.
Jul 23 '05 #17

Peter Gordon wrote:
Pete Becker <pe********@acm.org> wrote in
news:_9********************@giganews.com:

If you *really* need to test equality you test equality. If you need
to test "close enough" you test "close enough." Exact equality of
floating point values is sometimes meaningful.


That's fine, but dangerous. You are highly likely to get compiler
differences. Not all of them obey the Banker's standard.


It's not at all dangerous to understand the problem you're trying to
solve and use that knowledge to write code that solves it correctly.
It's extremely dangerous to use approximate solutions just because you
don't understand the problem well enough to get correct results.

Most programmers (including me) don't understand floating point math
well enough to use it in serious applications. Too many programmers (not
including me) assume that they can write something that's approximately
right and the result will be good enough.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #18

Pete Becker <pe********@acm.org> wrote in
news:I_CdnUxJVaCDoQ_fRVn-2A@giganews.com:
Peter Gordon wrote:
Pete Becker <pe********@acm.org> wrote in

That's fine, but dangerous. You are highly likely to get compiler
differences. Not all of them obey the Banker's standard.
It's not at all dangerous to understand the problem you're trying to
solve and use that knowledge to write code that solves it correctly.
It's extremely dangerous to use approximate solutions just because you
don't understand the problem well enough to get correct results.

Most programmers (including me) don't understand floating point math
well enough to use it in serious applications. Too many programmers
(not including me) assume that they can write something that's
approximately right and the result will be good enough.

In binary, there is no correct result, they are all approximations.
The point of having a standard is so every system calculates the
same approximation. The first group that "butted their heads" up
against this problem were bankers. The odd cent difference does
not matter much, but is not good enough for balancing books.
They defined, or paid to have defined, a standard for coping with
this imprecision. It has become the standard for the computer
industry.

However, let's agree that using an equality test on floating
point numbers is not a good idea.
Jul 23 '05 #19

Further to my previous post.
I did a quick google search. Check:
http://p2p.wrox.com/topic.asp?TOPIC_ID=3122

It's a common problem and I know from personal
experience that it can cause some very heated
arguments.
Jul 23 '05 #20

Peter Gordon wrote:

In binary, there is no correct result, they are all approximations.
Nonsense. 0.5 + 0.5 is one, whether you do it in binary or decimal.
The point of having a standard is so every system calculates the
same approximation. The first group that "butted their heads" up
against this problem were bankers. The odd cent difference does
not matter much, but is not good enough for balancing books.
They defined, or paid to have defined, a standard for coping with
this imprecision. It has become the standard for the computer
industry.
Nope. It may be a standard for financial computations, but there is far
more to numerical analysis than financial computations.

However, let's agree that using an equality test on floating
point numbers is not a good idea.


Since I've said at least three times that that's not the case, it's not
likely that we'll agree on it.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #21

Just for fun:

#include <iostream>
#include <ostream>
using namespace std;

int main( )
{
double d;
float f;
d = 1.1;
f = 1.1;

if (static_cast<float>(d) == f)
cout << "They are equal.\n";
else
cout << "They are unequal.\n";
//Shows equal.

if (d == f)
cout << "They are equal.\n";
else
cout << "They are unequal.\n";
//Shows unequal.

float tempf;
double tempd;
tempf = d;
tempd = d;
tempf = tempf && tempd;

if (tempd == d)
cout << "They are equal.\n";
else
cout << "They are unequal.\n";
//Shows equal.

return 0;
}
--
Gary

"Pete Becker" <pe********@acm.org> wrote in message
news:0v********************@giganews.com...
Peter Julian wrote:
Essentially, 1.999999 and 2.0 are both valid representations of the same
number (remember the topic of significant digits?).


No, they're two distinct numbers.
Also, comparing doubles
in any way is an intensive CPU operation.


Not on any architecture I'm aware of. Two doubles are equal if they have
the same bit pattern.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

Jul 23 '05 #22

Let me see if I get this right this time:
I made the (stupid) mistake of looking at the doubles in normal
decimal notation. If I'd done it in hex I'd have seen the last bytes as 0.
And using the normal math on the resulting number I should get the
extra digits.
Right?

Jul 23 '05 #23

"ve*********@hotmail.com" wrote:

Let me see if I get this right this time:
I made the (stupid) mistake of looking at the doubles in normal
decimal notation. If I'd done it in hex I'd have seen the last bytes as 0.
And using the normal math on the resulting number I should get the
extra digits.
Right?


right.

A somewhat similar example in decimal would be:
What is the value of 1.0 / 3.0 ?
If you insist on 4 decimal places this equals: 0.3333
But if you allow for 7 decimal places, you don't get 0.3333000
but you get 0.3333333

--
Karl Heinz Buchegger
kb******@gascad.at
Jul 23 '05 #24


"Gary Labowitz" <gl*******@comcast.net> wrote in message
news:ka********************@comcast.com...
Just for fun:

float tempf;
double tempd;
tempf = d;
tempd = d;
tempf = tempf && tempd; // did you mean tempd = ?

if (tempd == d) // or did you mean to check tempf here?
cout << "They are equal.\n";
else
cout << "They are unequal.\n";
//Shows equal.

return 0;
}
--
Gary


I suspect you made a coding error there. You're computing a value for
tempf, but that's not what you're using in the comparison. Did you mean to
check the value of tempf, or to put the result of the && in the double
tempd? I suspect the latter, since that wouldn't lose precision, but you
didn't say.

-Howard
Jul 23 '05 #25

Gary Labowitz wrote:
Just for fun:

#include <iostream>
#include <ostream>
using namespace std;

int main( )
{
double d;
float f;
d = 1.1;
f = 1.1;

if (static_cast<float>(d) == f)
cout << "They are equal.\n";
else
cout << "They are unequal.\n";
//Shows equal.
I got unequal.

if (d == f)
cout << "They are equal.\n";
else
cout << "They are unequal.\n";
//Shows unequal.
I got unequal.

float tempf;
double tempd;
tempf = d;
tempd = d;
tempf = tempf && tempd;

if (tempd == d)
cout << "They are equal.\n";
else
cout << "They are unequal.\n";
//Shows equal.
Of course! tempd is a double and was made equal to d. The tempf is not
used so there is no point in even having it. However if you made a typo
and wanted to type "if (tempf == d)" then I got unequal.
return 0;
}

Jul 23 '05 #26

Of course! tempd is a double and was made equal to d. The tempf is not
used so there is no point in even having it. However if you made a typo
and wanted to type "if (tempf == d)" then I got unequal.


I just re-read this, and it doesn't sound quite like I wanted. No
offence was offered :)
Jul 23 '05 #27

Randy wrote:

if (static_cast<float>(d) == f)


I got unequal.


Some compilers by default don't do all of the adjustments to floating
point types that the standard requires, so the comparison would actually
be d as a double against f promoted to double. That makes for faster
computations, but technically doesn't comply with the C and C++
requirements. Check for a compiler switch that controls this, if you
prefer slow math. <g>

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #28

"Howard" <al*****@hotmail.com> wrote in message
news:hf*********************@bgtnsc05-news.ops.worldnet.att.net...

"Gary Labowitz" <gl*******@comcast.net> wrote in message
news:ka********************@comcast.com...
Just for fun:

float tempf;
double tempd;
tempf = d;
tempd = d;
tempf = tempf && tempd; // did you mean tempd = ?

if (tempd == d) // or did you mean to check tempf here?
cout << "They are equal.\n";
else
cout << "They are unequal.\n";
//Shows equal.

return 0;
}
--
Gary
I suspect you made a coding error there. You're computing a value for
tempf, but that's not what you're using in the comparison. Did you mean
to check the value of tempf, or to put the result of the && in the double
tempd? I suspect the latter, since that wouldn't lose precision, but you
didn't say.


The latter. I intended to take the "float" portion of the double and && in
the second half of the "double" portion with it.
The whole thing could have been better shown by displaying the bits of the
two variables. Oh well.
--
Gary
Jul 23 '05 #29


"Pete Becker" <pe********@acm.org> wrote in message
news:0v********************@giganews.com...
Peter Julian wrote:
Essentially, 1.999999 and 2.0 are both valid representations of the same
number (remember the topic of significant digits?).


No, they're two distinct numbers.
Also, comparing doubles
in any way is an intensive CPU operation.


Not on any architecture I'm aware of. Two doubles are equal if they have
the same bit pattern.

No, the result is not necessarily the same bit pattern. Different CPU
architectures and compilers have different floating point data
representations, along with intermediate results that vary in precision (in
some cases, even the same architecture will generate different results in
debug than in release mode). And some decimal values, like 0.1,
can't be accurately represented in a floating point value.

Jul 23 '05 #30

Peter Julian wrote:
"Pete Becker" <pe********@acm.org> wrote in message
news:0v********************@giganews.com...
Peter Julian wrote:

Also, comparing doubles
in any way is an intensive CPU operation.
Not on any architecture I'm aware of. Two doubles are equal if they have
the same bit pattern.


No, the result is not neccessarily the same bit pattern.


Huh? Again: on every architecture I'm aware of, two doubles are equal if
they have the same bit pattern. It has nothing to do with where they
came from. It's simply a fact.
Different CPU
architectures and compilers have different floating point data
representations along with intermediate results that vary in precision (in
some cases, even the same architecture will generate different results in
debug than they will in release mode). And some decimal values, like 0.1,
can't be accurately represented in a floating point value.


Yes, but that has nothing to do with what I said, nor does it have
anything to do with your claim, which was that "comparing doubles in any
way is an intensive CPU operation." That is simply false. Comparing
doubles for exact equality is trivial.

It is not necessary to make the sign of the cross to ward off evil
whenever anyone mentions testing floating point values for equality.
What is necessary is objective analysis.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #31


"Pete Becker" <pe********@acm.org> wrote in message
news:Ld********************@rcn.net...
Peter Julian wrote:
"Pete Becker" <pe********@acm.org> wrote in message
news:0v********************@giganews.com...
Peter Julian wrote:
Also, comparing doubles
in any way is an intensive CPU operation.

Not on any architecture I'm aware of. Two doubles are equal if they have
the same bit pattern.

No, the result is not neccessarily the same bit pattern.


Huh? Again: on every architecture I'm aware of, two doubles are equal if
they have the same bit pattern. It has nothing to do with where they
came from. It's simply a fact.


That's exactly my point: the 2 doubles *must* have the same bit pattern. In
other words, this is guaranteed to fail unless 64-bit double precision or
rounded 32-bit precision is involved (the output is self-explanatory):

#include <iostream>
#include <iomanip>

int main()
{
double d = 21.1;
double d_result = 2.11 * 10.0;
std::cout << "d = " << std::setprecision(20) << d;
std::cout << "\nd_result = " << std::setprecision(20) << d_result;
if( d == d_result )
{
std::cout << "\nthe exact same bit pattern.\n";
}
else
{
std::cout << "\nnot the same bit pattern.\n";
}

return 0;
}

Take note: some compilers will optimize the double into an int in release
mode. Some compilers will not. I'm not discussing integers here.

output:
d = 21.100000000000001
d_result = 21.099999999999998
not the same bit pattern.
Different CPU
architectures and compilers have different floating point data
representations along with intermediate results that vary in precision (in
some cases, even the same architecture will generate different results in
debug than they will in release mode). And some decimal values, like 0.1,
can't be accurately represented in a floating point value.

Yes, but that has nothing to do with what I said, nor does it have
anything to do with your cliam, which was that "comparing doubles in any
way is an intensive CPU operation." That is simply false. Comparing
doubles for exact equality is trivial.


Do you still think it's trivial? Think again. It's not. In fact it's as
time-consuming as it is inexact.

It is not necessary to make the sign of the cross to ward off evil
whenever anyone mentions testing floating point values for equality.
What is necessary is objective analysis.
versus proof? Please... read up on IEEE 754
test away-> http://babbage.cs.qc.edu/courses/cs341/IEEE-754.html

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)


Jul 23 '05 #32

Peter Julian wrote:

That's exactly my point: the 2 doubles *must* have the same bit pattern. In
other words, this is guaranteed to fail unless 64-bit double precision or
rounded 32-bit precision is involved (the output is self-explanatory):


I have no idea what point you're trying to make here. I made the simple
statement that two doubles are equal if their representations have the
same bit pattern. It doesn't matter whether you got those two values
from doubles, floats, or ints. If they hold the same bit pattern they
are equal, and that is a meaningful and useful definition of equality*.
The fact that doing a computation in two different ways can produce
results that are not equal is irrelevant. Remember, all this started
from your assertion that "comparing doubles IN ANY WAY is an intensive
CPU operation [emphasis added]." That is not true, because you can
meaningfully compare doubles by treating them as suitably sized integral
values and comparing them as such. That is not an intensive CPU operation.

If your point is simply that doing what looks like the same computation
in two different ways can produce different results, then you're wasting
everyone's time, because that was established much earlier in this thread.
*For the nitpickers: yes, if the values are NaNs they are supposed to
compare unequal. And, of course, there are some values that sometimes
have multiple representations (for example, 0 and -0 are distinct values
that compare equal), so it is not true that two values are equal only if
they have the same bit pattern. But that's a completely different
discussion.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #33

"Pete Becker" <pe********@acm.org> wrote in message
news:wo********************@rcn.net...
Peter Julian wrote:

That's exactly my point: the 2 doubles *must* have the same bit pattern. In
other words, this is guaranteed to fail unless 64-bit double precision or
rounded 32-bit precision is involved (the output is self-explanatory):


I have no idea what point you're trying to make here. I made the simple
statement that two doubles are equal if their representations have the
same bit pattern. It doesn't matter whether you got those two values
from doubles, floats, or ints. If they hold the same bit pattern they
are equal, and that is a meaningful and useful definition of equality*.
The fact that doing a computation in two different ways can produce
results that are not equal is irrelevant. Remember, all this started
from your assertion that "comparing doubles IN ANY WAY is an intensive
CPU operation [emphasis added]." That is not true, because you can
meaningfully compare doubles by treating them as suitably sized integral
values and comparing them as such. That is not an intensive CPU operation.

If your point is simply that doing what looks like the same computation
in two different ways can produce different results, then you're wasting
everyone's time, because that was established much earlier in this thread.
*For the nitpickers: yes, if the values are NaNs they are supposed to
compare unequal. And, of course, there are some values that sometimes
have multiple representations (for example, 0 and -0 are distinct values
that compare equal), so it is not true that two values are equal only if
they have the same bit pattern. But that's a completely different
discussion.


Peter, perhaps you know this for sure: When we were first developing (circa
1956) it was pretty standard to do comparisons by subtracting one operand
from the other and checking the hardware zero and overflow flags. If the
hardware operation resulted in a zero result, then the operands were
considered equal. As you can guess, there are cases where the initial values
in bits were not exactly identical, but after scaling, conversions, and
other manipulations doing the subtraction, the result could be zero (all 0
bits, with underflow, if I recall right). Do modern CPUs operate in this
way? Or are they required to simulate it?
My mindset is that there is no actual "compare" of bit structures, but
manipulation to effect subtraction and a check of the result. Comments?
--
Gary
Jul 23 '05 #34

Gary Labowitz wrote:

Peter,
That's "Pete". <g>
perhaps you know this for sure: When we were first developing (circa
1956) it was pretty standard to do comparisons by subtracting one operand
from the other and check the hardware zero and overflow flags. If the
hardware operation resulted in a zero result, then the operands were
considered equal. As you can guess, there are cases where the initial values
in bits were not exactly identical, but after scaling, conversions, and
other manipulations doing the subtraction, the result could be zero (all 0
bits, with underflow, if I recall right). Do modern CPU's operate in this
way? Or are they required to simulate it?
My mindset is that there is no actual "compare" of bit structures, but
manipulation to effect subtraction and check of result. Comments?


On the Intel architecture, for example, the representation of floating
point values is carefully contrived so that you can determine the
relative order of two values (other than NaNs) by comparing their bits
as if they represented signed integral types. Values are always
normalized (except, of course, for values that are too small to
normalize), the high-order bit contains the sign, the next few bits
contain a biased exponent (all-zeros is the smallest [negative] exponent
value; an exponent of 0 is stored as 127 for floats, etc.), and the
remaining bits are the fraction. I assume the hardware takes advantage of this...

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #35

In article <Ld********************@rcn.net>, pe********@acm.org
says...

[ ... ]
Huh? Again: on every architecture I'm aware of, two doubles are equal if
they have the same bit pattern. It has nothing to do with where they
came from. It's simply a fact.


Depending on implementation, two NaNs that have identical bit
patterns can still compare as being unequal.

In the other direction, two floating point numbers can have different
bit patterns and still compare as equal to each other (e.g. on an
Intel x86, 0.0 can be represented by a large number of different bit
patterns).

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jul 23 '05 #36

Jerry Coffin wrote:

Depending on implementation, two NaNs that have identical bit
patterns can still compare as being unequal.

In the other direction, two floating point numbers can have different
bit patterns and still compare as equal to each other (e.g. on an
Intel x86, 0.0 can be represented by a large number of different bit
patterns).


Yes, those are the issues I was talking about when I said:

*For the nitpickers: yes, if the values are NaNs they are supposed to
compare unequal. And, of course, there are some values that sometimes
have multiple representations (for example, 0 and -0 are distinct values
that compare equal), so it is not true that two values are equal only if
they have the same bit pattern. But that's a completely different
discussion.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #37

P: n/a
In article <ir********************@rcn.net>, pe********@acm.org
says...

[ ... ]
Yes, those are the issues I was talking about when I said:

*For the nitpickers: yes, if the values are NaNs they are supposed to
compare unequal. And, of course, there are some values that sometimes
have multiple representations (for example, 0 and -0 are distinct values
that compare equal), so it is not true that two values are equal only if
they have the same bit pattern. But that's a completely different
discussion.


Somehow I seem to have missed that (part of that?) post, but I think
it's basically inaccurate.

On a real machine, a floating point comparison typically takes two
operands and produces some single-bit results (usually in a special
flags register). It starts by doing a floating point subtraction of
one of those operands from the other, and then examines the result of
that subtraction to see whether it's zero, negative, etc., and sets
flags appropriately based on those conditions.

Now, you (Pete) seem to be focusing primarily on the floating point
subtraction itself. While there's nothing exactly wrong with that,
it's a long ways from the whole story. The floating point subtraction
just produces a floating point result -- and it's the checks I
mentioned (for zero and NaN) that actually determine the state of the
zero flag.

As such, far from being a peripheral detail important only to
nitpickers, this is really central, and the subtraction is nearly a
peripheral detail that happens to produce a value to be examined --
in particular, it's also perfectly reasonable (and fairly common) to
set the flags based on other operations as well as subtraction.

As I recall, the question that started this sub-thread was mostly one
of whether a floating point comparison was particularly expensive. In
this respect, I'd say Pete is basically dead-on: a floating point
comparison is quite fast not only on current hardware, but even on
ancient stuff (e.g. 4 clocks on a 486). Unless the data being
compared has been used recently enough to be in registers (or at
least cache) already, loading the data from memory will almost always
take substantially longer than doing the comparison itself.

I suspect, however, that this mostly missed the point that was
originally attempting to be made: which is that under _most_
circumstances, comparing floating point numbers for equality is a
mistake. Since floating point results typically get rounded, you
usually want to compare based on whether the difference between the
two is smaller than some delta. This delta will depend on the
magnitude of the numbers involved. The library defines FLT_EPSILON,
DBL_EPSILON, and LDBL_EPSILON in float.h (and the equivalent
std::numeric_limits<T>::epsilon() in <limits> for C++), each the
smallest difference that can be represented between 1 and the next
value of that type larger than 1.

Therefore, if you're doing math with doubles (for example) you start
by estimating the amount of rounding that might happen based on what
you're doing. Let's assume you have a fairly well-behaved
computation, and it involves a dozen or so individual calculations.
You then do your comparison something like:

delta = fabs((val1 + val2) / 2.0) * DBL_EPSILON * 12.0;

if (fabs(val1 - val2) <= delta)
// consider them equal.
else
// consider them unequal.

While the comparison itself is fast and cheap, it's really only a
small part of the overall job.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jul 23 '05 #38

P: n/a
Jerry Coffin wrote:

Now, you (Pete) seem to be focusing primarily on the floating point
subtraction itself.
No, what I'm talking about is based on the specific representation, and
shortcuts that can be used in semi-numeric algorithms.
While there's nothing exactly wrong with that,
it's a long ways from the whole story.


Of course. Remember, the context here is the assertion that "comparing
doubles in any way is an intensive CPU operation." That's far too broad
a generalization.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #39

P: n/a
"Pete Becker" <pe********@acm.org> wrote in message
news:-P********************@giganews.com...
Gary Labowitz wrote:

Peter,


That's "Pete". <g>
perhaps you know this for sure: When we were first developing (circa
1956) it was pretty standard to do comparisons by subtracting one
operand from the other and checking the hardware zero and overflow
flags. If the hardware operation resulted in a zero result, then the
operands were considered equal. As you can guess, there are cases where
the initial values in bits were not exactly identical, but after
scaling, conversions, and other manipulations doing the subtraction,
the result could be zero (all 0 bits, with underflow, if I recall
right). Do modern CPUs operate in this way? Or are they required to
simulate it?
My mindset is that there is no actual "compare" of bit structures, but
manipulation to effect subtraction and check of result. Comments?


On the Intel architecture, for example, the representation of floating
point values is carefully contrived so that you can determine the
relative order of two values (other than NaNs) by comparing their bits
as if they represented signed integral types. Values are always
normalized (except, of course, for values that are too small to
normalize), the high-order bit contains the sign, the next few bits
contain a biased exponent (all-zeros is the smallest [negative] exponent
value; the bias is 127 for floats, so a true exponent of 0 is stored as
127, etc.), and the remaining bits are
the fraction. I assume the hardware takes advantage of this...


Okay, Pete!
I worked on the microcode for the S/360 Mod 40, so I'm talking IBM mainframe
here. I'm sure the Intel architecture takes advantage of checking NaN and
sign differences before doing any data manipulations to compare values.
Special conditions like that are usually knocked out immediately. I always
wanted to study the Intel designs, but I kept putting it off until now I'm
not likely ever to look into it.
--
Gary
Jul 23 '05 #40
