
Hi all,
I have encountered the strangest behavior. Check out this simple
program:
#include <stdio.h>

int main()
{
    double time = 1;
    double i = 0;
    for (double i = 0; i < time; i += 0.01)
    {
        if (i == 0.15)
        {
            printf("found %f\n", i);
        }
        if (i < 0.1)
        {
            printf("foundXX %f\n", i);
        }
    }
    return 1;
}
What would you expect to be printed?
All the numbers from 0.00 to 0.09 with the prefix foundXX,
and the last line of the output should be "found 0.15", right?!
Wrong... what I get is all the numbers from 0.00 up to and
including 0.10 printed (including 0.1!!).
And when checking if (i == 0.1) { printf("foundXX %f\n", i); } it does
not print foundXX 0.1!!
Why exactly does it think that 0.1 is less than 0.1!!??
anyone?
Thanks
 em************@gmail.com wrote:
for (double i = 0; i < time; i += 0.01)
{
    if (i == 0.15)
    {
        printf("found %f\n", i);
    }
    if (i < 0.1)
    {
        printf("foundXX %f\n", i);
    }
}
As others have pointed out, 0.01 cannot be represented exactly as a
binary floating point number (for the same reason that e.g. 1/3 cannot
be represented exactly with a finite number of decimal digits).
Clearly you want a precise number of iterations to your loop, and you
want precise control on what happens with some precise values of the
loop counter. In those cases what you should do is to use an integer
loop counter, and calculate the floating point value from it. In other
words:
for (int i = 0; i < 100; ++i)
{
    double value = i / 100.0;
    if (i == 15) std::cout << "found " << value << "\n";
    if (i < 10) std::cout << "foundXX " << value << "\n";
}
The integer loop counter will make sure that an accurate number of
iterations is performed, and comparing this integer loop counter in the
conditionals will make sure that those conditionals give an accurate
result. Whenever the floating-point value needs to be used, use the
'value' variable, as exemplified above.
 Mi***************@gmail.com wrote:
Machine double numbers are a finite subset of the
infinite real number domain. The operations on them executed by a
computer are similar but not identical to the math +, -, *, /, ...
operators: 0.1 + 0.1 + 0.1 + 0.1 + 0.1 .
Then why doesn't an implementation use a large number of bits, for
people that want accuracy and also for x==y to work?
Like 8,000 bits or 8,000,000 bits or any number of bits necessary to
store all the possible numbers for double for example.  
"Ioannis Vranos" <iv*****@nospam.no.spamfreemail.gr> wrote in message
news:ft***********@ulysses.noc.ntua.gr...
Mi***************@gmail.com wrote:
>Machine double numbers are a finite subset of the infinite real number domain. The operations on them executed by a computer are similar but not identical to the math +, -, *, /, ... operators 0.1 + 0.1 + 0.1 + 0.1 + 0.1 .
Then why doesn't an implementation use a large number of bits, for
people that want accuracy and also for x==y to work?
Like 8,000 bits or 8,000,000 bits or any number of bits necessary to
store all the possible numbers for double for example.
Since 0.1 in binary is an infinitely repeating expansion, it would take
an infinite number of bits to represent it exactly.

Jim Langston ta*******@rocketmail.com  
Ioannis Vranos wrote:
Then why doesn't an implementation use a large number of bits, for
people that want accuracy and also for x==y to work?
Sure, if you want your program to be a thousand times slower.
The C++ compiler will usually use the FPU (and sometimes even SSE) to
make floating point calculations in hardware. To get larger floating
point values you would have to use a software library, which would be
enormously slower.
Besides, 0.1 is inaccurate in binary floating point format regardless
of the number of bits used (for the exact same reason as 1/3 is
inaccurate in decimal format regardless of how many decimals you use).
Like 8,000 bits or 8,000,000 bits or any number of bits necessary to
store all the possible numbers for double for example.
double already stores all possible numbers representable with a double.  
On 10 Apr., 01:49, Ioannis Vranos <ivra...@nospam.no.spamfreemail.gr>
wrote:
Michael.Boehni...@gmail.com wrote:
Machine double numbers are a finite subset of the
infinite real number domain. The operations on them executed by a
computer are similar but not identical to the math +, -, *, /, ...
operators 0.1 + 0.1 + 0.1 + 0.1 + 0.1 .
oops. " != 0.5 " is missing.
>
Then why doesn't an implementation use a large number of bits, for
people that want accuracy and also for x==y to work?
Like 8,000 bits or 8,000,000 bits or any number of bits necessary to
store all the possible numbers for double for example.
A large number of bits is not enough; you'd need an *infinite* number
of bits. Even if you extend the "normal" double number concept with
fractions (e.g. store the two integers 1 and 10 to represent 1/10),
you cannot represent the whole rational set (e.g. sqrt(2), pi or e
cannot be stored like this without loss of precision). Also, there is
a performance issue: the more memory you use to store a single
value, the longer it takes to operate on it.
AFAIR, current x86 CPUs use 80 binary digits internally for an
extended precision floating point number and 64 binary digits to
store a double in RAM. Should be enough for most applications, *if*
you follow the caveats mentioned before.
In particular, do not compare floats/doubles for (in)equality with the
== / != operators. If you want more precision you need to do it in
application code, which is considerably slower than using direct
CPU/FPU instructions for floating point.
best,
Michael.  
On Apr 10, 1:49 am, Ioannis Vranos <ivra...@nospam.no.spamfreemail.gr>
wrote:
Michael.Boehni...@gmail.com wrote:
Machine double numbers are a finite subset of the
infinite real number domain. The operations on them executed by a
computer are similar but not identical to the math +, -, *, /, ...
operators 0.1 + 0.1 + 0.1 + 0.1 + 0.1 .
Then why doesn't an implementation use a large number of bits, for
people that want accuracy and also for x==y to work?
Like 8,000 bits or 8,000,000 bits or any number of bits
necessary to store all the possible numbers for double for
example.
I'm not sure I understand. A double has enough bits to store
all possible numbers for double. By definition: if the number
can't be stored in a double, then it's not a possible number for
double.
The problem here is that the real number 0.1 is not a possible
number for double. And if you're asking that the implementation
use enough bits to store all possible real numbers, that's
log2(N) bits, where N is the number of possible real numbers. And
offhand, how many real numbers would you say there are?
In this particular case, an implementation could have masked the
problem by using a decimal representation for double. But that
will fail as soon as you try something like:
    step = 1.0/3.0;
    for (value = 0.0; value != 1.0; value += step) ...
(Back when I was working in this environment, I actually wrote a
proof that for all finite representations of floating point
values, the loop:
    step = 1.0/N;
    for (value = 0.0; value != 1.0; value += step)
will fail for some values of N.)

James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Juha Nieminen wrote:
Ioannis Vranos wrote:
>Then why doesn't an implementation use a large number of bits, for people that want accuracy and also for x==y to work?
Sure, if you want your program to be a thousand times slower.
The C++ compiler will usually use the FPU (and sometimes even SSE) to
make floating point calculations in hardware. To get larger floating
point values you would have to use a software library, which would be
enormously slower.
Besides, 0.1 is inaccurate in binary floating point format regardless
of the number of bits used (for the exact same reason as 1/3 is
inaccurate in decimal format regardless of how many decimals you use).
OK, but in math we have a symbol to represent the infinite repeating
sequences in decimals, like 7.33333... or 7.343434...  
Ioannis Vranos wrote:
OK, but in math we have a symbol to represent the infinite repeating
sequences in decimals, like 7.33333... or 7.343434...
Floating point numbers are not the set of real numbers, nor are they
symbolic numbers. They are binary numbers with a fixed number of bits.
You can't expect to be able to do with them what you can do in
"mathematics". You can only expect to approximate the result of a
finite number of operations.
A large number of bits is not enough, you'd need an *infinite* number
of bits. Even if you extend the "normal" double numbers concept by
fractions (e.g. store two integer numbers 1 and 10 to represent 1/10),
you cannot represent the whole rational set (e.g. sqrt(2), pi or e
cannot be stored like this without loss of precision). Also, there is
an issue in performance: the more memory you use to store a single
value, the longer it will take to operate on them.
/pedant mode on
sqrt(2), pi and e are not in "the rational set"; they are irrational,
meaning that they cannot be written as the quotient of two integers.
Moreover, pi and e are transcendental, meaning that they cannot be
expressed as the root of a polynomial equation with rational
coefficients (like sqrt(2) can, since it is a root of X^2 - 2 = 0).
What you mean is the real set.
/pedant mode off  
Juha Nieminen wrote:
Ioannis Vranos wrote:
>OK, but in math we have a symbol to represent the infinite repeating sequences in decimals, like 7.33333... or 7.343434...
Floating point numbers are not the set of real numbers, nor are they
symbolic numbers. They are binary numbers with a fixed amount of bits.
You can't expect to be able to do with them what you can do in
"mathematics". You can only expect being able to approximate a finite
amount of operations.
Well I could think of an implementation where the (last) repeating
sequences could be identified with the use of extra bits. For example an
additional byte could be reserved and used for this in the style:
10000000 = The last decimal digit is repeating indefinitely,
example: 1.1111111111111111....
11000000 = The two last decimal digits are repeating indefinitely,
example: 1.1212121212121212....
11100000 = The three last decimal digits are repeating indefinitely,
example: 2.233234123123123123....
11110000 = The four last decimal digits are repeating indefinitely,
example, 1.543424321432143214321.....
etc.  
Ioannis Vranos wrote:
Juha Nieminen wrote:
>Ioannis Vranos wrote:
>>OK, but in math we have a symbol to represent the infinite repeating sequences in decimals, like 7.33333... or 7.343434...
Floating point numbers are not the set of real numbers, nor are they symbolic numbers. They are binary numbers with a fixed amount of bits. You can't expect to be able to do with them what you can do in "mathematics". You can only expect being able to approximate a finite amount of operations.
Well I could think of an implementation where the (last) repeating
sequences could be identified with the use of extra bits. For example an
additional byte could be reserved and used for this in the style:
10000000 = The last decimal digit is repeating indefinitely,
example: 1.1111111111111111....
11000000 = The two last decimal digits are repeating indefinitely,
example: 1.1212121212121212....
11100000 = The three last decimal digits are repeating indefinitely,
example: 2.233234123123123123....
11110000 = The four last decimal digits are repeating indefinitely,
example, 1.543424321432143214321.....
etc.
More accurately, an example of storing the value of the second example
would be:
Stored in double bits:
1.12
Repeating flags (in our case one byte reserved for this):
11000000 = means .12 is repeated indefinitely  
Ioannis Vranos wrote:
Juha Nieminen wrote:
Well I could think of an implementation where the (last) repeating
sequences could be identified with the use of extra bits. For example an
additional byte could be reserved and used for this in the style:
How about simply using integers, as I suggested in my other post?
Much easier and very efficient.  
On 10 Apr., 15:48, brian tyler <brian.ty...@gmail.com> wrote:
/pedant mode on
What you mean is the real set.
/pedant mode off
I was only testing if you all pay attention. Honest. No, really. Could
these eyes lie???
:) You got me here.
Michael  
On 10 Apr., 16:01, Ioannis Vranos <ivra...@nospam.no.spamfreemail.gr>
wrote:
Juha Nieminen wrote:
Well I could think of an implementation where the (last) repeating
sequences could be identified with the use of extra bits. For example an
additional byte could be reserved and used for this in the style:
By adding some extra bits, you certainly can extend the set of
representable real numbers. However, the periodically repeated portion
can also be arbitrarily long (up to the size of the denominator if you
convert to a normalized fraction). A number like 1/x where x is
roughly 10^9 may have a worst-case period length of 10^9... (of
course, not all numbers in this range are *so* nasty). You'd need to
store this sequence somewhere at least once.
You'd be better off just adding the extra bits to the mantissa.
best,
Michael  
On Thu, 10 Apr 2008 07:40:06 -0700, Michael.Boehnisch wrote:
On 10 Apr., 15:48, brian tyler <brian.ty...@gmail.com> wrote:
>/pedant mode on What you mean is the real set. /pedant mode off
I was only testing if you all pay attention. Honest. No, really. Could
these eyes lie???
:) You got me here.
Michael
Haven't used the pedant mode tags in a while ;)
Brian  
Question stats: viewed 2449, replies: 18, date asked: Apr 9 '08