Bytes | Software Development & Data Engineering Community
float representation and precision

Hi,

yesterday I discovered something that won't come as a surprise to more experienced C++ programmers - but I was genuinely amazed. I had this line of code:

float x = 0.9f;

and when I had a look at it in the VS.NET 2003 debugger, it said x had a value of 0.89999998! When I printf'ed x to a logfile, it was 0.899999976158142090000000000000. Now, why on earth is that? I understand that the results of floating-point operations will always be somewhat imprecise due to machine limitations, but 0.9? Can anybody explain this to me?

Jul 23 '05 #1

Sebastian wrote:
Hi,

yesterday I discovered something that won't come as a
surprise to more experienced C++ programmers - but I
was genuinely amazed. I had this line of code:

float x = 0.9f;

and when I had a look at it in the VS.NET 2003 debugger,
it said x had a value of 0.89999998 ! When I printf'ed x
to a logfile, it was 0.899999976158142090000000000000. Now,
why on earth is that? I understand that the results of
floating-point operations will always be somewhat imprecise
due to machine limitations, but 0.9? Can anybody explain
this to me?


IEEE754, which is the floating point representation used on most
platforms that I am aware of, is a base two encoding for floating point
values. Just as base 10 has irrational fractions (such as 1/3)
so does base two. The adventure here is that the fractions that are
irrational in base two are different from those that are irrational in
base 10. Apparently 9/10 is one of those irrational fractions in base
2.

Regards,

Jon Trauntvein

Jul 23 '05 #2
Sebastian wrote:
Hi,

yesterday I discovered something that won't come as a surprise to more
experienced C++ programmers - but I was genuinely amazed. I had this
line of code:

float x = 0.9f;

and when I had a look at it in the VS.NET 2003 debugger, it said x had a
value of 0.89999998 ! When I printf'ed x to a logfile, it was
0.899999976158142090000000000000. Now, why on earth is that? I understand
that the results of floating-point operations will always be somewhat
imprecise due to machine limitations, but 0.9? Can anybody explain this to
me?


Floating point numbers approximate a continuous range of numbers. Since that
range has an infinite count of different numbers, it isn't possible to
store every number exactly in a computer.
Remember that computers are not calculating in base 10 as humans usually
are, but in base 2. So a number that might look simple to us (like 0.9) can
in base 2 have an infinite number of digits, and thus be impossible to
store exactly in a base 2 floating point number.

Jul 23 '05 #3
Wow. Thanks for the speedy answer. I like this - the idea that there would be irrational numbers in the binary system never crossed my mind, but it sounds absolutely logical. :-)
On 7 May 2005 05:21:19 -0700, JH Trauntvein <j.**********@comcast.net> wrote:

Sebastian wrote:
Hi,

yesterday I discovered something that won't come as a
surprise to more experienced C++ programmers - but I
was genuinely amazed. I had this line of code:

float x = 0.9f;

and when I had a look at it in the VS.NET 2003 debugger,
it said x had a value of 0.89999998 ! When I printf'ed x
to a logfile, it was 0.899999976158142090000000000000. Now,
why on earth is that? I understand that the results of
floating-point operations will always be somewhat imprecise
due to machine limitations, but 0.9? Can anybody explain
this to me?


IEEE754, which is the floating point representation used on most
platforms that I am aware of, is a base two encoding for floating point
values. Just as base 10 has irrational fractions (somewhat like 1/3)
so does base two. The adventure here is that the fractions that are
irrational in base two are different from those that are irrational in
base 10. Apparently 9/10 is one of those irrational fractions in base
2.

Regards,

Jon Trauntvein


--

Regards,

Sebastian
Jul 23 '05 #4
In article <11*********************@z14g2000cwz.googlegroups.com>,
"JH Trauntvein" <j.**********@comcast.net> writes:
Sebastian wrote:

yesterday I discovered something that won't come as a
surprise to more experienced C++ programmers - but I
was genuinely amazed. I had this line of code:

float x = 0.9f;

and when I had a look at it in the VS.NET 2003 debugger,
it said x had a value of 0.89999998 !
IEEE754, which is the floating point representation used on most
platforms that I am aware of, is a base two encoding for floating point
values. Just as base 10 has irrational fractions (somewhat like 1/3)
so does base two. The adventure here is that the fractions that are
irrational in base two are different from those that are irrational in
base 10. Apparently 9/10 is one of those irrational fractions in base
2.


An "irrational fraction" is a contradiction in terms, "irrational"
meaning "cannot be expressed as a ratio", i.e. "cannot be expressed
as a fraction". You mean "repeating", as in "repeating decimal".
Now, what makes a decimal number repeat is that it has factors
other than "2" and "5" in the denominator. A non-repeating binary
ratio can only have powers of "2" in the denominator. So, evidently,
every repeating decimal number also repeats in binary, but there
are repeating binary numbers that won't repeat in decimal (e.g. 2/5).

In any case, that's not the reason for the loss of precision in
converting to floating point and back. What's "stored in binary"
is not the number itself, but the natural logarithm of the number.
Since _e_, the base of the natural logarithm is irrational, any
rational number is going to have a non-rational representation in
floating point. The only numbers that come out even in floating
point are irrational themselves. If you ever don't get a rounding
error when converting to floating point and back, it's essentially
coincidence (although it will be reproducible, of course).

--
Frederick
Jul 23 '05 #5
> In any case, that's not the reason for the loss of precision in
converting to floating point and back. What's "stored in binary"
is not the number itself, but the natural logarithm of the number.
Since _e_, the base of the natural logarithm is irrational, any
rational number is going to have a non-rational representation in
floating point. The only numbers that come out even in floating
point are irrational themselves. If you ever don't get a rounding
error when converting to floating point and back, it's essentially
coincidence (although it will be reproducible, of course).


I don't know where you get the natural log information from.

http://www.nuvisionmiami.com/books/a...oating_tut.htm

Regards,
cadull
Jul 23 '05 #6
In article <d5**********@nnrp.waia.asn.au>,
"cadull" <cl********@spam.cadull.com> writes:
In any case, that's not the reason for the loss of precision in
converting to floating point and back. What's "stored in binary"
is not the number itself, but the natural logarithm of the number.
Since _e_, the base of the natural logarithm is irrational, any
rational number is going to have a non-rational representation in
floating point. The only numbers that come out even in floating
point are irrational themselves. If you ever don't get a rounding
error when converting to floating point and back, it's essentially
coincidence (although it will be reproducible, of course).


I don't know where you get the natural log information from.


Oops. It's log-2. Fractions involving powers of two will thus
convert back and forth without loss.

Frederick
Jul 23 '05 #7
Frederick Bruckman wrote:
In any case, that's not the reason for the loss of precision in
converting to floating point and back. What's "stored in binary"
is not the number itself, but the natural logarithm of the number.


Where did you get that idea from? Floating point numbers are usually stored
in base 2, not base e. I have seen a format where they were stored in base
16, but never e.

Jul 23 '05 #8
Frederick Bruckman wrote:

Oops. It's log-2.


No, it's not. It's just like 1.3 x 10^3, but in base 2 instead of base
10. The exponent is the logarithm of the value, but the fraction is just
the representation of the scaled value in the appropriate base. So 1/2
is represented in binary floating point as .1 base 2, 1 is represented
as .1 x 2^1, and 2 is represented as .1 x 2^2.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #9
Rolf Magnus wrote:

Floating point numbers approximate a continuous range of numbers. Since that
range has an infinite count of different numbers, it isn't possible to
store every number exactly in a computer.


It doesn't even have to be infinite, just more than the floating point
type can hold.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #10
Pete Becker wrote:
Frederick Bruckman wrote:

Oops. It's log-2.

No, it's not. It's just like 1.3 x 10^3, but in base 2 instead of base
10. The exponent is the logarithm of the value, but the fraction is just
the representation of the scaled value in the appropriate base. So 1/2
is represented in binary floating point as .1 base 2,


No. It's represented as 1 * 2^-1. The mantissa is always >= 1 and < 2, similar
to what you would do in base 10. You don't write 0.13 x 10^4, but 1.3 *
10^3.
1 is represented as .1 x 2^1, and 2 is represented as .1 x 2^2.


1 is represented as 1 * 2^0, 2 as 1 * 2^1.

Btw: How would you calculate the exponent from a given number?
Jul 23 '05 #11
Rolf Magnus wrote:
Pete Becker wrote:

Frederick Bruckman wrote:

Oops. It's log-2.


No, it's not. It's just like 1.3 x 10^3, but in base 2 instead of base
10. The exponent is the logarithm of the value, but the fraction is just
the representation of the scaled value in the appropriate base. So 1/2
is represented in binary floating point as .1 base 2,

No. It's represented as 1 * 2^-1. The mantissa is always >= 1 and < 2,


Not under IEEE 754 and its successors. Normalized non-zero fractions are
always less than 1 and greater than or equal to 1/base. This gets
confusing, though, because some architectures (in particular, Intel)
suppress the leading bit, since it's always 1. But when you stick it
back in, the fraction (as the name implies <g>) is always less than 1.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 23 '05 #12
In article <d5*************@news.t-online.com>,
Rolf Magnus <ra******@t-online.de> writes:
Frederick Bruckman wrote:
In any case, that's not the reason for the loss of precision in
converting to floating point and back. What's "stored in binary"
is not the number itself, but the natural logarithm of the number.


Where did you get that idea from? Floating point numbers are usually stored
in base 2, not base e. I have seen a format where they were stored in base
16, but never e.


I don't know where I got that idea from. It might be a nice way
to represent numbers on a scientific calculator -- many of the
"interesting" irrational numbers would be repeating decimals in
base _e_, but I don't know that anyone actually does that. Never
mind.

--
Frederick
Jul 23 '05 #13
Sebastian wrote:
...
yesterday I discovered something that won't come as a surprise to more experienced C++ programmers - but I was genuinely amazed. I had this line of code:

float x = 0.9f;

and when I had a look at it in the VS.NET 2003 debugger, it said x had a value of 0.89999998! When I printf'ed x to a logfile, it was 0.899999976158142090000000000000. Now, why on earth is that? I understand that the results of floating-point operations will always be somewhat imprecise due to machine limitations, but 0.9? Can anybody explain this to me?
...


Decimal 0.9 is

0.111001100110011001100... = 0.1(1100)

in binary positional notation. The last part - 1100 - is repeated
indefinitely. This means that a binary machine that uses some form of
positional notation for floating-point numbers and does not provide any
means for representing periodic fractions won't be able to represent 0.9
precisely. The same applies to 0.1, 0.2, 0.8. That's what you observe in
your case.

--
Best regards,
Andrey Tarasevich

Jul 23 '05 #14
