
# Briefly explain why 10.12 doesn't display the value expected?
```c
printf("10.10 = %.15lf\n", 10.10);
printf("10.12 = %.15lf\n", 10.12);
printf("10.15 = %.15lf\n", 10.15);
```
* Briefly explain why 10.12 doesn't display the value expected?
Jan 30 '21 #1
dev7060
> Briefly explain why 10.12 doesn't display the value expected?

What is the expected value, and what value is being displayed?
Jan 30 '21 #2
CloseShot
```
10.10 = 10.100000000000000
10.12 = 10.119999999999999
10.15 = 10.150000000000000
```

This is what we get when we print those values. Why does 10.12 come out different from what was written?
Jan 31 '21 #3
dev7060
Fractions that terminate in decimal can be repeating (non-terminating) in binary, so they cannot be represented exactly in a finite number of bits. I suspect that is what is making the difference here.
Jan 31 '21 #4
SioSio
Inside the computer, a decimal number with a fractional part is converted to binary before being stored in memory. As a simplified 32-bit example:
(Integer part)
10 = 1×2^3 + 0×2^2 + 1×2^1 + 0×2^0 ⇒ 1010
(Fractional part)
0.12 × 2 = 0.24 ⇒ bit 0
0.24 × 2 = 0.48 ⇒ bit 0
0.48 × 2 = 0.96 ⇒ bit 0
0.96 × 2 = 1.92 ⇒ bit 1, carry on with 0.92
0.92 × 2 = 1.84 ⇒ bit 1, carry on with 0.84
0.84 × 2 = 1.68 ⇒ bit 1, carry on with 0.68
0.68 × 2 = 1.36 ⇒ bit 1, carry on with 0.36
0.36 × 2 = 0.72 ⇒ bit 0
0.72 × 2 = 1.44 ⇒ bit 1, carry on with 0.44
0.44 × 2 = 0.88 ⇒ bit 0
0.88 × 2 = 1.76 ⇒ bit 1, carry on with 0.76
0.76 × 2 = 1.52 ⇒ bit 1, carry on with 0.52
0.52 × 2 = 1.04 ⇒ bit 1, carry on with 0.04
0.04 × 2 = 0.08 ⇒ bit 0
0.08 × 2 = 0.16 ⇒ bit 0
0.16 × 2 = 0.32 ⇒ bit 0
0.32 × 2 = 0.64 ⇒ bit 0
0.64 × 2 = 1.28 ⇒ bit 1, carry on with 0.28
0.28 × 2 = 0.56 ⇒ bit 0
0.56 × 2 = 1.12 ⇒ bit 1, carry on with 0.12
0.12 × 2 = 0.24 ⇒ bit 0   (the pattern now repeats)
0.24 × 2 = 0.48 ⇒ bit 0
0.48 × 2 = 0.96 ⇒ bit 0
0.96 × 2 = 1.92 ⇒ bit 1

Since that exceeds the 32 bits available, no further bits are calculated; the rest are simply truncated, even though the pattern would repeat forever.

So it is stored in memory as:
0000 1010 . 0001 1110 1011 1000 0101 0001

Converted back to decimal, that is:
10.119999945163727
Feb 1 '21 #5
dev7060
SioSio:
> It is stored in memory as:
> 0000 1010 . 0001 1110 1011 1000 0101 0001
> Converted back to decimal, that is:
> 10.119999945163727
How does 10.119999945163727 make 10.119999999999999?

Moreover, applying the same logic to 10.15 gives the binary 00001010.00100110011001100110011, which is 10.149999976158142 in decimal, yet 10.15 is displayed exactly.
Feb 1 '21 #6
SioSio
Correction: in a 64-bit environment, real constants are interpreted as type double by default, so 10.12 is actually stored in IEEE-754 double precision:

Sign bit:
0
Exponent (biased):
100 0000 0010
Mantissa:
0100 0011 1101 0111 0000 1010 0011 1101 0111 0000 1010 0011 1101

With 52 mantissa bits instead of the 24 of my 32-bit example, the stored value is 10.11999999999999921..., which prints as 10.119999999999999 at 15 decimal places.
Feb 2 '21 #7
Banfa
> It is stored in memory as:
> 0000 1010 . 0001 1110 1011 1000 0101 0001
It most certainly is not stored in memory like that, although your latest post is more correct.

The main point is that a floating-point representation is an approximation to a value: there is no guarantee that any given decimal real is exactly representable in the binary floating-point form used by your computer. When you print the value to 15 decimal places, the computer simply gives you everything the stored approximation has.

This, and floating-point representation in general, introduces all sorts of problems. For example, adding a small floating-point number to a large one can introduce significant error, because the low-order digits of the smaller number are lost. The operators == and != are pretty much guaranteed to fail with floating point, and you should program as if they always fail (== always false, != always true). Even < and > can produce unexpected results at the end of a calculation, because rounding can put the result on the wrong side of the line compared with doing it by hand (although personally I am normally prepared to live with that).

If precision is important, don't use floating point; there are other options. If you decide floating point will do, then always decide on the precision you need and force your output to that precision.

If you really want to know the gritty details, read *What Every Computer Scientist Should Know About Floating-Point Arithmetic*.
Feb 2 '21 #8
