P: n/a

Hello everyone,
Just check this code:
#include<stdio.h>
int main()
{
double a,b;
scanf("%lf",&a);
scanf("%lf",&b);
printf("\nNumber a: %0.16lf",a);
printf("\nNumber b: %0.16lf",b);
printf("\nDifference:%0.16lf\n",(a-b));
return 0;
}
Output

[alagu@localhost decimal]$ ./decimal
12.3333
11.1111
Number a: 12.3332999999999995
Number b: 11.1111000000000004
Difference:1.2221999999999991

We wanted a clean number like 12.3333 and not
12.3332999. What is the reason for this,
and is there any way to avoid it? We want the decimal
digits to be exact. How to do it?
Thanks in advance,
Regards,
Allagappan M  
P: n/a

Actually, with every double there is a precision attached: long
double has more precision, while float has less.
To avoid the problem you are facing, you have to devise your own data
structure to represent the precision you require.
Regards,
Bhuwan Chopra  
P: n/a

> [alagu@localhost decimal]$ ./decimal 12.3333 11.1111
> Number a: 12.3332999999999995
> Number b: 11.1111000000000004
> Difference: 1.2221999999999991
> We wanted a clean number like 12.3333 and not 12.3332999. What is the reason for this and is there any way to avoid this? We want decimal points to be exact. How to do it?
Hi,
because of the binary system, decimal fractions that don't
end in '5' (or .0) cannot be stored exactly.
It's not a matter of precision. You can store 12.3333
as 123333 and keep in mind that your number has to
be divided by 10,000.
Michael  
P: n/a

In article <11*********************@f14g2000cwb.googlegroups.com> "Allagappan" <al********@gmail.com> writes:
.... double a,b; scanf("%lf",&a); scanf("%lf",&b); printf("\nNumber a: %0.16lf",a); printf("\nNumber b: %0.16lf",b); printf("\nDifference:%0.16lf\n",(a-b));
.... [alagu@localhost decimal]$ ./decimal 12.3333 11.1111
Number a: 12.3332999999999995 Number b: 11.1111000000000004 Difference: 1.2221999999999991 We wanted a clean number like 12.3333 and not 12.3332999. What is the reason for this and is there any way to avoid this? We want decimal points to be exact. How to do it?
See the FAQ. The bottom line is that 12.3333 can not be represented
exactly in a double. If you want to be exact you need scaled
integers.

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/  
P: n/a

Allagappan wrote: #include<stdio.h> int main() { double a,b; scanf("%lf",&a); scanf("%lf",&b); printf("\nNumber a: %0.16lf",a); printf("\nNumber b: %0.16lf",b); printf("\nDifference:%0.16lf\n",(a-b)); return 0; }
Output [alagu@localhost decimal]$ ./decimal 12.3333 11.1111
Number a: 12.3332999999999995 Number b: 11.1111000000000004 Difference: 1.2221999999999991 We wanted a clean number like 12.3333 and not 12.3332999. What is the reason for this and is there any way to avoid this? We want decimal points to be exact. How to do it?
Do you understand how binary floating-point works?
http://www2.hursley.ibm.com/decimal/decifaq1.html
http://www2.hursley.ibm.com/decimal/
P: n/a

On 20 May 2005 03:05:41 -0700, Allagappan
<al********@gmail.com> wrote: Hello everyone, Just check this code:
#include<stdio.h> int main() { double a,b; scanf("%lf",&a); scanf("%lf",&b); printf("\nNumber a: %0.16lf",a); printf("\nNumber b: %0.16lf",b); printf("\nDifference:%0.16lf\n",(a-b)); return 0; }
Output [alagu@localhost decimal]$ ./decimal 12.3333 11.1111
Number a: 12.3332999999999995 Number b: 11.1111000000000004 Difference: 1.2221999999999991 We wanted a clean number like 12.3333 and not 12.3332999. What is the reason for this and is there any way to avoid this? We want decimal points to be exact. How to do it?
Use COBOL or another language which supports exact fixed point decimal
numbers. Or see the FAQ, it's the first item in the section on floating
point numbers. Since numbers are held (in most machines) as binary,
precision is not infinite. Just as 1/3 is not representable exactly in
decimal (or binary or any other base not a multiple of 3), so 1/10 is
not representable in binary (or ternary or any other base not a multiple
of both 2 and 5).
It's easy to demonstrate with a much simpler program:
#include <stdio.h>
int main(void)
{
double x = 0.1;
printf("%0.16lf\n", x);
return 0;
}
You could always use a lower precision for your output, like %0.12lf,
which would make it look correct even though internally the number is
incorrect in the last bit or so.
Chris C  
P: n/a

Chris Croughton wrote: It's easy to demonstrate with a much simpler program:
#include <stdio.h>
int main(void) { double x = 0.1; printf("%0.16lf\n", x); return 0; }
Easy to demonstrate undefined behaviour :) (in C89 anyway).
%f (not %lf) is the correct printf specifier for 'double'.  
P: n/a

Allagappan wrote:
.... snip ... Number a: 12.3332999999999995 Number b: 11.1111000000000004 Difference: 1.2221999999999991 We wanted a clean number like 12.3333 and not 12.3332999. What is the reason for this and is there any way to avoid this? We want decimal points to be exact. How to do it?
Use integers.

"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers."  Keith Thompson  
P: n/a

Allagappan wrote: Hello everyone, Just check this code:
#include<stdio.h> int main() { double a,b; scanf("%lf",&a); scanf("%lf",&b); printf("\nNumber a: %0.16lf",a); printf("\nNumber b: %0.16lf",b); printf("\nDifference:%0.16lf\n",(a-b)); return 0; }
Output [alagu@localhost decimal]$ ./decimal 12.3333 11.1111
Number a: 12.3332999999999995 Number b: 11.1111000000000004 Difference: 1.2221999999999991 We wanted a clean number like 12.3333 and not 12.3332999. What is the reason for this and is there any way to avoid this? We want decimal points to be exact. How to do it?
Thanks in advance, Regards, Allagappan M
Try using a "%0.4f" specifier in the printf.

Thomas Matthews
C++ newsgroup welcome message: http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/cfaq/top.html
alt.comp.lang.learn.cc++ faq: http://www.comeaucomputing.com/learn/faq/
Other sites: http://www.josuttis.com -- C++ STL Library book
http://www.sgi.com/tech/stl -- Standard Template Library
P: n/a

Allagappan wrote: Hello everyone, Just check this code:
#include<stdio.h> int main() { double a,b; scanf("%lf",&a); scanf("%lf",&b); printf("\nNumber a: %0.16lf",a); printf("\nNumber b: %0.16lf",b); printf("\nDifference:%0.16lf\n",(a-b)); return 0; }
Output [alagu@localhost decimal]$ ./decimal 12.3333 11.1111
Number a: 12.3332999999999995 Number b: 11.1111000000000004 Difference: 1.2221999999999991 We wanted a clean number like 12.3333 and not 12.3332999. What is the reason for this and is there any way to avoid this? We want decimal points to be exact. How to do it?
%.17g format is sufficient to express the full precision of IEEE 64-bit
double. Anything more is nearly guaranteed to produce "unclean" output.
Frequently, you may have to go to %.16g to force rounding. In general,
those decimal digits limits may be calculated from information present in
<float.h>.

Tim Prince  
P: n/a

Hello everyone,
snippet:
#include<stdio.h>
int main()
{
double a=144.623353166;
double b=144.623352166;
printf("%0.17lg,%0.17lg\n",a,(a-b));
printf("%0.17lf,%0.17lf\n",a,(a-b));
return 0;
}
Output:

144.62335316599999,9.9999999747524271e-07
144.62335316599998691,0.00000099999999748
As Tim says, %.17g should have given me 0.000001 (note the variation in the
sixth digit of a and b), but it is not so.
Regards,
Allagappan  
P: n/a

Allagappan wrote: Hello everyone,
snippet:
#include<stdio.h> int main() { double a=144.623353166; double b=144.623352166; printf("%0.17lg,%0.17lg\n",a,(a-b)); printf("%0.17lf,%0.17lf\n",a,(a-b)); return 0; }
Output: 144.62335316599999,9.9999999747524271e-07 144.62335316599998691,0.00000099999999748
As Tim says, %.17g should have given me 0.000001 (note the variation in the sixth digit of a and b), but it is not so.
Regards, Allagappan
When you subtract two numbers which are equal in the first 8 digits, and have
fractional parts which don't have an exact decimal-to-binary conversion, you
should expect only 8 clean digits of result.

Tim Prince  
P: n/a

On Fri, 20 May 2005 12:24:51 +0100, in comp.lang.c , Chris Croughton
<ch***@keristor.net> wrote: On 20 May 2005 03:05:41 -0700, Allagappan <al********@gmail.com> wrote:
Number a: 12.3332999999999995
.... We wanted a clean number like 12.3333
....Use COBOL or another language which supports exact fixed point decimal
Is it really possible that people can pass their CS exams without ever
reading Goldberg? I thought it was compulsory...
Heck, I came up from the trenches, and I learned about FP in my first
week at work...

Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/Cfaq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>  
P: n/a

Mark McIntyre <ma**********@spamcop.net> writes: Is it really possible that people can pass their CS exams without ever reading Goldberg? I thought it was compulsory...
Heck, I came up from the trenches, and I learned about FP in my first week at work...
Systems folk like me don't often encounter floating point. In an
operating system, the floating-point unit is just another set of
registers you have to save and restore.

int main(void){char p[]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.\
\n",*q="kl BIcNBFr.NKEzjwCIxNJC";int i=sizeof p/2;char *strchr();int putchar(\
);while(*q){i+=strchr(p,*q++)-p;if(i>=(int)sizeof p)i=sizeof p-1;putchar(p[i]\
);}return 0;}  
P: n/a

In article <11********************@g43g2000cwa.googlegroups.com> "Allagappan" <al********@gmail.com> writes:
Lessee whether you understand it this way: double a=144.623353166; double b=144.623352166; printf("%0.17lg,%0.17lg\n",a,(ab));
a and b are, when converted to binary and stored as a double precision, in
binary, followed by the difference:
a = 10010000.100111111001010000010010101101011101001111111
b = 10010000.100111111001010000000001111011101101110001011
a - b = 0.000000000000000000010000110001101111011110100
The exact decimal values for these numbers are:
a = 144.623353165999986913448083214461803436279296875
b = 144.623352165999989438205375336110591888427734375
a - b = 0.000000999999997475242707878351211547851562500
The next higher representable decimal numbers are
a : 144.623353166000015335157513618469238281250000000
b : 144.623352166000017859914805740118026733398437500
You may note that both these values are further away from the exact decimal
values than the ones I gave above
144.62335316599999,9.9999999747524271e-07 144.62335316599998691,0.00000099999999748
Looks pretty much like the correctly rounded values.
As Tim says, %.17g should have given me 0.000001 (note the variation in the sixth digit of a and b), but it is not so.
Because you asked for 17 digits after the decimal point, and that is what
you were given. If you had asked for 14 digits or fewer you would have
gotten what you appear to wish.

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/  
P: n/a

Allagappan wrote: Hello everyone,
snippet:
#include<stdio.h> int main() { double a=144.623353166; double b=144.623352166; printf("%0.17lg,%0.17lg\n",a,(a-b)); printf("%0.17lf,%0.17lf\n",a,(a-b)); return 0; }
Output: 144.62335316599999,9.9999999747524271e-07 144.62335316599998691,0.00000099999999748
As Tim says, %.17g should have given me 0.000001 (note the variation in the sixth digit of a and b), but it is not so.
Regards, Allagappan
Hi Al,
Your expectations cannot be met. Most conversions between binary and
decimal are inexact due to the differences in the two number systems.
Please also note that 'real' numbers are stored in your computer in
binary, not decimal. When you do something like ..
float f = 1.2;
... constant 1.2 is converted to binary floating point and stored in f.
Something like this:
00111111 10011001 10011001 10011010
Exp = 127 (1)
00000001
Man = .10011001 10011001 10011010
1.20000005e+00
The above is my explanation of IEEE FP. The Mantissa is a fraction (less
than one) and the Exponent (after applying an offset) is the number of
bits to shift the mantissa left to obtain the Value.
Note that the fractional part of 1.2 expressed in binary is a repeating
series 00110011 etc. in 23 bits but the last series, 0011 runs out of
bits and so is 'rounded' toward infinity as 0100. As a result, the float
is slightly larger than 1.2 and printf with "%.8e" will convert it back
to decimal and show it to you.
But it's still a conversion and prone to error. Converting float to
double is lossless. The double converted to decimal ..
1.2000000476837158e+00
Now if I convert 1.2 to double right away, the fractional 0011 series
has 52 bits to play itself out, so the rounding error is far too small
to show at this precision. Converting the double with "%.16e" gives decimal ..
1.2000000000000000e+00
Go figure. Somebody should write a book.. :)

Joe Wright mailto:jo********@comcast.net
"Everything should be made as simple as possible, but not simpler."
 Albert Einstein   
P: n/a

On Fri, 20 May 2005 22:25:20 +0100, Mark McIntyre
<ma**********@spamcop.net> wrote: On Fri, 20 May 2005 12:24:51 +0100, in comp.lang.c , Chris Croughton <ch***@keristor.net> wrote:
On 20 May 2005 03:05:41 -0700, Allagappan <al********@gmail.com> wrote:
Number a: 12.3332999999999995 ... We wanted a clean number like 12.3333 ...Use COBOL or another language which supports exact fixed point decimal
Is it really possible that people can pass their CS exams without ever reading Goldberg? I thought it was compulsory...
I don't think I ever read Goldberg, but binary fractional arithmetic was
introduced in our university class along with why it wasn't possible to
represent decimal fractions exactly (as a maths student I already knew
that from using alternate-based numbers).
Heck, I came up from the trenches, and I learned about FP in my first week aat work...
I have a feeling that it may have been mentioned in Daniel D.
McCracken's FORTRAN IV book, which was the first programming language
book I read, but I can't find my copy...
Chris C  
P: n/a

In article <sl******************@ccserver.keris.net>, ch***@keristor.net
says... Is it really possible that people can pass their CS exams without ever reading Goldberg? I thought it was compulsory...
I don't think I ever read Goldberg, but binary fractional arithmetic was introduced in our university class along with why it wasn't possible to represent decimal fractions exactly (as a maths student I already knew that from using alternate-based numbers).
When I was a younger pup, you had to take a numerical analysis course,
focused entirely upon how to perform floating point calculations
as accurately as possible, error propagation, etc. along with interpolation,
integration, difeqs, solving nonlinear equations, least squares,
singular value decomp, random numbers and monte carlo methods. You
wanted a degree, you passed the class.
The standard text was Forsythe, Malcolm and Moler's "Computer Methods
for Mathematical Computations", my copy is (C) 1977. Of course,
George Forsythe was the founder and chair of Stanford's CS department
when he died five years earlier, but Michael Malcolm and Cleve Moler
gave him top billing anyway, quite a noble gesture.
It's interesting to note that the book referred to "desk machines"
and "automatic computers". :)
Some of the computer systems and "desk machines" discussed with respect
to base, precision and exponent range early in the text include the
Univac 1108, Honeywell 6000, PDP-11, Control Data 6600, Cray-1, Illiac
IV, SETUN, Burroughs B5500, Hewlett-Packard HP-45, TI SR-5x, IBM 360
and 370, Telefunken TR-440 and Maniac II. Extensive discussion of
the variations between the platforms, single and double precision
usage (and the tradeoffs on the various platforms) is discussed
early on, along with error minimization, then a lot of math and
algorithms to solve them accurately.
It covered a lot of ground that was required for all CS students
once upon a time. I don't remember the last time I met a new
CS grad that had a clue what "machine epsilon" even meant.

Randy Howard (2reply remove FOOBAR)
Life should not be a journey to the grave with the intention of arriving
safely in an attractive and well preserved body, but rather to skid in
sideways, chocolate in one hand, martini in the other, body thoroughly
used up, totally worn out and screaming "WOO HOO what a ride!!"  
P: n/a

Randy Howard wrote:
.... snip ... When I was a younger pup, you had to take a numerical analysis course, focused entirely upon how to perform floating point calculations as accurately as possible, error propagation, etc. along with interpolation, integration, difeqs, solving nonlinear equations, least squares, singular value decomp, random numbers and monte carlo methods. You wanted a degree, you passed the class.
The standard text was Forsythe, Malcolm and Moler's "Computer Methods for Mathematical Computations", my copy is (C) 1977.
.... snip ...
No such thing when I was in school. I discovered Knuth about the
time I was creating my third floating point system, and had had my
nose rubbed in some of the inherent (and some not inherent)
problems. One of my important references was Margenau and Murphy's
"Mathematics of Physics and Chemistry", which predates computers.
I remember it was especially lucid on least square fits, so I could
apply it to other things than polynomials. Unfortunately some
cretin stole my copy.

Some informative links:
news:news.announce.newusers
http://www.geocities.com/nnqweb/
http://www.catb.org/~esr/faqs/smart-questions.html
http://www.caliburn.nl/topposting.html
http://www.netmeister.org/news/learn2quote.html
P: n/a

On Sun, 22 May 2005 09:16:37 GMT, Randy Howard
<ra*********@FOOverizonBAR.net> wrote: When I was a younger pup, you had to take a numerical analysis course, focused entirely upon how to perform floating point calculations as accurately as possible, error propagation, etc. along with interpolation, integration, difeqs, solving nonlinear equations, least squares, singular value decomp, random numbers and monte carlo methods. You wanted a degree, you passed the class.
I took NA, but it didn't cover floating point at all (it was run as part
of the maths course, not the computing one).
The standard text was Forsythe, Malcolm and Moler's "Computer Methods for Mathematical Computations", my copy is (C) 1977. Of course, George Forsythe was the founder and chair of Stanford's CS department when he died five years earlier, but Michael Malcolm and Cleve Moler gave him top billing anyway, quite a noble gesture.
Ah, if it was published in America in 1977 it would be very unlikely to
have made it to the UK in time for my graduation in 1978 <g>. (Still
available from the original publishers (Prentice-Hall) from Amazon and
others, I just looked.)
It's interesting to note that the book referred to "desk machines" and "automatic computers". :)
Some of the computer systems and "desk machines" discussed with respect to base, precision and exponent range early in the text include the Univac 1108, Honeywell 6000, PDP-11, Control Data 6600, Cray-1, Illiac IV, SETUN, Burroughs B5500, Hewlett-Packard HP-45, TI SR-5x, IBM 360 and 370, Telefunken TR-440 and Maniac II. Extensive discussion of the variations between the platforms, single and double precision usage (and the tradeoffs on the various platforms) is discussed early on, along with error minimization, then a lot of math and algorithms to solve them accurately.
And of course well prior to the IEEE standardisation of floating point.
It covered a lot of ground that was required for all CS students once upon a time. I don't remember the last time I met a new CS grad that had a clue what "machine epsilon" even meant.
I don't think I've ever heard it called that, but it's pretty obvious
(to me at least) in context. But given the number of times it's asked
here (it really is a FAQ) I assume that it's no longer being taught...
Chris C

This discussion thread is closed; replies have been disabled.
Question stats: viewed 17490, replies 19, date asked Nov 14 '05.