Bytes | Software Development & Data Engineering Community

Accuracy of doubles frustrating

You'd think that the advantage to extra decimal places in a data type is
better accuracy. Well think again. Look at this method and test
results:

===========================================================
// not just for rounding decimals, but for getting nearest
// whole-number values, e.g. Nearest(443, 5) = 445
public static double Nearest(double source, double nearest)
{
    // method 1 - uses modulus operator (remainder from division)
    double rem = source % nearest;
    double dest = source - rem;

    // method 2 - get remainder without modulus
    double times = source / nearest;
    int iTimes = (int)Math.Round(times, 0);
    double dest2 = iTimes * nearest;

    // see if ToString and back helps?
    string sDest2 = dest2.ToString();
    double dest2fs = double.Parse(sDest2);

    return dest2;
}
===========================================================

test with 134.111 and .1 (works perfectly both ways):

source 134.111 double
nearest 0.1 double
rem 0.010999999999982552 double
dest 134.1 double
times 1341.11 double
iTimes 1341 int
dest2 134.1 double
sDest2 "134.1" string
dest2fs 134.1 double

test with 55.35 and .1 (round errors both ways):

source 55.35 double
nearest 0.1 double
rem 0.049999999999998351 double
dest 55.300000000000004 double
times 553.5 double
iTimes 554 int
dest2 55.400000000000006 double
sDest2 "55.4" string
dest2fs 55.4 double

test with 35.5 and .1 (works 2nd way, flaw 1st way):

source 35.5 double
nearest 0.1 double
rem 0.099999999999998035 double
dest 35.4 double
times 355.0 double
iTimes 355 int
dest2 35.5 double
sDest2 "35.5" string
dest2fs 35.5 double

I could understand losing decimal accuracy with double values nearing the
double.MaxValue as that leaves less memory for the decimal places, but
for numbers less than 100 and only 2 decimal places, why is there a
problem???

What I find very strange is that dest2 in test 2 has 15 decimal places,
but the ToString() method chops off everything after the first zero. Or
perhaps it rounds to 15 significant digits before the ToString; either
way would give the same result in this case.

Or is there just something wrong with my processor (a mobile Pentium 4,
3.02 GHz, from Compaq)? Can anyone verify my results on your machine? If
you get the same results, can anyone explain?

Michael Lang
Jul 21 '05 #1
Michael Lang <ml@nospam.com> wrote in
news:Xn********************************@207.46.248.16:
<snip>


By the way, one logical change (no change in previous results):

=========================================
// method 1
double rem = source % nearest;
double dest;
if (rem >= nearest / 2)
{
    dest = source - rem + nearest;
}
else
{
    dest = source - rem;
}
=========================================

Another devastating test result:

Call Nearest(134.1, .2). It should return 134.2, but I get:

source 134.1 double
nearest 0.2 double
rem 0.099999999999986877 double
dest 134.0 double
times 670.49999999999989 double
iTimes 670 int
dest2 134.0 double
sDest2 "134" string
dest2fs 134.0 double

134.1 / .2 should equal 670.5 which would round up, but since it is just
under 670.5, it rounds down!
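The near-miss is easy to see in isolation. A minimal sketch (console app, names my own) that just prints the quotient and what Math.Round makes of it:

```csharp
using System;

class QuotientCheck
{
    static void Main()
    {
        // Neither 134.1 nor 0.2 is exactly representable as a double,
        // so the quotient lands just below the exact answer 670.5...
        double times = 134.1 / 0.2;
        Console.WriteLine(times.ToString("R"));

        // ...and Math.Round then rounds it down to 670, not up to 671.
        Console.WriteLine(Math.Round(times, 0));
    }
}
```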

Michael Lang
Jul 21 '05 #2
Hi Michael,

We had a query in the languages.vb group about why 5.1 * 100 of all things wasn't accurate. I'll use the reply that I
gave then.

<quote>
5.1 * 100 is so obviously 510 to us because we think in decimal.

Convert 5.1 to a binary floating point number, however, and the digits
after the 'binary point' go on somewhat longer than the few bytes available.
Thus the very fact of <storing> 5.1 is going to introduce an error - let alone
multiplying it.

You'd think that it wouldn't need many bits to store 5.1 given that 51
only needs 6!

51 = 32 + 16 + 2 + 1 = 110011

But 5.1 is more complicated than this as it is built up from fractional
powers of two:

5.1 = 4 + 1 + 1/16 + 1/32 + 1/256 + 1/512 + 1/4096 + 1/8192 + ...

= 101.0001100110011 ...

We've already got to 13 digits after the 'binary point' and it's still
not accurate, being only 5.0999755859375.

Some numbers just don't like being converted to sums of powers of two.
</quote>

Now if you consider that you have two numbers in your function, either or both may be suffering from that inherent loss
of accuracy - right from the word go. The result can be a compounded error.

Another factor that you may not be aware of is that rounding in .NET is
done to the nearest even number when a value is exactly halfway. Once upon
a time 1.5, 2.5, 3.5, 4.5, 5.5, 6.5 would round to 2, 3, 4, 5, 6, 7. Not
anymore - they round like so: 2, 2, 4, 4, 6, 6!! The idea is that, since
ties round up half the time and down half the time, the errors cancel and
the accuracy is greater overall. That may be so in heavy-duty calculations,
but in day-to-day programming it is unexpected behaviour - i.e. a bug
waiting to be discovered.
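A quick sketch of that tie-breaking rule (the MidpointRounding overload on the last line was added in .NET 2.0, so it may not exist on older frameworks):

```csharp
using System;

class BankersRounding
{
    static void Main()
    {
        // Math.Round sends exact midpoints to the nearest EVEN number.
        Console.WriteLine(Math.Round(1.5));  // 2
        Console.WriteLine(Math.Round(2.5));  // 2, not 3!
        Console.WriteLine(Math.Round(3.5));  // 4

        // The schoolbook behaviour has to be asked for explicitly:
        Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero));  // 3
    }
}
```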

Regards,
Fergus

Jul 21 '05 #3
Ferg:

I missed that post, but I just walked through it. Very enlightening!
"Fergus Cooney" <fi****@post.com> wrote in message
news:u$**************@tk2msftngp13.phx.gbl...
<snip>

Jul 21 '05 #4
"Fergus Cooney" <fi****@post.com> wrote in
news:u$**************@tk2msftngp13.phx.gbl:
<snip>

So, there is no built-in cure for this problem? I still don't understand
why accuracy isn't important in mathematical computing? It sorta defeats
the whole purpose of computing without the accuracy. What do all the
highly scientific applications do?

Even calc.exe that comes with Windows will get the correct answer. So
how does it do it?

If there is a problem storing numbers as is, then why aren't they stored
differently? Why not store 5.1 as "51" and "1". The "1" being the
number of decimal places to offset to get the real number.

If you were back in elementary school, you would be taught to multiply as
follows:

  10.01
x     5
-------
   5005   then shift the decimal by the total number of decimal digits
          in both source numbers, in this case 2, which equals 50.05.

Also, a more detailed example...

   55.2
x   4.5
-------
  552 x 5  =  2760   with 2-digit shift (27.60)
+ 552 x 40 = 22080   with 2-digit shift (220.80)
  (a zero is added to the end of the 4 for each "place" after it)
----------------
             24840   with 2-digit shift (248.40)

or
55.2 x 4.5 = 552 x 45 = 24840 with 2-digit shift = 248.4

This way computers and humans would both "think" of numbers in the same
way, and we would always get the same EXACT result. Or am I missing
something?

Couldn't Microsoft create a new numeric datatype that used the
mathematical concepts above and had just as high a max value as
double while using 64 bits? (or as high as a float using 32 bits?)
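As it happens, C#'s built-in decimal type works much like the scheme described above - it stores an integer plus a power-of-ten scale factor (it is 128 bits rather than 64, trading size and speed for exactness). A small sketch:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // double: 5.1 can't be stored exactly, so the product drifts.
        double d = 5.1 * 100;      // 509.99999999999994

        // decimal: 5.1m is stored as integer 51 with scale 1, so the
        // multiplication is exact.
        decimal m = 5.1m * 100m;   // exactly 510

        Console.WriteLine(d == 510.0);  // False
        Console.WriteLine(m == 510m);   // True
    }
}
```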

Michael Lang
Jul 21 '05 #5
Michael Lang <ml@nospam.com> wrote:
I could understand losing decimal accuracy with double values nearing the
double.MaxValue as that leaves less memory for the decimal places, but
for numbers less than 100 and only 2 decimal places, why is there a
problem???


It still can't represent those numbers exactly - just as, no matter how
many decimal places you use, you can't represent a third exactly in
decimal.

See http://www.pobox.com/~skeet/csharp/floatingpoint.html and
http://www.pobox.com/~skeet/csharp/decimal.html
--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #6
Michael Lang <ml@nospam.com> wrote:
So, there is no built-in cure for this problem? I still don't understand
why accuracy isn't important in mathematical computing? It sorta defeats
the whole purpose of computing without the accuracy. What do all the
highly scientific applications do?
If they care about getting exactly correct decimal values, they use a
decimal type, probably their own.
Even calc.exe that comes with Windows will get the correct answer. So
how does it do it?
It doesn't use binary floating point, presumably.
If there is a problem storing numbers as is, then why aren't they stored
differently?
Because storing numbers as binary floating point is efficient, both in
space and speed.
Why not store 5.1 as "51" and "1". The "1" being the
number of decimal places to offset to get the real number.


That's basically what the decimal type does. However, it's both slower
and bigger than double. For most scientific applications, the decimal
representation doesn't matter - as soon as you do things like dividing
by three you'll get the same problem anyway. They care about how close
the actual answer is to the theoretical answer, and double is
reasonably good for that, as well as being fast.
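A short sketch of both halves of that point - decimal fixes the decimal-fraction cases, but dividing by three defeats it just the same:

```csharp
using System;

class DecimalLimits
{
    static void Main()
    {
        // decimal represents decimal fractions exactly...
        Console.WriteLine(0.1m + 0.2m == 0.3m);  // True (False for double!)

        // ...but a third still can't be represented, so the error just
        // moves: dividing by 3 and multiplying back misses 1 slightly.
        decimal third = 1m / 3m;             // 0.3333333333333333333333333333
        Console.WriteLine(third * 3m == 1m); // False
    }
}
```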

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #7

"Michael Lang" <ml@nospam.com> wrote in message
news:Xn********************************@207.46.248.16...
You'd think that the advantage to extra decimal places in a data type is
better accuracy. Well think again. Look at this method and test
results:
<snip> I could understand losing decimal accuracy with double values nearing the
double.MaxValue as that leaves less memory for the decimal places, but
for numbers less than 100 and only 2 decimal places, why is there a
problem???

What I find very strange is that dest2 in test2 has 15 decimal places,
but the ToString() method chops off everything after the first zero. Or
it could just be rounding to the nearest 14 before the toString? either
way would be the same result in this case.

Or is there just something wrong with my processor (mobile pentium 4,
3.02ghtz from Compaq)? Can you (anyone) verify my results on your
machine? If same results, can anyone explain?

Michael Lang


Definitely not your CPU :-)
To put this in perspective, the phenomenon occurs in C, C++, C#, Java and
other languages, not just in .NET. As others have posted, it's the result of
how real numbers are stored in memory.

--
Peter - [MVP - Academic]
Jul 21 '05 #8
"Michael Lang" <ml@nospam.com> wrote in message
news:Xn********************************@207.46.248.16...
So, there is no built-in cure for this problem? I still don't understand
why accuracy isn't important in mathematical computing? It sorta defeats
the whole purpose of computing without the accuracy.
If you are going to work with floating point values on a computer there is a
value called epsilon that you should become acquainted with.

See, for example.
http://www.ma.utexas.edu/documentati...ck/node73.html
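In practice that means comparing floating-point values with a tolerance rather than ==. A minimal sketch (the helper name and the 1e-9 tolerance are my own arbitrary choices - the tolerance should be tuned to the scale of your data):

```csharp
using System;

class EpsilonCompare
{
    // Equal-within-a-tolerance; pick eps for the magnitude of your data.
    static bool NearlyEqual(double a, double b, double eps)
    {
        return Math.Abs(a - b) < eps;
    }

    static void Main()
    {
        double sum = 0.56 + 0.39;
        Console.WriteLine(sum == 0.95);                   // False
        Console.WriteLine(NearlyEqual(sum, 0.95, 1e-9));  // True
    }
}
```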

If long chains of floating point calculations need to be performed, one
approach is to accumulate the roundoff differences and add them back in
occasionally to make the results more accurate.
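That accumulate-and-re-add idea is essentially Kahan (compensated) summation. A sketch, with names of my own choosing:

```csharp
using System;

class KahanSum
{
    // Compensated (Kahan) summation: carry the low-order bits that each
    // addition loses and fold them back into the next term.
    public static double Sum(double[] values)
    {
        double sum = 0.0;
        double c = 0.0;                  // running compensation (lost bits)
        foreach (double v in values)
        {
            double y = v - c;            // re-inject the previous loss
            double t = sum + y;          // big + small: low bits of y lost
            c = (t - sum) - y;           // recover exactly what was lost
            sum = t;
        }
        return sum;
    }

    static void Main()
    {
        var data = new double[10000];
        for (int i = 0; i < data.Length; i++) data[i] = 0.1;

        double naive = 0.0;
        foreach (double v in data) naive += v;

        Console.WriteLine(naive);      // drifts noticeably away from 1000
        Console.WriteLine(Sum(data));  // much closer to 1000
    }
}
```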
What do all the highly scientific applications do?
It is customary to perform calculations at higher precision than is needed,
because then roundoff doesn't factor in. If very high precision is required
then extended precision techniques need to be employed.

My CompSci MS Thesis involved extended precision division algorithms.
http://webpals.sdln.net/cgi-bin/pals...%202/di%200002
If there is a problem storing numbers as is, then why aren't they stored
differently? Why not store 5.1 as "51" and "1". The "1" being the
number of decimal places to offset to get the real number.


The decimal data type uses fixed point, rather than floating point, and is
meant to address some of the issues.

-- Alan
Jul 21 '05 #9
So why does 0.56 + 0.39 = 0.95 when computed using Visual C++ doubles,
but 0.950000000000000007 when using C#?

-Roger

"Alan Pretre" <no@spam> wrote in
news:Oh**************@TK2MSFTNGP09.phx.gbl:
"Michael Lang" <ml@nospam.com> wrote in message
news:Xn********************************@207.46.248.16...
So, there is no built-in cure for this problem? I still don't
understand why accuracy isn't important in mathematical computing? It
sorta defeats the whole purpose of computing without the accuracy.
If you are going to work with floating point values on a computer
there is a value called epsilon that you should become acquainted
with.

See, for example.
http://www.ma.utexas.edu/documentati...ck/node73.html

If long chains of floating point calculations need to be performed,
one approach is to accumulate the roundoff differences and add them
back in occasionally to make the results more accurate.
What do all the highly scientific applications do?


It is customary to perform calculations at higher precision than is
needed, because then roundoff doesn't factor in. If very high
precision is required then extended precision techniques need to be
employed.

My CompSci MS Thesis involved extended precision division algorithms.
http://webpals.sdln.net/cgi-bin/pals...CAT____/au%20%20%20Pretre,%20Alan./SAGE%200/MAXDI%202/di%200002
If there is a problem storing numbers as is, then why aren't they
stored differently? Why not store 5.1 as "51" and "1". The "1"
being the number of decimal places to offset to get the real number.


The decimal data type uses fixed point, rather than floating point,
and is meant to address some of the issues.

-- Alan


Jul 21 '05 #10
R Warford <rw@nospam.com> wrote:
So why does 0.56 + 0.39 = 0.95 when computed using Visual C++ doubles,
but 0.950000000000000007 when using C#?


I think you'll find it doesn't actually get to 0.95 using Visual C++
either - it's just being displayed like that.

(It's not exactly 0.950000000000000007 in C# either, it's actually
0.95000000000000006661338147750939242541790008544921875.)
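The default ToString is what hides those digits; the round-trip format shows more of them (a small sketch - printing the full exact decimal expansion above takes a custom converter, since no standard format string emits all ~50 digits):

```csharp
using System;

class ShowDigits
{
    static void Main()
    {
        double sum = 0.56 + 0.39;

        Console.WriteLine(sum);                  // short, "friendly" form
        Console.WriteLine(sum.ToString("G17"));  // enough digits to round-trip
        Console.WriteLine(sum == 0.95);          // False despite the display
    }
}
```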

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #11
Of course, I should have looked deeper. Thanks!
I find the VC++ display more comforting, though! :-)

Jon Skeet [C# MVP] <sk***@pobox.com> wrote in
news:MP************************@msnews.microsoft.com:
R Warford <rw@nospam.com> wrote:
So why does 0.56 + 0.39 = 0.95 when computed using Visual C++ doubles,
but 0.950000000000000007 when using C#?


I think you'll find it doesn't actually get to 0.95 using Visual C++
either - it's just being displayed like that.

(It's not exactly 0.950000000000000007 in C# either, it's actually
0.95000000000000006661338147750939242541790008544921875.)


Jul 21 '05 #12
