
# Paranoic about real number imprecisions

Hallo,

It is a known issue that, because a computer has to store real numbers in a
limited number of bytes, a minor imprecision from the intended decimal value
can result. I don't know how the .NET Framework stores doubles, but it's
certain that if I store, say, 1.234567 in some real variable, it is held in
memory as something close to
1.2345670000000000000003454786544 or
1.234566999999999999999999999999999924354324.

Because of this, comparing real numbers in an "if" statement can sometimes
produce unexpected results.

Some programming languages have built-in protection that compares only a
limited number of digits (fewer than the declared capability of the type)
when asked to compare two real numbers, so the garbage in the less
significant decimal places goes unnoticed. I have been using an
"ApproximatelyEqual" method for comparing two real numbers with a given
precision:

input: A, B, precision
output: (abs(A-B) <= 10^(-precision))

Example:
A = 1.00000000000001
B = 1.00000000000007
precision = 13
abs(A-B) = 0.00000000000006
10^(-precision) = 0.0000000000001
abs(A-B) <= 10^(-precision) = true

precision = 14
10^(-precision) = 0.00000000000001
abs(A-B) <= 10^(-precision) = false
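The check described above can be sketched in code. Python is used here for illustration, since its float is the same IEEE 754 double as .NET's; the name `approximately_equal` simply mirrors the method described above:

```python
def approximately_equal(a: float, b: float, precision: int) -> bool:
    """True when a and b differ by no more than 10**-precision."""
    return abs(a - b) <= 10 ** (-precision)

# The worked example from above:
print(approximately_equal(1.00000000000001, 1.00000000000007, 13))  # True
print(approximately_equal(1.00000000000001, 1.00000000000007, 14))  # False
```

Note that this compares absolute error, so the right choice of precision depends on the magnitude of the values being compared.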

For a project I am working on, which performs many operations on real
numbers, I wanted to be sure where the traps are that I might fall into. Is
it safe to compare real numbers without a custom "ApproximatelyEqual"
method?

Thanks,

Pavils
Nov 16 '05 #1
Why don't you use double?

Nov 16 '05 #2
For really good precision, use the decimal type. It is slower than double and
uses more memory, but it is much better suited for precise calculations.

--
cody

Freeware Tools, Games and Humour
http://www.deutronium.de.vu || http://www.deutronium.tk
Nov 16 '05 #3
> Why don't you use double?

Using the double type will not save us from this problem.

Pavils
Nov 16 '05 #4
Once I had the same problem: I had to calculate some values and compare
them. With float I always had precision errors; with double, never.
Nov 16 '05 #5
You cannot count on double, as it has the same problem that a float
has.

You should use the Decimal type. It will store numbers with a large
number of decimal places accurately.

Hope this helps.
--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com
Nov 16 '05 #6
No, but the decimal type will, as Nicholas points out.

Any reason this won't work? I don't know what calculations you are using...
decimal isn't appropriate for every binary operation, but it is very good in
this situation because it doesn't store the number in a "true" binary
format... it stores the decimal digits themselves.

--- Nick Malik

Nov 16 '05 #7
Zürcher See <aq****@cannabismail.com> wrote:
> Once I had the same problem: I had to calculate some values and compare
> them. With float I always had precision errors; with double, never.

You've been lucky then.

Double has more precision than float, but it's still unable to
represent various numbers (e.g. 0.1) exactly.
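A quick demonstration of that point (in Python, whose float is the same IEEE 754 double used by .NET):

```python
# 0.1 has no finite binary representation, so the stored double is
# only the nearest representable value - and the errors accumulate:
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

total = sum(0.1 for _ in range(10))
print(total == 1.0)       # False
```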

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
Nov 16 '05 #8
Nick Malik <ni*******@hotmail.nospam.com> wrote:
> no, but the decimal type will, as Nicholas points out.

Well, that depends.

> Any reason this won't work? I don't know what calculations you are
> using... decimal isn't appropriate for every binary operation, but it
> is very good in this situation because it doesn't store the number in
> a "true" binary format... it stores the decimal digits themselves.

Indeed. It is able to exactly represent every decimal number. Of
course, that doesn't help if you're dealing with numbers which can't be
exactly represented in decimal - such as 1/3.

If you're dealing with numbers which can always be exactly represented
in decimal even after all the operations you're interested in,
decimal's great - but it's just another floating point type with all
the associated problems really.
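The contrast can be illustrated with Python's decimal module, which behaves like .NET's System.Decimal in this respect: a base-10 floating-point type (a sketch for illustration only; the two types differ in range and default precision):

```python
from decimal import Decimal

# Decimal represents 0.1 exactly, so the classic binary surprise goes away:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# But 1/3 cannot be represented exactly in base 10 either, so the
# division is rounded to the context precision and the error remains:
third = Decimal(1) / Decimal(3)
print(third * 3 == Decimal(1))  # False
```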

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
Nov 16 '05 #9

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
> Nick Malik <ni*******@hotmail.nospam.com> wrote:
> > no, but the decimal type will, as Nicholas points out.
>
> Well, that depends.

True. My "snappy reply" was overly broad and I could have qualified it
better. Unfortunately, the OP hasn't provided enough detail about his or
her calculations to make it clear if this particular numeric type would be
any more appropriate, or why they need to compare against specific real
numbered values.

We are left to guess.
> > Any reason this won't work? I don't know what calculations you are
> > using... decimal isn't appropriate for every binary operation, but it
> > is very good in this situation because it doesn't store the number in
> > a "true" binary format... it stores the decimal digits themselves.
>
> Indeed. It is able to exactly represent every decimal number. Of
> course, that doesn't help if you're dealing with numbers which can't be
> exactly represented in decimal - such as 1/3.
>
> If you're dealing with numbers which can always be exactly represented
> in decimal even after all the operations you're interested in,
> decimal's great - but it's just another floating point type with all
> the associated problems really.

No argument here. That's why I posed the follow-up question. There are
specific mitigations for working around issues where the rational number can
be managed as a numerator and denominator until the last possible moment,
which can, depending on the calculation, maintain a bit more of the
precision. That isn't common but it does sometimes work. Other mitigations
rest with use of factors, matrix operations, formulaic representations, etc.
Once again, their applicability depends on information that the OP has not
provided.
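The numerator/denominator idea can be sketched with exact rational arithmetic. Python's fractions module is used here purely for illustration; .NET has no built-in rational type, so one would write a small struct to the same effect:

```python
from fractions import Fraction

# Keep values as exact numerator/denominator pairs through every step...
third = Fraction(1, 3)
result = (third + third + third) * Fraction(7, 2)

# ...and convert to a float only at the last possible moment:
print(result)         # 7/2
print(float(result))  # 3.5
```

Unlike double or decimal, no rounding occurs at any intermediate step, though the integers involved can grow large.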

When, or if, the OP decides to weigh in with more information, I'll be happy
to engage in a (hopefully) fruitful discussion of the uses of binary math to
solve the problem. Until then, we are, as they say in my home town of
Knoxville, "spittin' in the wind."

--- Nick
Nov 16 '05 #10
Nick Malik <ni*******@hotmail.nospam.com> wrote:
> > > no, but the decimal type will, as Nicholas points out.
> >
> > Well, that depends.
>
> True. My "snappy reply" was overly broad and I could have qualified it
> better. Unfortunately, the OP hasn't provided enough detail about his or
> her calculations to make it clear if this particular numeric type would be
> any more appropriate, or why they need to compare against specific real
> numbered values.
>
> We are left to guess.

Indeed :(

I guess my reason for replying was that there is a misconception going
around that decimals are somehow "precise" whereas float/double aren't. Of
course, pinning down exactly what people mean by "precise" is a tricky
business - decimal/float/double are all precise, in that they exactly
represent numbers. They just don't exactly represent *all* numbers,
including the true results of many operations on the precise
original numbers...
> > > Any reason this won't work? I don't know what calculations you are
> > > using... decimal isn't appropriate for every binary operation, but it
> > > is very good in this situation because it doesn't store the number in
> > > a "true" binary format... it stores the decimal digits themselves.
> >
> > Indeed. It is able to exactly represent every decimal number. Of
> > course, that doesn't help if you're dealing with numbers which can't be
> > exactly represented in decimal - such as 1/3.
> >
> > If you're dealing with numbers which can always be exactly represented
> > in decimal even after all the operations you're interested in,
> > decimal's great - but it's just another floating point type with all
> > the associated problems really.
>
> No argument here. That's why I posed the follow-up question. There are
> specific mitigations for working around issues where the rational number can
> be managed as a numerator and denominator until the last possible moment,
> which can, depending on the calculation, maintain a bit more of the
> precision. That isn't common but it does sometimes work.

Yup. I seem to remember that it used to be a big performance win too,
in the right circumstances - these days FP units are probably fast
enough to counter a lot of the performance benefits of sticking with
integers.
> Other mitigations
> rest with use of factors, matrix operations, formulaic representations, etc.
> Once again, their applicability depends on information that the OP has not
> provided.
>
> When, or if, the OP decides to weigh in with more information, I'll be happy
> to engage in a (hopefully) fruitful discussion of the uses of binary math to
> solve the problem. Until then, we are, as they say in my home town of
> Knoxville, "spittin' in the wind."

:)

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet