
Floating Point Precision

Hi,

I have a question about floating point precision in C.

What is the minimum distinguishable difference between 2 floating point
numbers? Does this differ for various computers?

Is this the EPSILON? I know in float.h a FLT_EPSILON is defined to be
10^-5. Does this mean that the computer cannot distinguish between 2
numbers that differ by less than this epsilon?

A problem I am seeing is a difference in values from a floating point
computation for a run on a Windows machine compared to a run on a Linux
machine. The values differ by 10^-6.

Thanks for any help,

Michael

Nov 15 '05 #1
mi*************@gmail.com wrote:
Hi,

I have a question about floating point precision in C.

What is the minimum distinguishable difference between 2 floating point
numbers? Does this differ for various computers?

The only reason it would be the same is that most computers support IEEE
754, at least to this extent. This is already Off Topic for c.l.c.
Is this the EPSILON? I know in float.h a FLT_EPSILON is defined to be
10^-5. Does this mean that the computer cannot distinguish between 2
numbers that differ by less than this epsilon?

FLT_EPSILON is the positive difference between 1.0f and the next higher
representable number of the float data type. I would be disappointed in
any C textbook which did not explain this.
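
For example, a short check (just a sketch, assuming an IEEE 754 float and
a C99 <math.h> providing nextafterf) shows that FLT_EPSILON is exactly
that gap above 1.0f:

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* The next representable float above 1.0f. */
    float next = nextafterf(1.0f, 2.0f);

    printf("FLT_EPSILON = %g\n", (double)FLT_EPSILON);
    printf("next - 1.0f = %g\n", (double)(next - 1.0f)); /* same value */
    return 0;
}
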
A problem I am seeing is a difference in values from a floating point
computation for a run on a Windows machine compared to a run on a Linux
machine. The values differ by 10^-6.

You could expect such differences, in float data, even between two
versions of the same compiler, or between different optimization or code
generation options of the same compiler, even on the same OS. If you
want these differences to be smaller, use double data type. Check what
the FAQ says on this subject.
Nov 15 '05 #2
mi*************@gmail.com wrote:
Hi,

I have a question about floating point precision in C.

What is the minimum distinguishable difference between 2 floating point
numbers? Does this differ for various computers?

Is this the EPSILON? I know in float.h a FLT_EPSILON is defined to be
10^-5. Does this mean that the computer cannot distinguish between 2
numbers that differ by less than this epsilon?

A problem I am seeing is a difference in values from a floating point
computation for a run on a Windows machine compared to a run on a Linux
machine. The values differ by 10^-6.


I suggest you have a look at
<http://en.wikipedia.org/wiki/IEEE_754>
and especially the reference from there to the Goldberg paper,
<http://docs.sun.com/source/806-3568/ncg_goldberg.html>
to understand how floating point numbers work.
Some notes:

The "next bigger" or "next smaller" number from a given floating
point number p is not always at the same distance but depends on
p.

EPSILON is the smallest number eps such that 1+eps != 1, so
1+eps is the next number after 1. If the base of the floating
point types is b, then the next number after b is b*(1+eps)
and not b+eps.
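
A small check of that last point (only a sketch, assuming FLT_RADIX is 2,
IEEE 754 floats, and a C99 <math.h> with nextafterf; link with -lm if
needed):

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    float b    = (float)FLT_RADIX;            /* typically 2.0f */
    float next = nextafterf(b, 2.0f * b);     /* next float above b */
    float sum  = b + FLT_EPSILON;             /* rounds back to b */

    printf("next after b  = %.9g\n", (double)next);
    printf("b * (1 + eps) = %.9g\n", (double)(b * (1.0f + FLT_EPSILON)));
    printf("b + eps == b? = %d\n", sum == b); /* typically 1 */
    return 0;
}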

Right in the same vein, there are numbers which cannot be
represented by floating point numbers (e.g. the numbers between
1 and 1+eps), so errors are introduced and propagated throughout
your computations; there are rounding errors as well, so you
basically need a little bit of numerical analysis to know that
your results are still reasonably accurate.
Equality is not a relation you should rely on. Even working
with relative errors can give you a headache when working with
sets of potentially equal values.

C does not guarantee much about floating point numbers.
The few guarantees you do have are mainly the limits given in
<float.h> -- everything else depends on your implementation
(which often is comprised of platform, operating system,
compiler).

On a related note:
The "natural" floating point type in C is double. Use float only
if you have severe memory problems or can _prove_ that the
accuracy is sufficient for your purposes. (Hardly surprising, it
often is not.)
Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Nov 15 '05 #3
In article <11**********************@o13g2000cwo.googlegroups.com>,
<mi*************@gmail.com> wrote:
I have a question about floating point precision in C.

What is the minimum distinguishable difference between 2 floating point
numbers?
You could probably work it out in terms of FLT_RADIX and FLT_MANT_DIG
but you have the small problem that the value you are describing is
not representable as a normalized number -- you might have the
case where A > B and yet A - B is not representable in normalized form.

Does this differ for various computers?
Yes, definitely.
Is this the EPSILON? I know in float.h a FLT_EPSILON is defined to be
10^-5. Does this mean that the computer cannot distinguish between 2
numbers that differ by less than this epsilon?
No, FLT_EPSILON is such that 1.0 is distinguishable from 1.0 + FLT_EPSILON.

A problem I am seeing is a difference in values from a floating point
computation for a run on a Windows machine compared to a run on a Linux
machine. The values differ by 10^-6.


The absolute value doesn't tell us much -- if you are working with
values in the range 1E50 then 1E-6 is minuscule, but if you are
working with values in the range 1E-30 then 1E-6 is huge.
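
In other words, comparisons normally have to be made relative to the
magnitude of the values involved. A minimal sketch of such a test (the
helper name and the tolerance parameters are only illustrative, not any
standard API; fmax is C99):

#include <stdio.h>
#include <math.h>

/* Nonzero if a and b differ by at most rel_tol relative to their
   magnitude, or by at most abs_tol (to handle values near zero). */
static int nearly_equal(double a, double b, double rel_tol, double abs_tol)
{
    double diff  = fabs(a - b);
    double scale = fmax(fabs(a), fabs(b));
    return diff <= abs_tol || diff <= rel_tol * scale;
}

int main(void)
{
    /* 1E-6 is negligible next to 1E50 but enormous next to 1E-30. */
    printf("%d\n", nearly_equal(1e50, 1e50 + 1e-6, 1e-9, 1e-12));   /* 1 */
    printf("%d\n", nearly_equal(1e-30, 1e-30 + 1e-6, 1e-9, 1e-12)); /* 0 */
    return 0;
}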

There are a lot of different reasons why computations come out differently
on different computers -- too many to list them all in one message.
As an example: on Pentiums, the native double precision size is 80 bits
but the C double precision size is 64 bits. If some steps of the
calculations are carried out at 80 bits, you can end up with different
results. There are sometimes compiler options that control whether
native-size register-to-register calculations are allowed, or whether
the machine must round to the storable precision at each stage.
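
A hedged illustration of that effect (FLT_EVAL_METHOD is C99; whether the
residue below is zero or not depends on the compiler and its code
generation options):

#include <stdio.h>
#include <float.h>

int main(void)
{
#ifdef FLT_EVAL_METHOD
    /* 0 = intermediates kept in the operand's type, 2 = long double
       (typical of x87 code generation), -1 = indeterminate. */
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
#endif
    double third = 1.0 / 3.0;
    double prod  = third * 3.0;  /* may be held at extended precision */

    /* Prints 0 with strict 64-bit double arithmetic; can print a tiny
       nonzero residue if the product is carried at 80 bits. */
    printf("third * 3.0 - 1.0 = %g\n", prod - 1.0);
    return 0;
}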

But that's far from the only reason.
--
Entropy is the logarithm of probability -- Boltzmann
Nov 15 '05 #4
Have a look at ftp://ftp.quitt.net/Outgoing/goldbergFollowup.pdf
--
#include <standard.disclaimer>
_
Kevin D Quitt USA 91387-4454 96.37% of all statistics are made up
Nov 15 '05 #5
Michael Mair wrote:
EPSILON is the smallest number eps such that 1+eps != 1, so
1+eps is the next number after 1. If the base of the floating
point types is b, then the next number after b is b*(1+eps)
and not b+eps.


The epsilon value should be the difference between 1 and the next
representable number after 1.

But consider the value x, defined as three quarters of the epsilon value.
float x = 0.75 * FLT_EPSILON;

now, your condition (1 + x) != 1 is very likely to be true. The result
of the addition on the left hand side is not representable, but it
should round to the closest representable value, which is the next value
after 1, even though x is less than FLT_EPSILON.

Perhaps the following condition is better?
eps > 0 && (1 + eps) - 1 == eps

Here's what I get on my computer:

FLT_EPSILON is 0.000000119209289550781250000000
x is 0.000000089406967163085937500000
1 + x is 1.000000119209289550781250000000
(1 + x) - 1 is 0.000000119209289550781250000000

x is 3/4 of FLT_EPSILON. Adding x to 1 rounds up to 1 + FLT_EPSILON.
Taking 1 back off again leaves the true FLT_EPSILON, not the 3/4 of it
that we started with.
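
For anyone who wants to reproduce this, a self-contained sketch of the
same experiment (the output will match the figures above only on an IEEE
754 system using round-to-nearest):

#include <stdio.h>
#include <float.h>

int main(void)
{
    float x   = 0.75f * FLT_EPSILON; /* below FLT_EPSILON */
    float sum = 1.0f + x;            /* typically rounds up to 1 + FLT_EPSILON */

    printf("FLT_EPSILON = %.30f\n", (double)FLT_EPSILON);
    printf("x           = %.30f\n", (double)x);
    printf("1 + x       = %.30f\n", (double)sum);
    printf("(1 + x) - 1 = %.30f\n", (double)(sum - 1.0f));
    return 0;
}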

--
Simon.
Nov 15 '05 #6


mi*************@gmail.com wrote On 09/09/05 14:02,:
Hi,

I have a question about floating point precision in C.

What is the minimum distinguishable difference between 2 floating point
numbers? Does this differ for various computers?
The smallest discernible difference depends on the
magnitude of the numbers: the computer can surely distinguish
1.0 from 1.1, but might not be able to tell 100000000000000.0
from 100000000000000.1 even though the two pairs of values
differ (mathematically speaking) by the same amount. You've
got to be concerned with relative error, not with absolute
error.

And yes: The relative error (loosely speaking, the precision)
will differ from one machine to another.
Is this the EPSILON? I know in float.h a FLT_EPSILON is defined to be
10^-5. Does this mean that the computer cannot distinguish between 2
numbers that differ by less than this epsilon?
First, I think you've misunderstood what FLT_EPSILON is.
It is the difference between 1.0f and the "next" float value,
the smallest float larger than 1.0f that is distinguishable
from 1.0f. That is, FLT_EPSILON is one way of describing the
precision of float values on the system at hand. Note that
although 1.0f is distinguishable from 1.0f+FLT_EPSILON,
1000000f need not be distinguishable from 1000000f+FLT_EPSILON.
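
A quick way to see that magnitude dependence (a sketch assuming IEEE 754
single precision; the assignments force rounding to float even where
intermediates carry excess precision):

#include <stdio.h>
#include <float.h>

int main(void)
{
    float small = 1.0f;
    float big   = 1000000.0f;
    float s1 = small + FLT_EPSILON;  /* assignment rounds to float */
    float s2 = big   + FLT_EPSILON;

    printf("near 1.0: changed? %d\n", s1 != small); /* typically 1 */
    printf("near 1e6: changed? %d\n", s2 != big);   /* typically 0 */
    return 0;
}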

Second, FLT_EPSILON is not necessarily 1E-5: the C Standard
requires that it be no greater than 1E-5, but permits lower
values (greater precision) for machines that support them.
A problem I am seeing is a difference in values from a floating point
computation for a run on a Windows machine compared to a run on a Linux
machine. The values differ by 10^-6.


Without knowing what the values are, there's no way to tell
whether a difference of 1E-6 is huge or tiny. If the values
are supposed to be Planck's Constant (~6.6E-34), 1E-6 represents
an enormous error. If they're supposed to be Avogadro's Number
(~6.0E23) the difference is completely insignificant.

For the purposes of argument, let's say the values are in
the vicinity of 1. Then a difference of 1E-6 in float arithmetic
on a machine where FLT_EPSILON is 1E-5 is nothing to worry about;
you've already done better than you had any right to expect.

Beyond that, we get into the analysis of the origins and
propagation of errors, a field known as "Numerical Analysis."
The topic is simple at first but deceptively so, because it
fairly rapidly becomes the stuff of PhD theses. A widely-
available paper called (IIRC) "What Every Computer Scientist
Should Know about Floating-Point Arithmetic" would be worth
your while to read.

--
Er*********@sun.com

Nov 15 '05 #7
Simon Biber wrote:
Michael Mair wrote:
EPSILON is the smallest number eps such that 1+eps != 1, so
1+eps is the next number after 1. If the base of the floating
point types is b, then the next number after b is b*(1+eps)
and not b+eps.
The epsilon value should be the difference between 1 and the next
representable number after 1.


Yep, I was imprecise. Let M be the set of all floating point
numbers representable by the respective floating point type;
then EPSILON = min {eps \in M | eps > 0 and 1+eps != 1}.

But consider the value x, defined as three quarters of the epsilon value.
float x = 0.75 * FLT_EPSILON;

now, your condition (1 + x) != 1 is very likely to be true. The result
of the addition on the left hand side is not representable, but it
should round to the closest representable value, which is the next value
after 1, even though x is less than FLT_EPSILON.
This is a question of the rounding mode.
Perhaps the following condition is better?
eps > 0 && (1 + eps) - 1 == eps
Yes, indeed. My mistake was that I had the classical
eps = 1.0S;
while ((T) (1.0S + eps/FLT_RADIX) != 1.0S)
    eps /= FLT_RADIX;
in mind (where S is the appropriate type suffix or nothing for type T;
the cast can be necessary for avoiding excess precision
-> FLT_EVAL_METHOD. The usual caveats for gcc and FP arithmetic on x86
and similar apply, though.)
Here's what I get on my computer:

FLT_EPSILON is 0.000000119209289550781250000000
x is 0.000000089406967163085937500000
1 + x is 1.000000119209289550781250000000
(1 + x) is 0.000000119209289550781250000000

x is 3/4 of FLT_EPSILON. Adding x to 1 rounds up to 1 + FLT_EPSILON.
Taking 1 back off again leaves the true FLT_EPSILON, not the 3/4 of it
that we started with.


See above. Round to zero is possible (->FLT_ROUNDS).
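
For completeness, a concrete float version of the classical loop quoted
above (only a sketch; under round-to-nearest it converges to FLT_EPSILON):

#include <stdio.h>
#include <float.h>

int main(void)
{
    float eps = 1.0f;
    /* The (float) cast guards against excess intermediate precision. */
    while ((float)(1.0f + eps / FLT_RADIX) != 1.0f)
        eps /= FLT_RADIX;

    printf("computed eps = %g\n", (double)eps);
    printf("FLT_EPSILON  = %g\n", (double)FLT_EPSILON);
    return 0;
}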

Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Nov 15 '05 #8
Michael Mair wrote:
Simon Biber wrote:
Michael Mair wrote:
EPSILON is the smallest number eps such that 1+eps != 1, so
1+eps is the next number after 1. If the base of the floating
point types is b, then the next number after b is b*(1+eps)
and not b+eps.


The epsilon value should be the difference between 1 and the next
representable number after 1.


Yep, I was imprecise. Let M be the set of all floating point
numbers representable by the respective floating point type;
then EPSILON = min {eps \in M | eps > 0 and 1+eps != 1}.


That still suffers from the rounding mode issue. There are many possible
values of eps that are members of M, are greater than zero but less than
the true epsilon value, and when added to one may round up to a value
that is not equal to 1.

Unless you mean the + operator to be an abstract mathematical thing that
can return any real number, rather than the one that must operate within
the given floating-point type.

You need to make clear whether:
+: M X M -> M (+ is of a type that maps a pair of M to a single M)
or:
+: M X M -> R (+ is of a type that maps a pair of M to a real)

--
Simon.
Nov 15 '05 #9
mi*************@gmail.com wrote on 09/09/05:
What is the minimum distinguishable difference between 2 floating point
numbers? Does this differ for various computers?
float : FLT_EPSILON (<float.h>)
double : DBL_EPSILON (<float.h>)

[C99]
long double : LDBL_EPSILON (<float.h>)
Is this the EPSILON?
Yup.
I know in float.h a FLT_EPSILON is defined to be
10^-5.
On /this/ implementation.
Does this mean that the computer cannot distinguish between 2
numbers that differ by less than this epsilon?
On this computer, I dunno. On /this/ implementation of the C-language,
yes.
A problem I am seeing is a difference in values from a floating point
computation for a run on a Windows machine compared to a run on a Linux
machine. The values differ by 10^-6.


Could be. In general terms, floating point representation is (nearly)
always an approximation. Use 'double' for better precision. (C99
supports long double).
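
To see what a given implementation actually provides, one can simply
print the three macros (the standard only requires them to be no greater
than 1E-5, 1E-9 and 1E-9 respectively):

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("FLT_EPSILON  = %g\n",  FLT_EPSILON);  /* float */
    printf("DBL_EPSILON  = %g\n",  DBL_EPSILON);  /* double */
    printf("LDBL_EPSILON = %Lg\n", LDBL_EPSILON); /* long double */
    return 0;
}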

--
Emmanuel
The C-FAQ: http://www.eskimo.com/~scs/C-faq/faq.html
The C-library: http://www.dinkumware.com/refxc.html

"C is a sharp tool"
Nov 15 '05 #10
