The machine epsilon

Hi.

Continuing with my tutorial, here is an entry I have added recently. I
hope it is not controversial. If you see any errors/ambiguities/etc please
just answer in this thread.

Thanks in advance for your help.
----------------------------------------------------------------------
The machine epsilon

The machine epsilon is the smallest number that changes the result of an
addition operation at the point where the representation of the numbers
is the densest. In IEEE754 representation this number has an exponent
value of the bias, and a fraction of 1. If you add a number smaller than
this to 1.0, the result will be 1.0. For the different representations
we have in the standard header <float.h>:

#define FLT_EPSILON 1.19209290e-07F // float
#define DBL_EPSILON 2.2204460492503131e-16 // double
#define LDBL_EPSILON 1.084202172485504434007452e-19L //long double
// qfloat epsilon truncated so that it fits in this page...
#define QFLT_EPSILON 1.09003771904865842969737513593110651 ... E-106

This defines are part of the C99 ANSI standard. For the standard types
(float, double and long double) this defines should always exist in
other compilers.

Here is a program that will find out the machine epsilon for a given
floating point representation.

#include <stdio.h>
int main(void)
{
    double float_radix = 2.0;
    double inverse_radix = 1.0 / float_radix;
    double machine_precision = 1.0;
    double temp = 1.0 + machine_precision;

    while (temp != 1.0) {                /* stop once adding it no longer changes 1.0 */
        machine_precision *= inverse_radix;
        temp = 1.0 + machine_precision;
        printf("%.17g\n", machine_precision);
    }
    return 0;
}
Jun 29 '07 #1
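A quick cross-check of the program above against the library constant (a minimal sketch, assuming IEEE 754 binary64 doubles, the default round-to-nearest mode, and that sums are not kept in extended precision):

#include <stdio.h>
#include <float.h>

int main(void)
{
    volatile double temp;              /* volatile: force each sum back to double width */
    double machine_precision = 1.0;

    do {
        machine_precision /= 2.0;
        temp = 1.0 + machine_precision;
    } while (temp != 1.0);

    /* the loop stops one halving too late: the previous value is the epsilon */
    printf("last value tried : %.17g\n", machine_precision);
    printf("previous value   : %.17g\n", machine_precision * 2.0);
    printf("DBL_EPSILON      : %.17g\n", DBL_EPSILON);
    return 0;
}

On such a system the last two lines print the same number.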
jacob navia said:

<snip>
For the different
representations we have in the standard header <float.h>:

#define FLT_EPSILON 1.19209290e-07F // float
#define DBL_EPSILON 2.2204460492503131e-16 // double
#define LDBL_EPSILON 1.084202172485504434007452e-19L //long double
// qfloat epsilon truncated so that it fits in this page...
#define QFLT_EPSILON 1.09003771904865842969737513593110651 ... E-106
Conforming implementations must not define QFLT_EPSILON in <float.h>
This defines are part of the C99 ANSI standard.
Well, three of them are.

Your text suffers from your usual confusion between lcc-win32 and C.

<snip>

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Jun 29 '07 #2
Richard Heathfield wrote:
jacob navia said:

<snip>
>For the different
representations we have in the standard header <float.h>:

#define FLT_EPSILON 1.19209290e-07F // float
#define DBL_EPSILON 2.2204460492503131e-16 // double
#define LDBL_EPSILON 1.084202172485504434007452e-19L //long double
// qfloat epsilon truncated so that it fits in this page...
#define QFLT_EPSILON 1.09003771904865842969737513593110651 ... E-106

Conforming implementations must not define QFLT_EPSILON in <float.h>
The C standard paragraph J.5.6: Common extensions:
J.5.6 Other arithmetic types
Additional arithmetic types, such as _ _int128 or double double, and
their appropriate conversions are defined (6.2.5, 6.3.1). Additional
floating types may have more range or precision than long double, may be
used for evaluating expressions of other floating types, and may be
used to define float_t or double_t.
>
>This defines are part of the C99 ANSI standard.

Well, three of them are.
You cut the next sentence!

"For the standard types (float, double and long double) this defines
should always exist in other compilers. "

This is a good example of BIAS when quoting.
Jun 29 '07 #3
In article <46***********************@news.orange.fr>,
jacob navia <ja***@jacob.remcomp.fr> wrote:
>>This defines are part of the C99 ANSI standard.
Presumably you meant "these", not "this". And it would be less jargonish
to say "definitions" rather than "defines".
>Well, three of them are.

You cut the next sentence!

"For the standard types (float, double and long double) this defines
should always exist in other compilers. "

This is a good example of BIAS when quoting.
I don't think so. There's no need to say something that's wrong here,
even if you clarify it in the next sentence.

-- Richard

--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 29 '07 #4
Richard Tobin wrote:
In article <46***********************@news.orange.fr>,
jacob navia <ja***@jacob.remcomp.fr> wrote:
>>>This defines are part of the C99 ANSI standard.

Presumably you meant "these", not "this". And it would be less jargonish
to say "definitions" rather than "defines".
>>Well, three of them are.
You cut the next sentence!

"For the standard types (float, double and long double) this defines
should always exist in other compilers. "

This is a good example of BIAS when quoting.

I don't think so. There's no need to say something that's wrong here,
even if you clarify it in the next sentence.

-- Richard
OK. Now it is:

These definitions (except the qfloat part) are part of the C99 ANSI
standard. For the standard types (float, double and long double) they
should always exist in other compilers. The type qfloat is an extension
of lcc-win32.

Jun 29 '07 #5
jacob navia said:
Richard Heathfield wrote:
>jacob navia said:

<snip>
>>For the different
representations we have in the standard header <float.h>:

#define FLT_EPSILON 1.19209290e-07F // float
#define DBL_EPSILON 2.2204460492503131e-16 // double
#define LDBL_EPSILON 1.084202172485504434007452e-19L //long double
// qfloat epsilon truncated so that it fits in this page...
#define QFLT_EPSILON 1.09003771904865842969737513593110651 ... E-106

Conforming implementations must not define QFLT_EPSILON in <float.h>

The C standard paragraph J.5.6: Common extensions:
J.5.6 Other arithmetic types
Additional arithmetic types, such as _ _int128 or double double, and
their appropriate conversions are defined (6.2.5, 6.3.1). Additional
floating types may have more range or precision than long double, may
be
used for evaluating expressions of other floating types, and may be
used to define float_t or double_t.
What has that to do with what I said? I didn't say that implementations
can't provide extra types. I said a conforming implementation must not
define QFLT_EPSILON in <float.h> - that is not the same as saying that
implementations cannot provide extra types.
>>This defines are part of the C99 ANSI standard.

Well, three of them are.

You cut the next sentence!
Yes. It was not relevant to my point, which is that QFLT_EPSILON is not
part of the C Standard.

"For the standard types (float, double and long double) this defines
should always exist in other compilers. "

This is a good example of BIAS when quoting.
No, it isn't. Your statement incorrectly claimed that QFLT_EPSILON was
part of the C99 Standard. The fact that it was followed by another
statement which didn't reiterate the claim does not make your original
claim correct.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Jun 29 '07 #6
jacob navia said:

<snip>
OK. Now it is:

These definitions (except the qfloat part) are part of the C99 ANSI
standard. For the standard types (float, double and long double) they
should always exist in other compilers. The type qfloat is an
extension of lcc-win32.
Why mention lcc-win32 extensions in a C tutorial?

And if it's an lcc-win32 tutorial, why ask for comment in a C newsgroup?
Is there not an lcc-win32 newsgroup where you can discuss lcc-win32
extensions?

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Jun 29 '07 #7
In article <Te******************************@bt.com>,
Richard Heathfield <rj*@see.sig.invalid> wrote:
>Why mention lcc-win32 extensions in a C tutorial?

And if it's an lcc-win32 tutorial, why ask for comment in a C newsgroup?
Is there not an lcc-win32 newsgroup where you can discuss lcc-win32
extensions?
You're not being reasonable. He's writing a C tutorial intended for
lcc-win32 users. Since most of it is standard C, I see no reason why
he shouldn't post about it here.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 29 '07 #8
Richard Tobin said:

<snip>
You're not being reasonable.
Shurely shome mishtake?
He's writing a C tutorial
The internal evidence of his articles suggests otherwise.

<snip>

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Jun 29 '07 #9
In article <V6******************************@comcast.com>,
Eric Sosman <es*****@acm-dot-org.invalid> wrote:

>In IEEE754 representation this number has an exponent
value of the bias, and a fraction of 1.
If by "the bias" you mean the traditional offset in the
encoding of the exponent, then I think this statement is wrong.
An FP epsilon has to do with the precision of the fraction,
not with the span of possible exponents.
The epsilon is such that if it were de-normalised to have the same
exponent as 1.0, only the lowest bit of the fraction would be set. To
normalise it, you need to shift that lowest bit up into the hidden
bit, and adjust the exponent accordingly. For 1.0, the exponent is
equal to the bias. So the epsilon has an exponent equal to the bias
minus the number of fraction bits, and the fraction part is zero
(because of the hidden bit).

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 29 '07 #10
Richard Tobin wrote:
In article <V6******************************@comcast.com>,
Eric Sosman <es*****@acm-dot-org.invalid> wrote:

>>In IEEE754 representation this number has an exponent
value of the bias, and a fraction of 1.
> If by "the bias" you mean the traditional offset in the
encoding of the exponent, then I think this statement is wrong.
An FP epsilon has to do with the precision of the fraction,
not with the span of possible exponents.

The epsilon is such that if it were de-normalised to have the same
exponent as 1.0, only the lowest bit of the fraction would be set. To
normalise it, you need to shift that lowest bit up into the hidden
bit, and adjust the exponent accordingly. For 1.0, the exponent is
equal to the bias. So the epsilon has an exponent equal to the bias
minus the number of fraction bits, and the fraction part is zero
(because of the hidden bit).
Right: Not equal to the bias, as I said. (And in any case
it's wrong to speak of the exponent *value* being equal to the
bias or to the bias minus delta: the exponent is 1-delta and
the bias crops up in the *encoding* of that value.)

--
Eric Sosman
es*****@acm-dot-org.invalid

Jun 29 '07 #11
In article <p8******************************@comcast.com>,
Eric Sosman <es*****@acm-dot-org.invalid> wrote:
>(And in any case
it's wrong to speak of the exponent *value* being equal to the
bias or to the bias minus delta: the exponent is 1-delta and
the bias crops up in the *encoding* of that value.)
Surely you mean 0-delta?

It does seem to be common to call the encoded value the exponent; I
suppose the context usually makes it clear. But I agree that it's
inaccurate - I should have said the "exponent bits" or the "encoded
exponent" or something like that.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 29 '07 #12

"jacob navia" <ja***@jacob.remcomp.frha scritto nel messaggio news:46***********************@news.orange.fr...
Richard Heathfield wrote:
>jacob navia said:

<snip>
>>For the different
representations we have in the standard header <float.h>:

#define FLT_EPSILON 1.19209290e-07F // float
#define DBL_EPSILON 2.2204460492503131e-16 // double
#define LDBL_EPSILON 1.084202172485504434007452e-19L //long double
// qfloat epsilon truncated so that it fits in this page...
#define QFLT_EPSILON 1.09003771904865842969737513593110651 ... E-106

Conforming implementations must not define QFLT_EPSILON in <float.h>

The C standard paragraph J.5.6: Common extensions:
J.5.6 Other arithmetic types
Additional arithmetic types, such as _ _int128 or double double, and
their appropriate conversions are defined (6.2.5, 6.3.1). Additional
floating types may have more range or precision than long double, may be
used for evaluating expressions of other floating types, and may be
used to define float_t or double_t.
Where does it allow you to use an identifier which doesn't begin
with an underscore and the standard never reserves?
What happens if I write a program with the lines
#include <float.h>
int main(int QFLT_EPSILON, char *argv[])
(yes, I'm allowed to do that) and try to compile it with your
compiler?
Jun 29 '07 #13
Richard Tobin wrote:
In article <V6******************************@comcast.com>,
Eric Sosman <es*****@acm-dot-org.invalid> wrote:

>>In IEEE754 representation this number has an exponent
value of the bias, and a fraction of 1.
> If by "the bias" you mean the traditional offset in the
encoding of the exponent, then I think this statement is wrong.
An FP epsilon has to do with the precision of the fraction,
not with the span of possible exponents.

The epsilon is such that if it were de-normalised to have the same
exponent as 1.0, only the lowest bit of the fraction would be set. To
normalise it, you need to shift that lowest bit up into the hidden
bit, and adjust the exponent accordingly. For 1.0, the exponent is
equal to the bias. So the epsilon has an exponent equal to the bias
minus the number of fraction bits, and the fraction part is zero
(because of the hidden bit).

-- Richard
That is what I am trying to say. The epsilon is
sign: 0
exponent: zero, i.e. the bias.
mantissa: 0000000000 0000000000 0000000000 0000000000 0000000000 01
<--10-->
Using printf:
printf("%a",DBL_EPSILON); 0x1.0000000000000p-052
Jun 29 '07 #14
jacob navia wrote:
mantissa: 0000000000 0000000000 0000000000 0000000000 0000000000 01
This is wrong, excuse me. The mantissa is normalized, and the bits are:
100000000 etc

The value is 2^-52 --> 2.2204460492503130808472633361816e-16

Jun 29 '07 #15
In article <46***********************@news.orange.fr>,
jacob navia <ja***@jacob.remcomp.fr> wrote:
>That is what I am trying to say. The epsilon is
sign: 0
exponent: zero, i.e. the bias.
mantissa: 0000000000 0000000000 0000000000 0000000000 0000000000 01
If you put those bits into a double, you don't get the epsilon (try
it). You are writing it as a de-normalised representation, but if you
put those bits in a double they would be interpreted as a normalized
value, equal to 1+epsilon.

The IEEE representation of DBL_EPSILON is

sign bit: 0
exponent bits: 01111001011 (representing -52)
mantissa bits: 000.... (representing 1.0, because of the hidden bit)

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 29 '07 #16
Eric Sosman wrote On 06/29/07 07:31,:
jacob navia wrote:
>[...]
If you add a number smaller than
this to 1.0, the result will be 1.0. For the different representations
we have in the standard header <float.h>:


Finally, we get to an understandable definition: x is the
FP epsilon if 1+x is the smallest representable number greater
than 1 (when evaluated in the appropriate type). [...]
Now that I think of it, the two descriptions are
not the same. Mine is correct, as far as I know, but
Jacob's is subtly wrong. (Hint: Rounding modes.)

--
Er*********@sun.com
Jun 29 '07 #17
Richard Tobin wrote On 06/29/07 08:53,:
In article <p8******************************@comcast.com>,
Eric Sosman <es*****@acm-dot-org.invalid> wrote:
>>(And in any case
it's wrong to speak of the exponent *value* being equal to the
bias or to the bias minus delta: the exponent is 1-delta and
the bias crops up in the *encoding* of that value.)


Surely you mean 0-delta?
I don't think so. The fraction of a normalized, non-
zero, finite IEEE number has a value 0.5 <= f < 1, so
unity is represented as two to the first times one-half:
2^1 * .100...000(2). The unbiased exponent value in the
representation of unity is therefore one, not zero.

Of course, omitting any definition of "delta" leaves
quite a bit of wiggle room. ;-)

--
Er*********@sun.com
Jun 29 '07 #18
Eric Sosman wrote:
Richard Tobin wrote On 06/29/07 08:53,:
>In article <p8******************************@comcast.com>,
Eric Sosman <es*****@acm-dot-org.invalid> wrote:
>>(And in any case
it's wrong to speak of the exponent *value* being equal to the
bias or to the bias minus delta: the exponent is 1-delta and
the bias crops up in the *encoding* of that value.)

Surely you mean 0-delta?

I don't think so. The fraction of a normalized, non-
zero, finite IEEE number has a value 0.5 <= f < 1, so
unity is represented as two to the first times one-half:
2^1 * .100...000(2). The unbiased exponent value in the
representation of unity is therefore one, not zero.
I think you forget the implicit bit Eric.

Jun 29 '07 #19
Eric Sosman wrote:
Eric Sosman wrote On 06/29/07 07:31,:
>jacob navia wrote:
>>[...]
If you add a number smaller than
this to 1.0, the result will be 1.0. For the different representations
we have in the standard header <float.h>:

Finally, we get to an understandable definition: x is the
FP epsilon if 1+x is the smallest representable number greater
than 1 (when evaluated in the appropriate type). [...]

Now that I think of it, the two descriptions are
not the same. Mine is correct, as far as I know, but
Jacob's is subtly wrong. (Hint: Rounding modes.)
The standard says:
5.2.4.2.2:
DBL_EPSILON
the difference between 1 and the least value greater than 1 that is
representable in the given floating point type, b^(1-p)

Since we have 53 bits in the mantissa we have
2^(1-53) --> 2.2204460492503131E-16
as shown by my program!
BY THE WAY I added an exercise:

Exercise
--------

Explain why the last number printed is NOT DBL_EPSILON
but the number before.

Jun 29 '07 #20
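The b^(1-p) formula quoted above can be checked directly against <float.h> (a sketch, assuming FLT_RADIX is 2 so that ldexp's power of two really is b):

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* 5.2.4.2.2: DBL_EPSILON is b^(1-p), with b = FLT_RADIX and p = DBL_MANT_DIG */
    double eps = ldexp(1.0, 1 - DBL_MANT_DIG);   /* 2^(1-53) -- only valid for radix 2 */

    printf("FLT_RADIX    = %d\n", FLT_RADIX);
    printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG);
    printf("2^(1-p)      = %.17g\n", eps);
    printf("DBL_EPSILON  = %.17g\n", DBL_EPSILON);
    return 0;
}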
In article <1183127942.886274@news1nwk>,
Eric Sosman <Er*********@Sun.COM> wrote:
I don't think so. The fraction of a normalized, non-
zero, finite IEEE number has a value 0.5 <= f < 1, so
unity is represented as two to the first times one-half:
2^1 * .100...000(2). The unbiased exponent value in the
representation of unity is therefore one, not zero.
I was interpreting the hidden bit as representing 1 so that with the
fraction bits the mantissa is 1.f, i.e. in the range [1,2), and
interpreting an exponent field with bits 0111... as representing 0.
This seems to be the way it's usually described in the web pages I
just searched. But it is equivalent to say that the mantissa is 0.1f
and the exponent is one greater.

-- Richard

--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 29 '07 #21
Richard Tobin wrote:
In article <46***********************@news.orange.fr>,
jacob navia <ja***@jacob.remcomp.fr> wrote:
>That is what I am trying to say. The epsilon is
sign: 0
exponent: zero, i.e. the bias.
mantissa: 0000000000 0000000000 0000000000 0000000000 0000000000 01

If you put those bits into a double, you don't get the epsilon (try
it). You are writing it as a de-normalised representation, but if you
put those bits in a double they would be interpreted as a normalized
value, equal to 1+epsilon.

The IEEE representation of DBL_EPSILON is

sign bit: 0
exponent bits: 01111001011 (representing -52)
mantissa bits: 000.... (representing 1.0, because of the hidden bit)

-- Richard
#include <stdio.h>
#include <float.h>
int main(void)
{
    double d = DBL_EPSILON;
    int *p = (int *)&d;     /* inspects the raw bytes; assumes 32-bit int and
                               little-endian storage, as on x86 */

    printf("%a\n", DBL_EPSILON);
    printf("0x%x 0x%x\n", p[0], p[1]);
    return 0;
}

The output is
0x1.0000000000000p-052
0x0 0x3cb00000
3cb --> 971.
971 - 1023 --> -52
The rest is zero, since there is a hidden bit.

The effective representation of DBL_EPSILON
is then:
sign:0
exponent: 971 (-52 since 971-1023 is -52)
Bias: 1023
mantissa: Zero (the leading 1 is the hidden bit, so the stored fraction bits are all zero)
Jun 29 '07 #22
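The same inspection can be done without the int aliasing (a sketch, assuming an IEEE 754 binary64 double, i.e. sizeof(double) == 8, and a C99 <stdint.h>); it extracts the fields directly:

#include <stdio.h>
#include <string.h>
#include <float.h>
#include <stdint.h>

int main(void)
{
    double d = DBL_EPSILON;
    uint64_t bits;

    memcpy(&bits, &d, sizeof bits);                      /* well-defined way to read the bytes */

    unsigned sign   = (unsigned)(bits >> 63);
    unsigned biased = (unsigned)((bits >> 52) & 0x7FF);  /* 11-bit exponent field */
    uint64_t frac   = bits & 0xFFFFFFFFFFFFFULL;         /* 52 stored fraction bits */

    printf("sign     : %u\n", sign);
    printf("exponent : %u biased (0x%x), %d unbiased\n", biased, biased, (int)biased - 1023);
    printf("fraction : %llu (the leading 1 is hidden)\n", (unsigned long long)frac);
    return 0;
}

It prints exponent 971 (0x3cb), i.e. -52, and a zero fraction, matching the output above.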
Army1987 wrote:
"jacob navia" <ja***@jacob.remcomp.frha scritto nel messaggio news:46***********************@news.orange.fr...
>Richard Heathfield wrote:
>>jacob navia said:

<snip>

For the different
representations we have in the standard header <float.h>:

#define FLT_EPSILON 1.19209290e-07F // float
#define DBL_EPSILON 2.2204460492503131e-16 // double
#define LDBL_EPSILON 1.084202172485504434007452e-19L //long double
// qfloat epsilon truncated so that it fits in this page...
#define QFLT_EPSILON 1.09003771904865842969737513593110651 ... E-106
Conforming implementations must not define QFLT_EPSILON in <float.h>
The C standard paragraph J.5.6: Common extensions:
J.5.6 Other arithmetic types
Additional arithmetic types, such as _ _int128 or double double, and
their appropriate conversions are defined (6.2.5, 6.3.1). Additional
floating types may have more range or precision than long double, may be
used for evaluating expressions of other floating types, and may be
used to define float_t or double_t.
Where does it allow you to use an identifier which doesn't begin
with an underscore and the standard never reserves?
What happens if I write a program with the lines
#include <float.h>
int main(int QFLT_EPSILON, char *argv[])
(yes, I'm allowed to do that) and try to compile it with your
compiler?

You will have to add the -ansic flag to your compilation flags
Jun 29 '07 #23
jacob navia wrote On 06/29/07 10:56,:
Eric Sosman wrote:
>>Eric Sosman wrote On 06/29/07 07:31,:
>>>jacob navia wrote:

[...]
If you add a number smaller than
this to 1.0, the result will be 1.0. For the different representations
we have in the standard header <float.h>:

Finally, we get to an understandable definition: x is the
FP epsilon if 1+x is the smallest representable number greater
than 1 (when evaluated in the appropriate type). [...]

Now that I think of it, the two descriptions are
not the same. Mine is correct, as far as I know, but
Jacob's is subtly wrong. (Hint: Rounding modes.)


The standard says:
5.2.4.2.2:
DBL_EPSILON
the difference between 1 and the least value greater than 1 that is
representable in the given floating point type, b^(1-p)
Yes, but what you said in your tutorial was "If you add
a number smaller than this [epsilon] to 1.0, the result will
be 1.0," and that is not necessarily true. For example, on
the machine in front of me at the moment,

1.0 + DBL_EPSILON * 3 / 4

is greater than one, even though `DBL_EPSILON * 3 / 4' is
25% smaller than DBL_EPSILON.

--
Er*********@sun.com
Jun 29 '07 #24
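Eric's example is easy to reproduce (a sketch, assuming IEEE 754 doubles and the default round-to-nearest mode):

#include <stdio.h>
#include <float.h>

int main(void)
{
    volatile double x = DBL_EPSILON * 3.0 / 4.0;   /* 25% smaller than DBL_EPSILON */
    volatile double sum = 1.0 + x;                 /* rounds up to 1 + DBL_EPSILON */

    printf("x         = %.17g\n", (double)x);
    printf("1 + x     = %.17g\n", (double)sum);
    printf("1 + x > 1 ? %d\n", sum > 1.0);
    return 0;
}

A value smaller than DBL_EPSILON can therefore still change 1.0, which is why "the smallest number that changes the result of an addition" is too strong a wording.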
jacob navia wrote:
Eric Sosman wrote:
.... snip ...
>>
I don't think so. The fraction of a normalized, non-zero, finite
IEEE number has a value 0.5 <= f < 1, so unity is represented as
two to the first times one-half: 2^1 * .100...000(2). The
unbiased exponent value in the representation of unity is
therefore one, not zero.

I think you forget the implicit bit Eric.
In some systems. Not necessarily C. Do try to stay on topic.

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 29 '07 #25
CBFalconer wrote:
jacob navia wrote:
>Eric Sosman wrote:
... snip ...
>>I don't think so. The fraction of a normalized, non-zero, finite
IEEE number has a value 0.5 <= f < 1, so unity is represented as
two to the first times one-half: 2^1 * .100...000(2). The
unbiased exponent value in the representation of unity is
therefore one, not zero.
I think you forget the implicit bit Eric.

In some systems. Not necessarily C. Do try to stay on topic.
The C standard assumes IEEE 754 representation Chuck.

Jun 29 '07 #26
In article <46***********************@news.orange.fr>,
jacob navia <ja***@jacob.remcomp.fr> wrote:
>The C standard assumes IEEE 754 representation Chuck.
C89 doesn't.

--
There are some ideas so wrong that only a very intelligent person
could believe in them. -- George Orwell
Jun 29 '07 #27

"jacob navia" <ja***@jacob.remcomp.frha scritto nel messaggio news:46***********************@news.orange.fr...
Army1987 wrote:
>"jacob navia" <ja***@jacob.remcomp.frha scritto nel messaggio news:46***********************@news.orange.fr...
>>Richard Heathfield wrote:
jacob navia said:

<snip>

For the different
representations we have in the standard header <float.h>:
>
#define FLT_EPSILON 1.19209290e-07F // float
#define DBL_EPSILON 2.2204460492503131e-16 // double
#define LDBL_EPSILON 1.084202172485504434007452e-19L //long double
// qfloat epsilon truncated so that it fits in this page...
#define QFLT_EPSILON 1.09003771904865842969737513593110651 ... E-106
Conforming implementations must not define QFLT_EPSILON in <float.h>
The C standard paragraph J.5.6: Common extensions:
J.5.6 Other arithmetic types
Additional arithmetic types, such as _ _int128 or double double, and
their appropriate conversions are defined (6.2.5, 6.3.1). Additional
floating types may have more range or precision than long double, may be
used for evaluating expressions of other floating types, and may be
used to define float_t or double_t.
Where does it allow you to use an identifier which doesn't begin
with an underscore and the standard never reserves?
What happens if i write a program with the lines
#include <float.h>
int main(int QFLT_EPSILON, char *argv[])
(yes, I'm allowed to do that) and try to compile it with your
compiler?

You will have to add the -ansic flag to your compilation flags
Thanks. (Not that I'm ever likely to do that, but if I can
disable it, it is no worse (at least in this respect) than gcc
not allowing me to name a variable random unless I use -ansi,
because of the POSIX function named that way.)
Jun 29 '07 #28

"jacob navia" <ja***@jacob.remcomp.frha scritto nel messaggio news:46***********************@news.orange.fr...
CBFalconer wrote:
>jacob navia wrote:
>>Eric Sosman wrote:
... snip ...
>>>I don't think so. The fraction of a normalized, non-zero, finite
IEEE number has a value 0.5 <= f < 1, so unity is represented as
two to the first times one-half: 2^1 * .100...000(2). The
unbiased exponent value in the representation of unity is
therefore one, not zero.
I think you forget the implicit bit Eric.

In some systems. Not necessarily C. Do try to stay on topic.
The C standard assumes IEEE 754 representation Chuck.
It doesn't. It says that an implementation can only define
__STDC_IEC_559__ when it conforms to that standard. C doesn't even
require FLT_RADIX to be a power of 2.

(Hey, n1124.pdf has a blank between the two underscores...)
Jun 29 '07 #29
jacob navia wrote:
CBFalconer wrote:
>jacob navia wrote:
.... snip ...
>>
>>I think you forget the implicit bit Eric.

In some systems. Not necessarily C. Do try to stay on topic.

The C standard assumes IEEE 754 representation Chuck.
No it doesn't. It allows it.

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 29 '07 #30
jacob navia <ja***@jacob.remcomp.fr> writes:
CBFalconer wrote:
>jacob navia wrote:
>>Eric Sosman wrote:
... snip ...
>>>I don't think so. The fraction of a normalized, non-zero, finite
IEEE number has a value 0.5 <= f < 1, so unity is represented as
two to the first times one-half: 2^1 * .100...000(2). The
unbiased exponent value in the representation of unity is
therefore one, not zero.
I think you forget the implicit bit Eric.
In some systems. Not necessarily C. Do try to stay on topic.
The C standard assumes IEEE 754 representation Chuck.
It most certainly does not, and it never has.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 29 '07 #31
Keith Thompson wrote:
jacob navia <ja***@jacob.remcomp.fr> writes:
>CBFalconer wrote:
>>jacob navia wrote:
Eric Sosman wrote:
... snip ...
I don't think so. The fraction of a normalized, non-zero, finite
IEEE number has a value 0.5 <= f < 1, so unity is represented as
two to the first times one-half: 2^1 * .100...000(2). The
unbiased exponent value in the representation of unity is
therefore one, not zero.
I think you forget the implicit bit Eric.
In some systems. Not necessarily C. Do try to stay on topic.
The C standard assumes IEEE 754 representation Chuck.

It most certainly does not, and it never has.
In my copy of the standard there is a lengthy
Annex F (normative) IEC 60559 floating-point arithmetic

This annex specifies C language support for the IEC 60559 floating-point
standard. The
IEC 60559 floating-point standard is specifically Binary floating-point
arithmetic for
microprocessor systems, second edition (IEC 60559:1989), previously
designated
IEC 559:1989 and as IEEE Standard for Binary Floating-Point Arithmetic
(ANSI/IEEE 754−1985). IEEE Standard for Radix-Independent Floating-Point
Arithmetic (ANSI/IEEE 854−1987) generalizes the binary standard to remove
dependencies on radix and word length. IEC 60559 generally refers to the
floating-point
standard, as in IEC 60559 operation, IEC 60559 format, etc. An
implementation that
defines _ _STDC_IEC_559_ _ shall conform to the specifications in this
annex. Where
a binding between the C language and IEC 60559 is indicated, the IEC
60559-specified
behavior is adopted by reference, unless stated otherwise.

So, obviously in some systems no standard floating
point will be used, but that should be extremely rare.
Jun 29 '07 #32

"jacob navia" <ja***@jacob.remcomp.frha scritto nel messaggio news:46***********************@news.orange.fr...
Keith Thompson wrote:
>jacob navia <ja***@jacob.remcomp.frwrites:
>>CBFalconer wrote:
jacob navia wrote:
Eric Sosman wrote:
... snip ...
>I don't think so. The fraction of a normalized, non-zero, finite
>IEEE number has a value 0.5 <= f < 1, so unity is represented as
>two to the first times one-half: 2^1 * .100...000(2). The
>unbiased exponent value in the representation of unity is
>therefore one, not zero.
I think you forget the implicit bit Eric.
In some systems. Not necessarily C. Do try to stay on topic.

The C standard assumes IEEE 754 representation Chuck.

It most certainly does not, and it never has.

In my copy of the standard there is a lengthy
Annex F (normative) IEC 60559 floating-point arithmetic

This annex specifies C language support for the IEC 60559 floating-point standard. The
IEC 60559 floating-point standard is specifically Binary floating-point arithmetic for
microprocessor systems, second edition (IEC 60559:1989), previously designated
IEC 559:1989 and as IEEE Standard for Binary Floating-Point Arithmetic
(ANSI/IEEE 754-1985). IEEE Standard for Radix-Independent Floating-Point
Arithmetic (ANSI/IEEE 854-1987) generalizes the binary standard to remove
dependencies on radix and word length. IEC 60559 generally refers to the floating-point
standard, as in IEC 60559 operation, IEC 60559 format, etc. An implementation that
defines _ _STDC_IEC_559_ _ shall conform to the specifications in this annex. Where
a binding between the C language and IEC 60559 is indicated, the IEC 60559-specified
behavior is adopted by reference, unless stated otherwise.

So, obviously in some systems no standard floating
point will be used, but that should be extremely rare.
Read the first words of the last sentence you copy'n'pasted.
The standard explicitly allows FLT_RADIX to even be a power of 10.
Read 5.2.4.2.2 throughout.
Jun 29 '07 #33
In article <46***********************@news.orange.fr>,
jacob navia <ja***@jacob.remcomp.fr> wrote:
>Eric Sosman wrote:
> For IEEE double, there are about four and a half
thousand million million distinct values strictly less
than DBL_EPSILON which when added to 1 will produce the
sum 1+DBL_EPSILON in "round to nearest" mode, which is
the mode in effect when a C program starts.
>Great Eric, unnormalized numbers exist.
>What's your point?
Eric isn't talking about unnormalized numbers.

Consider DBL_EPSILON . It is noticably less than 1/2 so in IEEE
754 format, it will be normalized as some negative exponent
followed by a hidden 1 followed by some 53 bit binary fraction.
Take that 53 bit binary fraction as an integer and subtract 1 from
it, and construct a double with the same exponent as DBL_EPSILON
but the reduced fraction. Call the result, say, NEARLY_DBL_EPSILON .
Now, take 1 + NEARLY_DBL_EPSILON . What is the result?
Algebraicly, it isn't quite 1 + DBL_EPSILON, but floating point
arithmetic doesn't obey normal algebra, so the result depends upon
the IEEE rounding mode in effect. If "round to nearest" (or
perhaps some other modes) is in effect, the result is close enough to
1 + DBL_EPSILON that the processor will round the result to
1 + DBL_EPSILON . C only promises accuracy to at best 1 ULP
("Unit in the Last Place"), so this isn't "wrong" (and what
exactly is right or wrong in such a case is arguable.)
--
"It is important to remember that when it comes to law, computers
never make copies, only human beings make copies. Computers are given
commands, not permission. Only people can be given permission."
-- Brad Templeton
Jun 29 '07 #34
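The same point can be written out with nextafter() from <math.h> (a sketch, assuming IEEE 754 doubles and round-to-nearest; link with -lm where needed):

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* the largest double strictly below DBL_EPSILON */
    double nearly = nextafter(DBL_EPSILON, 0.0);
    volatile double sum = 1.0 + nearly;

    printf("nearly           = %.17g\n", nearly);
    printf("DBL_EPSILON      = %.17g\n", DBL_EPSILON);
    printf("1 + nearly       = %.17g\n", (double)sum);
    printf("== 1 + epsilon ?   %d\n", sum == 1.0 + DBL_EPSILON);
    return 0;
}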
On Fri, 29 Jun 2007 10:40:58 +0000, in comp.lang.c , Richard
Heathfield <rj*@see.sig.invalid> wrote:
>Richard Tobin said:

<snip>
>You're not being reasonable.

Shurely shome mishtake?
On your part, yes.
>He's writing a C tutorial

The internal evidence of his articles suggests otherwise.
And now you're being gratuitous too. Give Jacob his due - he's trying
to write a C tutorial, he's very sensibly asking for a review here,
and is prepared to take in comments even from people who he has had
run-ins with before. Stop being so childish.
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jun 29 '07 #35
jacob navia <ja***@jacob.remcomp.fr> writes:
Keith Thompson wrote:
>jacob navia <ja***@jacob.remcomp.fr> writes:
[...]
>>The C standard assumes IEEE 754 representation Chuck.
It most certainly does not, and it never has.

In my copy of the standard there is a lengthy
Annex F (normative) IEC 60559 floating-point arithmetic

This annex specifies C language support for the IEC 60559
floating-point standard. The IEC 60559 floating-point standard is
specifically Binary floating-point arithmetic for microprocessor
systems, second edition (IEC 60559:1989), previously designated IEC
559:1989 and as IEEE Standard for Binary Floating-Point Arithmetic
(ANSI/IEEE 754−1985). IEEE Standard for Radix-Independent
Floating-Point Arithmetic (ANSI/IEEE 854−1987) generalizes the
binary standard to remove dependencies on radix and word length. IEC
60559 generally refers to the floating-point standard, as in IEC
60559 operation, IEC 60559 format, etc. An implementation that
defines _ _STDC_IEC_559_ _ shall conform to the specifications in
this annex. Where a binding between the C language and IEC 60559 is
indicated, the IEC 60559-specified behavior is adopted by reference,
unless stated otherwise.
Right. The relevant sentence is:

An implementation that defines __STDC_IEC_559__ shall conform to
the specifications in this annex.

See also C99 6.10.8p2, "Predefined macro names":

The following macro names are conditionally defined by the
implementation:

__STDC_IEC_559__ The integer constant 1, intended to indicate
conformance to the specifications in annex F (IEC 60559
floating-point arithmetic).

[...]

An implementation is not required to conform to annex F. It's merely
required to do so *if* it defines __STDC_IEC_559__.
So, obviously in some systems no standard floating
point will be used, but that should be extremely rare.
Why should they be rare? Most new systems these days do implement
IEEE floating-point, or at least use the formats, but there are still
systems that don't (VAX, some older Crays, IBM mainframes). Annex F
merely provides a framework for implementations that do support IEEE
FP, but it's explicitly optional.

I can also imagine an implementation that uses the IEEE floating-point
formats, but doesn't meet the requirements of Annex F; such an
implementation could not legally define __STDC_IEC_559__. (I have no
idea whether such implementations exist, or how common they are.)

And of course the __STDC_IEC_559__ and annex F are new in C99, so they
don't apply to the majority of implementations that don't conform to
the C99 standard.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 30 '07 #36
Here's *my* floating-point tutorial:

Read "What every computer scientist should know about floating-point
arithmetic", by David Goldberg.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 30 '07 #37
Army1987 wrote:
>I forgot about the rounding issue completely. The good "test" would be
DBL_EPSILON*1/4

NO! The rounding mode needn't be to nearest!

According to the C standard:
(Annex F.7.3)

At program startup the floating-point environment is initialized as
prescribed by IEC 60559:
— All floating-point exception status flags are cleared.

— The rounding direction mode is rounding to nearest. (!!!!!)

— The dynamic rounding precision mode (if supported) is set so that
results are not shortened.
Jun 30 '07 #38
jacob navia wrote:
Army1987 wrote:
>>I forgot about the rounding issue completely. The good "test" would be
DBL_EPSILON*1/4

NO! The rounding mode needn't be to nearest!


According to the C standard:
(Annex F.7.3)

At program startup the floating-point environment is initialized as
prescribed by IEC 60559:
— All floating-point exception status flags are cleared.

— The rounding direction mode is rounding to nearest. (!!!!!)

— The dynamic rounding precision mode (if supported) is set so that
results are not shortened.
As has been pointed out to you, annex F does not apply unless
__STDC_IEC_559__ is defined by the implementation.
Jun 30 '07 #39
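A program can check that for itself (a trivial sketch; the macro is optional, and its absence says nothing about whether the hardware happens to use IEEE formats):

#include <stdio.h>

int main(void)
{
#ifdef __STDC_IEC_559__
    puts("__STDC_IEC_559__ defined: Annex F applies, so the startup");
    puts("rounding direction is round-to-nearest (F.7.3).");
#else
    puts("__STDC_IEC_559__ not defined: no Annex F guarantees, including");
    puts("no guarantee about the startup rounding direction.");
#endif
    return 0;
}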
JT
Army1987 wrote:
NO! The rounding mode needn't be to nearest!
jacob navia <j...@jacob.remcomp.fr> wrote:
According to the C standard:
(Annex F.7.3)

At program startup the floating-point environment
is initialized as prescribed by IEC 60559:
- All floating-point exception status flags are cleared.
- The rounding direction mode is rounding to nearest. (!!!!!)
To Jacob: Eric Sosman already SAID on June 29
that "rounding to nearest" is the start-up default.
But the program can change the default after start up.

So the FLAWED PARAPHRASE you keep insisting on is not
a universal fact. You should use the OFFICIAL DEFINITION
of epsilon, rather than the FLAWED paraphrase
you KEEP insisting on.
On Jun 29, 7:46 pm, Eric Sosman wrote:
Wrong again, Jacob. Twice.
First...
Second...
there are about four and a half
thousand million million distinct values
strictly less than DBL_EPSILON which
when added to 1 will produce the
sum 1+DBL_EPSILON in "round to nearest" mode,
which is the mode in effect when a C program starts.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^
Jun 30 '07 #40
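A short demonstration of that last point (a sketch, assuming an implementation whose <fenv.h> defines FE_UPWARD and whose double is IEEE binary64; some compilers ignore the FENV_ACCESS pragma with a warning):

#include <stdio.h>
#include <float.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
#ifdef FE_UPWARD
    volatile double tiny = DBL_EPSILON / 4.0;   /* well under half an ULP of 1.0 */
    volatile double sum;
    int saved = fegetround();

    sum = 1.0 + tiny;                           /* startup default: round to nearest */
    printf("to nearest: 1 + eps/4 > 1 ?  %d\n", sum > 1.0);

    fesetround(FE_UPWARD);                      /* now any positive addend bumps 1.0 upward */
    sum = 1.0 + tiny;
    printf("upward    : 1 + eps/4 > 1 ?  %d\n", sum > 1.0);

    fesetround(saved);
#endif
    return 0;
}

Under round to nearest the first addition leaves 1.0 unchanged; after switching the rounding direction the same addition produces a result greater than 1.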
