
Machine precision

Hello (not sure this is the right forum for that question so please redirect
me if necessary)

How can I know how many double values are available between 0 and 1?
On my machine (pentium 3) I get a sizeof(double) = 8

Is the distribution of the double values in a fixed range (e.g. here between 0
and 1) uniform? i.e. is there the same number of values in the range [0.0 ; 0.1[
as in the range [0.9 ; 1.0[?

How can I interpret the DBL_EPSILON value (which is 2.22045e-16 on my
machine).

Any good website on the subject to recommend?

Thanks Phil
Jul 19 '05 #1
24 replies, 7976 views
Philipp wrote:
Hello (not sure this is the right forum for that question so please redirect
me if necessary)

How can I know how many double values are available between 0 and 1?
On my machine (pentium 3) I get a sizeof(double) = 8
The precision of a type double is left up to the compiler. The C++
specification states a minimum precision, but your compiler is allowed
to exceed that precision, regardless of whether the processor has
the capability or not. The compiler is allowed to use software
for floating-point calculations. Summary: see your compiler
documentation or ask in a newsgroup about your compiler.
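
One portable way to check is to ask the library directly. The following is only
a sketch: it reports whatever <limits> advertises for this particular
implementation and assumes nothing about IEEE 754.

// Report the precision this compiler actually provides for double.
#include <iostream>
#include <limits>

int main()
{
    typedef std::numeric_limits<double> lim;
    std::cout << "sizeof(double)  = " << sizeof(double) << "\n"
              << "radix           = " << lim::radix << "\n"
              << "mantissa digits = " << lim::digits << "\n"      // same value as DBL_MANT_DIG
              << "decimal digits  = " << lim::digits10 << "\n"
              << "epsilon         = " << lim::epsilon() << "\n"
              << "min exponent    = " << lim::min_exponent << "\n"
              << "max exponent    = " << lim::max_exponent << "\n";
    return 0;
}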

Is the distribution of the double values in a fixed range (e.g. here between 0
and 1) uniform? i.e. is there the same number of values in the range [0.0 ; 0.1[
as in the range [0.9 ; 1.0[?
My guess is that the distribution is uniform, and depends on the
limits set by the compiler.
How can I interpret the DBL_EPSILON value (which is 2.22045e-16 on my
machine).
My understanding is that DBL_EPSILON is the finest resolution for a
double. Although you may want to check the C++ specification on that.
Any good website on the subject to recommend?

Thanks Phil


Probably the site for the ANSI electronic documents.

--
Thomas Matthews

C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.raos.demon.uk/acllc-c++/faq.html
Other sites:
http://www.josuttis.com -- C++ STL Library book

Jul 19 '05 #2

"Philipp" wrote:
How can I know how many double values are available between 0 and 1?
On my machine (pentium 3) I get a sizeof(double) = 8

Is the distribution of the double values in a fixed range (e.g. here between 0 and 1) uniform? i.e. is there the same number of values in the range [0.0 ; 0.1[ as in
the range [0.9 ; 1.0[?

How can I interpret the DBL_EPSILON value (which is 2.22045e-16 on my
machine).

Any good website on the subject to recommend?


C++ doubles are based on the IEEE754-standard, which most CPUs implement.

There is a nice paper titled "What Every Computer Scientist Should Know
About Floating-Point Arithmetic" by David Goldberg. It answers all of your
questions, except the DBL_EPSILON one.

DBL_EPSILON should be the smallest double d so that (1+d)!=1 IIRC.

HTH,
Patrick
Jul 19 '05 #3
"Thomas Matthews" <Th**********************@sbcglobal.net> wrote in message
news:2R******************@newssvr16.news.prodigy.com...
Philipp wrote:
Hello (not sure this is the right forum for that question so please redirect
me if necessary)

How can I know how many double values are available between 0 and 1?
On my machine (pentium 3) I get a sizeof(double) = 8


The precision of a type double is left up to the compiler. The C++
specification states a minimum precision, but your compiler is allowed
to exceed that precision, regardless of whether the processor has
the capability or not. The compiler is allowed to use software
for floating-point calculations. Summary: see your compiler
documentation or ask in a newsgroup about your compiler.


All true, but that doesn't answer the OP's question. There's only a loose
relation between the number of bytes occupied by a floating-point value
and the number of values it can represent between 0 and 1. To first order,
the representation typically uses one bit to represent the sign of the
value and one bit to represent the sign of the exponent. That's a slight
simplification, and the range of exponents is often asymmetric around 1.0.
But this is enough to tell you that, for eight-bit bytes, you can expect
about 2^62 values between 0.0 and 1.0. FWIW.
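
That estimate is easy to check on a typical machine. The sketch below assumes
the implementation uses IEEE 754 binary64 for double (which, as noted elsewhere
in this thread, the language does not require); for non-negative finite doubles
the values sort in the same order as their bit patterns, so the count of
doubles in [0, 1) is simply the bit pattern of 1.0 read as an unsigned integer.

// Count the representable doubles in [0.0, 1.0), assuming IEEE 754 binary64.
// Every 64-bit pattern below that of 1.0 names a distinct double in [0.0, 1.0),
// zero and the denormals included.
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    double one = 1.0;
    std::uint64_t bits;
    std::memcpy(&bits, &one, sizeof bits);              // bit pattern of 1.0
    std::cout << "doubles in [0, 1): " << bits << "\n"; // 4607182418800017408, roughly 2^62
    return 0;
}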
Is the distribution of the double values in a fixed range (e.g. here between 0
and 1) uniform? i.e. is there the same number of values in the range [0.0 ; 0.1[
as in the range [0.9 ; 1.0[?


My guess is that the distribution is uniform, and depends on the
limits set by the compiler.


No, the distribution is extremely *non* uniform, with values much more densely
packed close to zero.
How can I interpret the DBL_EPSILON value (which is 2.22045e-16 on my
machine).


My understanding is that DBL_EPSILON is the finest resolution for a
double. Although you may want to check the C++ specification on that.


DBL_EPSILON is the smallest value you can add to 1.0 and get a representable
answer greater than 1.0. It's a measure of the granularity of values in the
uniform range just above 1.0. (If the floating-point base is 2, the values are
twice as dense in the uniform range just below 1.0.)
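
A sketch of both points, assuming a binary, IEEE-style double, round-to-nearest
rounding, and a C++11 <cmath> for std::nextafter (none of which the standard
guarantees):

// DBL_EPSILON is the spacing just above 1.0; the grid just below 1.0 is twice
// as fine. The volatile qualifiers guard against the compiler carrying the
// sums in extended precision.
#include <cfloat>
#include <cmath>
#include <iostream>

int main()
{
    volatile double one        = 1.0;
    volatile double just_above = one + DBL_EPSILON;        // smallest representable value > 1.0
    volatile double half_step  = one + DBL_EPSILON / 2.0;  // rounds back down to 1.0

    std::cout << std::boolalpha
              << "1 + DBL_EPSILON   > 1 : " << (just_above > one) << "\n"   // true
              << "1 + DBL_EPSILON/2 > 1 : " << (half_step > one) << "\n";   // false

    double below = std::nextafter(1.0, 0.0);   // largest double < 1.0
    std::cout << "gap just below 1.0 : " << (1.0 - below) << "\n"   // DBL_EPSILON / 2
              << "DBL_EPSILON        : " << DBL_EPSILON << "\n";
    return 0;
}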
Any good website on the subject to recommend?


The most readable intro to this stuff I've ever read is an ancient book
by Pat Sterbenz, called Floating Point Computation. Wish I could think of
a modern version that's as good. You can try reading the preambles to the
various modern floating-point formats, particularly those based on IEEE 754,
but they seldom discuss the implications of the representation.

HTH,

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Jul 19 '05 #4
"Patrick Frankenberger" <p.*************@gmx.net> wrote in message
news:bn*************@news.t-online.com...
C++ doubles are based on the IEEE754-standard, which most CPUs implement.
Sadly, there is no such requirement. It is often the case, however, because
most modern processors do indeed implement IEEE 754 floating-point arithmetic.
There is a nice paper titled "What Every Computer Scientist Should Know
About Floating-Point Arithmetic" by David Goldberg. It answers all of your
questions, except the DBL_EPSILON one.
Generally good reading, if a bit alarmist.
DBL_EPSILON should be the smallest double d so that (1+d)!=1 IIRC.


You RC.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Jul 19 '05 #5
Patrick Frankenberger wrote:

C++ doubles are based on the IEEE754-standard, which most CPUs implement.


It's the other way around: most CPUs implement IEEE 754, so that's what
most C++ implementations do. The C++ standard does not require IEEE 754.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 19 '05 #6
P.J. Plauger wrote:
No, the distribution is extremely *non* uniform, with values much more densely
packed close to zero.


This has me interested, since I would have assumed the same as the
previous poster, i.e. that values would be evenly spaced according
to the smallest value (DBL_EPSILON).

Anyone have a simple explanation of why?

- Keith

Jul 19 '05 #7
On Mon, 20 Oct 2003 19:08:32 +0100, Keith S. <fa***@ntlworld.com> wrote:
P.J. Plauger wrote:
No, the distribution is extremely *non* uniform, with values much more
densely
packed close to zero.


This has me interested, since I would have assumed the same as the
previous poster, i.e. that values would be evenly spaced according
to the smallest value (DBL_EPSILON).

Anyone have a simple explanation of why?

- Keith


The spacing is constant only up to the point where the most significant bit becomes one.
Then the spacing becomes two times bigger.
You forgot about the exponent.
--
grzegorz
Jul 19 '05 #8
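
Spelled out as code, the point about the exponent looks like this. The sketch
assumes IEEE-style doubles and a C++11 <cmath> for std::nextafter, and prints
the gap to the next representable double at a few magnitudes:

// The gap between adjacent doubles (one unit in the last place) doubles every
// time the exponent steps up, so values are far more densely packed near zero
// than near 1.
#include <cmath>
#include <cstdio>

int main()
{
    const double samples[] = { 1e-300, 1e-10, 0.001, 0.1, 0.25, 0.5, 0.9, 1.0 };
    for (double x : samples) {
        double gap = std::nextafter(x, HUGE_VAL) - x;  // distance to the next double up
        std::printf("near %-7g the spacing is %g\n", x, gap);
    }
    return 0;
}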
gr**********@pacbell.net wrote:
The spacing is constant only up to the point where the most significant bit becomes one.
Then the spacing becomes two times bigger.
You forgot about the exponent.


Ah. All is clear :)

- Keith
Jul 19 '05 #9

"Keith S." wrote:
P.J. Plauger wrote:
No, the distribution is extremely *non* uniform, with values much more densely packed close to zero.


This has me interested, since I would have assumed the same as the
previous poster, i.e. that values would be evenly spaced according
to the smallest value (DBL_EPSILON).


A floating-point number is a*2^(b-offset), where
a is a signed integer and b is an unsigned integer.

HTH,
Patrick
Jul 19 '05 #10
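
std::frexp shows that decomposition on a live system, with the significand
scaled into [0.5, 1) rather than kept as an integer. A sketch only; the exact
storage layout remains implementation-defined, as the next reply points out.

// Split a few doubles into significand and power-of-two exponent.
#include <cmath>
#include <cstdio>

int main()
{
    const double values[] = { 0.1, 0.5, 0.9, 1.0, 6.25 };
    for (double x : values) {
        int exp = 0;
        double sig = std::frexp(x, &exp);   // x == sig * 2^exp, with 0.5 <= sig < 1
        std::printf("%g = %.17g * 2^%d\n", x, sig, exp);
        // std::ldexp(sig, exp) puts it back together exactly.
    }
    return 0;
}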

"Keith S." <fa***@ntlworld.com> wrote in message news:bn************@ID-169434.news.uni-berlin.de...
P.J. Plauger wrote:
No, the distribution is extremely *non* uniform, with values much more densely
packed close to zero.


This has me interested, since I would have assumed the same as the
previous poster, i.e. that values would be evenly spaced according
to the smallest value (DBL_EPSILON).

Anyone have a simple explanation of why?

FLOATING POINT. Do you understand mantissa and exponent?
Jul 19 '05 #11
Ron Natalie wrote:
FLOATING POINT. Do you understand mantissa and exponent?


I understand politeness, shame that you do not.

- Keith

Jul 19 '05 #12

"Keith S." <fa***@ntlworld.com> wrote in message news:bn************@ID-169434.news.uni-berlin.de...
Ron Natalie wrote:
FLOATING POINT. Do you understand mantissa and exponent?


I understand politeness, shame that you do not.

I wasn't trying to be impolite, just a bit terser than usual. Doubles aren't
just fractions, they shift. I thought the above would be enough of a hint
if you thought about it.
Jul 19 '05 #13
Ron Natalie wrote:
No, the distribution is extremely *non* uniform, with values much more densely packed close to zero.


This has me interested, since I would have assumed the same as the
previous poster, i.e. that values would be evenly spaced according
to the smallest value (DBL_EPSILON).

Anyone have a simple explanation of why?

FLOATING POINT. Do you understand mantissa and exponent?


In the dark ages that thing was called, mistakenly, mantissa. It has
nothing to do with the mantissa as in logarithms. Many (most?) people are
now using a much less tortured term "significand". AFAIK that word was
coined specifically for the use at hand. Much better to invent a word than
to use one wrongly, which is what has been done in this field. So if the
poster knew about mantissas (which he probably did) he would be doubly
confused.
Jul 19 '05 #14
Philipp wrote:
How can I know how many double values are available between 0 and 1?
On my machine (pentium 3) I get a sizeof(double) = 8

Is the distribution of the double values in a fixed range
(e.g. here between 0 and 1) uniform?
i.e. is there the same number of values in the range [0.0 ; 0.1[
as in the range [0.9 ; 1.0[?

How can I interpret the DBL_EPSILON value
(which is 2.22045e-16 on my machine).

Any good website on the subject to recommend?


Read
What Every Computer Scientist Should Know About Floating-Point Arithmetic

http://docs.sun.com/db?p=/doc/800-7895
On your machine, a floating-point number

double x = (1 - 2*s)*m*2^e

where s in {0, 1} is the sign bit,
1/2 <= m < 1 is the *normalized* mantissa and
e is the exponent.
There are DBL_MANT_DIG = 53 binary digits
in the mantissa m but, since the most significant bit
is always 1, it isn't represented and is known as the hidden bit
so there are just 2^52 possible values for m.
For normalized double precision floating-point,
DBL_MIN_EXP = -1021 <= e <= 1024 = DBL_MAX_EXP.
When e = -1022, a *denormalized* double precision
floating-point number x = (1 - 2*s)*m*2^(-1021)
where 0 <= m < 1/2.
When e = +1025, x is Not a Number (NaN)
or a positive or negative infinity.
The IEEE representation is

SEM

where S is the sign bit,
E is an eleven bit [excess 1023] exponent
and 1 <= M < 2 is the 52 bit mantissa with a hidden 1 bit.

s = S
m = M/2
e = (E - 1023) + 1

Note that E = 0 when e = -1022 so that
the representation of +0 is all zeros.

Jul 19 '05 #15
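
The S, E and M fields can also be inspected directly. A sketch, again assuming
the machine stores double as IEEE 754 binary64 (1 sign bit, 11-bit excess-1023
exponent, 52 stored fraction bits plus the hidden leading 1):

// Pull the sign, biased exponent and fraction bits out of a double.
#include <cstdint>
#include <cstring>
#include <cstdio>

int main()
{
    double x = 0.75;                                  // 0.75 = 1.5 * 2^-1
    std::uint64_t bits;
    std::memcpy(&bits, &x, sizeof bits);

    unsigned s      = static_cast<unsigned>(bits >> 63);            // S: sign bit
    unsigned E      = static_cast<unsigned>((bits >> 52) & 0x7FFu); // E: biased exponent
    std::uint64_t M = bits & ((std::uint64_t(1) << 52) - 1);        // stored fraction bits

    std::printf("S = %u, E = %u (E - 1023 = %d), fraction = 0x%013llx\n",
                s, E, static_cast<int>(E) - 1023,
                static_cast<unsigned long long>(M));
    return 0;
}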
On Mon, 20 Oct 2003 20:19:31 +0200, "Patrick Frankenberger"
<p.*************@gmx.net> wrote in comp.lang.c++:

"Keith S." wrote:
P.J. Plauger wrote:
No, the distribution is extremely *non* uniform, with values much more densely packed close to zero.


This has me interested, since I would have assumed the same as the
previous poster, i.e. that values would be evenly spaced according
to the smallest value (DBL_EPSILON).


A floating-point number is a*2^(b-offset), where
a is a signed integer and b is an unsigned integer.

HTH,
Patrick


...on some platforms, perhaps all of those that you are familiar with.
The C++ language standard deliberately does not specify the
implementation details of the floating point types, and some are quite
different from the model you describe.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ ftp://snurse-l.org/pub/acllc-c++/faq
Jul 19 '05 #16
Ron Natalie wrote:
I wasn't trying to be impolite, just a bit terser than usual. Doubles aren't
just fractions, they shift. I thought the above would be enough of a hint
if you thought about it.


OK, fair enough. I obviously had not thought about it enough ;)

- Keith

Jul 19 '05 #17
"osmium" <r1********@comcast.net> writes:
Ron Natalie wrote:
No, the distribution is extremely *non* uniform, with values much more densely packed close to zero.

This has me interested, since I would have assumed the same as the
previous poster, i.e. that values would be evenly spaced according
to the smallest value (DBL_EPSILON).

Anyone have a simple explanation of why?

FLOATING POINT. Do you understand mantissa and exponent?


In the dark ages that thing was called, mistakenly, mantissa. It has
nothing to do with the mantissa as in logarithms. Many (most?) people are
now using a much less tortured term "significand".


Excuse me, but that's nonsense. Everybody I know uses the terms mantissa and
exponent.
Just because you or anybody else doesn't like mantissa doesn't make it wrong.

regards
frank

--
Frank Schmitt
4SC AG phone: +49 89 700763-0
e-mail: frankNO DOT SPAMschmitt AT 4sc DOT com
Jul 19 '05 #18
Frank Schmitt wrote:
Excuse me, but that's nonsense. Everybody I know uses the terms mantissa and
exponent.
Just because you or anybody else doesn't like mantissa doesn't make it wrong.


According to Knuth "it is an abuse of terminology to call the fraction
part a mantissa, since that term has quite a different meaning in
connection with logarithms".

But this is getting a bit pedantic...

- Keith

Jul 19 '05 #19
"Keith S." <fa***@ntlworld.com> wrote in message
news:bn************@ID-169434.news.uni-berlin.de...
Frank Schmitt wrote:
Excuse me, but that's nonsense. Everybody I know uses the terms mantissa and exponent.
Just because you or anybody else doesn't like mantissa doesn't make it
wrong.
According to Knuth "it is an abuse of terminology to call the fraction
part a mantissa, since that term has quite a different meaning in
connection with logarithms".

But this is getting a bit pedantic...


Hmm... sounds good ... but is it?

According to good ol' pedantic Webster:

Main Entry: pe·dan·tic
Pronunciation: pi-'dan-tik
Function: adjective
Date: circa 1600
1 : of, relating to, or being a pedant
2 : narrowly, stodgily, and often ostentatiously learned

Main Entry: man·tis·sa
Pronunciation: man-'ti-sə
Function: noun
Etymology: Latin mantisa, mantissa makeweight, from Etruscan
Date: circa 1847
: the part of a logarithm to the right of the decimal point

I'd say that Knuth, rather than being pedantic, was correct.

Main Entry: correct
Function: adjective
Etymology: Middle English, corrected, from Latin correctus, from past
participle of corrigere
Date: 1676
1 : conforming to an approved or conventional standard
2 : conforming to or agreeing with fact, logic, or known truth
3 : conforming to a set figure <enclosed the correct return postage>

Nevertheless, we still can USE the word mantissa for the numeric value of
floating-point encodings.
(IEEE standard calls it "the fraction.")

"When I use a word," Humpty Dumpty said in rather a scornful tone, "it means
just what I choose it to mean - neither more nor less."
Lewis Carroll
--
Gary
Jul 19 '05 #20

"Keith S." <fa***@ntlworld.com> wrote in message news:bn************@ID-169434.news.uni-berlin.de...
Frank Schmitt wrote:
Excuse me, but that's nonsense. Everybody I know uses the terms mantissa and
exponent.
Just because you or anybody else doesn't like mantissa doesn't make it wrong.


According to Knuth "it is an abuse of terminology to call the fraction
part a mantissa, since that term has quite a different meaning in
connection with logarithms".


Lots of words have different meanings in different contexts.
Knuth's no paragon of linguistic sanity. He can't even get typography right.

Jul 19 '05 #21
Keith S. writes:
According to Knuth "it is an abuse of terminology to call the fraction
part a mantissa, since that term has quite a different meaning in
connection with logarithms".

But this is getting a bit pedantic...


But it is not pedantic. Misappropriating and misusing a word from another
field can be disastrous, as it is in this case. You show me a hundred
programmers and I will show you a significant number who think it really
*IS* a mantissa as in logarithms. Which is the very reason for this
sub-thread.
Jul 19 '05 #22

"osmium" <r1********@comcast.net> wrote in message news:bn************@ID-179017.news.uni-berlin.de...
Keith S. writes:
According to Knuth "it is an abuse of terminology to call the fraction
part a mantissa, since that term has quite a different meaning in
connection with logarithms".

But this is getting a bit pedantic...
But it is not pedantic. Misappropriating and misusing a word from another
field can be disastrous, as it is in this case.


Show me how this is disastrous. It's not just a case of me "stealing the word."
It was done long before I got to it. It's no different than dozens of other terms
in a new science like computers.
You show me a hundred
programmers and I will show you a significant number who think it really
*IS* a mantissa as in logarithms.


Give me a break. I doubt that. Frankly, most programmers these days
have no clue what a mantissa means with respect to logarithms at all.
Those of us old farts remember using log charts where you'd have to
separate the whole and fractional parts of the logarithm, but those went
the way of the dodo 25 years ago when the inexpensive scientific calculator
came out. My log charts and my slide rule haven't seen the light of day in
decades.

Those who know what a logarithm mantissa is don't have to think too hard to realize
it just means the fractional part of the value, as opposed to the fractional part of
the exponent.
Jul 19 '05 #23
Ron Natalie writes:
"osmium" <r1********@comcast.net> wrote in message news:bn************@ID-179017.news.uni-berlin.de...
Keith S. writes:
According to Knuth "it is an abuse of terminology to call the fraction
part a mantissa, since that term has quite a different meaning in
connection with logarithms".

But this is getting a bit pedantic...


But it is not pedantic. Misappropriating and misusing a word from another field can be disastrous, as it is in this case.


Show me how this is disastrous. It's not just a case of me "stealing the word." It was done long before I got to it. It's no different than dozens of other terms in a new science like computers.
You show me a hundred
programmers and I will show you a significant number who think it really
*IS* a mantissa as in logarithms.
Give me a break. I doubt that. Frankly, most programmers these days
have no clue what a mantissa means with respect to logarithms at all.
Those of us old farts remember using log charts where you'd have to
separate the whole and fractional parts of the logarithm, but those went
the way of the dodo 25 years ago when the inexpensive scientific
calculator came out. My log charts and my slide rule haven't seen the light of day in decades.

Those who know what a logarithm mantissa is don't have to think too hard to realize it just means the fractional part of the value, as opposed to the fractional part of the exponent.


Your post is there. Res ipsa loquitur.
Jul 19 '05 #24
Thank you all for your answers.
The proposed article "What every computer scientist should know about
floating-point arithmetic" is definitely worth reading.

I'm using gcc 3.3 or the Metrowerks compiler and am surprised that sizeof(int) =
sizeof(long) = 4 (I thought long was bigger than int) and that sizeof(long
double) = 12 with gcc but sizeof(long double) = 8 with Metrowerks... Hmm, I need
to use some caution when doing precise calculations then...

Anyway that's not the point here :-) Best regards
Phil

Jul 19 '05 #25
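
Since every size Phil mentions is an implementation detail, the simplest
cross-compiler check is just to print them along with the mantissa width. A
closing sketch; the 12-byte long double under gcc on x86 is typically the
80-bit x87 extended format padded for alignment, while sizeof(long double) == 8
suggests long double is simply double on that compiler.

// Print the sizes and mantissa widths this implementation provides.
#include <iostream>
#include <limits>

int main()
{
    std::cout << "sizeof(int)               = " << sizeof(int) << "\n"
              << "sizeof(long)              = " << sizeof(long) << "\n"
              << "sizeof(double)            = " << sizeof(double) << "\n"
              << "sizeof(long double)       = " << sizeof(long double) << "\n"
              << "double mantissa bits      = " << std::numeric_limits<double>::digits << "\n"
              << "long double mantissa bits = " << std::numeric_limits<long double>::digits << "\n";
    return 0;
}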
