
# Machine precision

 P: n/a Hello (not sure this is the right forum for this question, so please redirect me if necessary).

How can I know how many double values are available between 0 and 1? On my machine (Pentium 3) I get sizeof(double) = 8.

Is the distribution of the double values in a fixed range (e.g. here between 0 and 1) uniform? I.e., is there the same number of values in the range [0.0 ; 0.1[ as in the range [0.9 ; 1.0[ ?

How can I interpret the DBL_EPSILON value (which is 2.22045e-16 on my machine)?

Any good website on the subject to recommend?

Thanks
Phil

Jul 19 '05 #1
24 Replies

 P: n/a Philipp wrote:
> How can I know how many double values are available between 0 and 1?
> On my machine (pentium 3) I get a sizeof(double) = 8

The precision of a type double is left up to the compiler. The C++ specification states a minimum precision, but your compiler is allowed to exceed that precision, regardless of whether the processor has the capability or not. The compiler is allowed to use software for floating point calculations. Summary: see your compiler documentation or ask in a newsgroup about your compiler.

> Is the distribution of the double values in a fixed range (eg here between 0 and 1) uniform? ie same number of values in the range [0.0 ; 0.1[ as in the range [0.9 ; 1.0[

My guess is that the distribution is uniform, and depends on the limits set by the compiler.

> How can I interpret the DBL_EPSILON value (which is 2.22045e-16 on my machine).

My understanding is that DBL_EPSILON is the finest resolution for a double, although you may want to check the C++ specification on that.

> Any good website on the subject to recommend?

Probably the site for the ANSI electronic documents.

--
Thomas Matthews
C++ newsgroup welcome message: http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq: http://www.raos.demon.uk/acllc-c++/faq.html
Other sites: http://www.josuttis.com -- C++ STL Library book

Jul 19 '05 #2

 P: n/a "Philipp" wrote:
> How can I know how many double values are available between 0 and 1?
> Is the distribution of the double values in a fixed range (eg here between 0 and 1) uniform?
> How can I interpret the DBL_EPSILON value (which is 2.22045e-16 on my machine).

C++ doubles are based on the IEEE754-standard, which most CPUs implement. There is a nice paper titled "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg. It answers all of your questions, except the DBL_EPSILON one. DBL_EPSILON should be the smallest double d so that (1+d) != 1, IIRC.

HTH,
Patrick

Jul 19 '05 #3

 P: n/a "Patrick Frankenberger" wrote in message news:bn*************@news.t-online.com...
> C++ doubles are based on the IEEE754-standard, which most CPUs implement.

Sadly, there is no such requirement. It is often the case, however, because most modern processors do indeed implement IEEE 754 floating-point arithmetic.

> There is a nice paper titled "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg. It answers all of your questions, except the DBL_EPSILON one.

Generally good reading, if a bit alarmist.

> DBL_EPSILON should be the smallest double d so that (1+d)!=1 IIRC.

You RC.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com

Jul 19 '05 #5

 P: n/a Patrick Frankenberger wrote:
> C++ doubles are based on the IEEE754-standard, which most CPUs implement.

It's the other way around: most CPUs implement IEEE 754, so that's what most C++ implementations do. The C++ standard does not require IEEE 754.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

Jul 19 '05 #6

 P: n/a P.J. Plauger wrote:
> No, the distribution is extremely *non* uniform, with values much more densely packed close to zero.

This has me interested, since I would have assumed the same as the previous poster, i.e. that values would be evenly spaced according to the smallest value (DBL_EPSILON). Anyone have a simple explanation of why?

- Keith

Jul 19 '05 #7

 P: n/a On Mon, 20 Oct 2003 19:08:32 +0100, Keith S. wrote:
> This has me interested, since I would have assumed the same as the previous poster, i.e. that values would be evenly spaced according to the smallest value (DBL_EPSILON). Anyone have a simple explanation of why?

Only up to the point where the most significant bit becomes one. Then the spacing becomes two times bigger. You forgot about the exponent.

--
grzegorz

Jul 19 '05 #8

 P: n/a gr**********@pacbell.net wrote:
> You forgot about the exponent.

Ah. All is clear :)

- Keith

Jul 19 '05 #9

 P: n/a "Keith S." wrote:
> This has me interested, since I would have assumed the same as the previous poster, i.e. that values would be evenly spaced according to the smallest value (DBL_EPSILON).

A floating-point number is:

  a * 2^(b - offset)

where a is a signed integer and b is an unsigned integer.

HTH,
Patrick

Jul 19 '05 #10

 P: n/a "Keith S." wrote in message news:bn************@ID-169434.news.uni-berlin.de...
> This has me interested, since I would have assumed the same as the previous poster, i.e. that values would be evenly spaced according to the smallest value (DBL_EPSILON). Anyone have a simple explanation of why?

FLOATING POINT. Do you understand mantissa and exponent?

Jul 19 '05 #11

 P: n/a Ron Natalie wrote:
> FLOATING POINT. Do you understand mantissa and exponent?

I understand politeness, shame that you do not.

- Keith

Jul 19 '05 #12

 P: n/a "Keith S." wrote in message news:bn************@ID-169434.news.uni-berlin.de...
> I understand politeness, shame that you do not.

I wasn't trying to be impolite, just a bit terser than usual. Doubles aren't just fractions, they shift. I thought the above would be enough of a hint if you thought about it.

Jul 19 '05 #13

 P: n/a Ron Natalie wrote:
> FLOATING POINT. Do you understand mantissa and exponent?

In the dark ages that thing was called, mistakenly, mantissa. It has nothing to do with the mantissa as in logarithms. Many (most?) people are now using a much less tortured term, "significand". AFAIK that word was coined specifically for the use at hand. Much better to invent a word than to use one wrongly, which is what has been done in this field. So if the poster knew about mantissas (which he probably did) he would be doubly confused.

Jul 19 '05 #14

 P: n/a Philipp wrote:
> How can I know how many double values are available between 0 and 1? Is the distribution of the double values in a fixed range uniform? How can I interpret the DBL_EPSILON value (which is 2.22045e-16 on my machine)? Any good website on the subject to recommend?

Read "What Every Computer Scientist Should Know About Floating-Point Arithmetic": http://docs.sun.com/db?p=/doc/800-7895

On your machine, a floating-point number

  double x = (1 - 2*s)*m*2^e

where s in {0, 1} is the sign bit, 1/2 <= m < 1 is the *normalized* mantissa, and e is the exponent. There are DBL_MANT_DIG = 53 binary digits in the mantissa m but, since the most significant bit is always 1, it isn't represented (it is known as the hidden bit), so there are just 2^52 possible values for m.

For normalized double precision floating-point, DBL_MIN_EXP = -1021 <= e <= 1024 = DBL_MAX_EXP. When e = -1022, a *denormalized* double precision floating-point number x = (1 - 2*s)*m*2^(-1021) where 0 <= m < 1/2. When e = +1025, x is Not a Number (NaN) or a positive or negative infinity.

The IEEE representation is SEM, where S is the sign bit, E is an eleven bit [excess 1023] exponent, and 1 <= M < 2 is the 52 bit mantissa with a hidden 1 bit:

  s = S
  m = M/2
  e = (E - 1023) + 1

Note that E = 0 when e = -1022, so that the representation of +0 is all zeros.

Jul 19 '05 #15

 P: n/a On Mon, 20 Oct 2003 20:19:31 +0200, "Patrick Frankenberger" wrote in comp.lang.c++:
> A floating-point number is: a*2^(b-offset)
> a is a signed integer and b is an unsigned integer.

...on some platforms, perhaps all of those that you are familiar with. The C++ language standard deliberately does not specify the implementation details of the floating point types, and some are quite different from the model you describe.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ ftp://snurse-l.org/pub/acllc-c++/faq

Jul 19 '05 #16

 P: n/a Ron Natalie wrote:
> Doubles aren't just fractions, they shift. I thought the above would be enough of a hint if you thought about it.

OK, fair enough. I obviously had not thought about it enough ;)

- Keith

Jul 19 '05 #17

 P: n/a "osmium" writes:
> In the dark ages that thing was called, mistakenly, mantissa. It has nothing to do with the mantissa as in logarithms. Many (most?) people are now using a much less tortured term "significand".

Excuse me, but that's nonsense. Everybody I know uses the terms mantissa and exponent. Just because you or anybody else doesn't like mantissa doesn't make it wrong.

regards
frank

--
Frank Schmitt
4SC AG phone: +49 89 700763-0
e-mail: frankNO DOT SPAMschmitt AT 4sc DOT com

Jul 19 '05 #18

 P: n/a Frank Schmitt wrote:
> Excuse me, but that's nonsense. Everybody I know uses the terms mantissa and exponent. Just because you or anybody else doesn't like mantissa doesn't make it wrong.

According to Knuth, "it is an abuse of terminology to call the fraction part a mantissa, since that term has quite a different meaning in connection with logarithms". But this is getting a bit pedantic...

- Keith

Jul 19 '05 #19

 P: n/a "Keith S." wrote in message news:bn************@ID-169434.news.uni-berlin.de...
> According to Knuth, "it is an abuse of terminology to call the fraction part a mantissa, since that term has quite a different meaning in connection with logarithms". But this is getting a bit pedantic...

Hmm... sounds good... but is it? According to good ol' pedantic Webster:

Main Entry: pe·dan·tic
Pronunciation: pi-'dan-tik
Function: adjective
Date: circa 1600
1 : of, relating to, or being a pedant
2 : narrowly, stodgily, and often ostentatiously learned

Main Entry: man·tis·sa
Pronunciation: man-'ti-s&
Function: noun
Etymology: Latin mantisa, mantissa makeweight, from Etruscan
Date: circa 1847
: the part of a logarithm to the right of the decimal point

I'd say that Knuth, rather than being pedantic, was correct.

Main Entry: correct
Function: adjective
Etymology: Middle English, corrected, from Latin correctus, from past participle of corrigere
Date: 1676
1 : conforming to an approved or conventional standard
2 : conforming to or agreeing with fact, logic, or known truth
3 : conforming to a set figure

Nevertheless, we still can USE the word mantissa for the numeric value of floating-point encodings. (The IEEE standard calls it "the fraction.")

"When I use a word," Humpty Dumpty said in rather a scornful tone, "it means just what I choose it to mean - neither more nor less." - Lewis Carroll

--
Gary

Jul 19 '05 #20

 P: n/a "Keith S." wrote in message news:bn************@ID-169434.news.uni-berlin.de...
> According to Knuth, "it is an abuse of terminology to call the fraction part a mantissa, since that term has quite a different meaning in connection with logarithms".

Lots of words have different meanings in different contexts. Knuth's no paragon of linguistic sanity. He can't even get typography right.

Jul 19 '05 #21

 P: n/a Keith S. writes:
> According to Knuth, "it is an abuse of terminology to call the fraction part a mantissa, since that term has quite a different meaning in connection with logarithms". But this is getting a bit pedantic...

But it is not pedantic. Misappropriating and misusing a word from another field can be disastrous, as it is in this case. You show me a hundred programmers and I will show you a significant number who think it really *IS* a mantissa as in logarithms. Which is the very reason for this sub-thread.

Jul 19 '05 #22

 P: n/a "osmium" wrote in message news:bn************@ID-179017.news.uni-berlin.de...
> But it is not pedantic. Misappropriating and misusing a word from another field can be disastrous, as it is in this case.

Show me how this is disastrous. It's not just a case of me "stealing the word." It was done long before I got to it. It's no different than dozens of other terms in a new science like computers.

> You show me a hundred programmers and I will show you a significant number who think it really *IS* a mantissa as in logarithms.

Give me a break. I doubt that. Frankly, most programmers these days have no clue what a mantissa means with respect to logarithms at all. Those of us old farts remember using log charts where you'd have to separate the whole and fractional parts of the logarithm, but those went the way of the dodo 25 years ago when the inexpensive scientific calculator came out. My log charts and my slide rule haven't seen the light of day in decades. Those who know what a logarithm mantissa is don't have to think too hard to realize it just means the fractional part of the value, as opposed to the fractional part of the exponent.

Jul 19 '05 #23

 P: n/a Ron Natalie writes:
> Show me how this is disastrous. It's not just a case of me "stealing the word." It was done long before I got to it.
> Give me a break. I doubt that. Frankly, most programmers these days have no clue what a mantissa means with respect to logarithms at all.

Your post is there. Res ipsa loquitur.

Jul 19 '05 #24

 P: n/a Thank you all for your answers. The proposed article "What Every Computer Scientist Should Know About Floating-Point Arithmetic" is definitely worth reading.

I'm using gcc 3.3 or the Metrowerks compiler, and am surprised that sizeof(int) = sizeof(long) = 4 (I thought long was bigger than int), and that sizeof(long double) = 12 (with gcc) and sizeof(long double) = 8 (with Metrowerks) are different... Hmm, need to use some caution when doing precise calculations then... Anyway, that's not the point here :-)

Best regards
Phil

Jul 19 '05 #25
