float limits

Why is it that FLT_DIG (from <float.h>) is 6 while DBL_DIG is 15?

Doing the math, the mantissa for floats is 24 bits = 2^24-1 max value
= 16,777,215.0f. Anything 8-digit odd # greater than that will be
rounded off.
For doubles, the mantissa is 53 bits = 2^53-1 max value =
9,007,199,254,740,991.0l (that's an L). So 16 digit odd numbers
greater than that will be rounded off. To get the actual precision we
take log(base 10) of those numbers and get 7.22 and 15.95
respectively.

...floats have greater than 7 digits precision and doubles only
greater than 15 digits. So how does MS guarantee no rounding errors
for 15 digit doubles yet 6 digit floats (if I understand correctly,
the last digit of precision must be used to round off the number...the
numbers are not just truncated at 7 & 15 digits...)

Anything I'm missing for the doubles case? It looks like they should
be guaranteeing 14 digits.
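
For illustration, here is a minimal sketch (assuming IEEE 754 single and
double precision) that prints the relevant <float.h> constants next to the
2^24 and 2^53 boundaries described above:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* mantissa widths and guaranteed decimal digits */
    printf("FLT_MANT_DIG = %d, FLT_DIG = %d\n", FLT_MANT_DIG, FLT_DIG);
    printf("DBL_MANT_DIG = %d, DBL_DIG = %d\n", DBL_MANT_DIG, DBL_DIG);

    /* boundaries where consecutive odd integers stop being exact */
    printf("%.1f\n", 16777215.0f);         /* 2^24-1, exact        */
    printf("%.1f\n", 16777217.0f);         /* 2^24+1, gets rounded */
    printf("%.1f\n", 9007199254740991.0);  /* 2^53-1, exact        */
    printf("%.1f\n", 9007199254740993.0);  /* 2^53+1, gets rounded */
    return 0;
}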
Nov 14 '05 #1
>Why is it that FLT_DIG (from <float.h>) is 6 while DBL_DIG is 15?

Doing the math, the mantissa for floats is 24 bits = 2^24-1 max value
= 16,777,215.0f. Anything 8-digit odd # greater than that will be
rounded off.
I don't think you get to count the "hidden 1" bit that actually
is not stored in the number. The maximum mantissa is 2**24-1.
The minimum mantissa, without changing the exponent, is 2**23.
That's 2**23 combinations.
For doubles, the mantissa is 53 bits = 2^53-1 max value =
9,007,199,254,740,991.0l (that's an L). So 16 digit odd numbers
greater than that will be rounded off. To get the actual precision we
take log(base 10) of those numbers and get 7.22 and 15.95
respectively.
I think you should subtract .30 (log base 10 of 2, one bit) from
each of those, giving 6.92 and 15.65, respectively.
...floats have greater than 7 digits precision and doubles only
greater than 15 digits. So how does MS guarantee no rounding errors
for 15 digit doubles yet 6 digit floats (if I understand correctly,
the last digit of precision must be used to round off the number...the
numbers are not just truncated at 7 & 15 digits...)
It's not just Microsoft: FreeBSD has the same values for the i386
platform. And I believe both are correct.
Anything I'm missing for the doubles case? It looks like they should
be guaranteeing 14 digits.


ANSI C gives formulas for the constants in <float.h>.
If b = FLT_RADIX (the base) and p = FLT_MANT_DIG (digits in that base), then

FLT_DIG = floor((p-1)*log10(b) ) + (1 if b is a power of 10, 0 otherwise).
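
A quick way to check that formula against the shipped headers (a sketch;
it assumes FLT_RADIX is 2, so the power-of-10 correction term is zero):

#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void)
{
    /* floor((p-1)*log10(b)) for float and double */
    int fdig = (int)floor((FLT_MANT_DIG - 1) * log10((double)FLT_RADIX));
    int ddig = (int)floor((DBL_MANT_DIG - 1) * log10((double)FLT_RADIX));

    printf("computed %d, FLT_DIG = %d\n", fdig, FLT_DIG);  /* 6 and 6   */
    printf("computed %d, DBL_DIG = %d\n", ddig, DBL_DIG);  /* 15 and 15 */
    return 0;
}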

Gordon L. Burditt
Nov 14 '05 #2
On 25 Aug 2004 17:06:31 -0700, zi****@gmail.com (ziller) wrote in
comp.lang.c:
Why is it that FLT_DIG (from <float.h>) is 6 while DBL_DIG is 15?
Because that is what the implementation documents that it provides, as
required by the C standard. FLT_DIG and DBL_DIG are required to be at
least 6 and 10 respectively.
Doing the math, the mantissa for floats is 24 bits = 2^24-1 max value
= 16,777,215.0f. Anything 8-digit odd # greater than that will be
rounded off.
For doubles, the mantissa is 53 bits = 2^53-1 max value =
9,007,199,254,740,991.0l (that's an L). So 16 digit odd numbers
greater than that will be rounded off. To get the actual precision we
take log(base 10) of those numbers and get 7.22 and 15.95
respectively.

...floats have greater than 7 digits precision and doubles only
greater than 15 digits. So how does MS guarantee no rounding errors
for 15 digit doubles yet 6 digit floats (if I understand correctly,
the last digit of precision must be used to round off the number...the
numbers are not just truncated at 7 & 15 digits...)

Anything I'm missing for the doubles case? It looks like they should
be guaranteeing 14 digits.


What you are missing is that the C standard imposes no requirements
for "no rounding errors". In fact rounding errors are guaranteed in
almost all floating point operations.

The definition of those terms is spelled out clearly in the C standard,
and it says nothing at all about rounding errors. Basically, these
values represent the largest number of decimal digits that can be
fully represented in the floating point type.

If FLT_DIG is 6, that means that any integral value in the range of
-999,999 to +999,999 can be placed into a float and then into a large
enough integer type and the result will be exactly the same as the
original number.

If DBL_DIG is 15, that means any integral value in the range
-999,999,999,999,999 to 999,999,999,999,999 can be placed into a
double and then into a large enough integer type (if one exists) and
the result will be exactly the same as the original value.

Nowhere is there any mention of rounding at all.
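
A brute-force illustration of that reading (a sketch, not part of the
original post): every integer with at most FLT_DIG = 6 decimal digits
should survive a round trip through float unchanged.

#include <stdio.h>

int main(void)
{
    /* round-trip every 6-digit integer through float */
    long failures = 0;
    for (long i = -999999; i <= 999999; i++) {
        float f = (float)i;
        if ((long)f != i)
            failures++;
    }
    printf("failures: %ld\n", failures);   /* prints 0 */
    return 0;
}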

If I assume that you mean Microsoft's 32-bit x86 implementations, you
have some errors in your calculations. Not the calculations
themselves, but your assumptions about the number of mantissa bits in
the Intel FPU single and double precision types, which are 23 and 52
respectively, not 24 and 53.

Which results in ranges of 8,388,609 and 4,503,599,627,370,496
respectively. There are 7 decimal digit numbers outside the range of
magnitude for the former, and 16 digit numbers for the latter.

<off-topic>

If you want to understand the actual format of Intel floating point
representations, you can download the documentation for free from
http://developer.intel.com. If you do, don't bother looking at the 80
bit extended precision format. Microsoft has decided that you aren't
qualified to use that format at the expense of "compatibility" among
Windows versions on various processors.

Here's a quote from Microsoft:

With the 16-bit Microsoft C/C++ compilers, long doubles are stored as
80-bit (10-byte) data types. Under Windows NT, in order to be
compatible with other non-Intel floating point implementations, the
80-bit long double format is aliased to the 64-bit (8-byte) double
format.

The complete web page may be found at:

http://support.microsoft.com/default...b;en-us;129209

</off-topic>

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Nov 14 '05 #3
ziller <zi****@gmail.com> wrote:

Why is it that FLT_DIG (from <float.h>) is 6 while DBL_DIG is 15?


Because of roundoff error. The definition of FLT_DIG requires that
*any* representable number with that many decimal digits can be rounded
into a float and back again without changing the value. Unless floats
are stored in base 10 (or a power of 10), there are roundoff errors on
both conversions that compound in the worst case. Thus, the C Standard
says the correct formula to use in the non-decimal case is:

floor((p-1)*log10(b))

where p is the precision and b is the base. For base 2 with 24 and 53
bits of precision, that yields 6 and 15 respectively.

-Larry Jones

There's a connection here, I just know it. -- Calvin
Nov 14 '05 #4
OK but I still don't understand why p-1. This hidden bit seems to
have been used in the latest versions of Visual Studio (I haven't looked
closely, but I believe the limits.h and float.h for VS changed between
versions 5.0 and 6.0).

But here's the thing. Try it yourself in code:

printf("%f\n", 16777215.0f);
printf("%f\n", 16777216.0f); // ... even
printf("%f\n", 16777217.0f); // ... odd

16777215 = 2^24-1 (not 23)...that's definitely 7.22 digits of precision
we're getting in VS (tested with 7.0).

la************@ugs.com wrote in message news:<ru************@jones.homeip.net>...
ziller <zi****@gmail.com> wrote:

Why is it that FLT_DIG (from <float.h>) is 6 while DBL_DIG is 15?


Because of roundoff error. The definition of FLT_DIG requires that
*any* representable number with that many decimal digits can be rounded
into a float and back again without changing the value. Unless floats
are stored in base 10 (or a power of 10), there are roundoff errors on
both conversions that compound in the worst case. Thus, the C Standard
says the correct formula to use in the non-decimal case is:

floor((p-1)*log10(b))

where p is the precision and b is the base. For base 2 with 24 and 53
bits of precision, that yields 6 and 15 respectively.

-Larry Jones

There's a connection here, I just know it. -- Calvin

Nov 14 '05 #5
>OK but I still don't understand why p-1. This hidden bit seems to
have been used in the latest versions of Visual Studio (I haven't looked
closely, but I believe the limits.h and float.h for VS changed between
versions 5.0 and 6.0).
Unless VS uses software emulation of floating point, the hidden
bit is a feature of the hardware floating-point format.
This does not rule out bugs in the header files but I see no obvious way
to compromise the security of Windows with FLT_DIG, so I don't think
even Microsoft would make this mistake.
But here's the thing. Try it yourself in code:

printf("%f\n", 16777215.0f);
printf("%f\n", 16777216.0f); // ... even
printf("%f\n", 16777217.0f); // ... odd

16777215 = 2^24-1 (not 23)...that's definitely 7.22 digits of precision
we're getting in VS (tested with 7.0).
Try BOTH ENDS of that range. You don't get to claim the maximum
precision, you claim the minimum that holds over the whole range.
printf("%f\n", 8388608.0f); // ... even
printf("%f\n", 8388609.0f); // ... odd


You're getting counts of 1 in a number of magnitude 2**23, thus
23 bits of precision. That you get more at the other end of the
range is not relevant: you have to use a value guaranteed over
the entire range.

Gordon L. Burditt
Nov 14 '05 #6
kal
zi****@gmail.com (ziller) wrote in message news:<53**************************@posting.google.com>...
Doing the math, the mantissa for floats is 24 bits =
2^24-1 max value = 16,777,215.


For single precision floating points, the fractional part
of the mantissa is stored in 23 bits.

The mantissa is said to have 24 bits of precision only under
the assumption of the leading bit of '1'. But this leading
bit business is true only for NORMALIZED forms.

Now, from the C99 thingy.

5.2.4.2.2 Characteristics of floating types <float.h>

3 In addition to normalized floating-point numbers ...
floating types may be able to contain other kinds of
floating-point numbers, such as subnormal floating-point
numbers ... and unnormalized floating-point numbers ...
Nov 14 '05 #7

"kal" <k_*****@yahoo.com> wrote in message
news:a5*************************@posting.google.com...
zi****@gmail.com (ziller) wrote in message news:<53**************************@posting.google.com>...
Doing the math, the mantissa for floats is 24 bits =
2^24-1 max value = 16,777,215.


For single precision floating points, the fractional part
of the mantissa is stored in 23 bits.

The mantissa is said to have 24 bits of precision only under
the assumption of the leading bit of '1'. But this leading
bit business is true only for NORMALIZED forms.

Now, from the C99 thingy.

5.2.4.2.2 Characteristics of floating types <float.h>

3 In addition to normalized floating-point numbers ...
floating types may be able to contain other kinds of
floating-point numbers, such as subnormal floating-point
numbers ... and unnormalized floating-point numbers ...

Whether it says so or not, the stuff in <float.h> doesn't apply to
subnormals, nor does <float.h> tell you whether subnormal operations are
enabled or not. If you care to be pedantic or historical, prior to the IEEE
standard, normalized numbers didn't suppress the most significant bit on all
machines. <float.h> doesn't care about that distinction either. If you saw
a <float.h> set up for 23 or 52 bit mantissa, you could be fairly certain it
was one of those machines that didn't suppress the MSB. C, of course, didn't
have the near-universal availability back then that it does now.
Nov 14 '05 #8
kal wrote:
zi****@gmail.com (ziller) wrote in message
Doing the math, the mantissa for floats is 24 bits =
2^24-1 max value = 16,777,215.


For single precision floating points, the fractional part
of the mantissa is stored in 23 bits.

The mantissa is said to have 24 bits of precision only under
the assumption of the leading bit of '1'. But this leading
bit business is true only for NORMALIZED forms.


Denormalized forms can have as few as 1 significant bit
in the significand. They don't count here.

--
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Nov 14 '05 #9
In article <14*************************@posting.google.com> gr****@hotmail.com (grv575) writes:
OK but I still don't understand why p-1.


You are missing something. As Larry Jones wrote:
The definition of FLT_DIG requires that
*any* representable number with that many decimal digits can be rounded
into a float and back again without changing the value.


Note the *any*. (Larry could have said "that many decimal digits of
precision".) If you consider only integer numbers, then, indeed, it could
have been 7. But there are ranges where two decimal numbers with 7 digits
of precision are different but nevertheless round to the same number in
24 bits of precision.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #10
In article <cg********@library2.airnews.net> go***********@burditt.org (Gordon Burditt) writes:
OK but I still don't understand why p-1. This hidden bit seems to
have been used in the latest versions of Visual Studio (I haven't looked
closely, but I believe the limits.h and float.h for VS changed between
versions 5.0 and 6.0).


Unless VS uses software emulation of floating point, the hidden
bit is a feature of the hardware floating-point format.


Yes. And so what? The hidden bit also counts as a bit of the mantissa.
printf("%f\n", 16777215.0f);
printf("%f\n", 16777216.0f); // ... even
printf("%f\n", 16777217.0f); // ... odd

16777215 = 2^24-1 (not 23)...that's definitely 7.22 digits of precision
we're getting in VS (tested with 7.0).


Try BOTH ENDS of that range. You don't get to claim the maximum
precision, you claim the minimum that holds over the whole range.
printf("%f\n", 8388608.0f); // ... even
printf("%f\n", 8388609.0f); // ... odd


You're getting counts of 1 in a number of magnitude 2**23, thus
23 bits of precision. That you get more at the other end of the
range is not relevant: you have to use a value guaranteed over
the entire range.


I have no idea what you intend to say here. 8388609 is represented
exactly, and as that is 2^23 + 1, I see 24 bits of precision.

Consider however the following two numbers:
9903521000000000000000000000 and 9903522000000000000000000000,
they have 7 decimal digits of precision. The first one is in binary
approximately (28 bits of precision rounded down):
2^70 * 100000000000000000000000.1001
and the second is:
2^70 * 100000000000000000000001.0110
Rounded to 24 bits nearest representable number both are rounded to the
same number: 9903521494874662916604297216 (2^93 + 2^70). You can verify
that both are closer to this number than to the next lower representable
number (9903520314283042199192993792) or the next higher representable
number (9903522675466283634015600640).

More examples can be found. This occurs when the density of decimal
numbers is high (just below a power of 10) and at the same time the
density of binary numbers is low (just above a power of 2).
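
That example can be checked directly; a small sketch using the two values
above (each has only 7 significant decimal digits):

#include <stdio.h>

int main(void)
{
    /* both literals have 7 significant decimal digits, yet they land
       on the same float, 2^93 + 2^70 */
    float a = 9903521000000000000000000000.0f;
    float b = 9903522000000000000000000000.0f;

    printf("a = %.28g\n", (double)a);
    printf("b = %.28g\n", (double)b);
    printf("equal: %s\n", (a == b) ? "yes" : "no");   /* yes */
    return 0;
}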
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #11
In article <a5*************************@posting.google.com> k_*****@yahoo.com (kal) writes:
zi****@gmail.com (ziller) wrote in message news:<53**************************@posting.google.com>...
Doing the math, the mantissa for floats is 24 bits =
2^24-1 max value = 16,777,215.


For single precision floating points, the fractional part
of the mantissa is stored in 23 bits.

The mantissa is said to have 24 bits of precision only under
the assumption of the leading bit of '1'. But this leading
bit business is true only for NORMALIZED forms.


Non-sequitur. Denormals can have as few as 1 bit of precision. The
rules are for normalised numbers.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #12
grv575 <gr****@hotmail.com> wrote:

16777215 = 2^24-1 (not 23)...that's definitely 7.22 digits of precision
we're getting in VS (tested with 7.0).


That's because you're testing integers, which generally do not suffer
from rounding errors. Try putting decimal points in front of those
numbers, converting them to binary and then back to decimal and see what
happens. Remember, the requirement is not just to convert each decimal
number to a unique binary number but rather to convert each decimal
number to a binary number that will convert back to the original decimal
number.
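
A concrete sketch of that failing round trip, reusing one of the 7-digit
values from Dik Winter's example earlier in the thread (decimal -> float
-> decimal at 7 significant digits):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* convert a 7-significant-digit decimal string to float and back */
    const char *original = "9.903522e27";
    float f = strtof(original, NULL);
    char back[32];
    snprintf(back, sizeof back, "%.7g", (double)f);
    printf("started with %s, got back %s\n", original, back);
    /* prints: started with 9.903522e27, got back 9.903521e+27 */
    return 0;
}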

-Larry Jones

I've got to start listening to those quiet, nagging doubts. -- Calvin
Nov 14 '05 #13
In article <n6************@jones.homeip.net> la************@ugs.com writes:
Remember, the requirement is not just to convert each decimal
number to a unique binary number but rather to convert each decimal
number to a binary number that will convert back to the original decimal
number.


The "-1" in the rules are there to be sure that each decimal number is
converted to a unique binary number (see my example in a previous
article of why the "-1" is needed). This forward unique conversion yields
automatically a valid back-conversion to the same original number. I think
that is even pretty easy to prove. (Just consider the spacing of numbers
in both bases.)
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #14
Jack Klein wrote:

Much major snipping..

If I assume that you mean Microsoft's 32-bit x86 implementations, you
have some errors in your calculations. Not the calculations
themselves, but your assumptions about the number of mantissa bits in
the Intel FPU single and double precision types, which are 23 and 52
respectively, not 24 and 53.


Where do you get that? The mantissa of float is surely 24 bits of
value, even if bit 23 is not actually there. All floats (except
sub-normal ones) are 'normalized' which means shifted left until the
msb of the mantissa (bit 23) is 1. Because the b23 value is always 1
in terms of the mantissa, we don't need to reserve actual space for
it. Instead, we use the space for the lsb of the exponent.

16777214
01001011 01111111 11111111 11111110
Exp = 150 (24)
00011000
Man = .11111111 11111111 11111110
1.67772140e+07

16777215
01001011 01111111 11111111 11111111
Exp = 150 (24)
00011000
Man = .11111111 11111111 11111111
1.67772150e+07

If I can represent 16777214 and 16777215 exactly, and I can (the
nonsense about 8388607 notwithstanding), the mantissa is effectively
24 bits wide, not 23.
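
For anyone who wants to reproduce those bit patterns, here is a small
sketch that dumps the sign, exponent, and stored mantissa of a float
(assuming a 32-bit IEEE 754 float; the raw bits are copied into a
uint32_t with memcpy):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

static void dump_float(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret the 32 raw bits */
    printf("%.1f: sign=%u exp=%u mantissa=0x%06X\n",
           (double)f,
           (unsigned)(bits >> 31),
           (unsigned)((bits >> 23) & 0xFFu),
           (unsigned)(bits & 0x7FFFFFu));
}

int main(void)
{
    dump_float(16777214.0f);   /* exp 150, mantissa 0x7FFFFE */
    dump_float(16777215.0f);   /* exp 150, mantissa 0x7FFFFF */
    return 0;
}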

--
Joe Wright mailto:jo********@comcast.net
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Nov 14 '05 #15
