On Apr 4, 8:09 pm, Allin Cottrell <cottr...@wfu.edu> wrote:
Code fragment:
#include <stdio.h>

int main (void)
{
    double a = 1.0;
    double b = 0.9999999999999999;

    printf("%#12.5g\n", a);
    printf("%#12.5g\n", b);

    return 0;
}
I would expect the above to produce
1.0000
1.0000
but here it produces
1.0000
1.00000
Is this in conformity with the C standard? "Here" is linux, glibc
2.4, but I've seen the same thing on win32.
Here is what the standard says about the g format specifier:
"g,G A double argument representing a floating-point number is
converted in style f or e (or in style F or E in the case of a G
conversion specifier), with the precision specifying the number of
significant digits. If the precision is zero, it is taken as 1. The
style used depends on the value converted; style e (or E) is used only
if the exponent resulting from such a conversion is less than −4 or
greater than or equal to the precision. Trailing zeros are removed
from the fractional portion of the result unless the # flag is
specified; a decimal-point character appears only if it is followed by
a digit. A double argument representing an infinity or NaN is
converted in the style of an f or F conversion specifier."
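For reference, here is a small program of my own (not part of the quoted code) that illustrates the two behaviours that paragraph describes: the # flag keeping trailing zeros, and the switch from style f to style e once the exponent drops below -4 or reaches the precision:

#include <stdio.h>

int main (void)
{
    printf("%.5g\n",  1.0);        /* "1"          - trailing zeros (and the point) removed */
    printf("%#.5g\n", 1.0);        /* "1.0000"     - # flag keeps the trailing zeros        */
    printf("%#.5g\n", 0.0001234);  /* "0.00012340" - exponent -4, so style f is used        */
    printf("%#.5g\n", 0.00001234); /* "1.2340e-05" - exponent -5 < -4, so style e is used   */
    return 0;
}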
And here are the definitions of field width and precision:
"* An optional minimum field width. If the converted value has fewer
characters than the field width, it is padded with spaces (by default)
on the left (or right, if the left adjustment flag, described later,
has been given) to the field width. The field width takes the form of
an asterisk * (described later) or a decimal integer.
* An optional precision that gives the minimum number of digits to
appear for the d, i, o, u, x, and X conversions, the number of digits
to appear after the decimal-point character for a, A, e, E, f, and F
conversions, the maximum number of significant digits for the g and G
conversions, or the maximum number of bytes to be written for s
conversions. The precision takes the form of a period (.) followed
either by an asterisk * (described later) or by an optional decimal
integer; if only the period is specified, the precision is taken as
zero. If a precision appears with any other conversion specifier, the
behavior is undefined."
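As a side note, a quick sketch of my own showing those two components in the format used above, including the asterisk forms mentioned in the quote:

#include <stdio.h>

int main (void)
{
    double b = 0.9999999999999999;

    printf("[%#12.5g]\n", b);        /* width 12, precision 5, padded with spaces on the left */
    printf("[%#-12.5g]\n", b);       /* - flag: padded on the right instead                   */
    printf("[%#*.*g]\n", 12, 5, b);  /* same thing, width and precision given as * arguments  */
    return 0;
}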
I am not sure how the C library's printf computes significant digits, but it
seems logical to me that both outputs should look the same.
news:comp.std.c can probably offer the best explanation.
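One way to see where the discrepancy might come from (my own sketch, not a definitive reading of the standard): the f-versus-e decision hinges on "the exponent resulting from such a conversion", and converting 0.9999999999999999 in style e with 5 significant digits rounds it up, so that exponent is 0 rather than -1:

#include <stdio.h>

int main (void)
{
    double b = 0.9999999999999999;

    printf("%.4e\n", b);   /* 1.0000e+00 -> exponent after rounding is 0          */
    printf("%#.5g\n", b);  /* glibc prints 1.00000, as if the exponent were -1    */
    return 0;
}

If the library takes the exponent of the unrounded value (-1), it ends up using style f with 5 digits after the point instead of 4, which might explain the extra zero.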