ar**********@googlemail.com wrote:

Hello.

Is there any reliable way to determine the number of digits of precision given by the float/double types? I read somewhere that the C99 standard guarantees six digits of precision with the float type, although I can't seem to find this information now. A cursory search through the C99 PDF doesn't bring up anything particularly helpful.

Your search was a very cursory one, wasn't it:

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("Decimal precision of floating point types in "
           "this implementation:\n"
           "float: %d (standard requires at least 6)\n"
           "double: %d (standard requires at least 10)\n",
           FLT_DIG, DBL_DIG);
#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)
    printf("long double: %d (standard requires at least 10)\n",
           LDBL_DIG);
#endif
    return 0;
}

[Output on my current version]

Decimal precision of floating point types in this implementation:
float: 6 (standard requires at least 6)
double: 15 (standard requires at least 10)
long double: 18 (standard requires at least 10)