Bytes IT Community

Initializing constants

The code block below initializes a read/write variable (usually placed in .bss) to the
value of pi. One problem, of many, is that any linked compilation unit may
change the global variable. Adjusting

// rodata
const long double const_pi=0.0;

lines to

// rodata
const long double const_pi=init_ldbl_pi();

would add additional protection, but it is not legal C and, rightly so,
fails on GCC/i386.

Do any C standards define a means to initialize constants to values
obtained from hardware, or does the total number of constants and/or
cross-compiling prohibit it completely (although, when cross-compiling,
the compiler could create a value using the resources available, i.e.
emulation)?

I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80-bit value of the fldpi
instruction; when compiling on non-i386 systems for i386 targets, an
emulation value is calculated from available resources.

BEGIN CODE
#include <stdio.h>

// prototype
long double init_ldbl_pi(void);

// rodata
const long double const_pi = 0.0;

// global function
long double init_ldbl_pi(void) {
    long double ldbl = 3.14;   // fallback when no x87 is available

#if (defined __GNUC__) && (defined __i386__)
    // fldpi pushes pi onto the x87 stack; fstpt pops it into ldbl.
    // Keeping both in one asm statement stops the compiler from
    // scheduling other x87 code in between the two instructions.
    __asm__("fldpi\n\tfstpt %0" : "=m" (ldbl) : : "st");
#endif
    return ldbl;
}

// main function
int main(void) {
    long double ldbl_pi = init_ldbl_pi();
    printf("%Le\n", ldbl_pi);
    return 0;
}
END CODE

Aug 31 '06 #1
34 Replies


"newsposter0123" <ne************@yahoo.com> writes:
I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80bit value of fldpi instruction,
and when compiling on non i386 systems for i386 targets, an emulation
value is calculated from available resources.
What's the value in getting pi from an i386 instruction? I would
suggest just writing out enough digits of pi to cover the desired
level of significance.
--
"What is appropriate for the master is not appropriate for the novice.
You must understand the Tao before transcending structure."
--The Tao of Programming
Aug 31 '06 #2


Ben Pfaff wrote:
When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80bit value of fldpi instruction,
and when compiling on non i386 systems for i386 targets, an emulation
value is calculated from available resources.

What's the value in getting pi from an i386 instruction? I would
suggest just writing out enough digits of pi to cover the desired
level of significance.
Hardware generally provides constants other than pi.

For one, when converting numbers. Ideally you would like to get the
original number back. Especially when adjusting bases. But that debate
rages elsewhere.

Provided the precision of the measurement device was known in advance,
the number of significant digits could be predetermined. If a
device with greater precision were used, then the constant would
require adjustment (and no longer be a constant).

In general, constants imply unrestricted usage: not limited to a
specific application, and not limiting the number of significant digits
in a calculation. Otherwise, why spend $$$ on precision equipment?

Aug 31 '06 #3

"newsposter0123" <ne************@yahoo.com> writes:
Ben Pfaff wrote:
When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80bit value of fldpi instruction,
and when compiling on non i386 systems for i386 targets, an emulation
value is calculated from available resources.

What's the value in getting pi from an i386 instruction? I would
suggest just writing out enough digits of pi to cover the desired
level of significance.

Hardware generally provides constants other than pi.

For one, when converting numbers. Ideally you would like to get the
original number back. Especially when adjusting bases. But that debate
rages elsewhere.

Provided the precision of the measurements device was known in advance,
then the number of significant digits could be predetermined. If a
device with greater precision were used, then the constant would be
require adjustment (and no longer be a constant).
But the i386 floating-point architecture also has a fixed
precision. When you obtain your device with greater precision,
you will have to change the code anyhow. So I don't see the
benefit.
--
int main(void){char p[]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.\
\n",*q="kl BIcNBFr.NKEzjwCIxNJC";int i=sizeof p/2;char *strchr();int putchar(\
);while(*q){i+=strchr(p,*q++)-p;if(i>=(int)sizeof p)i-=sizeof p-1;putchar(p[i]\
);}return 0;}
Aug 31 '06 #4


Ben Pfaff wrote:
"newsposter0123" <ne************@yahoo.com> writes:
Ben Pfaff wrote:
Provided the precision of the measurements device was known in advance,
then the number of significant digits could be predetermined. If a
device with greater precision were used, then the constant would be
require adjustment (and no longer be a constant).

But the i386 floating-point architecture also has a fixed
precision.
Yep. Some hardware provides a 128-bit long double.
When you obtain your device with greater precision,
you will have to change the code anyhow.
Depending on how the application was implemented, maybe. If the
calculations could be completed using the long double type, then no.
But the library/ api providing and/or using the constant would not
require adjustment.
>So I don't see the
benefit.
Much easier for the programmer to implement.

BTW, this is off topic, but if a measurement device (micrometer) is
graduated in thousandths of an inch and has a range of 0-2 inches, would
the measurement .053 contain 3 or 4 significant digits and be written as
5.300e-2 or 5.30e-2? I'm pretty sure the measurement 1.053 would have
4.

Aug 31 '06 #5

"newsposter0123" <ne************@yahoo.com> writes:
Ben Pfaff wrote:
>"newsposter0123" <ne************@yahoo.com> writes:
Provided the precision of the measurements device was known in advance,
then the number of significant digits could be predetermined. If a
device with greater precision were used, then the constant would be
require adjustment (and no longer be a constant).

But the i386 floating-point architecture also has a fixed
precision.
Yep. Some hardware provides an 128bit long double.
>When you obtain your device with greater precision,
you will have to change the code anyhow.

Depending on how the application was implemented, maybe. If the
calculations could be completed using the long double type, then no.
But the library/ api providing and/or using the constant would not
require adjustment.
You said, in the article that I originally replied to, that you
wanted to use an i386-specific instruction to obtain the value of
pi. You can't use that to obtain more than 80 bits of precision,
and you can't use it except on an i386 machine. Now you're
telling me that this will allow you to obtain greater precision
on a new device without adjusting the library or API.
>>So I don't see the
benefit.
Much easier for the programmer to implement.
I don't see how it's easier to use an i386-specific instruction
to obtain 80 bits of pi than to write a floating-point constant
that contains 80 bits of pi, and I certainly don't see how it
makes the code more extensible to better precision.
BTW, this is off topic, but if a measurment device (micrometer) is
graduated in thousands of an inch and has range 0-2 inch, would the
measurement .053 contain 3 or 4 significant digits and be written as
5.300e-2 or 5.30e-2? I'm pretty sure the measurement 1.053 would have
4.
Sounds like a trick question to me. I would think that .053 has
2 significant digits.
Aug 31 '06 #6

Ben Pfaff wrote:
>Ben Pfaff wrote:
>>"newsposter0123" <ne************@yahoo.com> writes:
You said, in the article that I originally replied to
Appreciate your reply.
that you
wanted to use an i386-specific instruction to obtain the value of
pi.
The specific example I used was for gcc running on i386, creating i386
targets. Generally, I was interested in how the C standards would view
such a method for initializing read-only (.rodata) constants.
You can't use that to obtain more than 80 bits of precision,
and you can't use it except on an i386 machine.
Exactly.
Now you're
telling me that this will allow you to obtain greater precision
on a new device without adjusting the library or API.
I don't have to adjust the constant pi in my calculator every time I
evaluate a new equation. I certainly would not want applications to
adjust the value of a library exported "constant" every time it was
used, which, if it were a read only constant would be illegal C.
I don't see how it's easier to use an i386-specific instruction
to obtain 80 bits of pi that to write a floating-point constant
that contains 80 bits of pi and I certainly don't see how it
makes the code more extensible to better precision.
I think it would be easier to write LDBL_PI than to write "3.14...." to
25 or 30 digits of accuracy (or whatever it takes to get the most
accurate value of the constant for the arch-dependent implementation of
long double). Plus, assuming the compiler uses the maximum precision
available for the long double type at compile time (either from
hardware or emulation), I doubt it could calculate (due to chopping,
rounding, etc.), based on a "3.14...." string, a value of pi more
accurate than a hardware value of pi. This would probably be true for
any other hardware-supplied constant.
Sounds like a trick question to me. I would think that .053 has
2 significant digits.
Hmmm. I guess the tenths 0 must be used to position the decimal point,
and is, therefore, not significant.

Aug 31 '06 #7

newsposter0123 wrote:
The code block below initialized a r/w variable (usually .bss) to the
value of pi. One, of many, problem is any linked compilation unit may
change the global variable.
Well, don't do that then.

Lame answer? Sure. You can't have your cake and eat it too, though -- being
accessible from anywhere is why global variables are used and shouldn't be.
Adjusting

// rodata
const long double const_pi=0.0;

lines to

// rodata
const long double const_pi=init_ldbl_pi();

would add additional protections, but is not legal C, and, rightly so,
fails on GCC/i386.
It's more correct to say that it's not strictly conforming C. It's legal for
an implementation to compile this, however, since an implementation may
allow additional forms of constant expressions.

Obviously, most won't, certainly not arbitrary functions.
Do any C standards define a means to initalize constants to values
obtained from hardware
What do you mean by "C standards"? If the latest standard doesn't have it,
the earlier ones probably won't, either. If you just mean "any standard
written that involves the C language", a la POSIX, that's a different story.

Obviously, "values obtained from hardware" would have a hard time getting
standardized by anything.
or does the total number of constants and/or cross-compiling prohibit it
completely (although, when cross-compiling, the compiler could create a
value using the resources available i.e. emulation)?
No idea what you're getting at, here.
I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80bit value of fldpi instruction,
and when compiling on non i386 systems for i386 targets, an emulation
value is calculated from available resources.
You don't really want a constant expression (which has a very specific
meaning in C), but a read-only value. C does not directly implement such
semantics. Many platforms will allow you to implement this one way or the
other (with linker directives or virtual memory protection) but otherwise
nothing but good discipline will help.

If you *do* really want a compile-time constant expression based on some
arbitrary platform-specific calculation, you're basically asking your
compiler for magic. It's not wise to expect that.

If you really, really want this, use code generation and put the constant in
a separate header. Of course, platform-specific code generation has its own
problems -- the main one here being that you have to implement and run a
separate utility just to get the actual program to compile. This basically
means you're extending the implementation itself to give you what you want,
a powerful but difficult-to-maintain and easily abused technique.

S.
Aug 31 '06 #8

Skarmander wrote:
newsposter0123 wrote:
>The code block below initialized a r/w variable (usually .bss) to the
value of pi. One, of many, problem is any linked compilation unit may
change the global variable.

Well, don't do that then.
I am trying hard to avoid it.
>would add additional protections, but is not legal C, and, rightly so,
fails on GCC/i386.
It's more correct to say that it's not strictly conforming C. It's legal
for an implementation to compile this, however, since an implementation
may allow additional forms of constant expressions.
Yes, I'm assuming that the implementation places all constants in a
separate section (probably ELF/COFF-specific here) that is protected
from writes at runtime and could not be adjusted even during
initialization.
>
Obviously, most won't, certainly not arbitrary functions.
Maybe using "hooks?"
>
>Do any C standards define a means to initalize constants to values
obtained from hardware

What do you mean by "C standards"?
Who knows? They all have lots of sections, and numerous
interpretations.
If the latest standard doesn't have
it, the earlier ones probably won't, either.
Allowed usage may be implemented at any time I guess.
If you just mean "any
standard written that involves the C language", a la POSIX, that's a
different story.
Correct.
Obviously, "values obtained from hardware" would have a hard time
getting standardized by anything.
Yes, more competent persons would have to write the exact wording,
assuming the standard does not currently allow it.
>or does the total number of constants and/or cross-compiling prohibit it
completely (although, when cross-compiling, the compiler could create a
value using the resources available i.e. emulation)?
No idea what you're getting at, here.
When cross-compiling, the hardware running the compiler and the compiler
that compiled the compiler (is that straight?) may use a different
implementation for long double and/or the constant. E.g., Microsoft just
uses double for long double and therefore may not be able to create a
"most accurate" constant.
>
>I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80bit value of fldpi instruction,
and when compiling on non i386 systems for i386 targets, an emulation
value is calculated from available resources.
You don't really want a constant expression (which has a very specific
meaning in C), but a read-only value.
Correct.
C does not directly implement such
semantics.
So it would have to be "added". An arduous process at best.
Many platforms will allow you to implement this one way or
the other (with linker directives or virtual memory protection) but
otherwise nothing but good discipline will help.
Right, but nothing portable, unfortunately.
If you *do* really want a compile-time constant expression based on some
arbitrary platform-specific calculation, you're basically asking your
compiler for magic. It's not wise to expect that.
I'm just happy with their current magic. But, bells and whistles, no
matter how minute, can make a big difference.
If you really, really want this, use code generation and put the
constant in a separate header. Of course, platform-specific code
generation has its own problems -- the main one here being that you have
to implement and run a separate utility just to get the actual program
to compile. This basically means you're extending the implementation
itself to give you what you want, a powerful but difficult-to-maintain
and easily abused technique.
So far, for specific platforms, I'm initializing an array of
sizeof(long double) bytes with the byte codes obtained from the
hardware instruction. This makes way too many assumptions to be
implemented in a portable way. Just with GCC, the -m96bit-long-double
and -m128bit-long-double options complicate things for i386 (although
the extra zeros in the 128-bit value shouldn't interfere with the
96-bit value).

Aug 31 '06 #9

newsposter0123 wrote:
Skarmander wrote:
>newsposter0123 wrote:
>>The code block below initialized a r/w variable (usually .bss) to the
value of pi. One, of many, problem is any linked compilation unit may
change the global variable.
Well, don't do that then.

I am trying hard to avoid it.
>>would add additional protections, but is not legal C, and, rightly so,
fails on GCC/i386.
It's more correct to say that it's not strictly conforming C. It's legal
for an implementation to compile this, however, since an implementation
may allow additional forms of constant expressions.

Yes, I'm assuming that the implementation places all constants in a
separate section (Probably ELF/COFF specific here) protected from
writes at runtime that could not be adjusted up during initialization
at runtime.
That's not enough, I'm afraid. The implementation would also have to fix up
any evaluations involving the not-so-constant expression as if they took
place at compile time. "A constant expression can be evaluated during
translation rather than runtime, and accordingly may be used in any place
that a constant may be." This is what "constant" actually means in C: an
expression that can be evaluated at translation time.

This is why it's impossible for a compiler to allow
const long x = foo();
with foo() an arbitrary function, because to solve this in general, the
compiler would have to be capable of deferring translation of the entire
program! A C *interpreter* could do it, but that's probably not what you're
looking for.
>Obviously, most won't, certainly not arbitrary functions.
Maybe using "hooks?"
Not for what C calls a constant expression, per the above. But you're
talking about a read-only expression, which could conceivably be done.
>>Do any C standards define a means to initalize constants to values
obtained from hardware
What do you mean by "C standards"?
Who knows? They all have lots of sections, and numerous
interpretations.
The intent is for the number of implementations to be unlimited, but the
number of interpretations to be quite limited. Preferably to one. The draft
copy of the C99 standard I have does a pretty good job.
>If the latest standard doesn't have
it, the earlier ones probably won't, either.
Allowed usage may be implemented at any time I guess.
C has very few "optional" parts, and those are mostly restricted to being
"implementation-defined" (so the actual behavior needs to be documented) or
they don't actually guarantee something useful will happen (because a
platform may not support it at all). Either is useless for your purpose.

In any case, what is contained in the standard and what is provided by
implementations are conceptually different things. If the standard doesn't
have it, it can't be done portably; if the standard has it, it may be done
portably. In this case the standard doesn't have it.
>Obviously, "values obtained from hardware" would have a hard time
getting standardized by anything.
Yes, more competent persons would have to write the exact wording,
assuming the standard does not currently allow it.
What I meant was that not all the competence in the world could condense
this into something usable. The concept is much too broad.
>>or does the total number of constants and/or cross-compiling prohibit it
completely (although, when cross-compiling, the compiler could create a
value using the resources available i.e. emulation)?
No idea what you're getting at, here.

When cross compiling the hardware running the compiler and the compiler
that compiled the compiler (is that straight?) may use a different
implementation for long double and/or the constant. Ex Microsoft just
uses double as long doubles and therefore may not be able to create a
"most accurate" constant.
I now have some idea what you're getting at, but it just illustrates why
what you want is impractical.

Implementations do support implementation-specific constant expressions,
namely exactly those required by the standard. For example, INT_MAX (from
<limits.h>) must be the maximum value that can be stored in an int; it has
to be a constant expression.

That's quite reasonable, but how reasonable is "the value of pi calculated
to as many digits as this implementation supports"? This idea clearly does
not generalize.

For the specific example I mentioned, some implementations define a constant
M_PI which evaluates to a double representing some approximation of pi. I
don't know if it's governed by any standard (not the C standard, in any
case), and even if it is that may not guarantee that the maximum precision
be used, and even if it *does* it may not be practical to provide such a
definition, since it's impossible to implement portably. (Many C library
implementations are at least semi-portable.)
>>I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80bit value of fldpi instruction,
and when compiling on non i386 systems for i386 targets, an emulation
value is calculated from available resources.
You don't really want a constant expression (which has a very specific
meaning in C), but a read-only value.
Correct.
>C does not directly implement such
semantics.
So it would have to be "added". An arduous process at best.
I don't know. C++ did it. Is using C++ an option for you?

C++ has different semantics, where code like

int foo(void);
const int x = foo();

is allowed. (Note that "foo()" is still not a constant expression here, nor
is "x"; the value of x is just read-only after initialization.)
>Many platforms will allow you to implement this one way or
the other (with linker directives or virtual memory protection) but
otherwise nothing but good discipline will help.
Right, but nothing portable, unfortunately.
If you're satisfied with a read-only runtime value, then it's easy enough to
implement in C: just write a function that returns the value, or have a
doubly-const pointer to it (the latter can be subverted by people who really
want to, of course, but not silently).

static double const_pi_intern;
const double * const const_pi = &const_pi_intern;

void initialize_const_pi(void) {
/* ... */
}

or

double const_pi(void) {
static double const_pi_intern = 0.0;
if (const_pi_intern == 0.0) {
/* Initialize */
}
return const_pi_intern;
}

Neither solution gives you a compile-time constant and both may involve some
runtime overhead (which the compiler may be able to optimize away), so then
the question becomes: what's more important?
>If you really, really want this, use code generation and put the
constant in a separate header. Of course, platform-specific code
generation has its own problems -- the main one here being that you have
to implement and run a separate utility just to get the actual program
to compile. This basically means you're extending the implementation
itself to give you what you want, a powerful but difficult-to-maintain
and easily abused technique.

So far, for specific platforms, I'm initializing an array of
sizeof(long double) bytes with the byte codes obtained from the
hardware instruction. This makes way to many assumptions to be
implemented in a portable way. Just with GCC, the -m96bit-long-double
and -m128bit-long-double options complicate things for i386 (although
the extra zeros in the 128bit shouldn't interfere with the 96bit
value).
Just mark the function that has to initialize the value as non-portable and
requiring a separate implementation on each platform; you can fill the value
with a nonsense constant and barf on startup if the initialization function
isn't implemented.

You seem to be trying to solve the problem of "how to implement something
that cannot be done portably in a portable way". You don't; you isolate it,
flag it as a requirement and move on.

In this case, simply requiring that a particular macro expand to the
constant pi with a particular platform-dependent precision doesn't seem too
much of a burden on would-be porters.

S.
Aug 31 '06 #10

On 31 Aug 2006 08:26:07 -0700, "newsposter0123"
<ne************@yahoo.com> wrote in comp.lang.c:
The code block below initialized a r/w variable (usually .bss) to the
value of pi. One, of many, problem is any linked compilation unit may
change the global variable. Adjusting

// rodata
const long double const_pi=0.0;

lines to

// rodata
const long double const_pi=init_ldbl_pi();

would add additional protections, but is not legal C, and, rightly so,
fails on GCC/i386.

Do any C standards define a means to initalize constants to values
obtained from hardware, or does the total number of constants and/or
cross-compiling prohibit it completely (although, when cross-compiling,
the compiler could create a value using the resources available i.e.
emulation)?

I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80bit value of fldpi instruction,
and when compiling on non i386 systems for i386 targets, an emulation
value is calculated from available resources.

BEGIN CODE
#include <stdio.h>

// prototype
long double init_ldbl_pi();

// rodata
const long double const_pi=0.0;
As others have pointed out, you just can't do this in C. But there
are two workarounds, this one if you can live with a level of pointer
redirection and an initialization function call in main:

file: global_const.h:
extern const long double *const_pi;
====

file: global_const.c:
static long double const_pi_value;
const long double *const_pi = &const_pi_value;

void init_const_pi(void)
{
const_pi_value = call_some_func();
}

....of course, you must be sure to call the initialization function
before dereferencing the pointer to get the value.

There's another way that eliminates the initialization function,
somewhat reminiscent of the C++ singleton pattern:

file: global_const.h:
long double get_const_pi(void);
====

file: global_const.c:
static long double const_pi_value;

long double get_const_pi(void)
{
if (const_pi_value == 0.0)
{
const_pi_value = call_some_func();
}
return const_pi_value;
}

In both cases, the initialization function can perform some additional
checking on the first call to decide whether to call some function or
to substitute a constant.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Sep 1 '06 #11

newsposter0123 wrote:
I'm hoping something like this is possible...

// rodata
const long double const_pi= LDBL_PI;

When compiling on i386 systems for i386 targets and using the
coprocessor, LDBL_PI evaluates to the 80bit value of fldpi instruction,
and when compiling on non i386 systems for i386 targets, an emulation
value is calculated from available resources.
The straightforward way to access the constant pi is to define a macro
with enough digits to satisfy any potential target. For targets with
less precision, the number will be rounded to the approximation for the
specified type for the implementation. You could initialize a const
variable with that value if you wish, but there is no particular
advantage to doing that, as opposed to using a defined constant.

In this case I think the simplest way works best. It is
standard-conforming, obvious, (almost always) the most accurate, and the
most efficient.

--
Thad
Sep 1 '06 #12

Skarmander wrote:
>Yes, I'm assuming that the implementation places all constants in a
separate section (Probably ELF/COFF specific here) protected from
writes at runtime that could not be adjusted up during initialization
at runtime.
That's not enough, I'm afraid. The implementation would also have to fix
up any evaluations involving the not-so-constant expression as if they
took place at compile time. "A constant expression can be evaluated
during translation rather than runtime, and accordingly may be used in
any place that a constant may be." This is what "constant" actually
means in C: an expression that can be evaluated at translation time.

This is why it's impossible for a compiler to allow
const long x = foo();
with foo() an arbitrary function, because to solve this in general, the
compiler would have to be capable of deferring translation of the entire
program! A C *interpreter* could do it, but that's probably not what
you're looking for.
A compiler could provide a LDBL_PI predefined macro (similar to
__LINE__, __FUNCTION__, etc.), whose value was (1) based on the actual
hardware value, if compiling on a system that provides it, or, (2)
based on a stored value, if one was available for the target, or, (3)
based on best approximation, calculated from available resources. With
the number of constants available in the universe, this could lead to
considerable bloat. The total number is reasonable if limited to CPU
hardware supplied constants.
I now have some idea what you're getting at, but it just illustrates why
what you want is impractical.
Well, the
const long double ldbdl_pi=some_function();
part definitely is.
I don't know. C++ did it. Is using C++ an option for you?
Yes, but I would like to remain with C.
If you're satisfied with a read-only runtime value
Thats the idea.
then it's easy
enough to implement in C
static double const_pi_intern;
const double * const const_pi = &const_pi_intern;

void initialize_const_pi() {
/* ... */
}
or

double const_pi() {
static double const_pi_intern = 0.0;
if (const_pi_intern == 0.0) {
/* Initialize */
}
return const_pi_intern;
}
What I am doing now is something like this:

#include <stdio.h>

#if defined(_MSC_VER) && _MSC_VER >= 1200
/* MSVC: long double is the same as double (8 bytes) */
const union {
    unsigned char v[sizeof(long double)];
    long double d;
} g_ldbl_pi = {0x18,0x2D,0x44,0x54,0xFB,0x21,0x09,0x40};
#define ldbl_pi g_ldbl_pi.d
#elif (defined __GNUC__) && (defined __i386__)
/* GCC/i386: 80-bit value, 12-byte storage (-m96bit-long-double) */
const union {
    unsigned char v[sizeof(long double)];
    long double d;
} g_ldbl_pi =
{0x35,0xc2,0x68,0x21,0xa2,0xda,0x0f,0xc9,0x00,0x40,0xea,0xbf};
#define ldbl_pi g_ldbl_pi.d
#else
const long double ldbl_pi = 3.14;
#endif

/* main function */
int main(void) {
    printf("%Le\n", ldbl_pi);
    return 0;
}
In this case, simply requiring that a particular macro expand to the
constant pi with a particular platform-dependent precision doesn't seem
too much of a burden on would-be porters.
Or even for addition to compilers.

Sep 1 '06 #13

In article <11**********************@m79g2000cwm.googlegroups.com>,
newsposter0123 <ne************@yahoo.com> wrote:
>Skarmander wrote:
>In this case, simply requiring that a particular macro expand to the
constant pi with a particular platform-dependent precision doesn't seem
too much of a burden on would-be porters.
>Or even for addition to compilers.
You've been talking about hardware PI and so on, but you have
neglected to discuss rounding modes. If a simple #define is not
sufficient for your purposes, then chances are that a single PI
is not sufficient for your purpose: you would likely want
"PI in the current rounding mode". Which becomes more
problematic as a compile time constant if rounding modes can
be changed by program action.

--
All is vanity. -- Ecclesiastes
Sep 1 '06 #14

newsposter0123 wrote:
Skarmander wrote:
<snip>
>In this case, simply requiring that a particular macro expand to the
constant pi with a particular platform-dependent precision doesn't seem
too much of a burden on would-be porters.
Or even for addition to compilers.
Sure. And now I want e, Planck's constant and the square root of 12 to some
platform-defined precision, and the guy behind me has a whole class of
expressions he'd like to be evaluated at compile time...

It has to stop somewhere. Like I said, M_PI is provided on many platforms,
though it doesn't come with any guarantees as far as I'm aware, and it
usually has double precision, not long double precision.

S.
Sep 1 '06 #15

Walter Roberson wrote:
You've been talking about hardware PI and so on, but you have
neglected to discuss rounding modes.
This could be a prolonged discussion on a math library group, but, C
provides simple long double operators (*, /, etc.). Presumably the
constants would be implemented for the target/mode used for these
operations. On the x87, the constant value is independent of the
rounding/ chopping mode, "hardwired in", and has full accuracy. I'm not
sure about other hardware.

Sep 1 '06 #16

Skarmander wrote:
newsposter0123 wrote:
>Skarmander wrote:
<snip>
>>In this case, simply requiring that a particular macro expand to the
constant pi with a particular platform-dependent precision doesn't seem
too much of a burden on would-be porters.
Or even for addition to compilers.
Sure. And now I want e, Planck's constant and the square root of 12 to
some platform-defined precision, and the guy behind me has a whole class
of expressions he'd like to be evaluated at compile time...

It has to stop somewhere. Like I said, M_PI is provided on many
platforms, though it doesn't come with any guarantees as far as I'm
aware, and it usually has double precision, not long double precision.
I'm beginning to think that using available math libraries is the best
solution if doubles do not work for a specific application. Long
doubles just do not have the same portability yet.

Sep 1 '06 #17

"newsposter0123" <ne************@yahoo.com> writes:
I'm beginning to think that using available math libraries is the best
solution if doubles do not work for a specific application. Long
doubles just do not have the same portability yet.
Long double has been in the C standard since 1989. If they
aren't portable now, by your standards, then they may never be.
--
int main(void){char p[]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuv wxyz.\
\n",*q="kl BIcNBFr.NKEzjwCIxNJC";int i=sizeof p/2;char *strchr();int putchar(\
);while(*q){i+=strchr(p,*q++)-p;if(i>=(int)sizeof p)i-=sizeof p-1;putchar(p[i]\
);}return 0;}
Sep 1 '06 #18

Skarmander wrote:
newsposter0123 wrote:
>Skarmander wrote:
<snip>
>>In this case, simply requiring that a particular macro expand to the
constant pi with a particular platform-dependent precision doesn't seem
too much of a burden on would-be porters.

Or even for addition to compilers.
Sure. And now I want e, Planck's constant and the square root of 12 to
some platform-defined precision, and the guy behind me has a whole class
of expressions he'd like to be evaluated at compile time...

It has to stop somewhere. Like I said, M_PI is provided on many
platforms, though it doesn't come with any guarantees as far as I'm
aware, and it usually has double precision, not long double precision.

S.
Yes, and that can be a problem. One of the ways to solve that is

#define M_PI 3.14159265358979323846
#define M_PIL 3.1415926535897932384626433832795029L

Since those identifiers are not standard, it is difficult to propose
this. In lcc-win32 this is only defined if you are NOT in -ansic
mode.

But obviously this MUST stop somewhere as you said. One of the
best solutions would be to define all constants in long double
precision:
#define M_PI_2 1.5707963267948966192313216916397514L /* pi/2 */
but that could cause expressions to be promoted to long double
precision and evaluated in higher precision.

To avoid that, you can write ((double)M_PI_2), but this is difficult
too, since many compilers (lcc-win32 included) sometimes do not
respect this cast.

Yes numerical analysis and numerical software is quite a mess.

jacob
Sep 1 '06 #19

Ben Pfaff wrote:
"newsposter0123" <ne************@yahoo.com> writes:
>I'm beginning to think that using available math libraries is the
best solution if doubles do not work for a specific application.
Long doubles just do not have the same portability yet.

Long double has been in the C standard since 1989. If they
aren't portable now, by your standards, then they may never be.
#define PI (355.0 / 155) /* to 7 sig. digits */

:-)

--
Some informative links:
news:news.announce.newusers
http://www.geocities.com/nnqweb/
http://www.catb.org/~esr/faqs/smart-questions.html
http://www.caliburn.nl/topposting.html
http://www.netmeister.org/news/learn2quote.html
Sep 1 '06 #20

CBFalconer wrote:
>
#define PI (355.0 / 155) /* to 7 sig. digits */
The C standard used to require all constants to be doubles. It probably
now requires them to be long doubles. However, sizeof(float) <
sizeof(double) < sizeof(long double) is not guaranteed. Nor does it
require the compiler itself to use a specific size of long double. In
the extreme case of the compiler using 32bit floats/doubles/long
doubles (that generally have precision 6 digits of accuracy) to create
64 or even 128 bit long doubles, this would only result in (at best) 5
digits of accuracy. Most modern, mainstream compilers would use >=
64bit doubles/ long doubles and create the proper binary representation
that would convert to 7 significant digits in base 10, and print
accordingly. This is certainly the means to implement the default (3)
case in previous post.

Sep 2 '06 #21

newsposter0123 wrote:
CBFalconer wrote:
>#define PI (355.0 / 155) /* to 7 sig. digits */
The C standard used to require all constants to be doubles. It probably
now requires them to be long doubles.
It doesn't, even if we restrict ourselves to floating-point constants. You
must not have been reading this thread closely, since an example was
mentioned elsewhere by Jacob Navia:

#define M_PI 3.14159265358979323846
#define M_PIL 3.1415926535897932384626433832795029L

The "f" suffix forms float constants, the "l" suffix long double constants,
and no suffix forms double constants.
However, sizeof(float) < sizeof(double) < sizeof(long double) is not
guaranteed.
Correct, but DBL_DIG >= FLT_DIG *is* guaranteed ("double" must have at least
10 digits precision; "float" at least 6).
Nor does it require the compiler itself to use a specific size of long
double.
It wouldn't be logical to do so either; some platforms don't have an
extended floating-point type. Those that do wouldn't be served well by an
arbitrary size demand (floating-point types show much more variation than
the integral types).

C99 adds a way for implementations to signal that they implement IEC 60559
for floating-point. This gives you a few more guarantees about
floating-point; in particular, "long double" is required to be an IEEE
extended type (bigger than double, in any case).
In the extreme case of the compiler using 32bit floats/doubles/long
doubles (that generally have precision 6 digits of accuracy) to create 64
or even 128 bit long doubles, this would only result in (at best) 5
digits of accuracy.
Your statement is too ambiguous to make much sense of, but I suspect adding
a suffix would solve whatever problems you allude to.
Most modern, mainstream compilers would use >= 64bit doubles/ long
doubles and create the proper binary representation that would convert to
7 significant digits in base 10, and print accordingly. This is certainly
the means to implement the default (3) case in previous post.
I still get the impression that you want guarantees the standard doesn't
give you for a very good reason. The canonical way of solving that is to
demand those guarantees from the implementation as a prerequisite for
portability (to increase portability, the guarantees should be "at least" to
allow the program to fit itself to a platform).

If you need a minimum precision for "long double", say so in your
requirements. If you need a compile-time floating-point constant
representing the value of pi in some platform-dependent precision, say so in
your requirements. If you don't *need* anything but would like to *use*
optional features you expect most platforms to have, write your own API for
them and make its implementation optional (with a way of signalling that
it's not implemented).

From your previous posts, I get the impression that you're demanding
portability where the standard cannot reasonably give it to you. One aspect
of C programming is that C quite possibly has the widest platform base of
any programming language, and its philosophy (part of it, at least) is to
give programmers a language that can be implemented efficiently and can be
used to implement efficient, portable programs. To achieve this it
guarantees less than programmers would like; that's a feature, not a bug.

S.
Sep 2 '06 #22

Skarmander wrote:
newsposter0123 wrote:
>CBFalconer wrote:
>>#define PI (355.0 / 155) /* to 7 sig. digits */
The C standard used to require all constants to be doubles.
I should qualify this: (now obsolete) K&R C required all floating
constants to be double.
It probably
>now requires them to be long doubles.

It doesn't, even if we restrict ourselves to floating-point constants.
Of course floating point constants.
You must not have been reading this thread closely, since an example was
mentioned elsewhere by Jacob Navia:

#define M_PI 3.14159265358979323846
#define M_PIL 3.1415926535897932384626433832795029L

The "f" suffix forms float constants, the "l" suffix long double
constants, and no suffix forms double constants.
Yep, but the C99 standard only recommends that the compile time
creation of floating point constants at least match the run-time
library functions/conversions. If C99 required that compile time
creation of floating point constants use a precision greater than the
maximum available at run-time, then any floating point constant could
be represented, in binary, accurate to the least significant bit (of
the mantissa).
>
>However, sizeof(float) sizeof(double) sizeof(long dobule) is not
guaranteed.

Correct, but DBL_DIG >= FLT_DIG *is* guaranteed ("double" must have at
least 10 digits precision; "float" at least 6).
Just found that in section 5.2.4 of C99. And, LDBL_DIG is guaranteed to
at least equal DBL_DIG. This guarantees that (in theory anyway) most
accurate constants of type float can be created, at run-time, from type
long double.
C99 adds a way for implementations to signal that they implement IEC
60559 for floating-point. This gives you a few more guarantees about
floating-point; in particular, "long double" is required to be an IEEE
extended type (bigger than double, in any case).
And for complex numbers, but thats still yet another flame on some
other group.
I still get the impression that you want guarantees the standard doesn't
give you for a very good reason. The canonical way of solving that is to
demand those guarantees from the implementation as a prerequisite for
portability (to increase portability, the guarantees should be "at
least" to allow the program to fit itself to a platform).

If you need a minimum precision for "long double", say so in your
requirements. If you need a compile-time floating-point constant
representing the value of pi in some platform-dependent precision, say
so in your requirements. If you don't *need* anything but would like to
*use* optional features you expect most platforms to have, write your
own API for them and make its implementation optional (with a way of
signalling that it's not implemented).

From your previous posts, I get the impression that you're demanding
portability where the standard cannot reasonably give it to you. One
aspect of C programming is that C quite possibly has the widest platform
base of any programming language, and its philosophy (part of it, at
least) is to give programmers a language that can be implemented
efficiently and can be used to implement efficient, portable programs.
To achieve this it guarantees less than programmers would like; that's a
feature, not a bug.
I'm not really demanding guarantees. Just looking for a means of
obtaining a most accurate read-only long double constant for a given
run-time environment in a portable way.

Sep 2 '06 #23

Skarmander <in*****@dontmailme.com> writes:
[...]
This is why it's impossible for a compiler to allow
const long x = foo();
with foo() an arbitrary function, because to solve this in general,
the compiler would have to be capable of deferring translation of the
entire program! A C *interpreter* could do it, but that's probably not
what you're looking for.
Ahem.
================================
#include <stdio.h>

long foo(void)
{
return 42;
}

int main(void)
{
const long x = foo();
printf("x = %ld\n", x);
return 0;
}
================================

The output is:

x = 42

"const" does *not* mean "constant" in C; it means read-only. The
const qualifier on the declaration of x only means that assigning a
value to x is a constraint violation, and modifying it indirectly by
subverting the "const" qualification:

*(long*)&x = 43;

invokes undefined behavior. It does not require the expression to be
evaluated at compile time (though, as always, the compiler is free to
do so by the "as-if" rule).

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Sep 2 '06 #24

CBFalconer <cb********@yahoo.com> writes:
Ben Pfaff wrote:
>"newsposter0123" <ne************@yahoo.com> writes:
>>I'm beginning to think that using available math libraries is the
best solution if doubles do not work for a specific application.
Long doubles just do not have the same portability yet.

Long double has been in the C standard since 1989. If they
aren't portable now, by your standards, then they may never be.

#define PI (355.0 / 155) /* to 7 sig. digits */

:-)
Sure, if by "7" you mean "1".

I think you're looking for (355.0 / 113).

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Sep 2 '06 #25

Keith Thompson wrote:
Skarmander <in*****@dontmailme.comwrites:
[...]
>This is why it's impossible for a compiler to allow
const long x = foo();
with foo() an arbitrary function, because to solve this in general,
the compiler would have to be capable of deferring translation of the
entire program! A C *interpreter* could do it, but that's probably not
what you're looking for.

Ahem.
================================
#include <stdio.h>

long foo(void)
{
return 42;
}

int main(void)
{
const long x = foo();
My bad. I should have made clear that we were still talking about objects
with static storage duration, hence the need for the initializer to be a
constant expression. (I was exactly discussing the confusion between
constant expressions and const-qualified objects.)

S.
Sep 2 '06 #26

In article <ln************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:
>#define PI (355.0 / 155) /* to 7 sig. digits */
>Sure, if by "7" you mean "1".
Um, 0 actually. 355/155 is about 2.3.

-- Richard
Sep 2 '06 #27

ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <ln************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:
>>#define PI (355.0 / 155) /* to 7 sig. digits */
>>Sure, if by "7" you mean "1".

Um, 0 actually. 355/155 is about 2.3.
Pedant!

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Sep 2 '06 #28

Richard Tobin wrote:
In article <ln************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:
>>#define PI (355.0 / 155) /* to 7 sig. digits */
>Sure, if by "7" you mean "1".

Um, 0 actually. 355/155 is about 2.3.

-- Richard
2.2903225806451615

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Sep 3 '06 #29

Keith Thompson wrote:
CBFalconer <cb********@yahoo.com> writes:
>Ben Pfaff wrote:
>>"newsposter0123" <ne************@yahoo.com> writes:

I'm beginning to think that using available math libraries is the
best solution if doubles do not work for a specific application.
Long doubles just do not have the same portability yet.

Long double has been in the C standard since 1989. If they
aren't portable now, by your standards, then they may never be.

#define PI (355.0 / 155) /* to 7 sig. digits */

:-)

Sure, if by "7" you mean "1".

I think you're looking for (355.0 / 113).
My face now bears a close resemblance to a beet. That'll teach me
to rely on memory.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Sep 3 '06 #30

Keith Thompson said:
ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
>In article <ln************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:
>>>#define PI (355.0 / 155) /* to 7 sig. digits */
>>>Sure, if by "7" you mean "1".

Um, 0 actually. 355/155 is about 2.3.

Pedant!
If he were a pedant, he'd have said: 355/155 is 2

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Sep 3 '06 #31

Richard Heathfield wrote:
Keith Thompson said:
>ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
>>In article <ln************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:

#define PI (355.0 / 155) /* to 7 sig. digits */
Sure, if by "7" you mean "1".
Um, 0 actually. 355/155 is about 2.3.
Pedant!

If he were a pedant, he'd have said: 355/155 is 2
Which does not make him incorrect since he only said "about" and 23
could be considered to be about 2
int about(double d)
{
return d;
}
--
Flash Gordon.
Sep 3 '06 #32

Flash Gordon said:
Richard Heathfield wrote:
>Keith Thompson said:
>>ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
<snip>
>>>355/155 is about 2.3.
Pedant!

If he were a pedant, he'd have said: 355/155 is 2

Which does not make him incorrect since he only said "about" and 23
could be considered to be about 2
I suppose we could consider 23 to be about 2, if we're prepared to ignore an
order of magnitude. :-)

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Sep 3 '06 #33

Richard Heathfield wrote:
Flash Gordon said:
>Richard Heathfield wrote:
>>Keith Thompson said:

ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:

<snip>
>>>>355/155 is about 2.3.
Pedant!
If he were a pedant, he'd have said: 355/155 is 2
Which does not make him incorrect since he only said "about" and 23
could be considered to be about 2

I suppose we could consider 23 to be about 2, if we're prepared to ignore an
order of magnitude. :-)
OK, which one of you stole my decimal point? :-)
--
Flash Gordon
Sep 3 '06 #34

The peccant 155/113 cacophony momentarily obscured the discussion. To
which I add that the difference is 42, not 23, although it is in the
same order of magnitude, but the decimal point is still lost. Perhaps a
matter of improper reference position?

Skarmander wrote:
I should have made clear that we were still talking about
objects with static storage duration, hence the need for the initializer
to be a constant expression. (I was exactly discussing the confusion
between constant expressions and const-qualified objects.)
The following is C99/May 2005?

Point well taken. By using a const-qualified floating point object, the
constant's value/precision is limited by the more restrictive of the
run-time environment's implementation or the translation environment's
evaluation, even if the hardware, either during translation or
execution, provides a greater precision. Because of this limitation,
any expression involving conversion may never result in a "most
accurate floating point constant". This rules out base 10 floating
point numbers, and probably hexadecimal floating point numbers.

A "most accurate const-qualified floating point type" could be
initialized using a normalized binary float number representation.
Provided the exponent was within range, it could be loaded directly
into the target representation of the const-qualified floating point
type, without alteration. Any least significant excess bits of the
fractional part would just be truncated. If this is already available,
an example would be nice.

However, in order to ensure that "most accurate floating point
constant" was used during the evaluation of an expression, a keyword
identifier would have to be added, similar to __func__, that, when
evaluated, evaluates to the most accurate representation of the
floating point constant available.

BEGIN SPECIFIC MSVC++ 32BIT EXAMPLE

The MSVC++ 32 bit compilers define floating point type double and
floating point type long double. Both are the same size,64bits, with
the same precision, which is less than the maximum evaluable if using a
387 coprocessor, which provides 80bits. Under the theory that constants
should interfere with the precision of an expression during evaluation
as little as possible, the more accurate the constant, the better.

long double a, b=113.0, c=LDBL_PI, d;
a = LDBL_PI/b;
d = c / b;

may produce different results depending on the value of b. When
evaluating the "a" expression, the 80bit value is used for pi, but when
evaluating the "d" expression, only 64bits are used. By default, when
using the 387, all floating points, within the coprocessor, are 80bits.

END SPECIFIC MSVC++ 32BIT EXAMPLE

BEGIN GCC/i386 32BIT EXAMPLE
For the SYSV i386 ABI implementations (GCC), the full 80bits is used
for long double, so both the "a" and "d" expressions should evaluate to
equal results.

However, the statements:
float a, b=113.0, c=LDBL_PI;
a = LDBL_PI/b;
a = c / b;

may produce different results, depending on the value of b and how the
translation environment converts a long double to float.
END GCC/i386 32BIT EXAMPLE

I guess two-cents just ain't worth what it used to be, and is now worth
non-cents.

Sep 5 '06 #35
