On Apr 20, 8:32 pm, Steve Folly <moderatedn...@spfweb.co.uk> wrote:
I had a problem in my code recently which turned out to be the
the "static initialization order fiasco" problem
(<http://www.parashift.com/c++-faq-lite/ctors.html#faq-10.12>)
That problem normally only affects types with non-trivial
constructors. Static initialization is guaranteed to take place
before dynamic.
The FAQ section describes a solution using methods returning
references to static objects.
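The FAQ's "construct on first use" idiom looks roughly like this (a sketch with illustrative names, not the poster's actual code):

```cpp
// Construct-on-first-use: the function-local static is initialized
// the first time control passes through its declaration, so it is
// always ready before any caller reads it, regardless of the order
// in which translation units are initialized.
inline const double& pi()
{
    static const double value = 3.14159265358979323846;
    return value;
}

inline const double& degreesToRadians()
{
    // Safe even when called during another object's static
    // initialization: calling pi() forces its initialization on demand.
    static const double value = pi() / 180.0;
    return value;
}
```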
But consider:
Maths.h:
namespace Maths
{
const double Pi = 3.14159265358979323846;
No problem: static initialization.
const double DegreesToRadians = Pi / 180.0;
The problem here is the "variable" Pi. Basically, the
standard requires initialization with constant expressions
to occur before any dynamic initialization. It then defines
integral constant expressions (which allow for "const
variables and static data members of integral or enumeration
types initialized with constant expressions"); it then goes
on to define other constant expressions (which can only be
used for the purpose of non-local static object
initialization), amongst which arithmetic constant
expressions: according to the standard (§5.19/3):
An arithmetic constant expression shall satisfy the
requirements for an integral constant expression, except
that
-- floating literals need not be cast to integral or
enumeration type, and
-- conversions to floating point types are permitted.
Note that your expressions do not qualify, because they
contain a const variable which is *not* of integral or
enumeration type.
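A small illustration of that distinction, under the C++03 rules being discussed (the names here are my own):

```cpp
// A const int initialized with a constant expression IS an integral
// constant expression; a const double is NOT, because of its type.
const int    ArraySize = 10;
const double Pi        = 3.14159265358979323846;

// Usable where the language demands an integral constant expression:
int table[ArraySize];

// By contrast, something like `int bad[(int)Pi * 2];` would be
// ill-formed under these rules: a const variable of floating-point
// type does not form an integral constant expression, even cast.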
This looks like an oversight to me. If:
const double DegreesToRadians = 3.14159265358979323846 / 180.0;
requires static initialization, I don't see why your
expression shouldn't. (Historically, of course, C didn't
allow the use of const variables in this context.) On the
other hand... the precision used in floating point
arithmetic like the above is not specified---all that is
guaranteed is that it is at least as much as a double.
Whereas when you assign to a variable, the precision is
guaranteed to be exactly that of the type of the variable.
So that allowing const variables would require that a cross
compiler emulate exactly the floating point of the target
machine; the above, however, only requires some floating
point of as much or greater precision.
const double RadiansToDegrees = 1.0 / DegreesToRadians;
const double RadiansToThousandthsOfMinutes = 180.0 / Pi * 60.0 * 1000.0;
const double FeetToMetres = 0.3048;
const double MetresToFeet = 1.0 / FeetToMetres;
}
The same comment applies to the other constants, of course.
[...]
My problem arose because Maths::Pi had not been initialised before
Foo::x; Foo::x was equal to zero. I was probably lucky it was zero at
all; it could have been anything, I guess?
No. Objects with static lifetime are guaranteed to be
initialized with 0 (converted to the proper type).
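That guarantee can be seen directly (a minimal sketch; the names are mine):

```cpp
// Zero-initialization: every object with static storage duration is
// zero-initialized before any other initialization takes place, so a
// static whose dynamic initializer has not yet run reads as 0
// (converted to its type), never as indeterminate garbage.
double fileScopeDouble;  // guaranteed 0.0 at program start
int    fileScopeInt;     // guaranteed 0
```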
The FAQ way to solve this would be to change the constants to functions?
That's the classical solution.
I don't want to change them to macros, but the thought of having to change
these into functions just seems... I dunno... overkill just for the sake of
several constants? (Especially when quite a lot of code uses these
constants, and up until now I think we've been *extremely* lucky!)
I doubt that there's really much difference between an
inline function and a const variable defined in another
translation unit.
Are functions my best way out of this predicament?
Probably. Inline functions will also optimize better, since
the compiler will be able to see the actual value in all of
the translation units.
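Concretely, the constants could become inline functions in the header (a sketch, keeping the original names):

```cpp
// Inline functions defined in the header: every translation unit sees
// the body, so the compiler can fold the value like a literal, and no
// static initialization order is involved at all.
namespace Maths
{
    inline double Pi()               { return 3.14159265358979323846; }
    inline double DegreesToRadians() { return Pi() / 180.0; }
    inline double RadiansToDegrees() { return 1.0 / DegreesToRadians(); }
    inline double FeetToMetres()     { return 0.3048; }
    inline double MetresToFeet()     { return 1.0 / FeetToMetres(); }
}
```

Call sites change only by gaining parentheses, e.g. `angle * Maths::DegreesToRadians()`.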
Otherwise, you could define the values as macros (using only
floating point literals and other macros) in the compilation
unit which defines the variables, something along the lines
of:
namespace Maths
{
#define PI 3.14159265358979323846
const double Pi = PI;
#define DEGREES_TO_RADIANS (PI / 180.0)
const double DegreesToRadians = DEGREES_TO_RADIANS;
// ...
}
Since the macros wouldn't be in a header, the namespace
pollution is limited.
The thought occurs that members of the numeric_limits<> classes are
faced with the same problem?
Are they? If you look at them carefully, you'll see that
the "constants" which are not necessarily of integral type
(i.e. whose type depends on the instantiation) are in fact
functions. Probably for this very reason. (Although
frankly, a good implementation could arrange for the values
to be expressed as literals. I think it's more a means of
allowing an exact bit pattern to be specified for floating
point values. Something along the lines of:
template<>
double
numeric_limits<double>::max()
{
    static unsigned char r[] = { 0x7F, 0xEF, 0xFF, 0xFF,
                                 0xFF, 0xFF, 0xFF, 0xFF };
    return *reinterpret_cast<double*>(r);
}
This was probably felt to be more reliable than trying to
express it as a decimal literal with type double.)
Is there still the danger here that the numeric_limits<>
static members might not be initialized themselves when used to initialize
other static data?
No. Since the non-functions all have integral type.
--
James Kanze (Gabi Software) email:
ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34