
optimization of static data initialization

I've compiled this code:

const int x0 = 10;
const int x1 = 20;
const int x2 = 30;

int x[] = { x2, x0, x1 };

struct Y
{
int i;
double d;
};

const Y y0 = {1, 1.0};
const Y y1 = {2, 2.0};
const Y y2 = {3, 3.0};

Y y[] = { y1, y0, y2 };

int z[] = { y1.i, y0.i, y2.i };

with a couple of compilers, with the highest possible optimization,
and looked at the disassembled object code. With both compilers,
only the x array is initialized from the load image. Move instructions
are generated to initialize both y and z. Why is it hard for the
compiler to initialize all of this from the load image, without having
to execute any init code at run time?
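
For reference, one way to reproduce the experiment, assuming g++ on a
Unix-like system (the file name init.cpp is just a placeholder):

g++ -O2 -c init.cpp          # compile with optimization, no linking
objdump -d init.o            # disassembly: run-time move instructions, if any
objdump -s -j .data init.o   # raw .data contents: values already in the load image

Any compiler plus a disassembler should show the same pattern.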

Jul 14 '06 #1

wk****@yahoo.com wrote:
I've compiled this code:

const int x0 = 10;
const int x1 = 20;
const int x2 = 30;

int x[] = { x2, x0, x1 };

struct Y
{
int i;
double d;
};

const Y y0 = {1, 1.0};
const Y y1 = {2, 2.0};
const Y y2 = {3, 3.0};

Y y[] = { y1, y0, y2 };

int z[] = { y1.i, y0.i, y2.i };

with a couple of compilers, with the highest possible optimization,
and looked at the disassembled object code. With both compilers,
only the x array is initialized from the load image. Move instructions
are generated to initialize both y and z. Why is it hard for the
compiler to initialize all of this from the load image, without having
to execute any init code at run time?

Before I give a possible answer, let me ask you this: Why do you care?
By definition initialization of a const array of structs will be
executed exactly once each time a program is run. On any modern
processor, that initialization will take, quite literally, less than a
microsecond, which is probably less than the margin of error for timing
the program. So the benefit of the optimization is essentially zero
from a speed standpoint. If your problem is code bloat (because, e.g.,
you are dealing with an embedded system and every extra byte counts),
you can find a way to hardcode the initialization values if you really,
really have to (e.g., by doing some casting - which you would want to
avoid otherwise, of course).
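
For what it's worth, the difference you observed probably comes down to
the fact that x's initializers are integral constant expressions, while
the elements of y and z refer to non-integral const objects, which the
language (as of C++03) does not treat as constant expressions - folding
them into the load image is purely a quality-of-implementation matter.
Here is a minimal sketch of the hardcoding workaround, using duplicated
literals rather than casting (my simplification; adjust as needed):

struct Y
{
int i;
double d;
};

// Every initializer is now a literal - no references to other const
// objects - so the arrays get static initialization and can live
// entirely in the load image.
Y y[] = { {2, 2.0}, {1, 1.0}, {3, 3.0} };
int z[] = { 2, 1, 3 };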

But that also leads to the answer to your question. Virtually all
optimizations are implementation-defined. The answer to your question
is "Because you haven't chosen a compiler that supports the
optimization that you seek." Nothing stops you from finding a
different compiler - or, if necessary, paying someone to create a
compiler - that supports your desired optimization. Why doesn't your
average compiler support such an optimization? Well, a compiler-writer
generally is going to devote his or her time to writing optimizations
that lead to the most bang for the buck. The optimization you're
asking for doesn't help anyone except in extraordinarily contrived
situations. Given a choice between (a) efforts that will help make a
loop run more efficiently and (b) a way of initializing const arrays of
structs that at best will save less than a microsecond each time a
program is run, which do you think the compiler writer will devote time
to? Or, put another way, which optimization do you think consumers
will pay more for?

Best regards,

Tom

Jul 14 '06 #2
Thomas Tutone wrote:
wk****@yahoo.com wrote:
I've compiled this code:

const int x0 = 10;
const int x1 = 20;
const int x2 = 30;

int x[] = { x2, x0, x1 };

struct Y
{
int i;
double d;
};

const Y y0 = {1, 1.0};
const Y y1 = {2, 2.0};
const Y y2 = {3, 3.0};

Y y[] = { y1, y0, y2 };

int z[] = { y1.i, y0.i, y2.i };

with a couple of compilers, with the highest possible optimization,
and looked at the disassembled object code. With both compilers,
only the x array is initialized from the load image. Move instructions
are generated to initialize both y and z. Why is it hard for the
compiler to initialize all of this from the load image, without having
to execute any init code at run time?

Before I give a possible answer, let me ask you this: Why do you care?
By definition initialization of a const array of structs will be
executed exactly once each time a program is run. On any modern
processor, that initialization will take, quite literally, less than a
microsecond, which is probably less than the margin of error for timing
the program. So the benefit of the optimization is essentially zero
from a speed standpoint.
...

You may very well be right. On the other hand, I think we all know
at least one person who has serious money problems, yet indulges
in many small luxuries, giving the argument that "it's only X Y's" (X
being some small number and Y being the name of your local
currency). Maybe that's not why they have money problems,
but it sure doesn't help. So I think it would be valuable for
researchers to try to quantify the value (or lack of value) of
"microoptimizations", either when done by the compiler or
habitually done by hand.

Also, I work on a high-availability application, so optimization of
initialization is of special concern to me.

To some degree, I think your argument is on a slippery slope that
leads to the conclusion that any sort of compiler optimization
is not really of much value.

Jul 15 '06 #3
wk****@yahoo.com wrote:
Thomas Tutone wrote:
>wk****@yahoo.com wrote:
I've compiled this code:

const int x0 = 10;
const int x1 = 20;
const int x2 = 30;

int x[] = { x2, x0, x1 };

struct Y
{
int i;
double d;
};

const Y y0 = {1, 1.0};
const Y y1 = {2, 2.0};
const Y y2 = {3, 3.0};

Y y[] = { y1, y0, y2 };

int z[] = { y1.i, y0.i, y2.i };

with a couple of compilers, with the highest possible optimization,
and looked at the disassembled object code. With both compilers,
only the x array is initialized from the load image. Move instructions
are generated to initialize both y and z. Why is it hard for the
compiler to initialize all of this from the load image, without having
to execute any init code at run time?

Before I give a possible answer, let me ask you this: Why do you care?
By definition initialization of a const array of structs will be
executed exactly once each time a program is run. On any modern
processor, that initialization will take, quite literally, less than a
microsecond, which is probably less than the margin of error for timing
the program. So the benefit of the optimization is essentially zero
from a speed standpoint.
...

You may very well be right. On the other hand, I think we all know
at least one person who has serious money problems, yet indulges
in many small luxuries, giving the argument that "it's only X Y's" (X
being some small number and Y being the name of your local
currency). Maybe that's not why they have money problems,
but it sure doesn't help. So I think it would be valuable for
researchers to try to quantify the value (or lack of value) of
"microoptimizations", either when done by the compiler or
habitually done by hand.

Hm, why would that be valuable for researchers? If their study just
confirms common wisdom (i.e., the preconception) that micro-optimization
does not pay off, they probably would not even have a publishable paper.
I would bet that most researchers estimate that the other outcome is too
unlikely to justify the effort of a study.

Also, I work on a high-availability application, so optimization of
initialization is of special concern to me.

To some degree, I think your argument is on a slippery slope,
that leads to the conclusion that any sort of compiler optimization
is not really of much value.

You snipped the other half of his argument: there are many other
optimizations that yield higher gains for a comparable amount of effort
on the compiler writer's part. Thus, the particular optimization that you
are interested in is assigned low priority. I cannot see anything
unreasonable here or any kind of slippery slope. The conclusion that all
optimization is close to useless is nowhere near any sensible
interpretation of the given rationale.
Best

Kai-Uwe Bux
Jul 15 '06 #4

Kai-Uwe Bux wrote:
wk****@yahoo.com wrote:
Thomas Tutone wrote:
wk****@yahoo.com wrote:
...
Before I give a possible answer, let me ask you this: Why do you care?
By definition initialization of a const array of structs will be
executed exactly once each time a program is run. On any modern
processor, that initialization will take, quite literally, less than a
microsecond, which is probably less than the margin of error for timing
the program. So the benefit of the optimization is essentially zero
from a speed standpoint.
...

You may very well be right. On the other hand, I think we all know
at least one person who has serious money problems, yet indulges
in many small luxuries, giving the argument that "it's only X Y's" (X
being some small number and Y being the name of your local
currency). Maybe that's not why they have money problems,
but it sure doesn't help. So I think it would be valuable for
researchers to try to quantify the value (or lack of value) of
"microoptimizations", either when done by the compiler or
habitually done by hand.

Hm, why would that be valuable for researchers? If their study just
confirms common wisdom (i.e., the preconception) that micro-optimization
does not pay off, they probably would not even have a publishable paper.
I would bet that most researchers estimate that the other outcome is too
unlikely to justify the effort of a study.

At one time it was common wisdom that old wet rags kept
in the dark would turn into frogs (or something along those lines).
Common belief, if not backed up by quantitative analysis
and data, should only be relied upon as a last resort. Any
researcher who thinks they shouldn't publish (and you can always
publish anything now, on the internet if nowhere else) the
results of a study because of the outcome (even if the outcome
just confirms common belief) should not be a researcher.
I wouldn't know enough to rate the relative importance of
studying micro-optimizations, but I continue to think the topic
is worth studying.
Also, I work on a high-availability application, so optimization of
initialization is of special concern to me.

To some degree, I think your argument is on a slippery slope,
that leads to the conclusion that any sort of compiler optimization
is not really of much value.

You snipped the other half of his argument: there are many other
optimizations that yield higher gains for a comparable amount of effort
on the compiler writer's part. Thus, the particular optimization that you
are interested in is assigned low priority. I cannot see anything
unreasonable here or any kind of slippery slope. The conclusion that all
optimization is close to useless is nowhere near any sensible
interpretation of the given rationale.

If you reject optimizations for which there is no verifiable quantitative
analysis and data, you'll tend to reject all optimizations, because it is
always easier to do nothing than something. So there is the slippery slope.

Jul 16 '06 #5
wk****@yahoo.com wrote:
Kai-Uwe Bux wrote:
>wk****@yahoo.com wrote:
Thomas Tutone wrote:
wk****@yahoo.com wrote:
...
Before I give a possible answer, let me ask you this: Why do you care?
By definition initialization of a const array of structs will be
executed exactly once each time a program is run. On any modern
processor, that initialization will take, quite literally, less than a
microsecond, which is probably less than the margin of error for timing
the program. So the benefit of the optimization is essentially zero
from a speed standpoint.
...

You may very well be right. On the other hand, I think we all know
at least one person who has serious money problems, yet indulges
in many small luxuries, giving the argument that "it's only X Y's" (X
being some small number and Y being the name of your local
currency). Maybe that's not why they have money problems,
but it sure doesn't help. So I think it would be valuable for
researchers to try to quantify the value (or lack of value) of
"microoptimizations", either when done by the compiler or
habitually done by hand.

Hm, why would that be valuable for researchers? If their study just
confirms common wisdom (i.e., the preconception) that micro-optimization
does not pay off, they probably would not even have a publishable paper.
I would bet that most researchers estimate that the other outcome is too
unlikely to justify the effort of a study.

At one time it was common wisdom that old wet rags kept
in the dark would turn into frogs (or something along those lines).
Common belief, if not backed up by quantitative analysis
and data, should only be relied upon as a last resort.

This is not just any old common belief; it is the opinion of those who
work on compiler design and have invented and tried a wide variety of
optimization strategies. Chances are that their gut feelings are not
far off.

Any researcher who thinks they shouldn't publish (and you can always
publish anything now, on the internet if nowhere else) the
results of a study because of the outcome (even if the outcome
just confirms common belief) should not be a researcher.
I wouldn't know enough to rate the relative importance of
studying micro-optimizations, but I continue to think the topic
is worth studying.

As a researcher, you get recognition for publishing *interesting* results
in *respected* journals. Just confirming what all your fellows know already
(although maybe with a little less detail and justification) is not going
to be very interesting and won't make it into a peer-reviewed journal. A
researcher is better off spending his time on writing a different paper.

Also, I work on a high-availability application, so optimization of
initialization is of special concern to me.

To some degree, I think your argument is on a slippery slope,
that leads to the conclusion that any sort of compiler optimization
is not really of much value.

You snipped the other half of his argument: there are many other
optimizations that yield higher gains for a comparable amount of effort
on the compiler writer's part. Thus, the particular optimization that you
are interested in is assigned low priority. I cannot see anything
unreasonable here or any kind of slippery slope. The conclusion that all
optimization is close to useless is nowhere near any sensible
interpretation of the given rationale.

If you reject optimizations for which there is no verifiable quantitative
analysis and data, you'll tend to reject all optimizations, because it is
always easier to do nothing than something. So there is the slippery slope.

Obviously, no one is going down that alleged slippery slope: market
forces drive compiler vendors to include optimizations. Market forces
also prevent compiler vendors from investing too many resources in
optimizations that will benefit only a fringe group of customers.

Talking about market forces, ask yourself how much you would be willing to
pay: you could hire someone to hack that kind of optimization into g++.
Best

Kai-Uwe Bux
Jul 16 '06 #6
