Bytes | Software Development & Data Engineering Community

Compiler optimizations

Word up!

If there are any gcc users here, maybe you could help me out. I have a
program and I've tried compiling it with -O2 and -O3 optimization
settings.

The weird thing is that it actually runs faster with -O2 than with -O3,
even though -O3 is a higher optimization setting.

Have I found a bug in gcc? Could I be doing something wrong?

Cheers.

Jan 15 '08 #1
sammy wrote:
....
If there are any gcc users here, maybe you could help me out. I have a
program and I've tried compiling it with -O2 and -O3 optimization
settings.

The weird thing is that it actually runs faster with -O2 than with -O3,
even though -O3 is a higher optimization setting.

Have I found a bug in gcc? Could I be doing something wrong?
The answer to both questions is "possibly". Without knowing the
details of your code AND the details of gcc's optimization strategies,
we can't be sure. Something that is in general an optimization could
easily, for a specific program using specific inputs, be a
pessimization instead.

For gcc issues, I'd recommend using a forum specialized for gcc; this
isn't it.

Jan 16 '08 #2
On Jan 15, 3:43 pm, sammy <s...@noemail.spam> wrote:
Word up!

If there are any gcc users here, maybe you could help me out. I have a
program and I've tried compiling it with -O2 and -O3 optimization
settings.

The weird thing is that it actually runs faster with -O2 than with -O3,
even though -O3 is a higher optimization setting.

Have I found a bug in gcc? Could I be doing something wrong?
Higher optimization levels just mean that the compiler is asked to
use more esoteric optimization tricks. There is no guarantee
that, for instance, -O1 is faster than -O0, for that matter.
Take the example of inlining... It may make the code run faster due to
reduced function calls or it may make the code so large that stuff
that used to fit in the cache no longer does.

I suggest you direct specific performance problems with the GCC
compiler to the GCC compiler newsgroups.

P.S.
You can get good results with recent versions of GCC by using profile
guided optimization.
Jan 16 '08 #3
ja*********@verizon.net wrote:
sammy wrote:
...
>If there are any gcc users here, maybe you could help me out. I have a
program and I've tried compiling it with -O2 and -O3 optimization
settings.

The weird thing is that it actually runs faster with -O2 than with -O3,
even though -O3 is a higher optimization setting.

Have I found a bug in gcc? Could I be doing something wrong?

The answer to both questions is "possibly". Without knowing the
details of your code AND the details of gcc's optimization strategies,
we can't be sure. Something that is in general an optimization could
easily, for a specific program using specific inputs, be a
pessimization instead.

For gcc issues, I'd recommend using a forum specialized for gcc; this
isn't it.
And he may be getting confused by the quantum effects on the system
timer.

--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
Try the download section.

--
Posted via a free Usenet account from http://www.teranews.com

Jan 16 '08 #4
On Wed, 16 Jan 2008 00:43:25 +0100, sammy wrote:
Word up!

If there are any gcc users here, maybe you could help me out. I have a
program and I've tried compiling it with -O2 and -O3 optimization
settings.

The weird thing is that it actually runs faster with -O2 than with -O3,
even though -O3 is a higher optimization setting.

Have I found a bug in gcc? Could I be doing something wrong?
One point to ponder, which isn't specific to gcc but to optimisation in
general...

Many optimisations consume more space than the less optimised code. Loop
unrolling, for example, can do this. While this _can_ result in faster
code, it can _also_ potentially result in side effects such as exhausting
the cache memory. The net result can be a significant slowdown.

This sort of thing isn't really a bug; the optimiser has no way to know
what machines the code will run on.
Jan 16 '08 #5
Kelsey Bjarnason wrote:
On Wed, 16 Jan 2008 00:43:25 +0100, sammy wrote:
>Word up!

If there are any gcc users here, maybe you could help me out. I have
a program and I've tried compiling it with -O2 and -O3 optimization
settings.

The weird thing is that it actually runs faster with -O2 than with
-O3, even though -O3 is a higher optimization setting.

Have I found a bug in gcc? Could I be doing something wrong?

One point to ponder, which isn't specific to gcc but to optimisation
in general...

Many optimisations consume more space than the less optimised code.
Loop unrolling, for example, can do this. While this _can_ result in
faster code, it can _also_ potentially result in side effects such as
exhausting the cache memory. The net result can be a significant
slowdown.

This sort of thing isn't really a bug; the optimiser has no way to
know what machines the code will run on.
If not the compiler/optimizer, who else?

Bye, Jojo
Jan 16 '08 #6
"Kelsey Bjarnason" <kb********@gmail.com> wrote in message
>
This sort of thing isn't really a bug; the optimiser has no way to know
what machines the code will run on.
The nature of the input is probably more significant.
The length of an unbounded string is almost certainly a few tens of bytes or
less, for instance, so it makes sense to run a byte-by-byte strlen().
However if the string happens to be a DNA sequence then it may be hundreds
of kilobytes long, and so aligning to a word boundary and doing 32 or 64 bit
fetches will speed the code up considerably. It is extremely difficult to
tell a compiler the difference between a DNA sequence and a username; to
the compiler, they are both just strings of arbitrary length.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Jan 16 '08 #7
Kelsey Bjarnason wrote:
This sort of thing isn't really a bug; the optimiser has no way to know
what machines the code will run on.
If the compiler writers think that this is relevant and they need to figure
out a way to know any property of the target machine, couldn't they simply
add an option that enables the user to specify those target properties?
Rui Maciel
Jan 16 '08 #8
sammy wrote:
Word up!

If there are any gcc users here, maybe you could help me out. I have a
program and I've tried compiling it with -O2 and -O3 optimization
settings.

The weird thing is that it actually runs faster with -O2 than with -O3,
even though -O3 is a higher optimization setting.

Have I found a bug in gcc? Could I be doing something wrong?

Cheers.
In my experience with gcc, all optimization levels beyond 2 are
a waste of time.

The problem with gcc is that every person interested in compiler
algorithms has hacked gcc to add his or her contribution, making
the whole thing quite messy.

Within lcc-win, I have targeted only ONE optimization strategy:

Code size.

There is nothing that runs faster than a deleted instruction. Lcc-win
features a very simple peephole optimizer that pursues a single
goal: delete redundant loads/stores and, in general, try to
reduce the code size as much as possible.

No other optimizations are done (besides the obvious ones done at
compile time like constant folding, division by constants, etc)

Gcc tries it all. I think there is no optimization that exists
somewhere in compiler books that hasn't been tried in gcc.
Code movement/aligning of the stack/global CSE/
aggressive inlining/ and a VERY long ETC!

The result is not really impressive: the compiler is very slow
and the resulting program is not very fast:

A matrix multiplication program for instance: (time in seconds)

lcc-win -O 1.851
gcc -O2 1.690
gcc -O3 1.802
gcc -O9 1.766
MSVC -Ox 1.427

With -O3, gcc is as slow as lcc-win (which is obviously an excellent
result). And the delta between gcc and lcc-win in the best case
for gcc is just... 3.1%.

If you look at the compilation speed of lcc-win vs gcc (a factor
of 5 or more) and the size of the source code (11MB of C for gcc,
1MB of C for lcc-win) things look clearer.

What is worse for the optimizing compilers is that CPUs are now
so complex that optimizations that used to be clearly worthwhile,
like inlining, have lost much of their justification now that a
processor can wait up to 50 cycles doing nothing, waiting for the
RAM to deliver the data.

In this context optimizing for SIZE is a winning strategy. And
allows lcc-win to have almost the same speed as gcc with a FRACTION
of the effort.

Just my $0.02

--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Jan 16 '08 #9
"sammy" <sa*@noemail.spam> wrote in message
news:sl*******************@nospam.invalid...
If there are any gcc users here, maybe you could help me out. I have a
program and I've tried compiling it with -O2 and -O3 optimization
settings.

The weird thing is that it actually runs faster with -O2 than with -O3,
even though -O3 is a higher optimization setting.
That sometimes happens. Some optimizations, particularly at GCC's higher
levels, are not guaranteed: they pay off most of the time, but sometimes
they hurt you. Also, many of the optimizations depend on the compiler
knowing the exact characteristics of the machine you'll run the code on; if
you tell GCC you have an i386 or a P4, but run the code on an Opteron, you
may get slower execution than if you told it you had an Opteron or used a
lower optimization level.

Depending on the code, using profile-guided optimization can provide a
significant performance boost as the compiler has more data on your specific
program (and your input data) versus static predictions that are tuned for
the "average" program.
Have I found a bug in gcc? Could I be doing something wrong?
If you flip a coin and guess the wrong result, is the coin buggy? No.

A compiler bug is when it doesn't properly translate a correct program.
Unless you're an expert, the most likely cause of improper results is that
your program isn't as correct as you think it is. Many advanced
optimizations cause odd results in C's undefined corners that compiling
with simpler optimizations (or none at all) won't expose.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

Jan 16 '08 #10
On Wed, 16 Jan 2008 05:24:07 -0600, Joachim Schmitz wrote
(in article <fm**********@online.de>):
Kelsey Bjarnason wrote:
>On Wed, 16 Jan 2008 00:43:25 +0100, sammy wrote:
>>Word up!

If there are any gcc users here, maybe you could help me out. I have
a program and I've tried compiling it with -O2 and -O3 optimization
settings.

The weird thing is that it actually runs faster with -O2 than with
-O3, even though -O3 is a higher optimization setting.

Have I found a bug in gcc? Could I be doing something wrong?

One point to ponder, which isn't specific to gcc but to optimisation
in general...

Many optimisations consume more space than the less optimised code.
Loop unrolling, for example, can do this. While this _can_ result in
faster code, it can _also_ potentially result in side effects such as
exhausting the cache memory. The net result can be a significant
slowdown.

This sort of thing isn't really a bug; the optimiser has no way to
know what machines the code will run on.
If not the compiler/optimizer, who else?
How can it possibly know which computer(s) you will install and run it
on after it is compiled?

Not every program is something for you to play with for a bit in ~/src
then forget about. ;-)

--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Jan 16 '08 #11
On Wed, 16 Jan 2008 06:47:46 -0600, Rui Maciel wrote
(in article <47***********************@news.telepac.pt>):
Kelsey Bjarnason wrote:
>This sort of thing isn't really a bug; the optimiser has no way to know
what machines the code will run on.

If the compiler writers think that this is relevant and they need to figure
out a way to know any property of the target machine, couldn't they simply
add an option that enables the user to specify those target properties?
Some provide switches to optimize for different "families" of
processors. The problem is, it doesn't allow for any new hardware that
comes along after the compile is completed or after the compiler was
written. Also, it doesn't allow for a binary to be used on multiple
hardware platforms from the same build while enjoying this special
attention.

--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Jan 16 '08 #12
On Wed, 16 Jan 2008 09:26:23 -0600, jacob navia wrote
(in article <fm**********@aioe.org>):
Within lcc-win, I have targeted only ONE optimization strategy:

Code size.

There is nothing that runs faster than a deleted instruction.
<shakes head>

#include "examples of loop unrolling improving performance"

I wonder if everyone using lcc-win today realizes just how narrow your
view on optimization is?
--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Jan 16 '08 #13
Randy Howard wrote:
On Wed, 16 Jan 2008 09:26:23 -0600, jacob navia wrote
(in article <fm**********@aioe.org>):
>Within lcc-win, I have targeted only ONE optimization strategy:

Code size.

There is nothing that runs faster than a deleted instruction.

<shakes head>

#include "examples of loop unrolling improving performance"

I wonder if everyone using lcc-win today realizes just how narrow your
view on optimization is?

Can you explain the results?

Of course it is narrow. It is a Reduced Optimization Set Compiler
(ROSC).

Jokes aside, obviously for you, the results aren't important but...
what?
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Jan 16 '08 #14
Randy Howard wrote:
Some provide switches to optimize for different "families" of
processors. The problem is, it doesn't allow for any new hardware that
comes along after the compile is completed or after the compiler was
written. Also, it doesn't allow for a binary to be used on multiple
hardware platforms from the same build while enjoying this special
attention.
That isn't exactly a compiler problem, is it?
Rui Maciel
Jan 16 '08 #15
On Jan 16, 11:34 am, Randy Howard <randyhow...@FOOverizonBAR.net>
wrote:
On Wed, 16 Jan 2008 05:24:07 -0600, Joachim Schmitz wrote
(in article <fmkpgn$v3...@online.de>):


Kelsey Bjarnason wrote:
On Wed, 16 Jan 2008 00:43:25 +0100, sammy wrote:
>Word up!
>If there are any gcc users here, maybe you could help me out. I have
a program and I've tried compiling it with -O2 and -O3 optimization
settings.
>The weird thing is that it actually runs faster with -O2 than with
-O3, even though -O3 is a higher optimization setting.
>Have I found a bug in gcc? Could I be doing something wrong?
One point to ponder, which isn't specific to gcc but to optimisation
in general...
Many optimisations consume more space than the less optimised code.
Loop unrolling, for example, can do this. While this _can_ result in
faster code, it can _also_ potentially result in side effects such as
exhausting the cache memory. The net result can be a significant
slowdown.
This sort of thing isn't really a bug; the optimiser has no way to
know what machines the code will run on.
If not the compiler/optimizer, who else?

How can it possibly know which computer(s) you will install and run it
on after it is compiled?
GCC aside:
-march is supposed to be a promise of that. If you run it on
something else then you won't get the sort of performance you were
hoping for. Many other compilers have this same sort of effect (even
producing code that will only run on certain CPUs in some instances).
Not every program is something for you to play with for a bit in ~/src
then forget about. ;-)
Rats. How deflating.
Jan 16 '08 #16
Randy Howard wrote:
On Wed, 16 Jan 2008 05:24:07 -0600, Joachim Schmitz wrote
(in article <fm**********@online.de>):
>>This sort of thing isn't really a bug; the optimiser has no way to
know what machines the code will run on.
If not the compiler/optimizer, who else?

How can it possibly know which computer(s) you will install and run it
on after it is compiled?
The optimizer could run on the target machine, so "this one"
would be the appropriate answer.

[Warning: mere possibility isn't evidence of implementation.]

--
Contains Billion-Year-Old Materials Hedgehog
Otherface: Jena RDF/Owl toolkit http://jena.sourceforge.net/

Jan 16 '08 #17
Chris Dollin wrote:
Randy Howard wrote:
>On Wed, 16 Jan 2008 05:24:07 -0600, Joachim Schmitz wrote
(in article <fm**********@online.de>):
>>>This sort of thing isn't really a bug; the optimiser has no way to
know what machines the code will run on.
If not the compiler/optimizer, who else?
How can it possibly know which computer(s) you will install and run it
on after it is compiled?

The optimizer could run on the target machine, so "this one"
would be the appropriate answer.

[Warning: mere possibility isn't evidence of implementation.]
Shipping the optimizer with your application?

--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Jan 16 '08 #18
On Wed, 16 Jan 2008 15:48:56 -0600, user923005 wrote
(in article
<d1**********************************@z17g2000hsg.googlegroups.com>):
On Jan 16, 11:34 am, Randy Howard <randyhow...@FOOverizonBAR.net>
wrote:
>>
>>>This sort of thing isn't really a bug; the optimiser has no way to
know what machines the code will run on.
If not the compiler/optimizer, who else?

How can it possibly know which computer(s) you will install and run it
on after it is compiled?

GCC aside:
-march is supposed to be a promise of that. If you run it on
something else then you won't get the sort of performance you were
hoping for. Many other compilers have this same sort of effect (even
producing code that will only run on certain CPUs in some instances).
I think you missed what I was saying there. You can optimize with
something like -march for a specific hardware type, but when you move
that binary to other machines it isn't a promise of anything, it may
not even run properly.
--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Jan 16 '08 #19
CJ
On 16 Jan 2008 at 22:20, jacob navia wrote:
Chris Dollin wrote:
>The optimizer could run on the target machine, so "this one"
would be the appropriate answer.

[Warning: mere possibility isn't evidence of implementation.]

Shipping the optimizer with your application?
Yes, though that rather depends on shipping the source code with your
application too, so it wouldn't be much use for closed-source programs
like lcc-win for example...

Jan 16 '08 #20
Randy Howard wrote:
I think you missed what I was saying there. You can optimize with
something like -march for a specific hardware type, but when you move
that binary to other machines it isn't a promise of anything, it may
not even run properly.
Where exactly is there a need to run a single, unique binary across a whole
range of different machines? Why should anyone want that?

Assuming that open source software isn't even being considered here, I don't
see how, in this day and age, a distributor of closed-source, proprietary
software is barred from shipping multiple binaries on the install media or
even making them available over the internet. Software is being distributed
through DVD packs and through the net. What exactly makes it impossible to
pack a handful of specialised binaries on the install media and then pick
the most appropriate binary for the current system?
Rui Maciel
Jan 17 '08 #21
jacob navia wrote:
Shipping the optimizer with your application?
There was a time when operating systems also came with a compiler. Heck, it
is still happening to this very day: quite a lot of operating systems install
compilers in their default configuration. Take all those linux distributions,
for example. Thanks to that, we see companies like nvidia relying on the
system's compiler to install their software.
Rui Maciel
Jan 17 '08 #22
On Wed, 16 Jan 2008 20:08:54 -0600, Rui Maciel wrote
(in article <47***********************@news.telepac.pt>):
Randy Howard wrote:
>I think you missed what I was saying there. You can optimize with
something like -march for a specific hardware type, but when you move
that binary to other machines it isn't a promise of anything, it may
not even run properly.

Where exactly is there a need to run a single, unique binary across a whole
range of different machines? Why should anyone want that?
It's the model that pretty much all ISV's use. <insert list of 20,000
canned apps here>
Assuming that open source software isn't even being considered here, I don't
see how, in this day and age, a distributor of closed-source, proprietary
software is barred from shipping multiple binaries in the install media or
even making it available over the internet.
It's not barred from it, technically. It's cost prohibitive to build
and test a lot of versions of the same program just to micro-optimize
for a bunch of slightly different processors, cache sizes, etc.
--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Jan 17 '08 #23
On Wed, 16 Jan 2008 12:24:07 +0100, Joachim Schmitz wrote:
Kelsey Bjarnason wrote:
>On Wed, 16 Jan 2008 00:43:25 +0100, sammy wrote:
>>Word up!

If there are any gcc users here, maybe you could help me out. I have a
program and I've tried compiling it with -O2 and -O3 optimization
settings.

The weird thing is that it actually runs faster with -O2 than with
-O3, even though -O3 is a higher optimization setting.

Have I found a bug in gcc? Could I be doing something wrong?

One point to ponder, which isn't specific to gcc but to optimisation in
general...

Many optimisations consume more space than the less optimised code.
Loop unrolling, for example, can do this. While this _can_ result in
faster code, it can _also_ potentially result in side effects such as
exhausting the cache memory. The net result can be a significant
slowdown.

This sort of thing isn't really a bug; the optimiser has no way to know
what machines the code will run on.
If not the compiler/optimizer, who else?
Suppose I write code optimized for, oh, pentiums. There are something
like 197 different flavours of pentiums, with different cache sizes,
pipeline depths, who knows what sort of differences.

How does the optimizer know which of those you're going to run the code
on? Perhaps you'll distribute the app and it will have to run on all of
them. Tomorrow's "mini pentium" for embedded systems might have the full
instruction set, but almost no cache - shall the optimizer predict the
future to tell that you might, someday, want to run the binary on this chip?

All it can do is generate the code, as efficiently as possible... and let
_you_, the developer, see whether the code generated really was as
efficient as it should have been, then either accept it, or re-compile
with different optimization settings.

Jan 17 '08 #24
>I think you missed what I was saying there. You can optimize with
>something like -march for a specific hardware type, but when you move
that binary to other machines it isn't a promise of anything, it may
not even run properly.

Where exactly is there a need to run a single, unique binary across a whole
range of different machines? Why should anyone want that?
It simplifies virus-writing a lot.

Jan 19 '08 #25
Rui Maciel <ru********@gmail.com> writes:
[snip]
Moreover, please do have in mind that there are quite a few free software
projects, even whole operating system distributions, that distribute
multiple versions of binaries optimised for specific target platforms. The
people behind those projects distribute those binaries on public
repositories open to all, and we don't see them going bankrupt. How exactly
are the people behind all those free software distributions able to produce
optimised binaries and offer them to everyone interested without charging a
single cent for their work while, according to your statement, the cost for
proprietary software distributors to do an irrelevant fraction of that same
work would be "prohibitive", especially when they charge quite a bit over 50
euros for each download/DVD?
For a free software project, the cost of customized builds for
multiple systems is (a) building and testing the software for each
variant target and (b) providing multiple versions on the download
site. A user who downloaded the wrong variant can just go back and
download the right one.

To do the same for a commercial product, you'd have added production
costs (packaging, keeping track of different versions on DVDs, etc.)
-- and a customer who finds he bought the wrong variant is likely to
have more trouble straightening it out; in the worst case, he might
have to pay for the product again.

I'm sure there are ways to work around these issues. I'm also sure
there are better places for this discussion.

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jan 20 '08 #26
Gordon Burditt wrote:
How much do the tech support calls from people who don't know what
to download cost? If you're distributing an OS, this might not be
much of a problem, but if you're distributing an application, good
luck. Some people don't know how to tell if their desktop is a PC
or a Mac, much less whether it's got an Intel vs. AMD processor,
or whether it's dual-core. And a lot of people know they are
running Windows, but don't recognize such terms as "XP", "Vista",
"Windows 98", etc.
The tech support reference is a bit absurd. We see free software supporting
a vast set of architectures without suffering from any support issues.
Moreover, there is no need at all for the user to know those details:
even microsoft's windows line of operating systems benefits from
automatic package deployment. It's just a matter of checking the
relevant system properties and deploying the right binary. So what stops
anyone from shipping specialized binaries?
Rui Maciel
Jan 21 '08 #27
On Sun, 20 Jan 2008 16:32:18 -0600, Rui Maciel wrote
(in article <47***********************@news.telepac.pt>):
Gordon Burditt wrote:
>How much do the tech support calls from people who don't know what
to download cost? If you're distributing an OS, this might not be
much of a problem, but if you're distributing an application, good
luck. Some people don't know how to tell if their desktop is a PC
or a Mac, much less whether it's got an Intel vs. AMD processor,
or whether it's dual-core. And a lot of people know they are
running Windows, but don't recognize such terms as "XP", "Vista",
"Windows 98", etc.

The tech support reference is a bit absurd. We see free software supporting
a vast set of architectures without suffering from any support issues.
Right, because for the bulk of the open source community, no formal
support is available at all. It's a non-issue. The rest of the world
does not behave identically though.
Moreover, there is no need at all for the user to know those details:
even microsoft's windows line of operating systems benefits
from automatic package deployment.
When it works. It still doesn't make it free to develop 10 packages
instead of 1, test them, package them, then test the deployment via
package managers, in 10X the number of cases.
It's just a matter of checking the
relevant system properties and deploying the right binary.
It's not that simple.
So what stops anyone from shipping specialized binaries?
Reality.
--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Jan 21 '08 #28
Rui Maciel wrote:
The tech support reference is a bit absurd. We see free software supporting
a vast set of architectures without suffering from any support issues.
Moreover, there is no need at all for the user to know those details:
even microsoft's windows line of operating systems benefits from
automatic package deployment. It's just a matter of checking the
relevant system properties and deploying the right binary. So what stops
anyone from shipping specialized binaries?
The show stopper is the fact that you would have to TEST each
of the binaries before you ship it.
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Jan 21 '08 #29
"Rui Maciel" <ru********@gmail.com> wrote in message
news:47***********************@news.telepac.pt...
Gordon Burditt wrote:
>How much do the tech support calls from people who don't know what
to download cost? If you're distributing an OS, this might not be
much of a problem, but if you're distributing an application, good
luck. Some people don't know how to tell if their desktop is a PC
or a Mac, much less whether it's got an Intel vs. AMD processor,
or whether it's dual-core. And a lot of people know they are
running Windows, but don't recognize such terms as "XP", "Vista",
"Windows 98", etc.

The tech support reference is a bit absurd. We see free software supporting
a vast set of architectures without suffering from any support issues.
Moreover, there is no need at all for the user to know those details:
even microsoft's windows line of operating systems benefits from
automatic package deployment. It's just a matter of checking the
relevant system properties and deploying the right binary. So what stops
anyone from shipping specialized binaries?
Support costs. Even if you can automatically install the correct binary on
each machine, you still have to test all of the individual binaries in QA,
and you have to troubleshoot the correct ones when customers find problems.
This is why many companies are still shipping binaries that are compiled for
original Pentiums with low optimization settings and debug symbols -- it's
easier to support, and it works, if not optimally, on every system their
customers own.

The FOSS crowd has it a bit easier since they don't have to support
anything, so each user can compile the software however they want. Notice
that those for-profit companies that do support FOSS often require that
customers run the binaries _they_ compiled.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

Jan 21 '08 #30

This thread has been closed and replies have been disabled. Please start a new discussion.
