Bytes | Software Development & Data Engineering Community

Boost Workshop at OOPSLA 2004

CALL FOR PAPERS/PARTICIPATION

C++, Boost, and the Future of C++ Libraries
Workshop at OOPSLA
October 24-28, 2004
Vancouver, British Columbia, Canada
http://tinyurl.com/4n5pf
Submissions

Each participant will be expected to develop a position paper
describing a particular library or category of libraries that is
lacking in the current C++ standard library and Boost. The participant
should explain why the library or libraries would advance the state of
C++ programming. Ideally, the paper should sketch the proposed library
interface and concepts. This will be a unique opportunity to critique
and review library proposals. Alternatively, a participant might
describe the strengths and weaknesses of existing libraries and how
they might be modified to fill the need.

Form of Submissions

Submissions should consist of a 3-10 page paper that gives at least
the motivation for and an informal description of the proposal. This
may be augmented by source or other documentation of the proposed
libraries, if available. Preferred form of submission is a PDF file.

Important Dates

• Submission deadline for early registration: September 10, 2004
• Early Notification of selection: September 15, 2004
• OOPSLA early registration deadline: September 16, 2004
• OOPSLA conference: October 24-28, 2004

Contact committee oo********@crystalclearsoftware.com

Program Committee
Jeff Garland
Nicolai Josuttis
Kevlin Henney
Jeremy Siek

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]
Jul 22 '05
jc*****@taeus.com (Jerry Coffin) writes:
[snip]
I suppose if you really wanted to isolate the effects of export, you
could do something like starting with some code that didn't use
templates at all, and compare the speed of Comeau to Digital Mars (or
VC++, gcc, etc.)

[snip]

That wouldn't be my approach at all.

I'd estimate the compilation effects of export by producing two
codebases which differed only in their use of export; one would
use export for all function templates, and place all function
template definitions in files separate from their declarations,
while the other would use the inclusion model.

I'd compile both the export-using version of the code and the
inclusion-using version of the code with the same compiler. That
way, compiler issues other than export would not come into play.


Jul 22 '05 #151
Jean-Marc Bourguet <jm@bourguet.org> writes:
With como, you'll need dependencies on the implementation file of an
exported template for every compilation unit which is responsible for
providing an instantiation. As far as I remember, in my tests all
the dependencies (even on exported template implementations) were
generated automatically by como.


I didn't remember correctly. Everything was automatic -- which was
what was important to me -- but automatically generated dependencies
took into account only what the preprocessor knew. The recompilations
needed after modifying the implementation of a template were done by
the pre-linker (which is there to trigger recompilation in some other
cases, like a missing template instantiation).

Yours,

--
Jean-Marc

Jul 22 '05 #152
Francis Glassborow <fr*****@robinton.demon.co.uk> writes:
In article <b2*************************@posting.google.com> , Jerry
Coffin <jc*****@taeus.com> writes
I've used both Comeau and Intel C++ on Windows


I may be wrong but I thought Intel was based on Comeau.


I think both Comeau and Intel use the EDG front-end.

Yours,

--
Jean-Marc

Jul 22 '05 #153
"Walter" <wa****@digitalmars.nospamm.com> wrote in message
news:<VQBXc.98586$TI1.50760@attbi_s52>...
<ka***@gabi-soft.fr> wrote in message
news:d6**************************@posting.google.c om...
It's also worth pointing out that the company in question only has
three technical employees, and that they also managed to implement a
Java compiler at the same time. Now, I don't know about Digital
Mars, but I'm pretty sure that some of the other vendors do have a
few more technical employees. For some, by several orders of magnitude even.
It is also fair to point out that said company only develops front
ends. It doesn't develop optimizers, code generators, linkers,
librarians, runtime libraries, debuggers, or IDEs, all of which are
part of a compiler product, and all of which consume a lot of
resources to create, enhance, and maintain, and all of which customers
ask for improvements in.


I know. They also don't sell to end users, which probably saves them
more in support effort than all of the points you mention.

Still, as I said, the difference in some cases (I wasn't thinking of
Digital Mars) is several orders of magnitude. Are you trying to tell me
that all of these bits multiply the required effort by a thousand or more?

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


Jul 22 '05 #154
"Peter C. Chapin" <pc*****@sover.net> wrote in message
news:<MP************************@news.sover.net>.. .
In article <10***************@master.nyc.kbcfp.com>, hy*****@mail.com
says...
> > In any event, solving this problem by introducing a way to
> > control the scope of macro names would seem to me to be more
> > direct and more decisive.
> That would solve the clunky name problem. It would do nothing about
> allowing anonymous namespaces, or avoiding the need to include
> all of the template implementation's header files into the body of
> every file that uses the template, or about the need to recompile
> the template instantiation code for every compilation unit that
> uses the template.


In addition to allowing anonymous namespaces, export reduces the chances
of an accidental violation of the ODR.
I'm not sure the anonymous namespace issue is that big a deal but
certainly I agree that it seems ungainly to process all those template
bodies in each translation unit that wants to use even a single
template declared in a particular header. However, it seems like even
with export, the instantiation context would have to be
recompiled---or at least re-examined---each time a template body was
modified.
From what I understand of the EDG implementation, at least one of the
instantiation contexts would have to be recompiled. I find it not
unusual to have templates which are instantiated with the same arguments
in many files. (The most obvious example would be std::basic_string, I
think.) In such cases, with export, only one of the sources with the
instantiation context needs to be recompiled; without export, the
makefile will cause all of them to be recompiled.
The only difference is that with export the compiler is responsible
for locating the relevant instantiations rather than, for example, the
programmer's Makefile (not necessarily a bad thing; Makefile
maintenance can be a problem at times). In either case basically the
same amount of compile-time work needs to be done. Am I
misunderstanding something here?


Basically, you're missing that the compiler understands C++, and the
implications of a given change, much better than make does. In theory,
even without export, if you modified the implementation of the template,
the compiler could recognize that this modification only required the
recompilation of a single source, and not of every source which included
the header.

In theory... In practice, such compilers are even rarer than compilers
implementing export.

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


Jul 22 '05 #155
llewelly wrote:
I'd compile both the export-using version of the code and the
inclusion-using version of the code with the same compiler. That
way, compiler issues other than export would not come into play.


Yes. Notice that the highest savings from export should occur for
those templates which are instantiated over the same types in many
compilation units. That is, if you have a dozen files which all use
map<string,int>, the export form should only need to instantiate the
template once, and would thus hopefully be faster.

Jul 22 '05 #156
Walter wrote:
If DMC++ proves that it is still faster than another compiler using export,
then that demonstrates that export is NOT required for fast compiles. I'd
say that was a very meaningful result.


Well, of course export is not needed for fast compiles.
You can buy yourself a mainframe and your compilation
speed will go way up. (I seem to recall an anecdote from
the old days, when Unix was built on an Amdahl mainframe
it looked like someone was printing the Makefile instead
of running it, because the compiles were so fast.)

Non-export is to templates what inline is to ordinary
functions. It's plausible to believe that not forcing
template implementations to be compiled for every module
which uses them should be faster than forcing them to be
recompiled, but that is with respect to a particular
implementation of a compiler and system.

Jul 22 '05 #157
"Walter" <wa****@digitalmars.nospamm.com> wrote in message news:<V%4Vc.281709$a24.136681@attbi_s03>...
All the programmer does is
use the statement:

import foo;

and the entire semantic content of foo.d is available to the compiler,
including whatever template and function bodies are in foo.d. So, is this a
burden the compiler labors under? Not that anyone has noticed, it compiles
code at a far faster rate than a C++ compiler can. I can go into the reasons
why if anyone is interested.


This seems like a strange blanket statement. Yes, I'd like to know why
you think you can assert that no C++ compiler can compile equivalent
code as fast as a D compiler, ever. Thoughts for the context: What if
there are processors or computer architectures that favours one over
the other? Another thought: Optimisations in C++ compilers you may not
have thought of, yet?

Regards,

Terje

Jul 22 '05 #158
ga*******@excite.com (galathaea) wrote in message news:<b2**************************@posting.google. com>...

You know, if the c++ committee had the cojones to make standardisation
over the full translation process, we might even see dynamic linking a
possibility for the next language revision.


First, I don't think this is a nice thing to say about the members of
the C++ standards committee, who work hard, _for free_ - even paying
to contribute, to give us a better language. If that doesn't mean they
have "cojones", I don't know what does. :)

Secondly, AFAIK, the C++ standard covers the _full_ translation
process - it just doesn't specify _how_ something like linking is done
in detail. This gives the implementers flexibility.

Thirdly, it appears from the above that you're not familiar with
http://www.open-std.org/jtc1/sc22/wg...003/n1428.html.

Three strikes and you're out...

To conclude: The C++ standards committee _does_ have "cojones" to make
standardisation over the full translation process (from source code,
to executable), and there exists a dynamic linking standards proposal.

Regards,

Terje

Jul 22 '05 #159
In article <b2*************************@posting.google.com> , Jerry
Coffin <jc*****@taeus.com> writes
Francis Glassborow <fr*****@robinton.demon.co.uk> wrote in message news:<b4**************@robinton.demon.co.uk>...
In article <b2*************************@posting.google.com> , Jerry
Coffin <jc*****@taeus.com> writes
I've used both Comeau and Intel C++ on Windows


I may be wrong but I thought Intel was based on Comeau.


I suppose that's possible, but if so it's the first time I've heard of
it. I have heard (and can easily believe) that both are based on the
EDG front-end, but I've never previously heard of any relationship
beyond that.

No, I think you are right. I keep on mixing up which end is which.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects
Jul 22 '05 #160
"Walter" <wa****@digitalmars.nospamm.com> wrote in message
news:<WlsYc.82044$Fg5.34192@attbi_s53>...
"Jean-Marc Bourguet" <jm@bourguet.org> wrote in message
news:41***********************@news.free.fr...
> "Walter" <wa****@digitalmars.nospamm.com> writes:
> > Compiler vendors answer to their customers, and
> > by and large do what their customers want them to do.
> The only input vendors can get from customers is desiderata. There
> is hopefully a correlation between desiderata and needs, but they
> are not the same things and some times by a large margin especially
> in large corporation where usually there are several layers
> (sometimes non technical or having lost contact with state of the
> art) between the people who could best express the needs and the
> one which is in contact with the providers (each layer introducing
> some bias).

Digital Mars has no such layers, and if you surf the DMC newsgroups,
you'll see I talk directly to the engineers who use it.
And Digital Mars isn't on the approved list of suppliers at my customer,
so I can't use it. Probably because you don't have such layers.
(Perhaps the biggest single advantage g++ has is that I don't have to go
through purchasing to get it:-).)

[...]
I invite you to the DM newsgroups (news.digitalmars.com) and you can
see for yourself what they're asking for <g>.
Which buys me what if you're not on our approved list of suppliers?
You can also surf the Microsoft newsgroups, or the Borland ones. Given
the sometimes very unkind messages posted there, I doubt any of them
are censored by their respective PR departments. You won't see such
issues here because they are OT for this forum.


They undergo the worst possible censorship. They're ignored by the
deciders. You may be able to get a small improvement in by convincing
the engineers, but you won't get export unless you can convince
purchasing in some of the large accounts to make it an issue.
(Microsoft actually seems better than most big firms here. Perhaps
because individual sales are a larger percentage of their customer base
than, say, at Sun.)

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


Jul 22 '05 #161
jc*****@taeus.com (Jerry Coffin) wrote in message
news:<b2*************************@posting.google.c om>...
Jean-Marc Bourguet <jm@bourguet.org> wrote in message
news:<41***********************@news.free.fr>... [ ... ]
> The only input vendors can get from customers is desiderata.
> There is hopefully a correlation between desiderata and
> needs, but they are not the same things and some times by a
> large margin especially in large corporation where usually
> there are several layers (sometimes non technical or having
> lost contact with state of the art) between the people who
> could best express the needs and the one which is in contact
> with the providers (each layer introducing some bias).

This really isn't accurate at all
It pretty much corresponds to my experience.
-- quite a few people who work directly on compilers at various
vendors monitor newsgroups extensively. Posts from people at EDG,
Microsoft, Dinkumware, etc. These aren't just non-technical people
either -- quite a few of them are the people who write code for these
companies. Of course, in some of those cases (e.g. EDG and Dinkumware)
the companies are small enough that there ARE hardly any non-technical
people there. Even in the case of Microsoft (about as big as software
companies get) technical people are easy to find on newsgroups. Most
of them tend more toward MS-specific newsgroups, but at least IMO,
that's not a particular surprise.
In the case of EDG, Dinkumware, and probably Digital Mars, I agree with
you. The companies are small, they listen, and the decision-makers are
the people listening (and are technically oriented). In the case of
larger companies, however, the situation is quite different. I know
people on the technical side in both Sun and Microsoft, I can send them
suggestions, and in some cases, they have even asked my opinion
proactively. But... they don't make the final decision, any more than I
make the final decision with regards to the compiler I use.

And I can't say that all of the fault is in the vendors. Why should
they listen to me, since I have very little, if any influence, in
purchasing? The problem is on both sides, but given the way the market
works, the incentive to act correctly can really only come from the buyer's
side. As long as purchasing refuses to even consider Como/Dinkumware,
instead of Sun and Microsoft, because they are not on the approved list
of suppliers, Sun and Microsoft have no real motive to implement
features (like export) that Como/Dinkumware has, but they don't. And
they have every real motive to do whatever it takes to be on the
approved list of suppliers -- not being a specialist in corporate
management, I'm not sure what that is, but Sun and Microsoft apparently
do, and do it very well.
In any case, the bottom line is that in quite a few cases compiler
vendors get input directly from customers, and the people who work
directly on the compiler often receive that input _quite_ directly.


And in most cases, it doesn't matter, because the people who work
directly on the compiler have no, or very little, influence on the
priorities. Which is partially normal, because the people providing
them with the technical input don't control which compiler gets
purchased.

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


Jul 22 '05 #162
On 27 Aug 2004 09:02:34 -0400, "Walter"
<wa****@digitalmars.nospamm.com> wrote:
If that's the case, how
about if the operator in question is a member of a different class
than the template parameter?


It first tries to find an operator overload as a member of the left operand.
If there isn't one, it attempts to find a "reverse" operator overload member
of the right operand. This may sound odd at first, and it's a bit more
complex than I just described, but it works out quite nicely and neatly
avoids the need for ADL.


So that's operators covered, but how about non-member functions? There
are lots of idioms in C++ that rely on such things, such as "property
maps". Would it be possible to write something like the Boost graph
library in D? How about this simple example:

template <class T>
T cosSquared(T t)
{
    T val = cos(t);
    return val * val;
}

//....
std::complex<double> d(1, 2);
d = cosSquared(d);
//or
numerics::ridiculous_number n = ...;
n = cosSquared(n); //can take cosine of ridiculous numbers

Clearly your operator overloading lookup will find operator*, but what
about std::cos(std::complex<T> const&)? Note you can't make cos a
member function, since it has to work for double...

Tom

Jul 22 '05 #163

"Terje Slettebø" <ts*******@hotmail.com> wrote in message
news:b0**************************@posting.google.c om...
"Walter" <wa****@digitalmars.nospamm.com> wrote in message news:<V%4Vc.281709$a24.136681@attbi_s03>...
All the programmer does is use the statement:

import foo;

and the entire semantic content of foo.d is available to the compiler,
including whatever template and function bodies are in foo.d. So, is this
a burden the compiler labors under? Not that anyone has noticed, it
compiles code at a far faster rate than a C++ compiler can. I can go into
the reasons why if anyone is interested.

This seems like a strange blanket statement. Yes, I'd like to know why
you think you can assert that no C++ compiler can compile equivalent
code as fast as a D compiler, ever.


I'll start by saying that DMC++ is the fastest C++ compiler ever built. I've
spent a lot of time working with a profiler on it, going through everything
in the critical path.

The critical path turns out to be lexing speed. How fast can it deal with
the raw characters in the source file? Unfortunately, the phases of
translation required in C++ means that each character must be dealt with
multiple times. D is designed so that the source characters only need to be
examined once.

The next issue for C++ is the #inclusion model of compilation - each header
file gets read in and lexed over and over again, dealing with comments,
macros, etc. Various techniques try to minimize this, using #ifdef's, but
the source text still has to be passed over multiple times (those phases of
translation again). #pragma once works better, but is non-standard.

On an empirical note, the D compiler (which shares a code optimizer and
generator with DMC++) compiles equivalent source much faster than DMC++, and
I've made no attempt to profile or optimize the lexer in D. My experience
with trying to speed up C++ compilation did go into the design of D - so it
would be easy to build a searing fast compiler. I invite you to take a look
at the D lexer source - it comes with the DMD distribution
www.digitalmars.com/d/dcompiler.html - and see if that could be done with
C++. Compare it with the g++ lexer source.
Thoughts for the context: What if
there are processors or computer architectures that favours one over
the other?
Another thought: Optimisations in C++ compilers you may not
have thought of, yet?


I think that having to visit each character 3 times instead of once is
always going to be slower <g>.

Jul 22 '05 #164

<ka***@gabi-soft.fr> wrote in message
news:d6**************************@posting.google.c om...
"Walter" <wa****@digitalmars.nospamm.com> wrote in message
news:<VQBXc.98586$TI1.50760@attbi_s52>...
<ka***@gabi-soft.fr> wrote in message
news:d6**************************@posting.google.c om...
It's also worth pointing out that the company in question only has
three technical employees, and that they also managed to implement a
Java compiler at the same time. Now, I don't know about Digital
Mars, but I'm pretty sure that some of the other vendors do have a
few more technical employees. For some, by several magnitudes even.

It is also fair to point out that said company only develops front
ends. It doesn't develop optimizers, code generators, linkers,
librarians, runtime libraries, debuggers, or IDEs, all of which are
part of a compiler product, and all of which consume a lot of
resources to create, enhance, and maintain, and all of which customers
ask for improvements in.


I know. They also don't sell to end users, which probably saves them
more in support effort than all of the points you mention.

Still, as I said, the difference in some cases (I wasn't thinking of
Digital Mars) is several magnitudes. Are you trying to tell me that all
of these bits multiply the required effort by a thousand or more?


Which compiler vendor has 3000 people working on their compiler? <g> Let's
have some fun with Brand X which has 3000 in their C++ department. That's
3000 times an average cost of $100,000 each (salary + benefits + office
space) or a $300,000,000 annual budget just for C++ (and that's very, very
lowball for 3000 engineers). Wow!

Can anyone afford a language that requires such a massive annual investment?

Jul 22 '05 #165

<ka***@gabi-soft.fr> wrote in message
news:d6**************************@posting.google.c om...
"Walter" <wa****@digitalmars.nospamm.com> wrote in message
news:<WlsYc.82044$Fg5.34192@attbi_s53>...
"Jean-Marc Bourguet" <jm@bourguet.org> wrote in message
news:41***********************@news.free.fr...
> "Walter" <wa****@digitalmars.nospamm.com> writes:
> > Compiler vendors answer to their customers, and
> > by and large do what their customers want them to do.
> The only input vendors can get from customers is desiderata. There
> is hopefully a correlation between desiderata and needs, but they
> are not the same things and some times by a large margin especially
> in large corporation where usually there are several layers
> (sometimes non technical or having lost contact with state of the
> art) between the people who could best express the needs and the
> one which is in contact with the providers (each layer introducing
> some bias).
Digital Mars has no such layers, and if you surf the DMC newsgroups,
you'll see I talk directly to the engineers who use it.


And Digital Mars isn't on the approved list of suppliers at my customer,
so I can't use it. Probably because you don't have such layers.
(Perhaps the biggest single advantage g++ has is that I don't have to go
through purchasing to get it:-).)


You don't have to go through purchasing to get DMC++ either. It's a free
download from www.digitalmars.com/download/freecompiler.html.

I invite you to the DM newsgroups (news.digitalmars.com) and you can
see for yourself what they're asking for <g>.

Which buys me what if you're not on our approved list of suppliers?


If Digital Mars isn't on your approved list of suppliers, and the ones that
are on your approved list pay no attention to you, what can I say? I'm also
puzzled that Digital Mars is kept off the list for lacking layers, when
those very layers are what prevent the approved vendors from addressing
your needs.

You can also surf the Microsoft newsgroups, or the Borland ones. Given
the sometimes very unkind messages posted there, I doubt any of them
are censored by their respective PR departments. You won't see such
issues here because they are OT for this forum.

They undergo the worse possible censorship. They're ignored by the
deciders. You may be able to get a small improvement in by convincing
the engineers, but you won't get export unless you can convince
purchasing in some of the large accounts to make it an issue.


If you keep buying from Brand X despite them ignoring your requirements,
then are those requirements that important to you? If large accounts don't
find export to be important, how important is it really?
But to be frank, a feature that costs $300,000 to implement needs,
realistically, to be able to generate at least $400,000 in incremental
revenue to make it worthwhile. A feature that costs $5,000 to implement that
would generate incremental revenue of $20,000 is going to get greenlighted
first <g>.

Jul 22 '05 #166

"tom_usenet" <to********@hotmail.com> wrote in message
news:rf********************************@4ax.com...
On 27 Aug 2004 09:02:34 -0400, "Walter"
<wa****@digitalmars.nospamm.com> wrote:
If that's the case, how
about if the operator in question is a member of a different class
than the template parameter?

It first tries to find an operator overload as a member of the left operand.
If there isn't one, it attempts to find a "reverse" operator overload member
of the right operand. This may sound odd at first, and it's a bit more
complex than I just described, but it works out quite nicely and neatly
avoids the need for ADL.

So that's operators covered, but how about non-member functions? There
are lots of idioms in C++ that rely on such things, such as "property
maps". Would it be possible to write something like the Boost graph
library in D? How about this simple example:

template <class T>
T cosSquared(T t)
{
    T val = cos(t);
    return val * val;
}

//....
std::complex<double> d(1, 2);
d = cosSquared(d);
//or
numerics::ridiculous_number n = ...;
n = cosSquared(n); //can take cosine of ridiculous numbers

Clearly your operator overloading lookup will find operator*, but what
about std::cos(std::complex<T> const&)? Note you can't make cos a
member function, since it has to work for double...


No, it won't find cos() unless it is imported into the scope of the
template, because D does not support ADL. There are multiple ways to make
this work, however:

1) pass cos in as an alias template parameter.
2) use partial specialization to have two versions of the template, one for
cos() as a class member and the other using cos() for built-in types
3) import the necessary cos() into the module with the template
4) pass in the appropriate import scope as an alias parameter, call it Foo,
and then:
Foo.cos(t)
For cos(double), you'd pass in std.math as the argument for the Foo alias
parameter.
5) box double within a struct X with a member function cos(), and pass in X
instead of double.

Note that in C++ you still have to #include the relevant cos() definitions
anyway.

Jul 22 '05 #167

"Hyman Rosen" <hy*****@mail.com> wrote in message
news:10***************@master.nyc.kbcfp.com...
llewelly wrote:
> I'd compile both the export-using version of the code and the
> inclusion-using version of the code with the same compiler. That
> way, compiler issues other than export would not come into play.


Yes. Notice that the highest savings from export should occur for
those templates which are instantiated over the same types in many
compilation units. That is, if you have a dozen files which all use
map<string,int>, the export form should only need to instantiate the
template once, and would thus hopefully be faster.


As I understand member functions of template classes, if a member
function is never called then it is never compiled. For example, this is
why pair<X, Y> can compile successfully when X and Y don't contain default
constructors; as long as you don't invoke pair's default constructor the
fact that it calls nonexistent default constructors for X and Y is
irrelevant. So suppose you have a dozen files that use map<string, int> and
none of them call map::max_size(). If you then create a thirteenth file
that uses map<string, int> and it happens to use the max_size() function on
this class, will the original 12 files need to be recompiled?

Joe Gottman
Jul 22 '05 #168
Hyman Rosen <hy*****@mail.com> writes:
llewelly wrote:
> I'd compile both the export-using version of the code and the
> inclusion-using version of the code with the same compiler. That
> way, compiler issues other than export would not come into play.


Yes. Notice that the highest savings from export should occur for
those templates which are instantiated over the same types in many
compilation units. That is, if you have a dozen files which all use
map<string,int>, the export form should only need to instantiate the
template once, and would thus hopefully be faster.


Why would that be any faster than link-time instantiation (a model
supported by EDG and therefore Comeau)?

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Jul 22 '05 #169
"Walter" <wa****@digitalmars.nospamm.com> writes:
I'll start by saying that DMC++ is the fastest C++ compiler ever
built.
That kind of assertion is part of why some people don't trust your
claims. It's meaningless on its face, because what's "fastest"
depends on which programs you try to compile. I also note from
http://biolpc22.york.ac.uk/wx/wxhatc...er_choice.html that
you don't seem to have speed testing some significant contenders.
Metrowerks Codewarrior is one of the fastest compilers in some of my
tests. So it's hard to see how you can claim to have the "fastest
C++ compiler ever built."
I've spent a lot of time working with a profiler on it, going
through everything in the critical path.

The critical path turns out to be lexing speed. How fast can it deal
with the raw characters in the source file?
That's commonly the bottleneck in the parsing process of many
languages...
Unfortunately, the phases of translation required in C++ means that
each character must be dealt with multiple times. D is designed so
that the source characters only need to be examined once.


...but if that's your bottleneck, you obviously haven't tested it on
many template metaprograms. Template instantiation speed can be a
real bottleneck in contemporary C++ code. I happen to know that DMC++
can't yet compile a great deal of Boost, so maybe it's no coincidence.
It's easy to be fastest if you don't conform, and you only benchmark
the features you've implemented.

If you'd like, I'll privately forward you the performance appendix of
http://www.boost-consulting.com/tmpbook, which contains some very
simple benchmarks and graphs showing performance for a few other
compilers. Maybe you can use that as a way to think about optimizing
template instantiation.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Jul 22 '05 #170
Hyman Rosen <hy*****@mail.com> wrote in message
news:<10***************@master.nyc.kbcfp.com>...
llewelly wrote:
> I'd compile both the export-using version of the code and the
> inclusion-using version of the code with the same compiler. That
> way, compiler issues other than export would not come into play.
Yes. Notice that the highest savings from export should occur for
those templates which are instantiated over the same types in many
compilation units. That is, if you have a dozen files which all use
map<string,int>, the export form should only need to instantiate the
template once, and would thus hopefully be faster.


And this is probably the most usual case. std::basic_string and the
std::iostream hierarchy are certainly among the most widely used
templates, and how many different types do you instantiate them over in
a given program? Of course, the only time anything should change in
one of them is when I install a new version of the compiler, in which
case I'll do a complete rebuild anyway. But many of my own templates
are similar -- they are widely used, but typically with only one or two
different parameters.

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


Jul 22 '05 #171
Joe Gottman wrote:
So suppose you have a dozen files that use map<string, int> and
none of them call map::max_size(). If you then create a thirteenth file
that uses map<string, int> and it happens to use the max_size() function on
this class, will the original 12 files need to be recompiled?


No, what for? If compiling any of those twelve would result in a
max_size() different from the thirteenth's, then the ODR is violated,
and that violation does not have to be diagnosed.

Jul 22 '05 #172
"Walter" <wa****@digitalmars.nospamm.com> wrote in message news:<bq3Zc.77738$9d6.9868@attbi_s54>...
[...]
Which compiler vendor has 3000 people working on their compiler? <g> Let's
have some fun with Brand X which has 3000 in their C++ department. That's
3000 times an average cost of $100000 each (salary + benefits + office
space) or a $300,000,000 annual budget just for C++ (and that's very, very
lowball for 3000 engineers). Wow!

Can anyone afford a language that requires such a massive annual investment?


DOD? :-)

Cheers,
Nicola Musatti

Jul 22 '05 #173
"Walter" <wa****@digitalmars.nospamm.com> writes:
There are multiple ways to make this work, however:

1) pass cos in as an alias template parameter.
2) use partial specialization to have two versions of the template,
one for cos() as a class member and the other using cos() for
built-in types
3) import the necessary cos() into the module with the template
4) pass in the appropriate import scope as an alias parameter, call
it Foo, and then:
Foo.cos(t)
For cos(double), you'd pass in std.math as the argument for the
Foo alias parameter.
5) box double within a struct X with a member function cos(), and
pass in X instead of double.


Could you post a small code sample that illustrates what these various
solutions look like in D? In particular, I'd like to see proposals 1,
3, and 4.

--
Steven E. Harris

Jul 22 '05 #174
David Abrahams wrote:
Why would that be any faster than link-time instantiation (a model
supported by EDG and therefore Comeau)?


Because you wouldn't have to include the implementation code
into every module that uses it. Why do we have to keep going
around in these circles?

Jul 22 '05 #175

"David Abrahams" <da**@boost-consulting.com> wrote in message
news:up***********@boost-consulting.com...
"Walter" <wa****@digitalmars.nospamm.com> writes:
I'll start by saying that DMC++ is the fastest C++ compiler ever
built.

That kind of assertion is part of why some people don't trust your
claims. It's meaningless on its face, because what's "fastest"
depends on which programs you try to compile. I also note from
http://biolpc22.york.ac.uk/wx/wxhatc...er_choice.html that
you don't seem to have speed testing some significant contenders.
Metrowerks Codewarrior is one of the fastest compilers in some of my
tests. So it's hard to see how you can claim to have the "fastest
C++ compiler ever built."


I didn't write that speed benchmark or the web page it's on. You're welcome
to post some benchmarks showing MWC (or any other compiler) is faster. If
so, I'll retract the statement. wxWindows makes an excellent compile
speed benchmark because:

1) it is not contrived in any way to be a compile speed benchmark
2) it is real, widely used, mainstream C++ code
3) it is very large
4) it is freely available for anyone to verify the results for themselves
5) it has been ported to a very large number of compilers
6) I didn't write wxWindows, didn't run the benchmarks, and have no control
over the web page it's posted on, lest I be accused of bias. But of course I
get accused of it anyway <g>.

I've spent a lot of time working with a profiler on it, going
through everything in the critical path.
The critical path turns out to be lexing speed. How fast can it deal
with the raw characters in the source file?

That's commonly the bottleneck in the parsing process of many
languages...


Yes, but it is worth verifying, since profiling can sometimes produce
surprising results.

Unfortunately, the phases of translation required in C++ means that
each character must be dealt with multiple times. D is designed so
that the source characters only need to be examined once.

...but if that's your bottleneck, you obviously haven't tested it on
many template metaprograms. Template instantiation speed can be a
real bottleneck in contemporary C++ code.


You might be right, but I don't see typical C++ code's dependency on massive
quantities of .h files going away anytime in the foreseeable future. Just
#include'ing windows.h, and all the headers it #include's, is a big
bottleneck. Last time I checked, it defined 10,000+ macros. STL is another
huge source of text that needs to be swallowed.
I happen to know that DMC++
can't yet compile a great deal of Boost, so maybe it's no coincidence.
Since Boost contains workarounds for non-compliance in many compilers, and
such work was not done for DMC++, that is an unfair remark. Currently, David
James is doing some excellent work in adapting Boost for DMC++, and he's
been very helpful in identifying some problems that need fixing. So far,
none of them have any influence on compile speed, and I don't expect they
will. The only C++ feature that did was ADL - which works correctly in
DMC++.
It's easy to be fastest if you don't conform,
I don't know of any correlation between compiler performance and conformity
across compilers, nor do I know of any technical basis for such a
correlation.
and you only benchmark the features you've implemented.
How could enormous (and very complicated) libraries like wxWindows, STL and
STLsoft work with DMC++ if somehow only a carefully selected subset picked
for fast compiling of the language is implemented? I can't even conceive how
to build such a contrived implementation.

If you'd like, I'll privately forward you the performance appendix of
http://www.boost-consulting.com/tmpbook, which contains some very
simple benchmarks and graphs showing performance for a few other
compilers. Maybe you can use that as a way to think about optimizing
template instantiation.


Sure, I'd love to see it. But simple benchmarks imply they are of a small
size. Small size programs can be great for benchmarking optimizer
performance, but typically are not representative benchmarks for compile
speed, because when compile speed matters is when you're trying to stuff
300,000 lines of header files down the compiler's maw.
Jul 22 '05 #176
Hyman Rosen <hy*****@mail.com> writes:
David Abrahams wrote:
Why would that be any faster than link-time instantiation (a model
supported by EDG and therefore Comeau)?
Because you wouldn't have to include the implementation code
into every module that uses it.


Okay, but you were talking about saving instantiation time. AFAICT,
export only saves *parsing* time over link-time instantiation.
Why do we have to keep going around in these circles?


I think the issues are subtle, and I have an intense interest in the
performance of template compilation, and its limits. I don't think
we're going around in circles, so much as exploring the details.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Jul 22 '05 #177
David Abrahams <da**@boost-consulting.com> wrote in message
news:<u8***********@boost-consulting.com>...
Hyman Rosen <hy*****@mail.com> writes:
llewelly wrote:
> I'd compile both the export-using version of the code and the
> inclusion-using version of the code with the same compiler. That
> way, compiler issues other than export would not come into play.
Yes. Notice that the highest savings from export should occur for
those templates which are instantiated over the same types in many
compilation units. That is, if you have a dozen files which all use
map<string,int>, the export form should only need to instantiate the
template once, and would thus hopefully be faster.

Why would that be any faster than link-time instantiation (a model
supported by EDG and therefore Comeau)?


Because make sees the dozen or so object files as being dependent on the
template header file. If modifying the implementation of the template
updates the timestamp of the header, make will recompile all of these
files. In the case of export, the modification is in an implementation
file, the "dependency" is handled by the pre-linker, which knows enough
to only compile one of the source files which triggered the
instantiation, and not all of them.

Typically, compiling one source file is faster than compiling a dozen.

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Jul 22 '05 #178
"Walter" <wa****@digitalmars.nospamm.com> writes:
"David Abrahams" <da**@boost-consulting.com> wrote in message
news:up***********@boost-consulting.com...
> "Walter" <wa****@digitalmars.nospamm.com> writes:
> > I'll start by saying that DMC++ is the fastest C++ compiler ever
> > built.
> That kind of assertion is part of why some people don't trust your
> claims. It's meaningless on its face, because what's "fastest"
> depends on which programs you try to compile. I also note from
> http://biolpc22.york.ac.uk/wx/wxhatc...er_choice.html that
> you don't seem to have speed testing some significant contenders.
> Metrowerks Codewarrior is one of the fastest compilers in some of my
> tests. So it's hard to see how you can claim to have the "fastest
> C++ compiler ever built."


I didn't write that speed benchmark or the web page it's on. You're
welcome to post some benchmarks showing MWC (or any other compiler)
is faster.


I'm not claiming some other compiler is faster. I'm saying that _you_
claim DMC++ is "the fastest" on the basis of substantially incomplete
data.
If so, I'll retract the statement.
If you want people to trust your claims, go to some lengths to make
sure they're backed up by substantially complete tests. I don't feel
a need to prove you wrong -- who knows, you might even be right -- but
there isn't enough data to say yet. I'm content to sit on the
sideline pointing out the flimsy foundation for your bold claims ;-)
wxWindows makes an
excellent compile speed benchmark because:

1) it is not contrived in any way to be a compile speed benchmark
2) it is real, widely used, mainstream C++ code
3) it is very large
4) it is freely available for anyone to verify the results for themselves
5) it has been ported to a very large number of compilers
6) I didn't write wxWindows, didn't run the benchmarks, and have no control
over the web page it's posted on, lest I be accused of bias.
Sure, if you're compiling GUI code written in the style of wxWindows,
it's a good benchmark. If you're doing high-performance scientific
computing it might be completely inappropriate.
But of course I get accused of it anyway <g>.
Actually I didn't accuse you of bias. Everyone expects you to be
biased (at least I do) towards something you wrote. I also expect
claims to be fair and supportable, which I don't think yours are, in
this case.
> > I've spent a lot of time working with a profiler on it, going
> > through everything in the critical path. The critical path
> > turns out to be lexing speed. How fast can it deal with the raw
> > characters in the source file?

>
> That's commonly the bottleneck in the parsing process of many
> languages...


Yes, but it is worth verifying, since profiling can sometimes produce
surprising results.


Yes.
> > Unfortunately, the phases of translation required in C++ means that
> > each character must be dealt with multiple times. D is designed so
> > that the source characters only need to be examined once.
> ...but if that's your bottleneck, you obviously haven't tested it
> on many template metaprograms. Template instantiation speed can
> be a real bottleneck in contemporary C++ code.


You might be right, but I don't see typical C++ code's dependency on
massive quantities of .h files going away anytime in the foreseeable
future.


For a certain class of project, that is indeed an important bottleneck
Just #include'ing windows.h, and all the headers it #include's, is a
big bottleneck. Last time I checked, it defined 10,000+ macros. STL
is another huge source of text that needs to be swallowed.
> I happen to know that DMC++
> can't yet compile a great deal of Boost, so maybe it's no coincidence.
Since Boost contains workarounds for non-compliance in many
compilers, and such work was not done for DMC++, that is an unfair
remark.


It's not an unfair remark. Compilers that require fewer workarounds
get ported much more quickly. It seems logical that you haven't speed
tested DMC++ against many template metaprograms if DMC++ can't compile
Boost, for whatever reason.
Currently, David James is doing some excellent work in adapting
Boost for DMC++, and he's been very helpful in identifying some
problems that need fixing.
I know.
So far, none of them have any influence on compile speed, and I
don't expect they will. The only C++ feature that did was ADL -
which works correctly in DMC++.
...which is more than I can say for some other compilers. So, Bravo!
> It's easy to be fastest if you don't conform,


I don't know of any correlation between compiler performance and
conformity across compilers


Well, I can tell you that the front-end widely acknowledged to be the
most conformant is also the slowest in many of our metaprogram
compilation tests. Coincidence?
nor do I know of any technical basis for such a correlation.
I'm not drawing any correlation, though -- you need the 2nd half of
that sentence in order to retain the original intention.
> and you only benchmark the features you've implemented.


How could enormous (and very complicated) libraries like wxWindows,
STL and STLsoft work with DMC++ if somehow only a carefully selected
subset picked for fast compiling of the language is implemented?
I can't even conceive how to build such a contrived implementation.


I'm not claiming it's intentional. Of course you've optimized the
features that you test, and as for the features that don't work, well,
you can't rightly claim your compiler is faster on those than any
compiler that *does* implement them.
> If you'd like, I'll privately forward you the performance appendix of
> http://www.boost-consulting.com/tmpbook, which contains some very
> simple benchmarks and graphs showing performance for a few other
> compilers. Maybe you can use that as a way to think about optimizing
> template instantiation.


Sure, I'd love to see it. But simple benchmarks imply they are of a small
size.


Yes. They measure specific effects in the template machinery that
become significant in complex template metaprograms.
Small size programs can be great for benchmarking optimizer
performance, but typically are not representative benchmarks for
compile speed
True. They only have some relevance to template instantiation
speed. However, some programs' compile times are indeed dominated by
template instantiations.
because when compile speed matters is when you're trying to stuff
300,000 lines of header files down the compiler's maw.


No, compile speed matters when it takes more time than you're willing
to wait, for whatever reason. Saying it only matters when you have
lots of program text is circular reasoning.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Jul 22 '05 #179
David Abrahams wrote:
Okay, but you were talking about saving instantiation time. AFAICT,
export only saves *parsing* time over link-time instantiation.


Saved time is saved time. A compilation unit that uses strings,
I/O, and a container or two drags in thousands of lines of template
implementation code. Then add to it the benefit of avoiding mixing
the implementation and the user code together.

Given that many compilers already implement precompiled headers and
link-time instantiation, I would think that a good deal of the machinery
needed to implement export is already in place.

I know EDG talks about how difficult export was to implement, but you
must remember that they wanted to implement it correctly. Think of how
many other C++ elements have been implemented in partly broken ways by
various compilers. If vendors waited to release their compilers until
exception handling was correct, or two-phase name lookup was correct,
or covariant return type implementation was correct, or all names were
situated in their correct namespaces, or member templates were correct,
we would still be waiting for the first compiler (or maybe the second).

By not providing any implementation at all of export, vendors prevent
users from gaining any experience using it, and in turn the vendors
cannot get any feedback on how to improve it.

Jul 22 '05 #180
ka***@gabi-soft.fr writes:
"Peter C. Chapin" <pc*****@sover.net> wrote in message
news:<MP************************@news.sover.net>...
> In article <10***************@master.nyc.kbcfp.com>, hy*****@mail.com
> says...
> > > In any event, solving this problem by introducing a way to
> > > control the scope of macro names would seem to me to be more
> > > direct and more decisive.
> > That would solve the clunky name problem. It would do nothing about
> > to allowing anonymous namespaces, or avoiding the need to include
> > all of the template implementation's header files into the body of
> > every file that uses the template, or about the need to recompile
> > the template instantiation code for every compilation unit that
> > uses the template.


In addition to allowing anonymous namespace, export reduces the chances
of an accidental violation of the ODR.


In part because it requires the implementation to analyze the very
information required to diagnose many ODR violations. :-)
> I'm not sure the anonymous namespace issue is that big a deal but
> certainly I agree that it seems ungainly to process all those template
> bodies in each translation unit that wants to use even a single
> template declared in a particular header. However, it seems like even
> with export, the instantiation context would have to be
> recompiled---or at least re-examined---each time a template body was
> modified.
From what I understand of the EDG implementation, at least one of the
instantiation contexts would have to be recompiled. I find it not
unusual to have templates which are instantiated with the same arguments
in many files. (The most obvious example would be std::basic_string, I
think.)


Obvious, perhaps, but I think several library implementors provide
std::basic_string<char> and std::basic_string<wchar_t>
pre-instantiated. (The proposed 'extern template', which is not like
export, makes this easier, but is not strictly necessary.) So
I don't see export being of any help there. Same with iostreams.

However I think the standard library is full of templates which most
programs instantiate many, many times, with only a few types
making up the majority of instantiations. vector<int>, and such.
In such cases, with export, only one of the sources with the
instantiation context needs to be recompiled; without export, the
makefile will cause all of them to be recompiled.
> The only difference is that with export the compiler is responsible
> for locating the relevant instantiations rather than, for example the
> programmer's Makefile (not necessarily a bad thing; Makefile
> maintenance can be a problem at times). In either case basically the
> same amount of compile-time work needs to be done. Am I
> misunderstanding something here?


Basically, you're missing that the compiler understands C++, and the
implications of a given change, much better than make does. In theory,
even without export, if you modified the implementation of the template,
the compiler could recognize that this modification only required the
recompilation of a single source, and not of every source which included
the header.

In theory... In practice, such compilers are even rarer than compilers
implementing export.

[snip]

sadly ...

Jul 22 '05 #181
David Abrahams <da**@boost-consulting.com> wrote in message
news:<ub***********@boost-consulting.com>...
I don't know of any correlation between compiler performance and
conformity across compilers
Well, I can tell you that the front-end widely acknowledged to be the
most conformant is also the slowest in many of our metaprogram
compilation tests. Coincidence?


The front-end widely acknowledged to be the most conformant is also the
front-end which offers the most options for supporting legacy code.

It's also a front-end which has been designed to be easily ported to a
variety of back-ends.

Both of these could affect speed. In fact, I imagine that the latter
has a significant negative impact on compile speeds. (In the same way,
g++'s portability often means that it will not be the fastest compiler
on a particular platform. Although there are cases where the native
compiler has done such a bad job...)

Also, which compiler did you measure it on? In at least some cases, it
actually generates C code and then invokes the C compiler on it. While
optimal for portability, this strategy will definitely not result in the
fastest compile times.

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Jul 22 '05 #182
In article <ub***********@boost-consulting.com>, David Abrahams
<da**@boost-consulting.com> writes
No, compile speed matters when it takes more time than you're willing
to wait, for whatever reason. Saying it only matters when you have
lots of program text is circular reasoning.


Actually, for me, there are two levels of compile time speed:

1) It is fast enough so that I do not wonder if I should take a break.
2) It is slow enough so that I know I can leave it to run whilst I have
lunch.

It is the bits in between that are really irritating.

There are similar criteria for complete rebuilds, but this time allowing
it to run overnight is often acceptable; much more than 8 hours
suggests the need to invest either money in faster hardware, or time in
removing unnecessary dependencies in TUs.
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects
Jul 22 '05 #183
In article <d6*************************@posting.google.com> ,
ka***@gabi-soft.fr writes
Because make sees the dozen or so object files as being dependent on the
template header file. If modifying the implementation of the template
updates the timestamp of the header, make will recompile all of these
files. In the case of export, the modification is in an implementation
file, the "dependency" is handled by the pre-linker, which knows enough
to only compile one of the source files which triggered the
instantiation, and not all of them.


True, but only relevant if the header file with the template in it is
not yet stable. I doubt that this is true for many uses of templates in
large scale applications.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects
Jul 22 '05 #184
Hyman Rosen <hy*****@mail.com> writes:
David Abrahams wrote:
> Okay, but you were talking about saving instantiation time. AFAICT,
> export only saves *parsing* time over link-time instantiation.


Saved time is saved time.


Sure; I have no argument with that, nor with export. I just want
everything clear. AFAICT, export may or may not save instantiation time,
depending on a compiler's instantiation model.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Jul 22 '05 #185
David Abrahams <da**@boost-consulting.com> writes:
Hyman Rosen <hy*****@mail.com> writes:
> David Abrahams wrote:
>> Why would that be any faster than link-time instantiation (a model
>> supported by EDG and therefore Comeau)?

>
> Because you wouldn't have to include the implementation code
> into every module that uses it.


Okay, but you were talking about saving instantiation time. AFAICT,
export only saves *parsing* time over link-time instantiation.


It mainly saves on dependencies: the recompilations needed after
a change. (Exactly the same thing that is the main speed gain from
splitting the rest of your sources into several files. As a
matter of fact, you can lose parsing time in some cases.)

Yours,

--
Jean-Marc

Jul 22 '05 #186
Francis Glassborow <fr*****@robinton.demon.co.uk> writes:
In article <d6*************************@posting.google.com> ,
ka***@gabi-soft.fr writes
Because make sees the dozen or so object files as being
dependent on the template header file. If modifying the
implementation of the template updates the timestamp of
the header, make will recompile all of these files. In
the case of export, the modification is in an
implementation file, the "dependency" is handled by the
pre-linker, which knows enough to only compile one of the
source files which triggered the instantiation, and not
all of them.


True, but only relevant if the header file with the
template in it is not yet stable. I doubt that this is
true for many uses of templates in large scale
applications.


I know of at least one case where it was the reverse: I
preferred a design based on inheritance instead of one based
on templates because of that problem.

Yours,

--
Jean-Marc

Jul 22 '05 #187
David Abrahams <da**@boost-consulting.com> writes:
Hyman Rosen <hy*****@mail.com> writes:
llewelly wrote:
> I'd compile both the export-using version of the code and the
> inclusion-using version of the code with the same compiler. That
> way, compiler issues other than export would not come into play.


Yes. Notice that the highest savings from export should occur for
those templates which are instantiated over the same types in many
compilation units. That is, if you have a dozen files which all use
map<string,int>, the export form should only need to instantiate the
template once, and would thus hopefully be faster.


Why would that be any faster than link-time instantiation (a model
supported by EDG and therefore Comeau)?


Well, it is the only instantiation mechanism currently used
which can benefit easily from export.

Let's look at the 3 instantiation mechanisms used:

- Global mechanism (not really link-time instantiation: the
  template instances are assigned to compilation units, and
  when a compilation unit is recompiled its instances are
  also regenerated).
  Every instance is generated only once.
  Easy to benefit from the reduction in dependencies (if the
  dependencies generated from includes do not trigger a
  recompilation, the prelinker does so).
  Less parsing to do.

- Local mechanism with duplicate avoidance (Sun, aka repository).
  Every instance is generated only once.
  There could be a reduction in dependencies, but it would be
  more difficult to extract than with the global mechanism.
  Less parsing to do.

- Local mechanism without duplicate avoidance (Borland, gcc;
  each compilation unit instantiates everything it needs and
  the linker throws the duplicates away).
  Export is of no use: every instance has to be generated
  for each compilation unit, there will be no reduction in
  dependencies (and handling them automatically would be
  more complicated), and there will be little difference in
  parsing time; the difference could even be in favor of the
  inclusion model.

Yours,

--
Jean-Marc

Jul 22 '05 #188
ka***@gabi-soft.fr writes:
David Abrahams <da**@boost-consulting.com> wrote in message
news:<ub***********@boost-consulting.com>...
> I don't know of any correlation between compiler performance and
> conformity across compilers
Well, I can tell you that the front-end widely acknowledged to be the
most conformant is also the slowest in many of our metaprogram
compilation tests. Coincidence?


The front-end widely acknowledged to be the most conformant is also the
front-end which offers the most options for supporting legacy code.

It's also a front-end which has been designed to be easily ported to a
variety of back-ends.

Both of these could affect speed.


They could, but they don't seem to be the main issue. When I report
pathological performance problems, it usually turns out to be the
result of the implementors having chosen algorithms that don't scale
well (i.e., have poor big-O complexity). Yes, they tell me when they
fix these things.
In fact, I imagine that the latter has a significant negative impact
on compile speeds. (In the same way, g++'s portability often means
that it will not be the fastest compiler on a particular platform.
I doubt it. Metrowerks is blazing in our tests, and it's been widely
ported.

The legacy code support in other compilers is more likely to be an
issue. For example, I happen to know that even in 2-phase lookup
mode they do syntax checking at instantiation time, because they have
to support 1-phase lookup.
Although there are cases where the native compiler has done such a
bad job...)

Also, which compiler did you measure it on? In at least some cases, it
actually generates C code and then invokes the C compiler. While
optimal for portability, this strategy will definitely not result in
the fastest compile times.


We tried it on several different compilers that happen to use the same
front-end, including a recent Comeau and several versions of the
Intel compiler.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Jul 22 '05 #189

"David Abrahams" wrote:
Compilers that require fewer workarounds
get ported [to Boost] much more quickly.

It's more a function of compiler popularity than of its quality.
/Pavel

Jul 22 '05 #190
Jean-Marc Bourguet <jm@bourguet.org> writes:

To be clear, I'll give the names of the instantiation
methods used in C++ Templates, The Complete Guide:
- global mechanism: iterated instantiation.
- local mechanism with duplicate avoidance: queried instantiation.
- local mechanism without duplicate avoidance: greedy instantiation.

The interest of my names is that they emphasize when the
decision is made (for each compilation unit -> local
mechanisms, or for the whole program/library -> global
mechanism) and thus what information is available. C++TTCG's
names are more descriptive.

Yours,

--
Jean-Marc

Jul 22 '05 #191
In article <86************@Zorthluthik.local.bar>, llewelly
<ll*********@xmission.dot.com> writes
Obvious, perhaps, but I think several library implementors provide
std::basic_string<char> and std::basic_string<wchar_t>
pre-instantiated. (The proposed 'extern template' which is not
like export makes this easier, but is not strictly necessary.) So
I don't see export being of any help there. Same with iostreams.

However I think the standard library is full of templates which most
programs instantiate many, many times, with only a few types
making up the majority of instantiations. vector<int>, and such.


Though no compiler I know of has done it, I think it would be possible
for the common template instantiations to be provided by compiler magic.
IOWs I think that std::string and std::wstring could be handled as if
they were built-in types.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Jul 22 '05 #192
Francis Glassborow <fr*****@robinton.demon.co.uk> writes:

|> In article <86************@Zorthluthik.local.bar>, llewelly
|> <ll*********@xmission.dot.com> writes
|> >Obvious, perhaps, but I think several library implementors provide
|> > std::basic_string<char> and std::basic_string<wchar_t>
|> > pre-instantiated. (The proposed 'extern template' which is not
|> > like export makes this easier, but is not strictly necessary.) So
|> > I don't see export being of any help there. Same with iostreams.

|> >However I think the standard library is full of templates which most
|> > programs instantiate many, many times, with only a few types
|> > making up the majority of instantiations. vector<int>, and such.

|> Though no compiler I know of has done it, I think it would be
|> possible for the common template instantiations to be provided by
|> compiler magic. IOWs I think that std::string and std::wstring could
|> be handled as if they were built-in types.

In the context of a discussion of export, it's probably irrelevant
anyway. The implementation of the standard library shouldn't change
much, and techniques like precompiled headers, along with the fact
that standard library implementors are required to use odd names for
things that aren't externally visible, should take care of all of the
issues adequately.

Not that I don't want export, but I want it for things I write (which
aren't always as stable as the standard library), not for the standard
library.

--
James Kanze
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34

Jul 22 '05 #193
"Jeremy Siek" <je*********@gmail.com> wrote in message news:21**************************@posting.google.com...
| CALL FOR PAPERS/PARTICIPATION

| Submissions
|
| Each participant will be expected to develop a position paper
| describing a particular library or category of libraries that is
| lacking in the current C++ standard library and Boost.

here are some ideas that could be discussed.

1. arbitrary precision floats, big_float

2. what would be required to allow std::complex to be instantiable for
user-defined types like big_int and big_float, such that ordinary
functions like exp( complex<big_float> ) would work.
br

Thorsten

Jul 22 '05 #194
tom_usenet <to********@hotmail.com> wrote:
On 19 Aug 2004 21:31:41 -0400, go****@vandevoorde.com (Daveed
Vandevoorde) wrote:
export also allows the distribution of templates in compiled form
(as opposed to source form).
This I would love to hear about. What do compiled templates look like?
I hope commercial pressures don't prevent you from replying.


Not commercial pressures; just lack of time. Sorry about that.

The representation of compiled templates would depend on the
back end. Presumably the nondependent parts would be rather
low-level nodes usually fed to an optimizer/code-generator.
The dependent parts would incorporate a good amount of the
front end's data structures.
Are compiled templates easily decompiled (assuming the file format is
not obscure)?
I think not, though it might depend on the template.
Remember that this is only for functions, member functions,
and static data members. Class templates would not really
be different (except that if a template is only used in an
export template it could move from .h to .c).
How much source information can be thrown away in
compiling them?


All local names and positions can be discarded.
All nondependent constructs can be fully reduced.
After EDG implemented export, Stroustrup once asked what change to
C++ might simplify its implementation without giving up on the separate
compilation aspect of it. I couldn't come up with anything other than the
very drastic notion of making the language 100% modular (i.e., every entity
can be declared in but one place).


That is exactly the same conclusion that I have reached (intuitively
rather than through experience or careful working through of the
problem); separate compilation of templates within the usual C++ TU
model pretty much leads you to two phase name lookup and export.


Actually, two-phase name lookup is not quite necessary.
But two-phase name lookup isn't the really hard part of
export either (it's fairly hard, but not worse than some
other widely implemented C++ features). I could imagine
for example that cross-translation unit access would only
be possible through qualified names and that all lookups
would be done at instantiation time (like many inclusion
models do). It wouldn't drastically reduce the cost of
implementing the export-like mechanism, IMO.

(Note that two-phase name lookup predates export by quite
a bit. ADL was the result of a generalization for the
sake of export, and that affected the details of two-phase
name lookup when export was added to the language. However,
the gist of two-phase name lookup was already described in
D&E back in 1994.)

Daveed

Jul 22 '05 #195
"David B. Held" <dh***@codelogicconsulting.com> wrote:
Daveed Vandevoorde wrote:
> [...]
> I contend that, all other things being equal, export templates are more
> pleasant to work with than the equivalent inclusion templates. That by
> itself is sufficient to cast doubt on your claim that the feature is "broken
> and useless."


What are your comments on N1426, given that you and the rest of EDG are
thoroughly quoted as being against export?


(Sorry for the delay in answering. I have been busy lately.)

I'm not particularly happy about that paper. It makes it look
like EDG supports its points of view, when in fact that isn't
the case. (I agree neither with the technical aspects nor with
the nontechnical arguments of the paper.)

That said, EDG opposed export at its introduction (I wasn't at
EDG at the time, and I was somewhat sympathetic toward export;
obviously I wasn't a compiler writer at all ;-). If I remember
correctly, EDG's main arguments at the time were that the export
proposal was too poorly understood and almost certainly very
hard to implement.

Daveed

Jul 22 '05 #196
Francis Glassborow <fr*****@robinton.demon.co.uk> wrote in message
news:<IH**************@robinton.demon.co.uk>...
In article <d6*************************@posting.google.com> ,
ka***@gabi-soft.fr writes
Because make sees the dozen or so object files as being dependent on
the template header file. If modifying the implementation of the
template updates the timestamp of the header, make will recompile all
of these files. In the case of export, the modification is in an
implementation file, the "dependency" is handled by the pre-linker,
which knows enough to only compile one of the source files which
triggered the instantiation, and not all of them.
True, but only relevant if the header file with the template in it is
not yet stable. I doubt that this is true for many uses of templates
in large scale applications.


It's obviously not true -- in a large scale application, you avoid
templates other than very simple or very standard things (like
std::vector), because you can't afford the dependencies.

But that's circular reasoning: we don't fix the problem, because the
case in question doesn't occur in real code, and the reason it doesn't
occur in real code is because the problem isn't fixed.

I don't know how relevant templates at the application level will be in
the future. Today, they aren't used, because of the dependencies they
introduce. And we do get by without them. However, in an earlier time,
we got by without templates at all, and when I see some of the work of
Andrei and others, I see things which might be relevant in application
level code, if not now, then in the future.

FWIW: I actually had the problem once. It turned out that one of the
"stable" functions was a real performance bottleneck, and had to be
tuned. Tuned, by experimenting with different variations, and running
the results through the profiler. The actual function was called from
exactly one module in the program, but the template was used in about
20. And each time I modified the template, make recompiled all 20
modules. I'll admit that it is an extreme case, but it did actually
occur.

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Jul 22 '05 #197
"Pavel Vozenilek" <pa*************@yahoo.co.uk> writes:
"David Abrahams" wrote:
> Compilers that require fewer workarounds
> get ported [to Boost] much more quickly.
>

Its more function of compiler popularity than its quality.
/Pavel


True. That said, for some compilers there are practically no
workarounds. Porting to those goes very quickly ;-)

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Jul 22 '05 #198
go****@vandevoorde.com (Daveed Vandevoorde) writes:

[...]

| (Note that two-phase name lookup predates export by quite
| a bit. ADL was the result of a generalization for the
| sake of export, and that affected the details of two-phase
| name lookup when export was added to the language. However,
| the gist of two-phase name lookup was already described in
| D&E back in 1994.)

Thanks for hammering this point again. There have been some myths
about two-phase name lookup as a consequence of "export", even from
some "old timer" C++ experts. I would complement your answer by
providing a link to one of the first publicly available mailings
that discussed two-phase name lookup:

http://www.open-std.org/JTC1/SC22/WG...1992/N0209.pdf

That paper has the date "November 1992" and states

Most of the background of this paper is derived from a lengthy
discussion by the authors in May 1992, and subsequent electronic
communication which discussed and reworked the original summary of
that meeting.

--
Gabriel Dos Reis
gd*@integrable-solutions.net

Jul 22 '05 #199
ka***@gabi-soft.fr wrote:
I don't know how relevant templates at the application level will be in
the future. Today, they aren't used, because of the dependencies they
introduce.


That's a pretty categorical claim. I know that it's your experience,
but I doubt that it's universal.

Jul 22 '05 #200
