Bytes IT Community

Boost Workshop at OOPSLA 2004

CALL FOR PAPERS/PARTICIPATION

C++, Boost, and the Future of C++ Libraries
Workshop at OOPSLA
October 24-28, 2004
Vancouver, British Columbia, Canada
http://tinyurl.com/4n5pf
Submissions

Each participant will be expected to develop a position paper
describing a particular library or category of libraries that is
lacking in the current C++ standard library and Boost. The participant
should explain why the library or libraries would advance the state of
C++ programming. Ideally, the paper should sketch the proposed library
interface and concepts. This will be a unique opportunity to critique
and review library proposals. Alternatively, a participant might
describe the strengths and weaknesses of existing libraries and how
they might be modified to fill the need.

Form of Submissions

Submissions should consist of a 3-10 page paper that gives at least
the motivation for and an informal description of the proposal. This
may be augmented by source or other documentation of the proposed
libraries, if available. Preferred form of submission is a PDF file.

Important Dates

Submission deadline for early registration: September 10, 2004
Early Notification of selection: September 15, 2004
OOPSLA early registration deadline: September 16, 2004
OOPSLA conference: October 24-28, 2004

Contact committee oo********@crystalclearsoftware.com

Program Committee
Jeff Garland
Nicolai Josuttis
Kevlin Henney
Jeremy Siek

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]
Jul 22 '05 #1
205 Replies


"Jeremy Siek" <je*********@gmail.com> wrote in message
news:21**************************@posting.google.c om...
CALL FOR PAPERS/PARTICIPATION

C++, Boost, and the Future of C++ Libraries
Workshop at OOPSLA
October 24-28, 2004
Vancouver, British Columbia, Canada
http://tinyurl.com/4n5pf

[snip]

I wonder if the submitters follow this post's trail or I should email
them... anyway, here goes.

I am not sure if I'll ever get around to writing it, so I figured I'd post the
idea here; maybe someone will pursue it. In short, I think a proposal for a
replacement of C++'s preprocessor would be welcome.

Today Boost uses a "preprocessor library", which in turn (please correct me
if my understanding is wrong) relies on a program to generate many big
macros up to a fixed "maximum", to overcome the preprocessor's inability to
deal with a variable number of arguments.

Also, please correct me if I'm wrong (because I haven't really looked deep
into it), but my understanding is that people around Boost see the PP
library as a necessary but unpleasantly-smelling beast that makes things
around it smelly as well. [Reminds me of the Romanian story: there was a guy
called Pepelea (pronounced Peh-Peh-leah) who was poor but had inherited a
beautiful house. A rich man wanted to buy it, and Pepelea sold it on one
condition: that Pepelea owns a nail in the living room's wall, in which he
can hang whatever he wanted. Now when the rich man was having guests and
whatnot, Pepelea would drop by and embarrassingly hang a dirty old coat. Of
course in the end the rich man got so exasperated that he gave Pepelea the
house back for free. Ever since that story, "Pepelea's nail" is referred to
as something like... like what the preprocessor is to the C++ language.]

That would be reason one to create a new C++ preprocessor. (And when I say
"new," that's not like in "yet another standard C++ preprocessor". I have
been happy to see my suggestion on the Boost mailing list followed in that
the WAVE preprocessor was built using Boost's own parser generator library,
Spirit.) What I am talking about now is "a backwards-INcompatible C++ preprocessor
aimed at displacing the existing preprocessor forever and replacing it with
a better one".

If backed by the large Boost community, the new preprocessor could easily
gain popularity and be used in new projects instead of the old one. To avoid
inheriting the past's mistakes, the new preprocessor doesn't need to be
syntax-compatible in any way with the old preprocessor, but only
functionally compatible, in that it can do all that can be done with the
existing preprocessor, only that it has new means to do things safer and
better.

I think that would be great. Because if we all stop coding for a second and
think about it, what's the ugliest scar on C++'s face - what is Pepelea's nail?
Maybe "export" which is so broken and so useless and so abusive that its
implementers have developed Stockholm syndrome during the long years that
took them to implement it? Maybe namespaces that are so badly designed,
you'd think they are inherited from C? I'd say they are good contenders
against each other, but none of them holds a candle to the preprocessor.

So, a proposal for a new preprocessor would be great. Here's a short wish
list:

* Does what the existing one does (although some of those coding patterns
will be discouraged);

* Supports one-time file inclusion and multiple file inclusion, without the
need for guards (yes, there are subtle issues related to that... let's at
least handle a well-defined subset of the cases);

* Allows defining "hygienic" macros - macros that expand to the same text
independently of the context in which they are expanded;

* Allows defining scoped macros - macros visible only within the current
scope;

* Has recursion and possibly iteration;

* Has a simple, clear expansion model (negative examples abound - NOT like
m4, NOT like tex... :o))

* Supports a variable number of arguments. I won't venture into thinking of
fancier support a la Scheme or Dylan, or Java Extender macros.
Andrei

Jul 22 '05 #2

"Andrei Alexandrescu \(See Website for Email\)" wrote:
> ... replacement of C++'s preprocessor...


If you're going to build a better text-substitution layer, or even a
true Lisp-ish macro system (although one which was restricted to
compile-time), why not go further and clean up more of the language via
this new parser? How about embracing and simplifying template
meta-programming with a better template system that is a true
compile-time functional language in its own right with unlimited
recursion, clear error messages, etc. In fact, it may be possible to
unify both this new uber macro system and the template
expansion/meta-programming syntax.

* C++ improvements: Dare to dream, but wear asbestos underpants just
in case.

Jul 22 '05 #3

"Andrei Alexandrescu \(See Website for Email\)" <Se****************@moderncppdesign.com> writes:
"Jeremy Siek" <je*********@gmail.com> wrote in message
news:21**************************@posting.google.c om...
> CALL FOR PAPERS/PARTICIPATION
>
> C++, Boost, and the Future of C++ Libraries
> Workshop at OOPSLA
> October 24-28, 2004
> Vancouver, British Columbia, Canada
> http://tinyurl.com/4n5pf [snip]

> I wonder if the submitters follow this post's trail or I should email
> them... anyway, here goes.
>
> I am not sure if I'll ever get around to writing it, so I said I'd post the
> idea here, maybe someone will pursue it. In short, I think a proposal for a
> replacement of C++'s preprocessor would be, I think, welcome.


Hard to see how this is going to be about C++ libraries, but I'll
follow along.
> Today Boost uses a "preprocessor library", which in turn (please
> correct me if my understanding is wrong) relies on a program to
> generate some many big macros up to a fixed "maximum" to overcome
> preprocessor's incapability to deal with variable number of
> arguments.
That's a pretty jumbled understanding of the situation.

The preprocessor library is a library of headers and macros that allow
you to generate C/C++ code by writing programs built out of macro
invocations. You can see the sample appendix at
http://www.boost-consulting.com/mplbook for a reasonably gentle
introduction.

In the preprocessor library's _implementation_, there are lots of
boilerplate program-generated macros, but that's an implementation
detail that's only needed because so many preprocessors are badly
nonconforming. In fact, the library's maintainer, Paul Mensonides,
has a _much_ more elegantly-implemented PP library
(http://sourceforge.net/projects/chaos-pp/) that has almost no
boilerplate, but it only works on a few compilers (GCC among them).

There is no way to "overcome" the PP's incapability to deal with
variable number of arguments other than by using PP data structures as
described in http://boost-consulting.com/mplbook/preprocessor.html to
pass multiple items as a single macro argument, or by extending the PP
to support variadic macros a la C99, as the committee is poised to do.

The PP library is _often_ used to overcome C++'s inability to support
typesafe function (template)s with variable numbers of arguments, by
writing PP programs that generate overloaded function (template)s.
> Also, please correct me if I'm wrong (because I haven't really
> looked deep into it), but my understanding is that people around
> Boost see the PP library as a necessary but unpleasantly-smelling
> beast that makes things around it smelly as well.
I don't see it that way, although I wish there were ways to avoid
using it in some of the more common cases (variadic template). Maybe
some others do see it like that.
> [Reminds me of the Romanian story: there was a guy called Pepelea
> (pronounced Peh-Peh-leah) who was poor but had inherited a beautiful
> house. A rich man wanted to buy it, and Pepelea sold it on one
> condition: that Pepelea owns a nail in the living room's wall, in
> which he can hang whatever he wanted. Now when the rich man was
> having guests and whatnot, Pepelea would drop by and embarraisingly
> hang a dirty old coat. Of course in the end the rich man got so
> exasperated that he gave Pepelea the house back for free. Ever since
> that story, "Pepelea's nail" is referred to as something
> like... like what the preprocessor is to the C++ language.]
cute.
> That would be reason one to create a new C++ preprocessor. (And when
> I say "new," that's not like in "yet another standard C++
> preprocessor". I have been happy to see my suggestion on the Boost
> mailing list followed in that the WAVE preprocessor was built using
> Boost's own parser generator library, Spirit.) What I am talking now
> is "a backwards-INcompatible C++ preprocessor aimed at displacing
> the existing preprocessor forever and replacing it with a better
> one".
Bjarne's plan for that is to gradually make the capabilities of the
existing PP redundant by introducing features in the core
language... and then, finally, deprecate it.
> If backed by the large Boost community, the new preprocessor could
> easily gain popularity and be used in new projects instead of the
> old one.
I doubt even with Boost backing that the community at large is likely
to easily accept integrating another tool into its build processes.
The big advantage of the C++ PP is that it's built-in... and that's
one of the biggest reasons that the PP _lib_ is better for my purposes
than any of the ad hoc code generators I've written/used in the past.
> To avoid inheriting past's mistakes, the new preprocessor
> doesn't need to be syntax-compatible in any way with the old
> preprocessor, but only functionally compatible, in that it can do
> all that can be done with the existing preprocessor, only that it
> has new means to do things safer and better.
I think Bjarne's approach is the best way to do that sort of
replacement. As long as the PP's functionality is really being
replaced by a textual preprocessor (or a token-wise one as we have
today) it's going to suffer many of the same problems. Much of those
jobs should be filled by a more robust metaprogramming system that's
fully integrated into the language and not just a processing phase.
> I think that would be great. Because it we all stop coding for a
> second and think of it, what's the ugliest scar on C++'s face - what
> is Pepelea's nail? Maybe "export" which is so broken and so useless
> and so abusive that its implementers have developed Stockholm
> syndrome during the long years that took them to implement it?
That's slander ;->. Export could be used to optimize template
metaprograms, for example (compile the templates to executable code
that does instantiations). It may not have been a good idea, but
those who suffered through implementing it now think it has some
potential utility.
> Maybe namespaces that are so badly designed, you'd think they are
> inherited from C?
Wow, I'm impressed; that's going to piss off both the hardcore C _and_
C++ people!

I've never seen a serious proposal for better namespaces, other than
http://boost-consulting.com/writing/qn.html, which seems to have been
generally ignored. Have you got any ideas?
> I'd say they are good contenders against each other, but none of
> them holds a candle to the preprocessor.
>
> So, a proposal for a new preprocessor would be great.


If that's your point, I think it's an interesting one, but somehow I
still don't get how it could be appropriate for a workshop on C++
libraries.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Jul 22 '05 #4

On 11 Aug 2004 16:19:01 -0400, David Abrahams
<da**@boost-consulting.com> wrote:

[snip]
>> I think that would be great. Because it we all stop coding for a
>> second and think of it, what's the ugliest scar on C++'s face - what
>> is Pepelea's nail? Maybe "export" which is so broken and so useless
>> and so abusive that its implementers have developed Stockholm
>> syndrome during the long years that took them to implement it?
>
> That's slander ;->. Export could be used to optimize template
> metaprograms, for example (compile the templates to executable code
> that does instantiations). It may not have been a good idea, but
> those who suffered through implementing it now think it has some
> potential utility.


Has anyone except Comeau actually implemented it? I think it is a
great idea WRT hiding of implementation and probably (I never actually
used this feature) towards eliminating the code bloat typical of
heavily-templated code.
>> (...) what's the ugliest scar on C++'s face - what
>> is Pepelea's nail?


Here I'd have to vote for function throw specs, not export.

--
Bob Hairgrove
No**********@Home.com
Jul 22 '05 #5

"David Abrahams" <da**@boost-consulting.com> wrote in message
news:uz***********@boost-consulting.com...
> "Andrei Alexandrescu \(See Website for Email\)"
>> Today Boost uses a "preprocessor library", which in turn (please
>> correct me if my understanding is wrong) relies on a program to
>> generate some many big macros up to a fixed "maximum" to overcome
>> preprocessor's incapability to deal with variable number of
>> arguments.
> That's a pretty jumbled understanding of the situation.
>
> The preprocessor library is a library of headers and macros that allow
> you to generate C/C++ code by writing programs built out of macro
> invocations. You can see the sample appendix at
> http://www.boost-consulting.com/mplbook for a reasonably gentle
> introduction.


Ok, it's a half jumbled understanding of the situ, coupled with a half
jumbled expression of my half-jumbled understanding :o).

First, I've looked in my boost implementation to see things like
BOOST_PP_REPEAT_1_0 to BOOST_PP_REPEAT_1_256 and then BOOST_PP_REPEAT_2_0 to
BOOST_PP_REPEAT_2_256 and so on. My understanding (which I tried to convey
in my post) is that such macros are generated by a program. That program is
admittedly not part of the library as distributed (I believe it is part of
the maintenance process), but I subjectively consider it a witness that a
more elegant approach would be welcome.

Then I've looked again over the PP library (this time through the link
you've sent), and honestly it reminds me of TeX macro tricks more than any
example of elegant programming. As such, I'd find it hard to defend it with
a straight face, and I am frankly surprised you do. But then I understand
the practical utility, as you point out below.
> Bjarne's plan for that is to gradually make the capabilities of the
> existing PP redundant by introducing features in the core
> language... and then, finally, deprecate it.
It's hard to introduce the ability to define syntactic replacement (which
many people consider useful) in the core language.
> I doubt even with Boost backing that the community at large is likely
> to easily accept integrating another tool into its build processes.
> The big advantage of the C++ PP is that it's built-in... and that's
> one of the biggest reasons that the PP _lib_ is better for my purposes
> than any of the ad hoc code generators I've written/used in the past.
Practicality, and not elegance or suitability, is about the only reason that
I could agree with.
> I think Bjarne's approach is the best way to do that sort of
> replacement. As long as the PP's functionality is really being
> replaced by a textual preprocessor (or a token-wise one as we have
> today) it's going to suffer many of the same problems. Much of those
> jobs should be filled by a more robust metaprogramming system that's
> fully integrated into the language and not just a processing phase.
I think here we talk about different things. One path to pursue is indeed to
provide better means for template programming, and another is to provide
syntactic manipulation. To me, they are different and complementary
techniques.
> That's slander ;->. Export could be used to optimize template
> metaprograms, for example (compile the templates to executable code
> that does instantiations). It may not have been a good idea, but
> those who suffered through implementing it now think it has some
> potential utility.
Sure. Similarly, they discovered that the expensive air filters for the
space shuttle can be used (only) as coffee filters for the team on the
ground :o).
>> Maybe namespaces that are so badly designed, you'd think they are
>> inherited from C?
>
> Wow, I'm impressed; that's going to piss off both the hardcore C _and_
> C++ people!


Heh heh... I knew this is gonna be taken that way :o). What I meant was,
many shortcomings of C++ root in a need for compatibility with C. With
namespaces and export, there's no C to blame :o).
> I've never seen a serious proposal for better namespaces, other than
> http://boost-consulting.com/writing/qn.html, which seems to have been
> generally ignored. Have you got any ideas?


That's a good, solidly motivated doc; I am sorry it does not get the
attention that it deserves.
>> So, a proposal for a new preprocessor would be great.
>
> If that's your point, I think it's an interesting one, but somehow I
> still don't get how it could be appropriate for a workshop on C++
> libraries.


Ok, I'll drop it. May I still bicker about it on the Usenet? :o)
Andrei

Jul 22 '05 #6

"David Abrahams" <da**@boost-consulting.com> wrote in message
news:uz***********@boost-consulting.com...
>> Today Boost uses a "preprocessor library", which in turn (please
>> correct me if my understanding is wrong) relies on a program to
>> generate some many big macros up to a fixed "maximum" to overcome
>> preprocessor's incapability to deal with variable number of
>> arguments.
> That's a pretty jumbled understanding of the situation.
>
> The preprocessor library is a library of headers and macros that allow
> you to generate C/C++ code by writing programs built out of macro
> invocations. You can see the sample appendix at
> http://www.boost-consulting.com/mplbook for a reasonably gentle
> introduction.
>
> In the preprocessor library's _implementation_, there are lots of
> boilerplate program-generated macros, but that's an implementation
> detail that's only needed because so many preprocessors are badly
> nonconforming. In fact, the library's maintainer, Paul Mensonides,
> has a _much_ more elegantly-implemented PP library
> (http://sourceforge.net/projects/chaos-pp/) that has almost no
> boilerplate, but it only works on a few compilers (GCC among them).


Yes, for example, it would be relatively easy to construct a macro that would
(given enough memory) run for 10,000 years generating billions upon trillions of
results. Obviously that isn't useful; I'm merely pointing out that many of the
limits Andrei refers to above aren't really limits.
> There is no way to "overcome" the PP's incapability to deal with
> variable number of arguments other than by using PP data structures as
> described in http://boost-consulting.com/mplbook/preprocessor.html to
> pass multiple items as a single macro argument, or by extending the PP
> to support variadic macros a la C99, as the committee is poised to do.
Incidentally, variadics make for highly efficient data structures--basically
because they can be unrolled. Given variadics, it is possible to tell if there
is at least a certain number of elements in constant time. This allows unrolled
processing in batch.
> The PP library is _often_ used to overcome C++'s inability to support
> typesafe function (template)s with variable numbers of arguments, by
> writing PP programs that generate overloaded function (template)s.
Yes.
>> Also, please correct me if I'm wrong (because I haven't really
>> looked deep into it), but my understanding is that people around
>> Boost see the PP library as a necessary but unpleasantly-smelling
>> beast that makes things around it smelly as well.
>
> I don't see it that way, although I wish there were ways to avoid
> using it in some of the more common cases (variadic template). Maybe
> some others do see it like that.


In some ways, with Chaos more so than Boost PP, preprocessor-based code
generation is very elegant. It should be noted also that well-designed code
generation via the preprocessor typically yields more type-safe code that is
less error-prone and more maintainable than the alternatives.
>> That would be reason one to create a new C++ preprocessor. (And when
>> I say "new," that's not like in "yet another standard C++
>> preprocessor". I have been happy to see my suggestion on the Boost
>> mailing list followed in that the WAVE preprocessor was built using
>> Boost's own parser generator library, Spirit.) What I am talking now
>> is "a backwards-INcompatible C++ preprocessor aimed at displacing
>> the existing preprocessor forever and replacing it with a better
>> one".
>
> Bjarne's plan for that is to gradually make the capabilities of the
> existing PP redundant by introducing features in the core
> language... and then, finally, deprecate it.

> I doubt even with Boost backing that the community at large is likely
> to easily accept integrating another tool into its build processes.
> The big advantage of the C++ PP is that it's built-in... and that's
> one of the biggest reasons that the PP _lib_ is better for my purposes
> than any of the ad hoc code generators I've written/used in the past.
>> To avoid inheriting past's mistakes, the new preprocessor
>> doesn't need to be syntax-compatible in any way with the old
>> preprocessor, but only functionally compatible, in that it can do
>> all that can be done with the existing preprocessor, only that it
>> has new means to do things safer and better.
Safer? In what way? Name clashes? Multiple evaluation?

I have probably written more macros than any other person. Chaos alone has
nearly two thousand *interface* (i.e. not including implementation) macros. The
extent is not so great in the pp-lib, but it is large nonetheless, and the
pp-lib is widely used even if not directly. However, there have been no cases
of name collisions that I am aware of--simply because the library follows simple
guidelines on naming conventions. The fact that users of Boost need not even be
aware of the preprocessor-generation used within Boost is a further testament of
the elegance of the solutions--even in spite of the limitations and hacks
imposed by non-conforming preprocessors.

Consider the recent CUJ with the Matlab article which has unprefixed,
non-all-caps macro definitions *on the cover of the magazine*. Though the code
of which that is a part may well be good overall and serve a useful function,
those macros are simply bad coding--and nothing can prevent bad coding and
wanton disregard for the consequences of actions.

As far as multiple evaluation is concerned, that is a result of viewing a macro
as a function--which it is not. Macros expand to code--they have nothing
specifically to do with function calls or any other language abstraction. Even
today people recommend, for example, that macros that expand to statements (or
similar) should leave out the trailing semicolon so the result looks like a
normal function call. In general, that is a terrible strategy. Macro
invocations are not function calls, do not have the semantics of function calls,
and should not be intentionally made to *act* like function calls. The code
that a macro expands to is the functional result of that macro and should be
documented as such--not just what that code does.
> I think Bjarne's approach is the best way to do that sort of
> replacement. As long as the PP's functionality is really being
> replaced by a textual preprocessor (or a token-wise one as we have
> today) it's going to suffer many of the same problems. Much of those
> jobs should be filled by a more robust metaprogramming system that's
> fully integrated into the language and not just a processing phase.


This is a fundamental issue. It would indeed be great to have a more advanced
preprocessor capable of doing many of the things that Boost PP (or Chaos) is
designed to enable. However, there will *always* be a need to manipulate source
without the semantic attachment of the underlying language's syntactic and
semantic rules. In many cases those rules lead to generation code that is
significantly more obtuse than it actually needs to be because the restrictions
imposed by syntax are fundamentally at odds with the creation of that syntax (or
the equivalent semantic effect). If there was another metaprogramming layer in
the compilation process (which would be fine), the preprocessor would just be
used to generate that also--for the basic reason that the syntax of the
generated language just gets in the way.

The ability to manipulate the core language without that attachment is one of
the preprocessor's greatest strengths. It is also one of the preprocessor's
greatest weaknesses. Just like any other language feature, particularly in C
and C++, it must be used with care because just like any other language feature,
it can be easily abused. The preprocessor enables very elegant and good
solutions when used well.
>> I think that would be great. Because it we all stop coding for a
>> second and think of it, what's the ugliest scar on C++'s face
Without resorting to arbitrary rhetoric such as "macros are evil" what is ugly
about the preprocessor? Certain uses of the preprocessor have in the past
caused (and still cause) problems. However, labeling macros as ugly because
they can be misused is taking the easy way out. It represents a failure to
isolate and understand how those problems surface and how they should be avoided
through specific guidelines (instead of gross generalizations). This has been
happening (and is still ongoing) with the underlying language for some time.
You avoid pitfalls in languages like C and C++ through understanding.
Guidelines themselves are not truly effective unless they are merely reminders
of the reasoning behind the guidelines. Otherwise, they just lead to
brain-dead, in-the-box programming, and inhibit progress.
>> - what
>> is Pepelea's nail? Maybe "export" which is so broken and so useless
>> and so abusive that its implementers have developed Stockholm
>> syndrome during the long years that took them to implement it?
>
> That's slander ;->. Export could be used to optimize template
> metaprograms, for example (compile the templates to executable code
> that does instantiations). It may not have been a good idea, but
> those who suffered through implementing it now think it has some
> potential utility.


I agree.

Regards,
Paul Mensonides

Jul 22 '05 #7

"Paul Mensonides" <le******@comcast.net> wrote in message
news:bv********************@comcast.com...
> In some ways, with Chaos more so than Boost PP, preprocessor-based code
> generation is very elegant. It should be noted also that well-designed
> code generation via the preprocessor typically yields more type-safe
> code that is less error-prone and more maintainable than the
> alternatives.
Would be interesting to see some examples of that around here. I would be
grateful if you posted some.
> Safer? In what way? Name clashes? Multiple evaluation?
>
> I have probably written more macros than any other person.

I think you mean "I have probably written more C++ macros than any other
person." That detail is important. I'm not one to claim having written lots
of macros in any language, and I apologize if the amendment above sounds
snooty. I just think it's reasonable to claim that the C++ preprocessor
compares very unfavorably with many other languages' means for syntactic
abstractions.

I totally agree with everything you wrote, but my point was, I believe,
misunderstood. Yes, "macros are evil" is an easy cop-out. But I never said
that. My post says what amounts to "The C++ preprocessor sucks". It
sucks because it is not a powerful-enough tool. That's why.

So let me restate my point. Macros are great. I love macros. Syntactic
abstractions have their place in any serious language, as you very nicely
point out. And yes, they are distinct from other means of abstraction. And
yes, they can be very useful, and shouldn't be banned just because they can
be misused.

(I'll make a parenthesis here that I think is important. I believe the worst
thing that the C/C++ preprocessor has ever done is to steer an entire huge
community away from the power of syntactic abstractions.)

So, to conclude, my point was that the preprocessor is too primitive a tool
for implementing syntactic abstractions with.

Let's think wishes. You've done a great many good things with the
preprocessor, so you are definitely the one to be asked. What features do
you think would have made it easier for you and your library's clients?
Andrei

Jul 22 '05 #8

"Andrei Alexandrescu (See Website for Email)"
<Se****************@moderncppdesign.com> wrote in message
news:2n************@uni-berlin.de...
> First, I've looked in my boost implementation to see things like
> BOOST_PP_REPEAT_1_0 to BOOST_PP_REPEAT_1_256 and then BOOST_PP_REPEAT_2_0 to
> BOOST_PP_REPEAT_2_256 and so on. My understanding (which I tried to convey
> in my post) is that such macros are generated by a program. That program is
> admittedly not part of the library as distributed (I believe it is part of
> the maintenance process), but I subjectively consider it a witness that a
> more elegant approach would be welcome.
Agreed, that implementation is junk and is the result of poor preprocessor
conformance.
> Then I've looked again over the PP library (this time through the link
> you've sent),
(I believe that the link Dave posted was to Chaos--which is distinct from Boost
Preprocessor.)
> and honestly it reminds me of TeX macro tricks more than any
> example of elegant programming. As such, I'd find it hard to defend it with
> a straight face, and I am frankly surprised you do. But then I understand
> the practical utility, as you point out below.


What it reminds you of is irrelevant. You know virtually nothing about how it
works--you've never taken the time. Without that understanding, you cannot
critique its elegance or lack thereof.
> Bjarne's plan for that is to gradually make the capabilities of the
> existing PP redundant by introducing features in the core
> language... and then, finally, deprecate it.


It's hard to introduce the ability to define syntactic replacement (which
many people consider useful) in the core language.


I agree--but that doesn't mean that we can't take steps in that direction.
> I doubt even with Boost backing that the community at large is likely
> to easily accept integrating another tool into its build processes.
> The big advantage of the C++ PP is that it's built-in... and that's
> one of the biggest reasons that the PP _lib_ is better for my purposes
> than any of the ad hoc code generators I've written/used in the past.


Practicality, and not elegance or suitability, is about the only reason that
I could agree with.


Once again, a quick glance is wholly insufficient. You have not taken the time
to learn the idioms involved. The solutions that Chaos uses *internally* are
indeed far more elegant than you realize. Likewise, the solutions that Chaos
(or Boost PP) engenders through client code is more elegant than you realize.
You simply don't know enough about it to weigh the pros and cons.

Regards,
Paul Mensonides

Jul 22 '05 #9

"Andrei Alexandrescu \(See Website for Email\)" <Se****************@moderncppdesign.com> writes:
"David Abrahams" <da**@boost-consulting.com> wrote in message
news:uz***********@boost-consulting.com...
> "Andrei Alexandrescu \(See Website for Email\)"
> > Today Boost uses a "preprocessor library", which in turn (please
> > correct me if my understanding is wrong) relies on a program to
> > generate so many big macros up to a fixed "maximum" to overcome
> > preprocessor's incapability to deal with variable number of
> > arguments.
> That's a pretty jumbled understanding of the situation.
>
> The preprocessor library is a library of headers and macros that allow
> you to generate C/C++ code by writing programs built out of macro
> invocations. You can see the sample appendix at
> http://www.boost-consulting.com/mplbook for a reasonably gentle
> introduction.


Ok, it's a half jumbled understanding of the situ, coupled with a half
jumbled expression of my half-jumbled understanding :o).

First, I've looked in my boost implementation to see things like
BOOST_PP_REPEAT_1_0 to BOOST_PP_REPEAT_1_256 and then BOOST_PP_REPEAT_2_0 to
BOOST_PP_REPEAT_2_256 and so on. My understanding (which I tried to convey
in my post) is that such macros are generated by a program.


Yes, but as I mentioned none of that is required in std C++.
http://sourceforge.net/projects/chaos-pp/ doesn't use any
program-generated macros.
That program is admittedly not part of the library as distributed (I
believe it is part of the maintenance process), but I subjectively
consider it a witness that a more elegant approach would be welcome.
Yeah, I'd rather be using Chaos everywhere instead of the current
Boost PP lib. Too bad it isn't portable in real life.
Then I've looked again over the PP library (this time through the link
you've sent), and honestly it reminds me of TeX macro tricks more than any
example of elegant programming.
Where are the similarities with TeX macro tricks?
As such, I'd find it hard to defend it with a straight face, and I
am frankly surprised you do.
You're surprised I defend the PP library based on the fact that it
reminds _you_ of TeX macros?

The PP lib provides me with an expressive programming system for code
generation using well-understood functional programming idioms. In
the domain of generating C++ from token fragments, it's hard to
imagine what more one could want other than some syntactic sugar and
scoping.
But then I understand the practical utility, as you point out below.
> Bjarne's plan for that is to gradually make the capabilities of the
> existing PP redundant by introducing features in the core
> language... and then, finally, deprecate it.


It's hard to introduce the ability to define syntactic replacement
(which many people consider useful) in the core language.


Right. I personally think the PP will always have a role. That
said, I think its role could be substantially reduced.
> I doubt even with Boost backing that the community at large is likely
> to easily accept integrating another tool into its build processes.
> The big advantage of the C++ PP is that it's built-in... and that's
> one of the biggest reasons that the PP _lib_ is better for my purposes
> than any of the ad hoc code generators I've written/used in the past.


Practicality, and not elegance or suitability, is about the only
reason that I could agree with.


Practicality in this case is elegance. My users can adjust
code-generation parameters by putting -Dwhatever on their
command-line.

FWIW, I designed a sophisticated purpose-built C++ code generation
language using Python and eventually scrapped it. Ultimately the
programs I'd written were harder to understand than those using the PP
lib. That isn't to say someone else can't do better... I'd like to
see a few ideas if you have any.
> I think Bjarne's approach is the best way to do that sort of
> replacement. As long as the PP's functionality is really being
> replaced by a textual preprocessor (or a token-wise one as we have
> today) it's going to suffer many of the same problems. Much of those
> jobs should be filled by a more robust metaprogramming system that's
> fully integrated into the language and not just a processing phase.


I think here we talk about different things. One path to pursue is
indeed to provide better means for template programming and another
is to provide syntactic manipulation. To me, they are different and
complementary techniques.


Metaprogramming != template programming. In meta-Haskell, they
actually manipulate ASTs in the core language. As I understand the
XTI project, it's going in that sort of direction, though a key link
for metaprogramming is missing.
> > Maybe namespaces that are so badly designed, you'd think they are
> > inherited from C?

>
> Wow, I'm impressed; that's going to piss off both the hardcore C _and_
> C++ people!


Heh heh... I knew this is gonna be taken that way :o). What I meant was,
many shortcomings of C++ root in a need for compatibility with C. With
namespaces and export, there's no C to blame :o).


I don't know about that. Isn't C's inclusion model a big part of the
reason that namespaces are not more like modules?
> I've never seen a serious proposal for better namespaces, other than
> http://boost-consulting.com/writing/qn.html, which seems to have been
> generally ignored. Have you got any ideas?


That's a good doc solidly motivated; I am sorry it does not get the
attention that it deserves.


Thanks. Maybe I should re-submit it.
> > So, a proposal for a new preprocessor would be great.

>
> If that's your point, I think it's an interesting one, but somehow I
> still don't get how it could be appropriate for a workshop on C++
> libraries.


Ok, I'll drop it.


If you have PP library ideas, by all means bring those up.
May I still bicker about it on the Usenet? :o)


It's your dime ;-)

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Jul 22 '05 #10

"Paul Mensonides" <le******@comcast.net> wrote in message
(I believe that the link Dave posted was to Chaos--which is distinct from
Boost Preprocessor.)
I've looked at the existing PP library, not at Chaos.
What it reminds you of is irrelevant. You know virtually nothing about
how it works--you've never taken the time. Without that understanding, you
cannot critique its elegance or lack thereof.
It seems like my comments have annoyed you, and for a good reason. Please
accept my apologies.

FWIW, what I looked at were usage samples, not at how it works (either Boost
PP or Chaos). Those *usage* examples I deemed as wanting.
Once again, a quick glance is wholly insufficient. You have not taken the
time to learn the idioms involved. The solutions that Chaos uses *internally*
are indeed far more elegant than you realize. Likewise, the solutions that
Chaos (or Boost PP) engenders through client code are more elegant than you
realize. You simply don't know enough about it to weigh the pros and cons.


Again, I am sorry if I have caused annoyance. I still believe, however, that
you yourself would be happier, and could provide more abstractions, if
better facilities would be available to you, than what the preprocessor
currently offers. That is what I think would be interesting to discuss.
Andrei

Jul 22 '05 #11

> "Andrei Alexandrescu \(See Website for Email\)" <Se****************@moderncppdesign.com> writes:
Today Boost uses a "preprocessor library", which in turn (please
correct me if my understanding is wrong) relies on a program to
generate so many big macros up to a fixed "maximum" to overcome
preprocessor's incapability to deal with variable number of
arguments.

Since no one else has pointed this out: it does this to overcome the
preprocessor's lack of recursion; it has nothing to do with variable
arguments.

David Abrahams <da**@boost-consulting.com> wrote in message news:<uz***********@boost-consulting.com>...
In the preprocessor library's _implementation_, there are lots of
boilerplate program-generated macros, but that's an implementation
detail that's only needed because so many preprocessors are badly
nonconforming. In fact, the library's maintainer, Paul Mensonides,
has a _much_ more elegantly-implemented PP library
(http://sourceforge.net/projects/chaos-pp/) that has almost no
boilerplate, but it only works on a few compilers (GCC among them).


Unless I'm missing something, that link goes to an empty sourceforge
project. Which is a pity, because I remember seeing some old chaos
code somewhere, and it looked ace.

Daniel

Jul 22 '05 #12

"Andrei Alexandrescu (See Website for Email)"
<Se****************@moderncppdesign.com> wrote
generation via the preprocessor typically yields more type-safe code that is
less error-prone and more maintainable than the alternatives.


Would be interesting to see some examples of that around here. I would be
grateful if you posted some.


Do you mean code examples or just general examples? As far as general
examples go, the inability to manipulate the syntax of the language leads to
either replication (which is error-prone, dramatically increases the number of
maintenance points, and obscures the abstraction represented by the totality of
the replicated code) or the rejection of implementation strategies that would
otherwise be superior. The ability to adapt, to deal with variability, is often
implemented with less type-safe, runtime-based solutions simply because the
metalanguage doesn't allow a simpler way to get from conception to
implementation.

Regarding actual code examples, here's a Chaos-based version of the old
TYPELIST_1, TYPELIST_2, etc., macros. Note that this example uses variadics
which are likely to be added with C++0x. (It is also a case-in-point of why
variadics are important.)

#include <chaos/preprocessor/control/iif.h>
#include <chaos/preprocessor/detection/is_empty.h>
#include <chaos/preprocessor/facilities/encode.h>
#include <chaos/preprocessor/facilities/split.h>
#include <chaos/preprocessor/limits.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define TYPELIST(...) TYPELIST_BYPASS(CHAOS_PP_LIMIT_EXPR, __VA_ARGS__)
#define TYPELIST_BYPASS(s, ...) \
CHAOS_PP_EXPR_S(s)(TYPELIST_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_PREV(s), __VA_ARGS__, \
)) \
/**/
#define TYPELIST_INDIRECT() TYPELIST_I
#define TYPELIST_I(_, s, ...) \
CHAOS_PP_IIF _(CHAOS_PP_IS_EMPTY_NON_FUNCTION(__VA_ARGS__))( \
Loki::NilType, \
Loki::TypeList< \
CHAOS_PP_DECODE _(CHAOS_PP_SPLIT _(0, __VA_ARGS__)), \
CHAOS_PP_EXPR_S _(s)(TYPELIST_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_PREV(s), \
CHAOS_PP_SPLIT _(1, __VA_ARGS__) \
)) \
> \
) \
/**/

The TYPELIST macro takes the place of all of the TYPELIST_x macros (and more) at
one time, has facilities to handle types with open commas (e.g. std::pair<int,
int>), and this is more-or-less doing it by hand in Chaos. If you used
facilities already available, you could do the same with one macro:

#include <chaos/preprocessor/facilities/encode.h>
#include <chaos/preprocessor/lambda/ops.h>
#include <chaos/preprocessor/punctuation/comma.h>
#include <chaos/preprocessor/recursion/expr.h>
#include <chaos/preprocessor/tuple/for_each.h>

#define TYPELIST(...) \
CHAOS_PP_EXPR( \
CHAOS_PP_TUPLE_FOR_EACH( \
CHAOS_PP_LAMBDA(Loki::TypeList<) \
CHAOS_PP_DECODE_(CHAOS_PP_ARG(1)) CHAOS_PP_COMMA_(), \
(__VA_ARGS__) \
) \
Loki::NilType \
CHAOS_PP_TUPLE_FOR_EACH( \
CHAOS_PP_LAMBDA(>), (__VA_ARGS__) \
) \
) \
/**/

This implementation can process up to ~5000 types and there is no list of 5000
macros anywhere in Chaos. (There are also other, more advanced methods capable
of processing trillions upon trillions of types.)

This example is particularly motivating because it is an example of code used by
clients that is itself client to Chaos. In this case, its primary purpose is to
produce facilities for type manipulation (e.g. Loki, MPL, etc.), which raises the
level of abstraction for clients without sacrificing any type safety whatsoever.
Safer? In what way? Name clashes? Multiple evaluation?

I have probably written more macros than any other person.


I think you mean "I have probably written more C++ macros than any other
person." That detail is important.


Yes, it is. I was referring to C and C++ macros.
I'm not one to claim having written lots
of macros in any language, and I apologize if the amendment above sounds
snooty. I just think it's reasonable to claim that the C++ preprocessor
compares very unfavorably with many other languages' means for syntactic
abstractions.
Yes.
I totally agree with everything you wrote, but my point was, I believe,
misunderstood. Yes, "macros are evil" is an easy cop-out. But I never said
that. My post says what amounts to "The C++ preprocessor sucks". It
sucks because it is not a powerful-enough tool. That's why.
It is a powerful enough tool, but it could be easier to employ than it is.
So let me restate my point. Macros are great. I love macros. Syntactic
abstractions have their place in any serious language, as you very nicely
point out. And yes, they are distinct from other means of abstraction. And
yes, they can be very useful, and shouldn't be banned just because they can
be misused.

(I'll make a parenthesis here that I think is important. I believe the worst
thing that the C/C++ preprocessor has ever done is to steer an entire huge
community away from the power of syntactic abstractions.)
That is an *extremely* good point.
So, to conclude, my point was that the preprocessor is too primitive a tool
for implementing syntactic abstractions with.
It could be better, by all means, but it is plenty powerful enough to implement
syntactic abstractions--it is more powerful than most people realize. For
example, the first snippet above is using generalized recursion--recursion
itself can be a shareable, extensible, library facility.
Let's think wishes. You've done a great many good things with the
preprocessor, so you are definitely the one to be asked. What features do
you think would have made it easier for you and your library's clients?


The most fundamental thing would be the ability to separate the first arbitrary
preprocessing token (or whitespace separation) from those that follow it in a
sequence of tokens and be able to classify it in some way (i.e. determine what
kind of token it is and what its value is). The second thing would be the
ability to take a single preprocessing token and deconstruct it into characters.
I can do everything else, but can only do those things in limited ways.

Regards,
Paul Mensonides

Jul 22 '05 #13

"David Abrahams" <da**@boost-consulting.com> wrote in message
news:u1***********@boost-consulting.com...
> First, I've looked in my boost implementation to see things like
> BOOST_PP_REPEAT_1_0 to BOOST_PP_REPEAT_1_256 and then BOOST_PP_REPEAT_2_0 to
> BOOST_PP_REPEAT_2_256 and so on. My understanding (which I tried to convey
> in my post) is that such macros are generated by a program.


Yes, but as I mentioned none of that is required in std C++.
http://sourceforge.net/projects/chaos-pp/ doesn't use any
program-generated macros.


It does use some, but not for algorithmic constructs. E.g. the closest
equivalent (i.e. as feature-lacking as possible) to BOOST_PP_REPEAT under Chaos
is:

#include <chaos/preprocessor/arithmetic/dec.h>
#include <chaos/preprocessor/control/when.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define REPEAT(count, macro, data) \
REPEAT_S(CHAOS_PP_STATE(), count, macro, data) \
/**/
#define REPEAT_S(s, count, macro, data) \
REPEAT_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
count, macro, data \
) \
/**/
#define REPEAT_INDIRECT() REPEAT_I
#define REPEAT_I(_, s, count, macro, data) \
CHAOS_PP_WHEN _(count)( \
CHAOS_PP_EXPR_S _(s)(REPEAT_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
CHAOS_PP_DEC(count), macro, data \
)) \
macro _(s, CHAOS_PP_DEC(count), data) \
) \
/**/

Regards,
Paul Mensonides

Jul 22 '05 #14

"Andrei Alexandrescu (See Website for Email)"
<Se****************@moderncppdesign.com> wrote in message
news:2o************@uni-berlin.de...
"Paul Mensonides" <le******@comcast.net> wrote in message
(I believe that the link Dave posted was to Chaos--which is distinct from
Boost
Preprocessor.)


I've looked at the existing PP library, not at Chaos.


In that case, I agree. Internally, Boost PP is a mess--but a mess caused by
lackluster conformance.
What it reminds you of is irrelevant. You know virtually nothing about
how it works--you've never taken the time. Without that understanding, you
cannot critique its elegance or lack thereof.


It seems like my comments have annoyed you, and for a good reason. Please
accept my apologies.


I don't mind the comments. I do mind preconceptions. With the preprocessor
there are a great many preconceptions about what it can and cannot do.

Regards,
Paul Mensonides

Jul 22 '05 #15

"Paul Mensonides" <le******@comcast.net> writes:
"David Abrahams" <da**@boost-consulting.com> wrote in message
news:u1***********@boost-consulting.com...
> Yes, but as I mentioned none of that is required in std C++.
> http://sourceforge.net/projects/chaos-pp/ doesn't use any
> program-generated macros.


It does use some, but not for algorithmic constructs. E.g. the closest
equivalent (i.e. as feature-lacking as possible) to BOOST_PP_REPEAT under Chaos
is:

#include <chaos/preprocessor/arithmetic/dec.h>
#include <chaos/preprocessor/control/when.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define REPEAT(count, macro, data) \
REPEAT_S(CHAOS_PP_STATE(), count, macro, data) \
/**/
#define REPEAT_S(s, count, macro, data) \
REPEAT_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
count, macro, data \
) \
/**/
#define REPEAT_INDIRECT() REPEAT_I
#define REPEAT_I(_, s, count, macro, data) \
CHAOS_PP_WHEN _(count)( \
CHAOS_PP_EXPR_S _(s)(REPEAT_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
CHAOS_PP_DEC(count), macro, data \
)) \
macro _(s, CHAOS_PP_DEC(count), data) \
) \
/**/


Confused. I don't see anything here that looks like a
program-generated macro.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Jul 22 '05 #16

David Abrahams wrote:
> Yes, but as I mentioned none of that is required in std C++.
> http://sourceforge.net/projects/chaos-pp/ doesn't use any
> program-generated macros.


It does use some, but not for algorithmic constructs. E.g. the
closest equivalent (i.e. as feature-lacking as possible) to
BOOST_PP_REPEAT under Chaos is:

#include <chaos/preprocessor/arithmetic/dec.h>
#include <chaos/preprocessor/control/when.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define REPEAT(count, macro, data) \
REPEAT_S(CHAOS_PP_STATE(), count, macro, data) \
/**/
#define REPEAT_S(s, count, macro, data) \
REPEAT_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
count, macro, data \
) \
/**/
#define REPEAT_INDIRECT() REPEAT_I
#define REPEAT_I(_, s, count, macro, data) \
CHAOS_PP_WHEN _(count)( \
CHAOS_PP_EXPR_S _(s)(REPEAT_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
CHAOS_PP_DEC(count), macro, data \
)) \
macro _(s, CHAOS_PP_DEC(count), data) \
) \
/**/


Confused. I don't see anything here that looks like a
program-generated macro.


That was the point. REPEAT is an algorithmic construct that uses recursion, but
it doesn't require macro repetition. However, some lower-level abstractions,
like recursion itself (e.g. EXPR_S) and saturation arithmetic (e.g. DEC),
require macro repetition. For something like recursion, which is not naturally
present in macro expansion, some form of macro repetition will always be
necessary. The difference is that that repetition is hidden behind an
abstraction and the relationship of N macros need not imply N steps. As
Andrei mentioned, BOOST_PP_REPEAT requires (at least) N macros to repeat N
things. Similarly, BOOST_PP_FOR, BOOST_PP_WHILE, etc., all require (at least) N
macros to perform N steps. That is not the case with Chaos.

#include <chaos/preprocessor/control/inline_when.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define FOR(pred, op, macro, state) \
FOR_S(CHAOS_PP_STATE(), pred, op, macro, state) \
/**/
#define FOR_S(s, pred, op, macro, state) \
FOR_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
pred, op, macro, state\
) \
/**/
#define FOR_INDIRECT() FOR_I
#define FOR_I(_, s, pred, op, macro, state) \
CHAOS_PP_INLINE_WHEN _(pred _(s, state))( \
macro _(s, state) \
CHAOS_PP_EXPR_S _(s)(FOR_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
pred, op, macro, op _(s, state) \
)) \
) \
/**/

#include <chaos/preprocessor/control/iif.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define WHILE(pred, op, state) \
WHILE_S(CHAOS_PP_STATE(), pred, op, state) \
/**/
#define WHILE_S(s, pred, op, state) \
WHILE_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
pred, op, state \
) \
/**/
#define WHILE_INDIRECT() WHILE_I
#define WHILE_I(_, s, pred, op, state) \
CHAOS_PP_IIF _(pred _(s, state))( \
CHAOS_PP_EXPR_S _(s)(WHILE_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
pred, op, op _(s, state) \
)), \
state \
) \
/**/

Regards,
Paul Mensonides

Jul 22 '05 #17

"Daniel R. James" <da****@calamity.org.uk> wrote in message
(http://sourceforge.net/projects/chaos-pp/) that has almost no
boilerplate, but it only works on a few compilers (GCC among them).


Unless I'm missing something, that link goes to an empty sourceforge
project. Which is a pity, because I remember seeing some old chaos
code somewhere, and it looked ace.


The project is definitely not empty, it just hasn't made any "official"
releases.

Regards,
Paul Mensonides

Jul 22 '05 #18

"Paul Mensonides" <le******@comcast.net> wrote in message
news:Iu********************@comcast.com...
Regarding actual code examples, here's a Chaos-based version of the old
TYPELIST_1, TYPELIST_2, etc., macros. Note that this example uses variadics
which are likely to be added with C++0x. (It is also a case-in-point of why
variadics are important.)


Cool. Before continuing the discussion, I have a simple question - how does
your implementation cope with commas in template types, for example:

TYPELIST(vector<int, my_allocator<int> >, vector<float>)

would it correctly create a typelist of two elements? If not, what steps do I
need to take to create such a typelist (aside from a typedef)?
Andrei

Jul 22 '05 #19

"Andrei Alexandrescu \(See Website for Email\)" <Se****************@moderncppdesign.com> wrote in message news:<2n************@uni-berlin.de>...
"Jeremy Siek" <je*********@gmail.com> wrote in message
news:21**************************@posting.google.com...
> CALL FOR PAPERS/PARTICIPATION
>
> C++, Boost, and the Future of C++ Libraries
> Workshop at OOPSLA
> October 24-28, 2004
> Vancouver, British Columbia, Canada
> http://tinyurl.com/4n5pf
[snip]

I wonder if the submitters follow this post's trail or I should email
them... anyway, here goes.

I am not sure if I'll ever get around to writing it, so I said I'd post the
idea here, maybe someone will pursue it. In short, I think a proposal for a
replacement of C++'s preprocessor would be, I think, welcome.


There is only one replacement for the c++ preprocessor which I would
consider truly up to c++'s potential as a competitive language into
the near future: metacode. Full metacode capabilities, not just a
minor update to template capabilities.

By this I mean the capability to walk the parse tree at compile time
and perform transformations in a meta-type safe manner (analagous to
full second-order lambda capability such as System F). Vandevoorde's
metacode seems a good step in this direction, but I really think that
such a proposal must be as complete as possible (and not a library
proposal but a full language extension).

Consider some of the things I've seen talked about recently on the
newsgroups. Injecting a function call after all constructors have
executed is certainly one of those things the language should allow,
but currently we must work 'around' the language by forcing the use of
factories in such cases or leaving the two-stage use to clients (never
a good idea). In terms of functional relationships, though (even in
the presence of exceptions), such a task is a simple injection in
terms of functional orderings of the ctors and we are currently made
to fight with the language definitions enforced by the compiler.

Or consider the place where I currently use the Boost preprocessing
library the most: serialisation of classes. If we were allowed to
walk the member list for a class, walk its inheritance graph, and
stringise the class names (or produce better unique identifiers),
serialisation would be a cakewalk. Unfortunately, it is made much
more difficult and requires a more difficult object definition model
if any of the tasks of serialisation are to be automated by a library.

And of course there is all that control over the exception path
process, pattern generation, and general aspect functional
relationship injection that programmers have been crying about for
years.
Today Boost uses a "preprocessor library", which in turn (please correct me
if my understanding is wrong) relies on a program to generate so many big
macros up to a fixed "maximum" to overcome preprocessor's incapability to
deal with variable number of arguments.
Others have pointed out that this is much more a nonconformance issue
than it is an inherent preprocessor limitation, but I'd like to stress
that a fully recursive code generation system in c++ would not present
such problems.
Also, please correct me if I'm wrong (because I haven't really looked deep
into it), but my understanding is that people around Boost see the PP
library as a necessary but unpleasantly-smelling beast that makes things
around it smelly as well. [Reminds me of the Romanian story: there was a guy
called Pepelea (pronounced Peh-Peh-leah) who was poor but had inherited a
beautiful house. A rich man wanted to buy it, and Pepelea sold it on one
condition: that Pepelea owns a nail in the living room's wall, in which he
can hang whatever he wanted. Now when the rich man was having guests and
whatnot, Pepelea would drop by and embarrassingly hang a dirty old coat. Of
course in the end the rich man got so exasperated that he gave Pepelea the
house back for free. Ever since that story, "Pepelea's nail" is referred to
as something like... like what the preprocessor is to the C++ language.]
You are certainly a storyteller, but I'd give the c++ preprocessor
more credit. It has been the only method of working around certain
limitations (shortfalls in complete second-order lambda
expressiveness) when needed by the coder. Indeed, if you take one of
the coders duties as minimising the updates needed by future feature
revisions of other coders, the preprocessor has always had its place
secure from typed language features. Again, serialisation has been my
major use, but other uses of, for instance, type-to-string
conversion include interception of APIs through lookup in the
import/export lists of the modules. A full metacode capability would
make that obsolete (walking the symbol table should be as easy as
walking the parse tree itself).
That would be reason one to create a new C++ preprocessor. (And when I say
"new," that's not like in "yet another standard C++ preprocessor". I have
been happy to see my suggestion on the Boost mailing list followed in that
the WAVE preprocessor was built using Boost's own parser generator library,
Spirit.) What I am talking now is "a backwards-INcompatible C++ preprocessor
aimed at displacing the existing preprocessor forever and replacing it with
a better one".
Certainly full metacoding capabilities would make this obsolete...
If backed by the large Boost community, the new preprocessor could easily
gain popularity and be used in new projects instead of the old one. To avoid
inheriting past's mistakes, the new preprocessor doesn't need to be
syntax-compatible in any way with the old preprocessor, but only
functionally compatible, in that it can do all that can be done with the
existing preprocessor, only that it has new means to do things safer and
better.

I think that would be great. Because if we all stop coding for a second and
think of it, what's the ugliest scar on C++'s face - what is Pepelea's nail?
Maybe "export" which is so broken and so useless and so abusive that its
implementers have developed Stockholm syndrome during the long years that
took them to implement it? Maybe namespaces that are so badly designed,
you'd think they are inherited from C? I'd say they are good contenders
against each other, but none of them holds a candle to the preprocessor.
If we extend the idea of metacode to all of the translation process,
in other words if the programmer were to have control points inserted
into all parts of the code generation process, then export would never
have been a problem to begin with. Unfortunately, the c++
standardisation community feels that processes like linking are
sacrilege and not to be touched by regulation. If a full parse tree
walk were to include the ability to load other translation units and
manipulate their trees, then we wouldn't find export a 'scar' or in
any way difficile.

You know, if the c++ committee had the cojones to make standardisation
over the full translation process, we might even see dynamic linking a
possibility for the next language revision.
So, a proposal for a new preprocessor would be great. Here's a short wish
list:

* Does what the existing one does (although some of those coding patterns
will be unrecommended);

* Supports one-time file inclusion and multiple file inclusion, without the
need for guards (yes, there are subtle issues related to that... let's at
least handle a well-defined subset of the cases);

* Allows defining "hygienic" macros - macros that expand to the same text
independently of the context in which they are expanded;

* Allows defining scoped macros - macros visible only within the current
scope;

* Has recursion and possibly iteration;

* Has a simple, clear expansion model (negative examples abound - NOT like
m4, NOT like tex... :o))

* Supports a variable number of arguments. I won't venture into thinking of
cooler support a la Scheme or Dylan or Java extender macros.
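The hygiene bullet can be motivated with today's preprocessor. Here is a minimal sketch in standard C++ of the capture problem that hygienic macros would eliminate (the names `SCALED` and `scale_factor` are invented for illustration, not part of any proposal):

```cpp
#include <cassert>

// An unhygienic macro: it refers to 'scale_factor' by name, so whatever
// binding is visible at the *expansion* site wins.
static int scale_factor = 10;
#define SCALED(x) ((x) * scale_factor) // intends the global 'scale_factor'

int demo_capture() {
    int scale_factor = 2; // the caller's local silently captures the macro's name
    return SCALED(3);     // expands to ((3) * scale_factor) -> 6, not the intended 30
}
```

A hygienic macro system would bind `scale_factor` at the definition site, so the caller's shadowing local would be irrelevant.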


I'm not a big fan of textual macros when typed completion gives the
same computational capability. I really think that metacoding,
architecture generation, and all of those great things we come to look
for in AOP and generative intentional programming is what the c++
standards commitee should focus on. With that type of capability, a
"pre"-processor is superfluous.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

galathaea: prankster, fablist, magician, liar

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]
Jul 22 '05 #20

P: n/a
"Andrei Alexandrescu (See Website for Email)"
<Se****************@moderncppdesign.com> wrote in message
news:2o************@uni-berlin.de...
"Paul Mensonides" <le******@comcast.net> wrote in message
news:Iu********************@comcast.com...
> Regarding actual code examples, here's a Chaos-based version of the old
> TYPELIST_1, TYPELIST_2, etc., macros. Note that this example uses
> variadics
> which are likely to be added with C++0x. (It is also a case-in-point of
> why
> variadics are important.)


Cool. Before continuing the discussion, I have a simple question - how does
your implementation cope with commas in template types, for example:

TYPELIST(vector<int, my_allocator<int> >, vector<float>)

would correctly create a typelist of two elements? If not, what steps do I
need to take to create such a typelist (aside from a typedef)?


You'd just have to parenthesize types that contain open commas:

TYPELIST((vector<int, my_allocator<int> >), vector<float>)

The DECODE macro in the example removes parentheses if they exist. E.g.

CHAOS_PP_DECODE(int) // int
CHAOS_PP_DECODE((int)) // int

Using parentheses is, of course, only necessary for types that contain open
commas. (There is also an ENCODE macro that completes the symmetry, but it is
unnecessary.)
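The parenthesize-and-decode idea can be approximated with nothing but standard variadic macros. This is only a sketch, not Chaos itself, and unlike DECODE it requires the parentheses to always be present (`UNPAREN` and `MAKE_DEFAULT` are invented names):

```cpp
#include <cassert>
#include <utility>

// Strip one layer of parentheses: UNPAREN((T)) yields T, commas and all.
#define UNPAREN(x) UNPAREN_I x
#define UNPAREN_I(...) __VA_ARGS__

// A macro taking a single type argument; a type with an open comma must be
// wrapped in parentheses so the preprocessor sees one argument, not two.
#define MAKE_DEFAULT(T) UNPAREN(T)()

int demo_unparen() {
    auto p = MAKE_DEFAULT((std::pair<int, int>)); // parentheses protect the comma
    return p.first + p.second;                    // value-initialized -> 0
}
```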

There are also several other alternatives. It is possible to pass a type
through a system of macros without the system of macros being intrusively
modified (with DECODE or similar). This, in Chaos terminology, is a "rail". It
is a macro invocation that effectively won't expand until some context is
introduced. E.g.

#include <chaos/preprocessor/punctuation/comma.h>

#define A(x) B(x)
#define B(x) C(x)
#define C(x) D(x)
#define D(x) x

A(CHAOS_PP_COMMA())

This will error with too many arguments to B. However, the following disables
evaluation of COMMA() until after the system "returns" from A:

#include <chaos/preprocessor/punctuation/comma.h>
#include <chaos/preprocessor/recursion/rail.h>

#define A(x) B(x)
#define B(x) C(x)
#define C(x) D(x)
#define D(x) x

CHAOS_PP_WALL(A(
CHAOS_PP_UNSAFE_RAIL(CHAOS_PP_COMMA)()
))

(There is also CHAOS_PP_RAIL that is similar, but getting into the difference
here is too complex a subject.)

In any case, the expansion of COMMA is inhibited until it reaches the context
established by WALL. The same thing can be achieved for types non-intrusively.
Chaos has two rail macros designed for this purpose, TYPE and TYPE_II. The
first, TYPE, is the most syntactically clean, but is only available with
variadics:

#include <chaos/preprocessor/facilities/type.h>
#include <chaos/preprocessor/recursion/rail.h>

#define A(x) B(x)
#define B(x) C(x)
#define C(x) D(x)
#define D(x) x

CHAOS_PP_WALL(A(
CHAOS_PP_TYPE(std::pair<int, int>)
))
// std::pair<int, int>

The second, TYPE_II, is more syntactically verbose, but it works even without
variadics, and without counting commas:

#include <chaos/preprocessor/facilities/type.h>
#include <chaos/preprocessor/recursion/rail.h>

#define A(x) B(x)
#define B(x) C(x)
#define C(x) D(x)
#define D(x) x

CHAOS_PP_WALL(A(
CHAOS_PP_TYPE_II(CHAOS_PP_BEGIN std::pair<int, int> CHAOS_PP_END)
))
// std::pair<int, int>

Thus, you *could* make a typelist using rails such as this to protect
open-comma'ed types, but for typelists (which inherently deal with types), it
would be pointless. Rails are more useful when some arbitrary data that you
need to pass around happens to be a type, but doesn't necessarily have to be.

Regards,
Paul Mensonides

Jul 22 '05 #21

"Paul Mensonides" <le******@comcast.net> wrote in message
news:Iu********************@comcast.com...
(from another post)
The DECODE macro in the example removes parentheses if they exist. E.g.

CHAOS_PP_DECODE(int) // int
CHAOS_PP_DECODE((int)) // int
That's what I was hoping for; thanks.

(back to this other post)
Regarding actual code examples, here's a Chaos-based version of the old
TYPELIST_1, TYPELIST_2, etc., macros. Note that this example uses
variadics
which are likely to be added with C++0x. (It is also a case-in-point of
why
variadics are important.)

#include <chaos/preprocessor/control/iif.h>
#include <chaos/preprocessor/detection/is_empty.h>
#include <chaos/preprocessor/facilities/encode.h>
#include <chaos/preprocessor/facilities/split.h>
#include <chaos/preprocessor/limits.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define TYPELIST(...) TYPELIST_BYPASS(CHAOS_PP_LIMIT_EXPR, __VA_ARGS__)
#define TYPELIST_BYPASS(s, ...) \
CHAOS_PP_EXPR_S(s)(TYPELIST_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_PREV(s), __VA_ARGS__, \
)) \
/**/
#define TYPELIST_INDIRECT() TYPELIST_I
#define TYPELIST_I(_, s, ...) \
CHAOS_PP_IIF _(CHAOS_PP_IS_EMPTY_NON_FUNCTION(__VA_ARGS__))( \
Loki::NullType, \
Loki::Typelist< \
CHAOS_PP_DECODE _(CHAOS_PP_SPLIT _(0, __VA_ARGS__)), \
CHAOS_PP_EXPR_S _(s)(TYPELIST_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_PREV(s), \
CHAOS_PP_SPLIT _(1, __VA_ARGS__) \
)) \
> \
) \
/**/


I am sure I will develop a lot more appreciation for this solution once I
will fully understand all of the clever techniques and idioms used.

For now, I hope you will agree with me that the above fosters learning yet
another programming style, which is different than straight programming,
template programming, or MPL-based programming.

I would also like to compare the solution above with the "imaginary" one
that I have in mind as a reference. It uses LISP-macros-like artifacts and a
few syntactic accoutrements.

$define TYPELIST() { Loki::NullType }
$define TYPELIST(head $rest more) {
Loki::Typelist< head, TYPELIST(more) >
}

About all that needs to be explained is that "$rest name" binds name to
whatever other comma-separated arguments follow, if any, and that the
top-level { and } are removed when creating the macro.
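Whichever macro system produces it, the target expansion is fixed. With minimal stand-ins for Loki's types (sketched here rather than pulled from the real library headers), the intent of `TYPELIST(int, double)` can be written out and checked:

```cpp
#include <cassert>
#include <type_traits>

// Minimal stand-ins for Loki's list types, for illustration only.
struct NullType {};
template <class H, class T> struct Typelist { typedef H Head; typedef T Tail; };

// What TYPELIST(int, double) is meant to expand to, written out by hand:
typedef Typelist<int, Typelist<double, NullType> > Expected;

static_assert(std::is_same<Expected::Head, int>::value, "first element");
static_assert(std::is_same<Expected::Tail::Head, double>::value, "second element");
static_assert(std::is_same<Expected::Tail::Tail, NullType>::value, "NullType-terminated");
```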

If you would argue that your version above is more or as elegant as this
one, we have irreducible opinions. I consider your version drowning in
details that have nothing to do with the task at hand, but with handling the
ways in which the preprocessor is inadequate for the task at hand. Same
opinion goes for the other version below:
#include <chaos/preprocessor/facilities/encode.h>
#include <chaos/preprocessor/lambda/ops.h>
#include <chaos/preprocessor/punctuation/comma.h>
#include <chaos/preprocessor/recursion/expr.h>
#include <chaos/preprocessor/tuple/for_each.h>

#define TYPELIST(...) \
CHAOS_PP_EXPR( \
CHAOS_PP_TUPLE_FOR_EACH( \
CHAOS_PP_LAMBDA(Loki::Typelist<) \
CHAOS_PP_DECODE_(CHAOS_PP_ARG(1)) CHAOS_PP_COMMA_(), \
(__VA_ARGS__) \
) \
Loki::NullType \
CHAOS_PP_TUPLE_FOR_EACH( \
CHAOS_PP_LAMBDA(>), (__VA_ARGS__) \
) \
) \
/**/
This implementation can process up to ~5000 types and there is no list of
5000
macros anywhere in Chaos. (There are also other, more advanced methods
capable
of processing trillions upon trillions of types.)


I guess you have something that increases with the logarithm of that number,
is that correct?
Let's think wishes. You've done a great many good things with the
preprocessor, so you are definitely the one to be asked. What features
do
you think would have made it easier for you and your library's clients?


The most fundamental thing would be the ability to separate the first
arbitrary
preprocessing token (or whitespace separation) from those that follow it
in a
sequence of tokens and be able to classify it in some way (i.e. determine
what
kind of token it is and what its value is). The second thing would be the
ability to take a single preprocessing token and deconstruct it into
characters.
I can do everything else, but can only do those things in limited ways.


I understand the first desideratum, but not the second. What would the
second thing be beneficial for?
Andrei

Jul 22 '05 #22

Andrei Alexandrescu (See Website for Email) wrote:
I am sure I will develop a lot more appreciation for this solution
once I will fully understand all of the clever techniques and idioms
used.
Yes.
For now, I hope you will agree with me that the above fosters
learning yet another programming style, which is different than
straight programming, template programming, or MPL-based programming.
Definitely. It is a wholly different language.
I would also like to compare the solution above with the "imaginary"
one that I have in mind as a reference. It uses LISP-macros-like
artifacts and a few syntactic accoutrements.

$define TYPELIST() { Loki::NullType }
$define TYPELIST(head $rest more) {
Loki::Typelist< head, TYPELIST(more) >
}

About all that needs to be explained is that "$rest name" binds name
to whatever other comma-separated arguments follow, if any, and that
the top-level { and } are removed when creating the macro.
So, for the above you need (basically) two things: overloading on number of
arguments and recursion. Both of those things are already indirectly possible
(with the qualification that overloading on number of arguments is only possible
with variadics). That isn't to say that those facilities wouldn't be useful
features of the preprocessor, because they would. I'm merely referring to those
things which can be done versus those things which cannot with the preprocessor
as it currently exists. I'm concerned more with functionality than I am with
syntactic cleanliness.
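The overloading-on-argument-count that variadics enable can be sketched in standard C++; the trick is to let the arguments themselves shift the right count into place (macro names invented for illustration):

```cpp
#include <cassert>

// Count 1-3 arguments: the arguments displace the literals so that 'n' lands
// on the actual count. The trailing ~ keeps the final ... non-empty.
#define COUNT_ARGS(...) COUNT_ARGS_I(__VA_ARGS__, 3, 2, 1, ~)
#define COUNT_ARGS_I(a, b, c, n, ...) n

// "Overload" ADD on its argument count by pasting the count onto the name.
#define ADD(...) ADD_DISPATCH(COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
#define ADD_DISPATCH(n) ADD_DISPATCH_I(n) // extra level so n expands before ##
#define ADD_DISPATCH_I(n) ADD_ ## n
#define ADD_1(a) (a)
#define ADD_2(a, b) ((a) + (b))
#define ADD_3(a, b, c) ((a) + (b) + (c))

int demo_overload() { return ADD(1) + ADD(1, 2) + ADD(1, 2, 3); } // 1 + 3 + 6 = 10
```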
If you would argue that your version above is more or as elegant as
this one, we have irreducible opinions.
The direct "imaginary" version is obviously more elegant.
I consider your version
drowning in details that have nothing to do with the task at hand,
but with handling the ways in which the preprocessor is inadequate
for the task at hand. Same opinion goes for the other version below:


But the preprocessor *is* adequate for the task. It just isn't as syntactically
clean as you'd like it to be.
This implementation can process up to ~5000 types and there is no
list of 5000
macros anywhere in Chaos. (There are also other, more advanced
methods capable
of processing trillions upon trillions of types.)


I guess you have something that increases with the logarithm if that
number, is that correct?


Exponential structure, yes. A super-reduction of the idea is this:

#define A(x) B(B(x))
#define B(x) C(C(x))
#define C(x) x

Here, the x argument gets scanned for expansion with a base-2 exponential. With
about 25 macros you're already into millions of scans. Each of those scans can
be an arbitrary computational step.
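The same base-2 pattern can be made visible by duplicating tokens instead of scans. This only illustrates the growth rate; in Chaos the doubling multiplies expansion *scans*, each of which can drive a computational step (macro names invented):

```cpp
#include <cassert>

// Each level doubles its argument, so n levels produce 2^n copies.
#define DOUBLE_0(x) x x
#define DOUBLE_1(x) DOUBLE_0(x) DOUBLE_0(x)
#define DOUBLE_2(x) DOUBLE_1(x) DOUBLE_1(x)

// Stringize after expansion so the result can be inspected at run time.
#define STRINGIZE(x) STRINGIZE_I(x)
#define STRINGIZE_I(...) #__VA_ARGS__

int demo_growth() {
    const char* s = STRINGIZE(DOUBLE_2(a)); // 3 levels -> 2^3 copies of 'a'
    int copies = 0;
    for (; *s; ++s) if (*s == 'a') ++copies;
    return copies; // 8
}
```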
The most fundamental thing would be the ability to separate the first
arbitrary
preprocessing token (or whitespace separation) from those that
follow it in a
sequence of tokens and be able to classify it in some way (i.e.
determine what
kind of token it is and what its value is). The second thing would
be the ability to take a single preprocessing token and deconstruct
it into characters.
I can do everything else, but can only do those things in limited
ways.


I understand the first desideratum, but not the second. What would
the second thing be beneficial for?


Identifier and number processing primarily, but also string and character
literals. Given those two things alone you could write a C++ interpreter with
the preprocessor--or, much more simply, you could trivially write your imaginary
example above. Speaking of which, you can already write interpreters that get
close. The one thing that you cannot do is get hold of arbitrary
preprocessing tokens. They would have to be quoted in some way.

Regards,
Paul Mensonides

Jul 22 '05 #23

"Paul Mensonides" <le******@comcast.net> wrote in message news:<CN********************@comcast.com>...
"Andrei Alexandrescu (See Website for Email)"
<Se****************@moderncppdesign.com> wrote in message
news:2o************@uni-berlin.de...
> "Paul Mensonides" <le******@comcast.net> wrote in message
> > (I believe that the link Dave posted was to Chaos--which is distinct from
> > Boost
> > Preprocessor.)

>
> I've looked at the existing PP library, not at Chaos.


In that case, I agree. Internally, Boost PP is a mess--but a mess caused by
lackluster conformance.


I think the actual value of a library can be determined as a
difference between the "mess" it gets as its input, and the "mess" (if
some still remains) its user gets on its output. In this regard, IMO,
it's difficult to overestimate the value of the Boost PP library, no
matter how messy its implementation might be.

(Just a thought from one of the recently converted former PP-haters)

Regards,
Arkadiy

Jul 22 '05 #24

"Andrei Alexandrescu wrote:
[...]
Maybe "export" which is so broken and so useless and so abusive that its
implementers have developed Stockholm syndrome during the long years that
took them to implement it?


How is "export" useless and broken?

Have you used it for any project? I find it very pleasant
to work with in practice.

Daveed

Jul 22 '05 #25

"Paul Mensonides" <le******@comcast.net> wrote in message
news:ct********************@comcast.com...
For now, I hope you will agree with me that the above fosters
learning yet another programming style, which is different than
straight programming, template programming, or MPL-based programming.
Definitely. It is a wholly different language.


Now it only remains for me to convince you that that's a disadvantage :o).
So, for the above you need (basically) two things: overloading on number
of
arguments and recursion. Both of those things are already indirectly
possible
(with the qualification that overloading on number of arguments is only
possible
with variadics). That isn't to say that those facilities wouldn't be
useful
features of the preprocessor, because they would. I'm merely referring to
those
things which can be done versus those things which cannot with the
preprocessor
as it currently exists. I'm concerned more with functionality than I am
with
syntactic cleanliness.


I disagree that it's only syntactic cleanliness. Lack of syntactic cleanliness is
the CHAOS_PP_ that you need to prepend to most of your library's symbols.
But let me pull the code again:

#define REPEAT(count, macro, data) \
REPEAT_S(CHAOS_PP_STATE(), count, macro, data) \
/**/
#define REPEAT_S(s, count, macro, data) \
REPEAT_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
count, macro, data \
) \
/**/
#define REPEAT_INDIRECT() REPEAT_I
#define REPEAT_I(_, s, count, macro, data) \
CHAOS_PP_WHEN _(count)( \
CHAOS_PP_EXPR_S _(s)(REPEAT_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
CHAOS_PP_DEC(count), macro, data \
)) \
macro _(s, CHAOS_PP_DEC(count), data) \
) \
/**/

As far as I understand, REPEAT, REPEAT_S, REPEAT_INDIRECT, REPEAT_I, and the
out-of-sight CHAOS_PP_STATE, CHAOS_PP_OBSTRUCT, CHAOS_PP_EXPR_S are dealing
with the preprocessor alone and have zero relevance to the task. The others
implement an idiom for looping that I'm sure one can learn, but is far from
familiar to a C++ programmer. To say that that's just a syntactic
cleanliness thing is a bit of a stretch IMHO. By the same argument, any
Turing complete language will do at the cost of "some" syntactic
cleanliness.
I consider your version
drowning in details that have nothing to do with the task at hand,
but with handling the ways in which the preprocessor is inadequate
for the task at hand. Same opinion goes for the other version below:


But the preprocessor *is* adequate for the task. It just isn't as
syntactically
clean as you'd like it to be.


I maintain my opinion that we're talking about more than syntactic
cleanliness here. I didn't say the preprocessor is "incapable" for the task.
But I do believe (and your code strengthened my belief) that it is
"inadequate". Now I looked on www.m-w.com and I saw that inadequate means "
: not adequate : INSUFFICIENT; also : not capable " and that adequate means
"sufficient for a specific requirement" and "lawfully and reasonably
sufficient". I guess I meant it as a negation of the last meaning, and even
that is a bit too strong. Obviously the preprocessor is "capable", because
hey, there's the code, but it's not, let me rephrase - very "fit" for the
task.
I guess you have something that increases with the logarithm if that
number, is that correct?


Exponential structure, yes. A super-reduction of the idea is this:

#define A(x) B(B(x))
#define B(x) C(C(x))
#define C(x) x

Here, the x argument gets scanned for expansion with a base-2 exponential.
With
about 25 macros you're already into millions of scans. Each of those
scans can
be an arbitrary computational step.


Wouldn't it be nicer if you just had one mechanism (true recursion or
iteration) that does it all in one shot?
Andrei

Jul 22 '05 #26


"Daveed Vandevoorde" <go****@vandevoorde.com> wrote in message
news:52**************************@posting.google.com...
"Andrei Alexandrescu wrote:
[...]
> Maybe "export" which is so broken and so useless and so abusive that its > implementers have developed Stockholm syndrome during the long years that > took them to implement it?
How is "export" useless and broken?

Have you used it for any project? I find it very pleasant
to work with in practice.

From my readings on export, the benefits are supposed to be:


1) avoid pollution of the name space with the names involved in the
implementation details of the template, sometimes called "code hygiene"

2) template source code hiding

3) faster compilation

Examining each of these in turn:

1) Isn't this what C++ namespaces are for?

2) Given the ease with which Java .class files are "decompiled" back into
source code, and the fact that precompiled templates will necessarily
contain even more semantic info than .class files, it is hard to see how
exported templates offer secure hiding of template implementations. It is
not analogous to the problem of "turning hamburger back into a cow" that
object file decompilers have. While some "security through obscurity" may be
achieved by not documenting the file format, if the particular compiler
implementation is popular, there surely will appear some tool to do it.

3) The faster compilation is theoretically based on the idea that the
template implementation doesn't need to be rescanned and reparsed every time
it is #include'd. However, many modern C++ compilers already support
"precompiled headers", which already provide just that capability without
export at all. I'd be happy to accept a compilation speed benchmark
challenge of Digital Mars DMC++ with precompiled headers vs an export
implementation.

I look at export as a cost/benefit issue. What are the benefits, and what
are the costs? The benefits, as discussed above, are not demonstrated to be
significant. The cost, however, is enormous - 2 to 3 man years of
implementation effort, which means that other, more desirable, features
would necessarily get deferred/delayed.

What export does is attempt to graft some import model semantics onto the
inclusion model semantics. The two are fundamentally at odds, hence all the
complicated rules and implementation effort. The D Programming Language
simply abandons the inclusion model semantics completely, and goes instead
with true imported modules. This means that exported templates in D are
there "for free", i.e. they involve no extra implementation effort and no
strange rules. And after using them for a while, yes it is very pleasant to
be able to do:

----- foo.d ----
template Foo(T) { T x; }
----- bar.d ----
import foo;

foo.Foo!(int); // instantiate template Foo with 'int' type
----------------

-Walter
www.digitalmars.com free C/C++/D compilers
Jul 22 '05 #27

Walter wrote:
From my readings on export, the benefits are supposed to be:


The benefits of export are the same as the benefits of
separate compilation of non-templated code. That is, for
normal code, I can write

In joe.h:
struct joe { void frob(); double gargle(double); };
In joe.c:
namespace { void fiddle() { } double grok() { return 3.7; } }
void joe::frob() { fiddle(); }
double joe::gargle(double d) { return d + grok(); }

And users of the joe class only include joe.h, and never
have to worry about joe.c. Without export, if joe were a
template class, then every compilation unit which uses a
method of joe would have to include the implementation of
those methods bodily. This constrains the implementation;
for example, that anonymous namespace wouldn't work. With
export, users include the header file and they are done.
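For contrast, here is roughly what the template analogue of `joe` is forced into without export: everything below, helper included, must sit in joe.h where every user can see it (an adapted sketch of the example above, with integer values to keep it checkable):

```cpp
#include <cassert>

// Under the inclusion model all of this must live in joe.h: the helper can
// no longer hide in an anonymous namespace in joe.c, because every
// translation unit that instantiates joe<T> needs the full definitions.
namespace joe_detail { inline int grok() { return 3; } }

template <typename T>
struct joe {
    int gargle(int d) { return d + joe_detail::grok(); } // body exposed to all users
};

int demo_inclusion() {
    joe<int> j;
    return j.gargle(1); // 4
}
```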

Jul 22 '05 #28

On 17 Aug 2004 18:03:15 -0400, "Walter"
<wa****@digitalmars.nospamm.com> wrote:

"Daveed Vandevoorde" <go****@vandevoorde.com> wrote in message
news:52**************************@posting.google.com...
"Andrei Alexandrescu wrote:
[...]
> Maybe "export" which is so broken and so useless and so abusive that its
> implementers have developed Stockholm syndrome during the long years that
> took them to implement it?
How is "export" useless and broken?

Have you used it for any project? I find it very pleasant
to work with in practice.

From my readings on export, the benefits are supposed to be:


1) avoid pollution of the name space with the names involved in the
implementation details of the template, sometimes called "code hygiene"

2) template source code hiding

3) faster compilation


4) avoid pollution of the lookup context in which the template
definition exists with names from the instantiation context, except
where these are required (e.g. dependent names).

5) reduce dependencies
Examining each of these in turn:
Hmm, this is all rehashing I think.
1) Isn't this what C++ namespaces are for?
That ignores macros and argument dependent lookup, which transcend
namespaces (or rather operate in a slightly unpredictable set of
namespaces in the case of the latter).
2) Given the ease with which Java .class files are "decompiled" back into
source code, and the fact that precompiled templates will necessarilly
contain even more semantic info than .class files, it is hard to see how
exported templates offer secure hiding of template implementations. It is
not analagous to the problem of "turning hamburger back into a cow" that
object file decompilers have. While some "security through obscurity" may be
achieved by not documenting the file format, if the particular compiler
implementation is popular, there surely will appear some tool to do it.
Precompiled templates don't need more semantic information than class
files. In particular, all code involving non-dependent names can be
fully compiled, or at the very least, the names can be removed from
the precompiled template file. In other words, the file format might
consist of a combination of ordinary object code intermingled with
other more detailed stuff.

I believe that EDG may be working on something related to this, but
they're keeping fairly schtum about it so I don't know the details.
3) The faster compilation is theoretically based on the idea that the
template implementation doesn't need to be rescanned and reparsed every time
it is #include'd. However, many modern C++ compilers already support
"precompiled headers", which already provide just that capability without
export at all. I'd be happy to accept a compilation speed benchmark
challenge of Digital Mars DMC++ with precompiled headers vs an export
implementation.
The compilation speed advantages also come from dependency reductions.
If template definitions are modified, only the template
specializations need to be recompiled. If the instantiation context
(which I believe only consists of all extern names) is saved in each
case, then the instantiation can be recompiled without having to
recompile the whole TU containing the implicit template instantiation.
I look at export as a cost/benefit issue. What are the benefits, and what
are the costs? The benefits, as discussed above, are not demonstrated to be
significant. The cost, however, is enormous - 2 to 3 man years of
implementation effort, which means that other, more desirable, features
would necessarily get deferred/delayed.

What export does is attempt to graft some import model semantics onto the
inclusion model semantics. The two are fundamentally at odds, hence all the
complicated rules and implementation effort. The D Programming Language
simply abandons the inclusion model semantics completely, and goes instead
with true imported modules. This means that exported templates in D are
there "for free", i.e. they involve no extra implementation effort and no
strange rules. And after using them for a while, yes it is very pleasant to
be able to do:

----- foo.d ----
template Foo(T) { T x; }
----- bar.d ----
import foo;

foo.Foo!(int); // instantiate template Foo with 'int' type
----------------


It seems to me that there were three alternatives in C++.

1. Don't support any kind of model except the inclusion one. If this
is done, two-phase name lookup should have been dropped as well, since
it is confusing at best, and only really necessary if export is to be
supported. It catches some errors earlier, but at some expense to
programmers. The "typename" and "template" disambiguators could also
blissfully be dropped.

2. Add module support to C++. This obviously is as large a proposal as
export, but clearly #include is unhelpful in a language as complex as
C++; really you just want to import extern names from particular
namespaces, not textually include a whole file.

3. Support separate compilation of templates in some form, without
modules. If you do this, I think you pretty much end up with two phase
name lookup and export (indicating that it isn't broken).

The committee rejected 1, I doubt anyone suggested 2, so 3 was the
remaining choice.
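The `typename` disambiguator mentioned under option 1 exists precisely because of two-phase lookup: in phase one, before the template is instantiated, the compiler must be told which dependent names are types. A minimal illustration (the types are invented):

```cpp
#include <cassert>

struct has_type { typedef int type; enum { value = 7 }; };

template <typename T>
int read_value() {
    typename T::type x = T::value; // 'typename': T::type is a dependent type name
    return x;
}

int demo_two_phase() { return read_value<has_type>(); } // 7
```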

Personally, I might well have gone with 1; templates are complicated
enough, and two-phase name lookup and export have unnecessarily made
them much more complex. On the other hand, 1 doesn't provide the
benefits that export does provide.

So, ignoring implementation difficulty, I think export does just win
as a useful feature. With the implementation difficulty, it's not so
clear.

Tom

Jul 22 '05 #29

"Daveed Vandevoorde" <go****@vandevoorde.com> wrote in message
news:52**************************@posting.google.com...
"Andrei Alexandrescu wrote:
[...]
Maybe "export" which is so broken and so useless and so abusive that its
implementers have developed Stockholm syndrome during the long years
that
took them to implement it?


How is "export" useless and broken?

Have you used it for any project? I find it very pleasant
to work with in practice.


Haven't used export, and not because I didn't wanna.

[Whom do you think I referred to when mentioning the Stockholm syndrome?
:o)]

I'd say, a good feature, like a good business idea, can be explained in a
few words. What is the few-words good explanation of export? (I *am*
interested.)

In addition, a good programming language feature does what it was intended
to do (plus some other neat things :o)), and can be reasonably implemented.

I think we all agree "export" fell the last test.

Does it do what it was intended to do? (Again, I *am* interested.)

A summary of what's the deal with export would be of great help to at least
myself, so I'd be indebted to anyone who'd give me one. For full disclosure,
my current perception is:

1. It's hard to give a good account of what export does in a few words, at
least an account that's impressive.

2. export failed horribly at doing what it was initially supposed to do. I
believe what it was supposed to do was true (not "when"s and "cough"s and
"um"s) separate compilation of templates. Admittedly, gaffes in other areas
of language design are at fault for that failure. Correct me if I'm wrong.

3. export failed miserably at being reasonably easy to implement.

Combined with 1 and 2, I can only say: at the very best, export is a Pyrrhic
victory.
Andrei

Jul 22 '05 #30

"Hyman Rosen" <hy*****@mail.com> wrote in message news:10***************@master.nyc.kbcfp.com...
| Walter wrote:
| From my readings on export, the benefits are supposed to be:
|
| The benefits of export are the same as the benefits of
| separate compilation of non-templated code.

| With
| export, users include the header file and they are done.

OK, but with separate compilation we get faster compilation. How much faster will/can the export version be?

Currently it is also seriously tedious to implement class template member functions outside the class. I hope experience with
export can help promote class namespaces as described by Carl Daniel (see
http://www.open-std.org/jtc1/sc22/wg...2003/n1420.pdf ).

br

Thorsten

Jul 22 '05 #31


"Hyman Rosen" <hy*****@mail.com> wrote in message
news:10***************@master.nyc.kbcfp.com...
The benefits of export are the same as the benefits of
separate compilation of non-templated code. That is, for
normal code, I can write

In joe.h:
struct joe { void frob(); double gargle(double); };
In joe.c:
namespace { void fiddle() { } double grok() { return 3.7; } }
void joe::frob() { fiddle(); }
double joe::gargle(double d) { return d + grok(); }

And users of the joe class only include joe.h, and never
have to worry about joe.c.
They would for templates, since the compiler will need to precompile it at
some point. You'd still have to put a dependency on joe.c in the makefile,
etc., since there's now an order to compiling the source files. Furthermore,
there's no indication to the compiler that the template implementation is in
joe.c, so some sort of cross reference would need building or it'd need to
be manually specified. That's all doable, of course, and is not that big an
issue, but I wished to point out that it isn't quite as simple as object
files are. A similar procedure is necessary for precompiled headers.
Without export, if joe were a
template class, then every compilation unit which uses a
method of joe would have to include the implementation of
those methods bodily.
Yes, that's right. But just what are the benefits of separate compilation?
They are the 3 I mentioned. There is no semantic benefit that namespaces
can't address. (The old problem of needing to separate things because of
insufficient memory for compilation has faded away.)
This constrains the implementation;
for example, that anonymous namespace wouldn't work.
I don't understand why namespaces wouldn't do the job. Isn't that kind of
problem exactly what namespaces were designed to solve?
With export, users include the header file and they are done.


So, we have, for the user:
export template foo ...
v.s.
#include "foo_implementation.h"

and they're done in either case. Sure, the former is slightly prettier, but
if we're going to overhaul C++ for aesthetic appeal, I'd do a lot of things
that are a LOT easier to implement before that one <g>.
Jul 22 '05 #32


"tom_usenet" <to********@hotmail.com> wrote in message
news:9k********************************@4ax.com...
On 17 Aug 2004 18:03:15 -0400, "Walter"
<wa****@digitalmars.nospamm.com> wrote:
From my readings on export, the benefits are supposed to be:
1) avoid pollution of the name space with the names involved in the
implementation details of the template, sometimes called "code hygiene"

2) template source code hiding

3) faster compilation


4) avoid pollution of the lookup context in which the template
definition exists with names from the instantiation context, except
where these are required (e.g. dependent names).


Shouldn't namespaces cover this? That's what they're for.
5) reduce dependencies
I think that is another facet of 1 and 4.
Examining each of these in turn:


Hmm, this is all rehashing I think.
1) Isn't this what C++ namespaces are for?


That ignores macros


I'll concede that it can help with macros, though I suggest that the
problems with macros remain and would be far better addressed with things
like scoped macros. Export is a particularly backwards way to solve problems
with the preprocessor, sort of like fixing rust by putting duct tape over it
<g>. Good practice with macros is to treat them all as having potentially
global effect.
and argument dependent lookup, which transcend
namespaces (or rather operate in a slightly unpredictable set of
namespaces in the case of the latter).
I don't see how this would be a problem.

2) Given the ease with which Java .class files are "decompiled" back into
source code, and the fact that precompiled templates will necessarily
contain even more semantic info than .class files, it is hard to see how
exported templates offer secure hiding of template implementations. It is
not analogous to the problem of "turning hamburger back into a cow" that
object file decompilers have. While some "security through obscurity" may be
achieved by not documenting the file format, if the particular compiler
implementation is popular, a tool to do it will surely appear.


Precompiled templates don't need more semantic information than class
files. In particular, all code involving non-dependent names can be
fully compiled,


There's very little of that in templates, otherwise, they wouldn't need to
be templates. Realistically, I just don't see compiler vendors trying to mix
object code with syntax trees - the implementation cost is very high and the
benefit for a typical template is nil.
or at the very least, the names can be removed from
the precompiled template file.
Removing a few names doesn't help much - .class files remove names from all
the locals, and that hasn't even slowed down making a decompiler for it.
In other words, the file format might
consist of a combination of ordinary object code intermingled with
other more detailed stuff.
Consider that within the precompiled template file, the compiler will need
to extract the names and offsets of all the members of the template classes,
coupled with the syntax trees of all the template functions, waiting to be
decorated with names and types. That is much more than what's in a .class
file. Another way to see this is that template files will necessarily be
*before* the semantic stage of the compiler, whereas .class files are
generated *after* the semantic stage; more information can be thrown away in
the latter case.

3) The faster compilation is theoretically based on the idea that the
template implementation doesn't need to be rescanned and reparsed every time it is #include'd. However, many modern C++ compilers already support
"precompiled headers", which already provide just that capability without
export at all. I'd be happy to accept a compilation speed benchmark
challenge of Digital Mars DMC++ with precompiled headers vs an export
implementation.

The compilation speed advantages also come from dependency reductions.
If template definitions are modified, only the template
specializations need to be recompiled. If the instantiation context
(which I believe only consists of all extern names) is saved in each
case, then the instantiation can be recompiled without having to
recompile the whole TU containing the implicit template instantiation.


I'll still be happy to do the benchmark <g>. In my experience with projects
that attempted to maintain a complex dependency database, the time spent
maintaining it exceeded the time spent doing a global rebuild. Worse, the
dependency database was a rich source of bugs, so whenever there were
problems with the result, the first thing one tried was a global rebuild.
(For an example, consider "incremental linker" technology. You'd think that
would be a fairly easy problem, but incremental linkers suffered from so
many bugs that the first step in debugging was to do a clean build. Also,
Digital Mars' optlink does a full build faster than the incremental linkers
do an incremental one. Another example I know of is one where the dependency
database caused a year delay in the project and never did work right,
arguably causing the eventual failure of the entire project.)

So, ignoring implementation difficulty, I think export does just win
as a useful feature. With the implementation difficulty, it's not so
clear.


My problem is with the "ignoring implementation difficulty" bit. I attended
some of the C++ meetings early on, and a clear attitude articulated to me by
more than one member was that implementation difficulty was irrelevant, only
the user experience mattered. My opinion then, as now, is that
implementation difficulty strongly affects the users, since:

1) hard to implement features take a long time to implement, years in the
case of export, so users have to wait
2) hard to implement features usually result in buggy implementations that
take a long time to shake out
3) compiler vendors implement features in different orders, leading to
incompatible implementations
4) spending time on hard to implement features means that other features,
potentially more desirable to users, get backburnered

And we've all seen 1..4 impacting the users for years on end, and it is
still ongoing.

There's another issue that may or may not matter depending on who you talk
to: hard to implement features wind up shrinking the number of
implementations. Back in the 80's, I once counted 30 different C compilers
available for just the IBM PC. How many different implementations of C++ are
there now? And it's still shrinking. I venture that a shrinking
implementation base is not good for the long term health of the language.
Jul 22 '05 #33

Walter wrote:
"Hyman Rosen" <hy*****@mail.com> wrote
And users of the joe class only include joe.h, and never
have to worry about joe.c.
They would for templates, since the compiler will need to precompile it at
some point. You'd still have to put a dependency on joe.c in the makefile,
etc., since there's now an order to compiling the source files.


If this comes to you from a library provider, they could ship
compiled versions of the implementation file, just as they now
ship object files (assuming the compiler vendor supplied such
support). If it's your own templates, you simply make your
overall project depend on the implementation files if your
vendor uses the freedom given by 14/9 to force you to compile
the implementations first.

Furthermore, there's no indication to the compiler that the
template implementation is in joe.c, so some sort of cross
reference would need building or it'd need to be manually
specified. That's all doable, of course, and is not that big
an issue, but I wished to point out that it isn't quite as
simple as object files are.
How is it different from object files? For normal source files
you must specify which object files are part of your program,
what their source files are, and what other files they depend
on. Some of this is done automatically by various development
platforms, but it is always done. How are exported templates
different?
But just what are the benefits of separate compilation?
They are the 3 I mentioned. There is no semantic benefit
that namespaces can't address.
I will believe this once you agree that C++ should require
definitions of all functions, template or not, to be included
in every compilation unit which calls them. If you do not
agree that this is a good idea for plain functions, I do not
see why it's a good idea for function templates.
I don't understand why namespaces wouldn't do the job.
Isn't that kind of problem exactly what namespaces were
designed to solve?
No. By requiring that implementations be bodily included
where instantiations are needed, the implementations are,
first, subject to the various pollutions of the instantiation
space, not least of which are macros, and secondly, are not
able to use anonymous namespaces since that will break the
ODR. As I said, unless you can convince me that the inclusion
model is good for ordinary methods, I have no reason to believe
that it's good for template methods.

So, we have, for the user:
export template foo ...
v.s.
#include "foo_implementation.h"
and they're done in either case.


You fail to see that the second case exposes the implementation
to the vagaries of the instantiation environment while the first
does not. Think of macros if nothing else.

Jul 22 '05 #34

Thorsten Ottosen wrote:
ok, but with separate compilation we get faster compilation.
How much faster will/can the export version be?
Faster by a factor of 3.7. The point is not to have faster
compilation, although that would be nice, but to have cleaner
compilation, so that implementations do not intertwine with
the usage environments any more than required by the lookup
rules.
Currently it is also seriously tedious to implement class
template member functions outside the class.


Huh? Why? Just because you have to repeat the template
header part? That's not that much more onerous than
repeating ClassName:: in front of ordinary methods.
Make a macro if it's that bothersome. With export, you
won't have to worry about the macros colliding with
client code!

Jul 22 '05 #35

Andrei Alexandrescu (See Website for Email) wrote:
What is the few-words good explanation of export?
Here's my attempt:
Because of historical reasons having to do with how templates are
implemented, template methods (and static data members) are
effectively considered inline, and so their definitions must be
included in any compilation unit which requires an instantiation.

The export keyword breaks this requirement. When a template method
is marked as export, its definition does not get included into
compilation units which use it. Instead, it goes into its own source
file(s), and is compiled separately.

The C++ standard permits an implementation to require that the
definition of an exported method or object must be compiled before a
reference to such an object is compiled.
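For concreteness, the C++98/03 syntax looks like this sketch (file and class names invented). Only EDG-based front ends such as Comeau ever implemented export, and the keyword was removed in C++11, so this is illustrative rather than something mainstream current compilers will accept:

```cpp
// joe.h -- what clients include; 'export' marks the template for
// separate compilation (C++98/03 only; removed in C++11)
export template <typename T>
struct joe {
    void frob();
};

// joe.cpp -- compiled on its own; clients include only joe.h
template <typename T>
void joe<T>::frob() {
    // implementation details and local helpers stay out of client TUs
}
```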
2. export failed horribly at doing what was initially supposed to do. I
believe what it was supposed to do was true separate compilation of templates.


Compilation of templates is obviously not like compilation of ordinary
source, since template instantiation requires information from the
instantiation context and from the template parameters. But what do you
mean by "true" separate compilation? What kind of separate compilation
is not true? I understand that some people wish that export should be a
way to prevent people from examining template implementation code, but
that's hardly something for the standard to worry about. I understand
that some people either think or wish that export should affect how
templates are instantiated, but export has nothing to do with that.

To express it as simply as possible, imagine that C++ required that every
function be declared inline, and that therefore the implementation of every
function must be included in any compilation unit that used it. This is the
model that unexported templates labor under, and is what export is designed
to avoid.

Jul 22 '05 #36

je*********@gmail.com (Jeremy Siek) wrote in message news:<21**************************@posting.google.com>...
CALL FOR PAPERS/PARTICIPATION

C++, Boost, and the Future of C++ Libraries
Workshop at OOPSLA
October 24-28, 2004
Vancouver, British Columbia, Canada
http://tinyurl.com/4n5pf


I don't have the time to pursue it myself, but maybe someone else might.

I have written a prototype for an alternative approach to generic
ordered containers.

http://www.geocities.com/wkaras/gen_cpp/avl_tree.html

For someone who says: "I have some instances of a type that I want
to keep in order; I don't mind if the instances are copied in and
out of the heap as the means of storing them in order", STL map/set
are what they're looking for.

For someone who says, "I have some 'things' that I want to keep in
order; the 'things' are uniquely identified by 'handles'; I am willing
to 'palletize' my 'things' so that each one can store the link 'handles'
necessary to form the container; I don't (necessarily) want to copy
the 'things', and I don't want the container to rely on the heap; I'm
willing to provide more 'glue logic' than what's needed when using
map or set", this alternative type of ordered container is what
they're looking for.

This approach has similarities with the approach that relies on
the items to be collected having a base class with the links
needed to form the container. But it is significantly more
flexible.

On the other hand, given the existence of the WWW, I'm not sure
it's worth the effort to add more templates to the standard lib.
when they can easily be implemented in a portable way. It seems like
the standard lib. is becoming like the Academy Awards for good
code, rather than a way of making it easier to write portable
code.

Jul 22 '05 #37

"Andrei Alexandrescu \(See Website for Email\)"

[ ... ]
I'd say, a good feature, like a good business idea, can be explained in a
few words. What is the few-words good explanation of export? (I *am*
interested.)

From what I've heard, the good explanation is that it prevented a civil
war in the C++ committee, so without it there might not be a C++
standard at all (or at least it might have been delayed considerably).

As to why people thought they wanted it: so it would be possible to
distribute template code as object files in libraries, much like
non-template code is often distributed. I don't believe that export,
as currently defined, actually supports this though.
Does it do what it was intended to do? (Again, I *am* interested.)
It's hard to say without certainty of the intent. Assuming my guess
above at the intent was correct, then I'm quite certain it does NOT do
what's intended.

If, OTOH, the idea is that template code is still distributed as
source code, and export merely allows that source code to be compiled
by itself, then it does what was intended, within a limited scope.

OTOH, if that was desired to speed up compilation, then I think it
basically fails -- at least with Comeau C++, compiling the templates
separately doesn't seem to gain much, at least for me (and since I
have no other compilers that support, or even plan to soon support
export, Comeau is about the only one that currently matters).

[ ... ]
2. export failed horribly at doing what was initially supposed to do. I
believe what it was supposed to do was true (not "when"s and "cough"s and
"um"s) separate compilation of templates. Admittedly, gaffes in other areas
of language design are at fault for that failure. Correct me if I'm wrong.
I don't know how much is due to true gaffes, and how much to the simple fact
that templates are different enough from normal code that what was
expected was simply impossible (or at least extremely close to it).
3. export failed miserably at being reasonably easy to implement.


I can hardly imagine how anybody could argue that one.

--
Later,
Jerry.

The universe is a figment of its own imagination.

Jul 22 '05 #38

"Hyman Rosen" <hy*****@mail.com> wrote in message news:10***************@master.nyc.kbcfp.com...

| > Currently is also seriously tedious to implement class
| > template member functions outside the class.
|
| Huh? Why? Just because you have to repeat the template
| header part?

yes.

| That's not that much more onerous than
| repeating ClassName:: in front of ordinary methods.

many templates have several parameters; then they might have templated member functions.
These get *very* tedious to define outside a class.

br

Thorsten

Jul 22 '05 #39

"Andrei Alexandrescu wrote:
"Daveed Vandevoorde" <go****@vandevoorde.com> wrote in message
news:52**************************@posting.google.com...
"Andrei Alexandrescu wrote:
[...]
Maybe "export" which is so broken and so useless and so abusive that its
implementers have developed Stockholm syndrome during the long years
that
took them to implement it?
How is "export" useless and broken?

Have you used it for any project? I find it very pleasant
to work with in practice.


Haven't used export, and not because I didn't wanna.


What has prevented you from at least trying it? An affordable
implementation has been available for well over a year.

Without doing so, I fail to see how you can objectively make the
assertions you made.
[Whom do you think I referred to when mentioning the Stockholm syndrome?
:o)]
Adding a smiley to an inappropriate remark does not make it
any more appropriate.
I'd say, a good feature, like a good business idea, can be explained in a
few words. What is the few-words good explanation of export? (I *am*
interested.)
It allows you to separate a function, member function, or static data
member implementation ("definition") in a single translation unit.
In addition, a good programming language feature does what it was intended
to do (plus some other neat things :o)), and can be reasonably implemented.
To clarify: that is "good" from your personal point of view.

The intent of the feature was to protect template definitions from
"name leakage" (I think that's the term that was used at the time;
it refers to picking up unwanted declaration due to excessive
#inclusion). export certainly fulfills that.

export also allows code to be compiled faster. (I'm seeing gains
without even using an export-aware back end.)

export also allows the distribution of templates in compiled form
(as opposed to source form).
I think we all agree "export" failed the last test.
export was hard to implement for us, no doubt.
Does it do what it was intended to do? (Again, I *am* interested.)
Yes, and more. See above.
A summary of what's the deal with export would be of great help to at least
myself, so I'd be indebted to anyone who'd give me one. For full disclosure,
my current perception is:

1. It's hard to give a good account of what export does in a few words, at
least an account that's impressive.
Impressive is in the eye of the beholder. Whenever I use the feature,
I'm impressed that it works so smoothly.
2. export failed horribly at doing what was initially supposed to do. I
believe what it was supposed to do was true (not "when"s and "cough"s and
"um"s) separate compilation of templates. Admittedly, gaffes in other areas
of language design are at fault for that failure. Correct me if I'm wrong.
How does it fail at separate compilation?
3. export failed miserably at being reasonably easy to implement.
While it is true that it was hard to implement for EDG (I am not aware of
anyone else having even tried), it was never claimed by the proponents
that it would be easy to implement.

After EDG implemented export, Stroustrup once asked what change to
C++ might simplify its implementation without giving up on the separate
compilation aspect of it. I couldn't come up with anything other than the
very drastic notion of making the language 100% modular (i.e., every entity
can be declared in but one place). That doesn't mean that a template
separation model is not desirable.
Combined with 1 and 2, I can only say: at the very best, export is a Pyrrhic
victory.


The history of the C++ "export" feature may well be the very incarnation of irony.
However, I don't think there is a matter of "victory" here.

I contend that, all other things being equal, export templates are more
pleasant to work with than the equivalent inclusion templates. That by
itself is sufficient to cast doubt on your claim that the feature is "broken
and useless."

Daveed

Jul 22 '05 #40

Hyman Rosen <hy*****@mail.com> writes:

| Andrei Alexandrescu (See Website for Email) wrote:
| > What is the few-words good explanation of export?

Do those few words need to be technical, or are they for marketing purposes?
This is a genuine question, as I suspect that too much hype and
marketing have been pushed against export. The impression I got was
not cleared up after discussion of a well-known paper.

| Here's my attempt:
| Because of historical reasons having to do with how templates are
| implemented, template methods (and static data members) are
| effectively considered inline, and so their definitions must be
| included in any compilation unit which requires an instantiation.

I think that use of "inline" is unfortunate. I don't think that
description accurately covers what CFront did and other historical
repository-based instantiations (like in old Sun CC).

Export is the result of a compromise. A compromise between proponents of
the inclusion-only model and proponents of separate compilation of templates.

--
Gabriel Dos Reis
gd*@integrable-solutions.net

Jul 22 '05 #41


"Hyman Rosen" <hy*****@mail.com> wrote in message
news:10***************@master.nyc.kbcfp.com...
To express it as simply as possible, imagine that C++ required that every
function be declared inline, and that therefore the implementation of every function must be included in any compilation unit that used it. This is the model that unexported templates labor under, and is what export is designed to avoid.


In the D programming language, all functions are potentially inline (at the
compiler's discretion), and so all the function bodies are available, even
though it follows the separate compilation model. All the programmer does is
use the statement:

import foo;

and the entire semantic content of foo.d is available to the compiler,
including whatever template and function bodies are in foo.d. So, is this a
burden the compiler labors under? Not that anyone has noticed, it compiles
code at a far faster rate than a C++ compiler can. I can go into the reasons
why if anyone is interested.

But back to what can be done with C++. Many compilers implement precompiled
headers, which do effectively address this problem reasonably well. Export
was simply not needed to speed up compilation, and for those who don't
believe me, I stand by my challenge to benchmark Digital Mars C++ with
precompiled headers against any export template implementation for project
build speed.
Jul 22 '05 #42


"Hyman Rosen" <hy*****@mail.com> wrote in message
news:10***************@master.nyc.kbcfp.com...
Walter wrote:
> "Hyman Rosen" <hy*****@mail.com> wrote
>>And users of the joe class only include joe.h, and never
>>have to worry about joe.c.

>
> They would for templates, since the compiler will need to precompile it at
> some point. You'd still have to put a dependency on joe.c in the makefile,
> etc., since there's now an order to compiling the source files.


If this comes to you from a library provider, they could ship
compiled versions of the implementation file, just as they now
ship object files (assuming the compiler vendor supplied such
support). If it's your own templates, you simply make your
overall project depend on the implementation files if your
vendor uses the freedom given by 14/9 to force you to compile
the implementations first.


I agree it's not a big problem, it just isn't as simple as not having to
worry about joe.c <g>.
> Furthermore, there's no indication to the compiler that the
> template implementation is in joe.c, so some sort of cross
> reference would need building or it'd need to be manually
> specified. That's all doable, of course, and is not that big
> an issue, but I wished to point out that it isn't quite as
> simple as object files are.

How is it different from object files? For normal source files
you must specify which object files are part of your program,
what their source files are, and what other files they depend
on. Some of this is done automatically by various development
platforms, but it is always done. How are exported templates
different?


The cross reference database is what the librarian does <g>. And again, I
agree that this is not a huge problem, but it is a problem and it does, for
example, impose an order on the compilations that was not required before.
But I'll also say that, as a compiler vendor, one of the most common tech
questions I get is "I am getting an undefined symbol message from the
linker, what do I do now?" I imagine this would be worse with template
interdependencies, but I could be wrong.

> But just what are the benefits of separate compilation?
> They are the 3 I mentioned. There is no semantic benefit
> that namespaces can't address.

I will believe this once you agree that C++ should require
definitions of all functions, template or not, to be included
in every compilation unit which calls them. If you do not
agree that this is a good idea for plain functions, I do not
see why it's a good idea for function templates.


The D language does this, and it works fine. I've also heard of C++
compilers that do cross-module optimization that do this as well. Good idea
or not, it is doable with far less effort than export. But I emphasize that
the way the C++ language was designed makes it *easy* to implement separate
compilation for functions. That same design makes it *very hard* to do
separate compilation for templates, no matter how conceptually the same we
might wish them to be. If C++ had the concept of modules (rather than its
focus on source text), this would not be such a big problem. And it is not
enough that an idea be just a good idea, its advantages must outweigh the
costs. The costs of export are enormous, and the corresponding enormous gain
just isn't there.

> I don't understand why namespaces wouldn't do the job.
> Isn't that kind of problem exactly what namespaces were
> designed to solve?

No. By requiring that implementations be bodily included
where instantiations are needed, the implementations are,
first, subject to the various pollutions of the instantiation
space, not least of which are macros, and secondly, are not
able to use anonymous namespaces since that will break the
ODR.


Why not use a named namespace?
> So, we have, for the user:
> export template foo ...
> v.s.
> #include "foo_implementation.h"
> and they're done in either case.


You fail to see that the second case exposes the implementation
to the vagaries of the instantiation environment while the first
does not. Think of macros if nothing else.


I'll agree on the macro issue, but nothing else <g>. And to repeat what I
said earlier about the macro issue, export seems to be an awfully expensive
solution to the macro scoping problem, especially since the macro pollution
issue still remains. Wouldn't it be better to come at the macro problem
head-on?
Jul 22 '05 #43

"Hyman Rosen" <hy*****@mail.com> wrote in message
news:10***************@master.nyc.kbcfp.com...
Andrei Alexandrescu (See Website for Email) wrote:
What is the few-words good explanation of export?
Here's my attempt:
Because of historical reasons having to do with how templates are
implemented, template methods (and static data members) are
effectively considered inline, and so their definitions must be
included in any compilation unit which requires an instantiation.

The export keyword breaks this requirement. When a template method
is marked as export, its definition does not get included into
compilation units which use it. Instead, it goes into its own source
file(s), and is compiled separately.

The C++ standard permits an implementation to require that the
definition of an exported method or object must be compiled before a
reference to such an object is compiled.


Thanks.
2. export failed horribly at doing what was initially supposed to do. I
believe what it was supposed to do was true separate compilation of
templates.


Compilation of templates is obviously not like compilation of ordinary
source, since template instantiation requires information from the
instantiation context and from the template parameters. But what do you
mean by "true" separate compilation? What kind of separate compilation
is not true?


By "true" separate compilation I understand the dependency gains that
separate compilation achieves. That is crucial, and whole projects can be
organized around that idea. It translates into what needs to be compiled when
something is touched, with the expected build speed and stability tradeoffs.

Templates cannot be meaningfully typechecked when compiled in isolation. They
also cause complications when instantiated in different contexts. That makes
them unsuitable for "true" separate compilation. Slapping on a keyword that
makes them appear separately compilable, while the reality inside the
compiler's guts (in terms of dependency management and compilation speed) is
different, didn't help.
I understand that some people wish that export should be a
way to prevent people from examining template implementation code, but
that's hardly something for the standard to worry about.
I consider that a secondary issue.
To express it as simply as possible, imagine that C++ required that every
function be declared inline, and that therefore the implementation of
every
function must be included in any compilation unit that used it. This is
the
model that unexported templates labor under, and is what export is
designed
to avoid.


I don't think that that's a good parallel. Because the non-inline functions
can be typechecked and compiled to executable code in separation. Templates
cannot.
Andrei

Jul 22 '05 #44

"Jerry Coffin" <jc*****@taeus.com> wrote in message
news:b2*************************@posting.google.co m...
2. export failed horribly at doing what it was initially supposed to do. I
believe what it was supposed to do was true (not "when"s and "cough"s and
"um"s) separate compilation of templates. Admittedly, gaffes in other
areas
of language design are at fault for that failure. Correct me if I'm
wrong.


I don't know how much is due to true gaffes, and how much to the simple fact
that templates are different enough from normal code that what was
expected was simply impossible (or at least extremely close to it).


Fundamentally, templates are sensitive to the point where they are
instantiated, and depend not on their parameters alone. That's a classic PL
design mistake because it undermines modularity. The effects of such a
mistake are easily visible :o). They've been visible in a couple of early
languages as well.

Andrei

Jul 22 '05 #45

Walter wrote:
That same design makes it *very hard* to do separate compilation for templates,
no matter how conceptually the same we might wish them to be.
Just because it's hard for compiler makers doesn't mean it's hard for
compiler users. Indeed, I find the inclusion model for templates to
be difficult and annoying and find export to be perfectly intuitive.
its advantages must outweigh the costs. The costs of export are enormous, and the
corresponding enormous gain just isn't there.
Export imposes no cost at all on programmers. It's a simple concept and simple to use.
Compiler vendors for the most part can't be bothered to implement it, since everyone
uses the inclusion model because they have to.
Why not use a named namespace?


Because then the inclusion model would violate the ODR, unless I prepare *another*
source file with the implementation of those global functions that my template
methods want to call. Why should I jump through hoops when the standard gives me a
perfectly simple way not to?

Jul 22 '05 #46

"Andrei Alexandrescu \(See Website for Email\)" <Se****************@moderncppdesign.com> writes:

| "Daveed Vandevoorde" <go****@vandevoorde.com> wrote in message
| news:52**************************@posting.google.c om...
| > "Andrei Alexandrescu wrote:
| > [...]
| > > Maybe "export" which is so broken and so useless and so abusive that its
| > > implementers have developed Stockholm syndrome during the long years
| > > that
| > > took them to implement it?
| >
| > How is "export" useless and broken?
| >
| > Have you used it for any project? I find it very pleasant
| > to work with in practice.
|
| Haven't used export, and not because I didn't wanna.

Very interesting.

--
Gabriel Dos Reis
gd*@integrable-solutions.net

Jul 22 '05 #47

"Daveed Vandevoorde" <go****@vandevoorde.com> wrote in message
news:52**************************@posting.google.c om...
"Andrei Alexandrescu wrote:
Without doing so, I fail to see how you can objectively make the
assertions you made.
[Whom do you think I referred to when mentioning the Stockholm syndrome?
:o)]


Adding a smiley to an inappropriate remark does not make it
any more appropriate.


Sorry you found it inappropriate. I thought of emailing a private apology,
but apologies must be made in front of those who witnessed the offense.
So - I am sorry; I had found the comparison funny and meant it without harm.

I think what I need to do is go use the feature, and then make noise when
I'm on firmer ground...
Andrei

Jul 22 '05 #48

Jerry Coffin wrote:
I don't believe that export, as currently defined, actually supports this though.


Why don't you believe it? What is it about export that you think would prevent
a compiler vendor from doing just that? Obviously compiled templates act as
further input to the "compiler" rather than to the "linker" (assuming those
distinctions exist), but what of that?
3. export failed miserably at being reasonably easy to implement.

I can hardly imagine how anybody could argue that one.


Who said it was supposed to be easy? Templates aren't easy, exceptions
aren't easy, manipulating vtable pointers during construction isn't easy.

Jul 22 '05 #49

Thorsten Ottosen wrote:
many templates have several parameters; then they might have templated member functions.
This gets *very* tedious to define outside a class.


Like I said, just use a macro. Combine this with export,
so that the macro is hidden in the implementation file.

Jul 22 '05 #50
