Bytes IT Community

To bean or not to bean

Stroustrup's view on classes, for the most part, seems to be centered around
the notion of invariants. After a bit of adjusting to the use of yamt (yet
another math term) in computer science, I came to appreciate some of the
significance in what he is asserting. I believe a good example would be
the Riemann-Christoffel curvature tensor. In the 4-space of general
relativity, there are 256 components of this beast.

One approach to representing this tensor might be a struct with 256 float
members all public. The user simply treats it as a big bag of bits. This
tensor - by definition - has certain invariant properties which must remain
constant under certain groups of operations. That means the user must be
careful to enforce the rules preserving the invariants if he is going to
modify the components directly. For this reason, it makes sense to put the
data in a class and allow only controlled access through
invariant-preserving access methods.
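
To make the distinction concrete, here is a minimal, hypothetical sketch (mine, not Stroustrup's) of what such a class looks like, using a unit-length direction vector as a stand-in for the tensor: the invariant is that the components always have length one, and every mutator re-establishes it.

```cpp
#include <cmath>

// Invariant: the components always describe a unit-length direction.
class Direction {
public:
    Direction(double x, double y) : x_(x), y_(y) { normalize(); }

    double x() const { return x_; }
    double y() const { return y_; }

    // Every mutator re-establishes the invariant before returning.
    void set(double x, double y) { x_ = x; y_ = y; normalize(); }

private:
    void normalize() {
        const double len = std::sqrt(x_ * x_ + y_ * y_);
        x_ /= len;
        y_ /= len;
    }
    double x_, y_;
};
```

With a bare public struct, every piece of client code would instead carry the burden of renormalizing after each write.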

Stroustrup also argues that a reasonable test to determine if you really do
have a class with invariants is to ask if there are multiple
representations of the data structure which are functionally equivalent.
Indeed there are for this tensor. And that can actually buy you a lot.
Although the tensor has 256 components, only 20 are independent. That
means you can represent the tensor as an object of only 20 data members,
and if circumstances demand all 256 components be available in the program,
that can be accomplished by multiply indexing the storage locations through
access methods which collectively give the illusion that the entire tensor
exists with all of its 256 components.
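
To illustrate that multiple-indexing trick without spelling out the full Riemann symmetries, here is the same idea applied to a rank-2 symmetric tensor (a simplified sketch of my own, not code from TC++PL): only n(n+1)/2 slots are stored, yet the accessors present all n*n components.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Stores only the n*(n+1)/2 independent components of a symmetric
// rank-2 tensor, yet the accessors present all n*n components.
class SymmetricTensor {
public:
    explicit SymmetricTensor(std::size_t n) : data_(n * (n + 1) / 2) {}

    double operator()(std::size_t i, std::size_t j) const {
        return data_[index(i, j)];
    }
    // Writing component (i,j) implicitly "writes" (j,i) as well.
    void set(std::size_t i, std::size_t j, double v) {
        data_[index(i, j)] = v;
    }

private:
    // Map (i,j) and (j,i) to the same slot in the packed lower triangle.
    static std::size_t index(std::size_t i, std::size_t j) {
        if (i < j) std::swap(i, j);
        return i * (i + 1) / 2 + j;
    }
    std::vector<double> data_;
};
```

The full curvature tensor would use a more elaborate index map, but the principle is identical: the redundancy lives in the accessor, not in the storage.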

So far, so good. I pretty much agree with his reasoning for distinguishing
between what should properly be represented as a class with private (or
protected) data, as opposed to simply a struct with all public members and
direct user access. I like a lot of the careful discernment found in C++ as
opposed to Java, for example. Concepts such as constness and mutable
caching do not appear in typical Java literature. That shows me C++ has
expressive powers that go well beyond other general purpose languages.

There is, however, one point that Stroustrup doesn't address regarding
'accessor' methods. He tells us we should avoid writing classes with
direct accessor methods to manipulate the data. The reasoning seems
obvious to me. If the user has a reason to micro-manage the data, it
probably doesn't have a clearly defined invariant that could and should be
preserved under the operations applied to it collectively.

The Java community operates with a different philosophy regarding data and
classes. Basically, nothing is public other than static constants, and
accessor methods. The simplest notion of a bean in Java is just a class
with data members read and written through the mediation of 'set' and 'get'
methods. The payoff to this very protective approach to managing data is
that it facilitates such design features as event-listener patterns, and
concurrent programming. The access methods provide a natural means of
locking access to data, and also monitoring access for purposes of event
notification.
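
In C++ terms, a bean-style property that mediates all access might look roughly like this (a hypothetical sketch, not taken from any particular library): because every read and write goes through an accessor, a lock can be taken without the caller ever knowing about it.

```cpp
#include <mutex>

// A bean-style property: all reads and writes go through accessors,
// so a lock can be taken without the caller ever knowing about it.
class GuardedValue {
public:
    void Set(int v) {
        std::lock_guard<std::mutex> lock(m_);
        value_ = v;            // an event-notification hook could go here
    }
    int Get() const {
        std::lock_guard<std::mutex> lock(m_);
        return value_;
    }
private:
    mutable std::mutex m_;     // mutable: we lock even inside const Get()
    int value_ = 0;
};
```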

One of the more important developments in TC++PL(SE) is the creation of
abstract user interface components as a means of demonstrating the various
options for using inheritance and class hierarchies. Stroustrup argues -
correctly IMO - that a programmer should strive to solve problems
independently of the proprietary libraries he may be developing with.

This is what I have been calling an AUI (abstract user interface) for a few
years. I decided I would try the exercise as part of a project I'm working
on with the goal of visually representing the various access patterns used
to simulate multidimensional arrays using valarray, slices, and gslices.
The program creates a tree of grid cell objects which become the elements
of the graphical representation of the multidimensional matrix.

I created an interface class with all pure virtual functions, and one data
member. The data member is an object that holds default values to be
shared among the different grid elements. This may seem somewhat
convoluted, but it enables me to do some interesting things, such as change
the value of the default values object, notify all the grid cell objects
which own a DefaultBox that their defaults have been changed, and they need
to consider updating.

That demonstrates what I was getting at regarding the use of an
event/listener (observer) pattern to adapt to changes in a particular
object. There's something else that my design also seems to dictate.
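
Roughly, the notification arrangement looks like this (a simplified sketch; the names Defaults and GridCell are illustrative, and the real classes are more elaborate): the shared defaults object keeps a list of listener callbacks and fires them whenever one of its values changes.

```cpp
#include <functional>
#include <vector>

// Sketch of the scheme described above: a shared defaults object
// notifies every listening grid cell when one of its values changes.
class Defaults {
public:
    void Listen(std::function<void()> cb) { listeners_.push_back(cb); }
    void SetBorderWidth(double w) {
        border_w_ = w;
        for (const auto& cb : listeners_) cb();   // fire the event
    }
    double BorderWidth() const { return border_w_; }
private:
    double border_w_ = 5;
    std::vector<std::function<void()>> listeners_;
};

class GridCell {
public:
    explicit GridCell(Defaults& d) {
        d.Listen([this] { needs_update_ = true; });
    }
    bool NeedsUpdate() const { return needs_update_; }
private:
    bool needs_update_ = false;
};
```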

The class used to default initialize grid cells is a good example of what
I'm finding appropriate for my design, and which seems to contradict
Stroustrup's recommendation to avoid 'set' and 'get' functions. That's
actually a pretty significant part of what my grid cell objects consist of.
That is, data members with read and mutate functions for the data.

#ifndef STHBOXDEFAULTS_H
#define STHBOXDEFAULTS_H
#include <string>

namespace sth
{
using std::string;
/**
@author Steven T. Hatton
*/
class BoxDefaults
{
public:
BoxDefaults( bool is_leaf_ = false,
bool is_vertical_ = false,
string text_ = "[...]",
RgbColor bg_color_ = RgbColor::YELLOW,
RgbColor text_color_ = RgbColor::BLACK,
RgbColor edge_color_ = RgbColor::RED,
double h_ = 35,
double w_ = 50,
double border_w_ = 5,
double x_ = 0,
double y_ = 0,
int bg_z_ = 1,
int text_z_ = 2
)
: is_leaf( is_leaf_ ),
is_vertical( is_vertical_ ),
text( text_ ),
bg_color( bg_color_ ),
text_color( text_color_ ),
edge_color( edge_color_ ),
h( h_ ),
w( w_ ),
border_w( border_w )
{}

virtual ~BoxDefaults(){}
/**
* Are child boxes laid out vertically?
*/
virtual bool Is_vertical() const
{
return this->is_vertical;
}

virtual void Is_vertical(const bool& is_vertical_)
{
this->is_vertical = is_vertical_;
}

/**
* Is this a leaf node?
*/
virtual bool Is_leaf() const
{
return this->is_leaf;
}

virtual void Is_leaf(const bool& is_leaf_)
{
this->is_leaf = is_leaf_;
}

virtual RgbColor Bg_color() const
{
return this->bg_color;
}

virtual void Bg_color(const RgbColor& bg_color_)
{
this->bg_color = bg_color_;
}
/* etc., etc. */

};
}

This seems rather natural to me, but it makes me a bit uneasy, because
Stroustrup has suggested it may not be a good design approach. I will note
that he also insists that his advice and opinions are not intended as
inviolable dictates, nor are they expected to be applicable equally in all
circumstances.

What is your opinion of the use of the 'Java Beans' design approach? I
realize there is more to a Bean than simply a class with data, events, and
associated listeners. Nonetheless, that seems to be the definitive essence
of what they really are. There are some aspects of the JavaBean
specification that strike me as arcane and seem to offer far more potential
trouble than potential worth. I do believe the general approach is
something worth understanding, and considering for C++ design. Here's the
JavaBean spec:

http://java.sun.com/products/javabeans/docs/spec.html

What do you think of these ideas as potentially applicable to C++ code?
--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #1
61 Replies


On Sat, 28 Aug 2004 04:39:31 -0400, "Steven T. Hatton"
<su******@setidava.kushan.aa> wrote:

[snip]
So far, so good. I pretty much agree with his reasoning for distinguishing
between what should properly be represented as a class with private (or
protected) data, as opposed to simply a struct with all public members and
direct user access. I like a lot of the careful discernment found in C++ as
opposed to Java, for example. There are not concepts of constness and
mutable caching in typical Java literature. That shows me C++ has
expressive powers that go well beyond other general purpose languages.
<OT-advocacy-mode>
Every language should have its strong points which distinguish it from
other languages, giving it its reason for existence. I'll only say
that I like C++ because it allows the developer to be as much or as
little OOP as she likes. With Java, you are pretty much stuck with OOP
whether or not that is the appropriate tool for the task at hand.
</OT-advocacy-mode>

I'll snip the rest because I merely want to comment on your naming
style:

[snip] class BoxDefaults
{
public:
BoxDefaults( bool is_leaf_ = false,
bool is_vertical_ = false,
string text_ = "[...]",
RgbColor bg_color_ = RgbColor::YELLOW,
RgbColor text_color_ = RgbColor::BLACK,
RgbColor edge_color_ = RgbColor::RED,
double h_ = 35,
double w_ = 50,
double border_w_ = 5,
double x_ = 0,
double y_ = 0,
int bg_z_ = 1,
int text_z_ = 2
)
: is_leaf( is_leaf_ ),
is_vertical( is_vertical_ ),
text( text_ ),
bg_color( bg_color_ ),
text_color( text_color_ ),
edge_color( edge_color_ ),
h( h_ ),
w( w_ ),
border_w( border_w )

^^^^^^^^
Oops ... recursive initialization here!
This is why I do not like trailing underscores.

Also, what do you think of this:
: is_leaf ( is_leaf_ )
, is_vertical( is_vertical_ )
, text ( text_ )
, bg_color ( bg_color_ )
, text_color ( text_color_ )
, edge_color ( edge_color_ )
, h ( h_ )
, w ( w_ )
etc. as opposed to having the commas at the end? I find it much easier
to avoid superfluous comma errors like this because I don't have to
read to the end of the line to find it.

--
Bob Hairgrove
No**********@Home.com
Jul 22 '05 #2

"Bob Hairgrove" <in*****@bigfoot.com> wrote...
[..]
Also, what do you think of this:
: is_leaf ( is_leaf_ )
, is_vertical( is_vertical_ )
, text ( text_ )
, bg_color ( bg_color_ )
, text_color ( text_color_ )
, edge_color ( edge_color_ )
, h ( h_ )
, w ( w_ )
etc. as opposed to having the commas at the end? I find it much easier
to avoid superfluous comma errors like this because I don't have to
read to the end of the line to find it.


That's my choice as well (without the excessive spaces, though).
Makes adding things clearer, especially to the end, when otherwise
I usually forget to add the comma after the last initialiser.

V
Jul 22 '05 #3

Bob Hairgrove wrote:
On Sat, 28 Aug 2004 04:39:31 -0400, "Steven T. Hatton"
<su******@setidava.kushan.aa> wrote:

[snip]
So far, so good. I pretty much agree with his reasoning for
distinguishing between what should properly be represented as a class with
private (or protected) data, as opposed to simply a struct with all public
members and direct user access. I like a lot of the careful discernment
found in C++ as
opposed to Java, for example. There are not concepts of constness and
mutable caching in typical Java literature. That shows me C++ has
expressive powers that go well beyond other general purpose languages.
<OT-advocacy-mode>
Every language should have its strong points which distinguish it from
other languages, giving it its reason for existence. I'll only say
that I like C++ because it allows the developer to be as much or as
little OOP as she likes. With Java, you are pretty much stuck with OOP
whether or not that is the appropriate tool for the task at hand.
</OT-advocacy-mode>


Somewhere out on the net there is a lexer "written in Java" that consists of
one class with all of the code written as member functions. The program is
virtually indistinguishable from C other than the use of the necessary
class to get the thing bootstrapped. I believe a lot of people are missing
the real significance of the design choice made by the Java designers that
requires all code to be in a class. It wasn't really done for the benefit
of the application programmer. It was done because it facilitates the
loading and execution of the program by the JVM.

To some extent, it is an artifact of the way the language is implemented.
If you think of the top level class as part of the execution environment,
you will understand there is far less difference between Java and C++ in
this regard than there otherwise may seem.

As you've suggested this is somewhat off topic. I'm discussing it here
because it provides a contrast to C++ that might serve to solidify
understanding, and lead to new approaches to programming in C++. I don't
believe Trolltech strove to emulate Java in developing their toolkit, but
there are some significant similarities in the way things are done in Qt.

Unlike Java, Qt does not restrict you from using namespace local
declarations, but it does encourage the design approach of putting
everything in some class, and treating your program as a collection of
interacting objects.

Note: what follows is pseudo-history:

Think of it like this. Originally programs were a long sequence of
instructions read in and executed one after the other. Some of these
instructions were able to tell the computer to goto and execute a
previously encountered instruction and to continue from there sequentially
as if the instruction had appeared after the branch point.

Then some guy decided goto was a four-letter-word, and advocated banishing
its use. The result was to hide goto instructions behind fancy constructs
such as function calls, and switch statements. Nonetheless, a program
remained a sequence of instructions to be executed as if the computer were
reading down a page, with the understanding that it may examine a
cross-reference at times. That is what C-style programming is.

At some point in the ancient past the notion of an eventloop was discovered.
The result was Emacs. Emacs later mated with C, resulting in several
offspring. Namely XEmacs, ECMAScript (JavaScript), (GNU)Emacs, Mozilla
etc. The person who facilitated this seemingly unnatural union of Lisp and
C was a man named James Gosling. This is the same James Gosling who
invented Java. Note that an interesting thing happened when these ideas
were merged. Some of the offspring are programs, and some are programming
languages. And some can't make up their mind which way they go.

At the same time as these offspring were being engendered and/or maturing a
more carefully arranged union between C and Simula was consummated. The
result was the highly cultured and demanding C++. C++ retains the original
sequential execution model inherited from C. C++ is a
*_programming_language_* not a program (damnit!) And that's the fundamental
difference between C++ and Java.
border_w( border_w )

^^^^^^^^
Oops ... recursive initialization here!
This is why I do not like trailing underscores.


I dislike the entire member initialization facility in C++. As for trailing
underscores, I don't get stung like that very often. As long as I retain
consistency, it is easy to identify such errors. Note the code you are
critiquing was copied directly out of the edit buffer of a program under
development. I hadn't even completed the member initialization block. Any
system such as the use of trailing underscores is subject to error. The
only thing worse is to not have a system at all. The use of leading
underscores is actually reserved for the implementation, though a program
can define and use such identifiers without errors being generated,
provided there are no conflicts with the implementation's use of the same.

Also, what do you think of this:
: is_leaf ( is_leaf_ )
, is_vertical( is_vertical_ )
, text ( text_ )
, bg_color ( bg_color_ )
, text_color ( text_color_ )
, edge_color ( edge_color_ )
, h ( h_ )
, w ( w_ )
etc. as opposed to having the commas at the end?
I used to be a strong advocate of putting commas at the beginning of such
continued lines, but that was contrary to the culture in which I found
myself. I do like the style, and, after testing the auto-formatting in
KDevelop and Emacs, I have discovered that both align the code quite
nicely in the way you have shown above.

I also agree regarding the placement of the leading parentheses. I was
experimenting with the alternative because the long whitespace between the
identifier and the opening parenthesis also bothers me a bit. Your
approach is easier on the eye.

As for trailing underscores, what is your alternative?
I find it much easier
to avoid superfluous comma errors like this because I don't have to
read to the end of the line to find it.


Agreed. What about situations such as:

bigLongLeftHandSide
= bigLongIdentifier
->bigLongMemberName
->yourGonnaWrapSucker
->bigLongFunctionName();
?


Jul 22 '05 #4

Victor Bazarov wrote:
That's my choice as well (without the excessive spaces, though).
Makes adding things clearer, especially to the end, when otherwise
I usually forget to add the comma after the last initialiser.

V


For me, that trailing comma gone missing becomes quite obvious when I C-x
C-h C-M-\: the remaining lines are shot off the right side of the page into
oblivion. Same with KDevelop.

Jul 22 '05 #5

Steven T. Hatton wrote:
Somewhere out on the net there is a lexer "written in Java" that consists of one class with all of the code written as member functions. The program is virtually indistinguishable from C other than the use of the necessary
class to get the thing bootstrapped. I believe a lot of people are missing the real significance of the design choice made by the Java designers that
requires all code to be in a class. It wasn't really done for the benefit
of the application programmer. It was done because it facilitates the
loading and execution of the program by the JVM.
You cannot legislate morality.

In Ruby, everything is a method. Ruby does not enforce this by beating the
programmer with syntax errors for their "heresy". Ruby enforces this by
helping. It provides a global object, and all statements execute within its
context. Because one cannot step outside this cosmological object,
everything in Ruby is inside an object, without syntax errors if you decline
to create an object.

(However, in Ruby, nothing will stop you from writing everything into one
long method... You cannot legislate morality. You can, however, legislate
immorality.)
As you've suggested this is somewhat off topic. I'm discussing it here
because it provides a contrast to C++ that might serve to solidify
understanding, and lead to new approaches to programming in C++. I don't
believe Trolltech strove to emulate Java in developing their toolkit, but
there are some significant similarities in the way things are done in Qt.
F--- topicality. And Qt came first, right?
-- "[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think the time has come to be serious about macro-free C++ programming." - B. S.


The preprocessor rules.

--
Phlip
http://industrialxp.org/community/bi...UserInterfaces

Jul 22 '05 #6

Phlip wrote:
F--- topicality. And Qt came first, right?
Qt development began in 1991, and the first public release was on the 20th
of May, 1995. The same week as the first release of Java.
The preprocessor rules.


The preprocessor is the main reason that there is no serious C++ contender
to the highly successful J2EE development platform. The Cpp, and the
techniques it supports, do not scale to that level of programming.
Stroustrup has some opinions that I don't fully accept. AAMOF, my
intention in beginning this thread was to discuss one such opinion. In the
past, when I have taken a position contrary to his, I eventually came to
realize he was more correct than I originally thought. I can't imagine
anyone has a more extensive understanding of the relationship between C++
and the Cpp than he does. Why do you think he is wrong about it? (I can
only assume that was your intended meaning in saying "The preprocessor
rules".)


Jul 22 '05 #7

Steven T. Hatton wrote:
Stroustrup's view on classes, for the most part, seems to be centered around
the notion of invariants. After a bit of adjusting to the use of yamt
(yet another math term) in computer science,
I came to appreciate some of the significance in what he is asserting.
Please cite passage and verse.
You probably misunderstood what Stroustrup was saying.
There is nothing terribly profound in language design.
It is a mistake to read too much into what Stroustrup says.
He really tries to explain his ideas in the most straight-forward manner
using *plain* English.
I believe a good example would be the Riemann-Christoffel curvature tensor.


No, it isn't a good example.
Very few computer programmers have cause to truck with such things.

A C++ class (or struct) is used
to introduce a User Defined Type (UDT) into a program.
A C++ class is most useful in implementing an Abstract Data Type (ADT).
The reason for data hiding (private data members) is practical.
It allows the class library developer to change the data representation
without any changes to application programs which use the class library
except, possibly, recompiling the application program and linking in
the revised class library.
Jul 22 '05 #8

Steven T. Hatton wrote:
The preprocessor is the main reason that there is no serious C++ contender
to the highly successful J2EE development platform. The Cpp, and the
techniques it supports do not scale to that level of programming.
Stroustrup has some opinions that I don't fully accept. AAMOF, my
intention in beginning this thread was to discuss one such opinion. In the past, when I have taken a position contrary to his, I eventually came to
realize he was more correct than I originally thought. I can't imagine
anyone has a more extensive understanding of the relationship between C++
and the Cpp than he does. Why do you think he is wrong about it? (I can
only assume that was your intended meaning in saying "The preprocessor
rules".)


Use the Cpp for:

- token pasting
- stringerization
- conditional compilation (oh, yeah. Everyone likes that one...)

Putting those inside a C language would make it not a C language. And they
permit techniques that more "modern" languages need but can't do.

A repost:

Visual Studio surfs to errors using <F8>: Go To Output Window Next Location.
The Windows SDK function to write text into the output panel, for this
feature to read it and surf to an error, is OutputDebugString().
Putting them all together yields this killer trace macro:

// requires <sstream> and <iostream>
#define db(x_) do { std::stringstream z; \
z << __FILE__ << "(" << __LINE__ << ") : " \
#x_ " = " << x_ << std::endl; \
std::cout << z.str() << std::flush; \
OutputDebugStringA(z.str().c_str()); \
} while (false)

That takes any argument, including expressions, which support operator<<. We
will return to these techniques while exploring more Fault Navigation issues
in C++.

db(q) pushes "C:\path\source.cpp(99) : q = 5\n" into the Output Debug panel.
<F8> parses the file name and line number and navigates your editor directly
to the line containing the db(q).

Those are major wins. Tracing with db() is very low cost for very high
feedback.

C++ has flaws. But those of you inclined to dismiss it entirely are invited
to write db(), with all these features, in your favorite language.

--
Phlip
http://industrialxp.org/community/bi...UserInterfaces
Jul 22 '05 #9

E. Robert Tisdale wrote:
Steven T. Hatton wrote:
Stroustrup's view on classes, for the most part, seems to be centered
around
the notion of invariants. After a bit of adjusting to the use of yamt
(yet another math term) in computer science,
I came to appreciate some of the significance in what he is asserting.
Please cite passage and verse.

Regarding what? I'm not sure what you are questioning.
You probably misunderstood what Stroustrup was saying.
There is nothing terribly profound in language design.
It is a mistake to read too much into what Stroustrup says.
He really tries to explain his ideas in the most straight-forward manner
using *plain* English.
I believe a good example would be the Riemann-Christoffel curvature
tensor.
No, it isn't a good example.
Very few computer programmers have cause to truck with such things.


For my purposes, it was a good example. It is irrelevant whether a person
understands the full definition of the tensor; what matters is that it has
invariants, and that it has multiple data elements, some of which are redundant.
A C++ class (or struct) is used
to introduce a User Defined Type (UDT) into a program.
A C++ class is most useful in implementing an Abstract Data Type (ADT).
The reason for data hiding (private data members) is practical.
It allows the class library developer to change the data representation
without any changes to application programs which use the class library
except, possibly, recompiling the application program and linking in
the revised class library.


I'm not sure how that really addresses my question. I am asking whether the
creation of a class that consists of data members which are all
manipulable through the use of access functions is contrary to the philosophy
that a class should have a clearly defined invariant that is preserved when
operated on by invoking its member functions, or friend functions.

More specifically, I am asking if the creation of a class that consists of
nothing but data fields and set and get methods is an indication that I am
doing something wrong.

Jul 22 '05 #10

Phlip wrote:
Steven T. Hatton wrote:
I can't imagine
anyone has a more extensive understanding of the relationship between C++
and the Cpp than he does. Why do you think he is wrong about it?
Use the Cpp for:

- token pasting

Why? What can that give me that I cannot achieve using the internal
features of the language?
- stringerization
I'm not familiar with the term. Care to explain?
- conditional compilation (oh, yeah. Everyone likes that one...)
I am aware that the Cpp is used for this. There have, as yet, been no
viable alternatives introduced, and accepted into the C++ standard.
Putting those inside a C language would make it not a C language. And they
permit techniques that more "modern" languages need but can't do.

A repost:

Visual Studio surfs to errors using <F8>: Go To Output Window Next
Location. The Windows SDK function to write text into the output panel,
for this feature to read it and surf to an error, is OutputDebugString().
Putting them all together yields this killer trace macro:

// requires <sstream> and <iostream>
#define db(x_) do { std::stringstream z; \
z << __FILE__ << "(" << __LINE__ << ") : " \
#x_ " = " << x_ << std::endl; \
std::cout << z.str() << std::flush; \
OutputDebugStringA(z.str().c_str()); \
} while (false)

That takes any argument, including expressions, which support operator<<.
We will return to these techniques while exploring more Fault Navigation
issues in C++.

db(q) pushes "C:\path\source.cpp(99) : q = 5\n" into the Output Debug
panel. <F8> parses the file name and line number and navigates your editor
directly to the line containing the db(q).

Those are major wins. Tracing with db() is very low cost for very high
feedback.
I guess I'm not getting what that does for me. Can you explain how I would
gain from the use of such a macro? I don't use microsoft products very
often, so I really have no idea of what you are talking about. Can you
explain how I could use this macro without using a specific IDE?
C++ has flaws. But those of you inclined to dismiss it entirely are
invited to write db(), with all these features, in your favorite language.


Who has dismissed C++?


Jul 22 '05 #11

Steven T. Hatton wrote:
E. Robert Tisdale wrote:
Steven T. Hatton wrote:

Stroustrup's view on classes, for the most part,
seems to be centered around the notion of invariants.
After a bit of adjusting to the use of yamt
(yet another math term) in computer science,
I came to appreciate some of the significance in what he is asserting.


Please cite passage and verse.


Regarding what? I'm not sure what you are questioning.


I'm questioning your interpretation of "Stroustrup's view on classes".
I'm assuming that you read something that Stroustrup wrote and published
and that you aren't communicating with him privately
and that you are not reading his mind.
It would help if you could cite publication and, perhaps,
quote the passage that he wrote.
You probably misunderstood what Stroustrup was saying.
There is nothing terribly profound in language design.
It is a mistake to read too much into what Stroustrup says.
He really tries to explain his ideas in the most straight-forward manner
using *plain* English.

I believe a good example would be the Riemann-Christoffel curvature
tensor.


No, it isn't a good example.
Very few computer programmers have cause to truck with such things.


For my purposes, it was a good example.
It is irrelevant whether a person understands
the full definition of the tensor.
Only that it has invariants,
and that it has multiple data elements, some of which are redundant.
A C++ class (or struct) is used
to introduce a User Defined Type (UDT) into a program.
A C++ class is most useful in implementing an Abstract Data Type (ADT).
The reason for data hiding (private data members) is practical.
It allows the class library developer to change the data representation
without any changes to application programs which use the class library
except, possibly, recompiling the application program and linking in
the revised class library.


I'm not sure how that really addresses my question.
I am asking whether the creation of a class that consists of data members
which are all manipulable through the use of access functions
is contrary to the philosophy that
a class should have a clearly defined invariant that is preserved
when operated on by invoking its member or friend functions.

More specifically, I am asking if the creation of a class
that consists of nothing but data fields and set and get methods
is an indication that I am doing something wrong.


It is an *indication* that you might be doing something wrong.
If only get and set methods are defined,
the object is nothing more than a "junk box".
An array (of any type) is an example of such an object.
A vector, matrix or tensor object is superficially similar to an array
but with vector arithmetic operations and other interesting methods.

I'll offer a simpler example:

#include <iomanip>   // std::setw, std::setfill
#include <sstream>   // std::ostringstream
#include <string>

class SocialSecurityNumber {
private:
    unsigned long int Number;
public:
    explicit
    SocialSecurityNumber(unsigned long int n): Number(n) {
        // throw an exception if n does not represent a valid SSN
    }
    explicit
    SocialSecurityNumber(const std::string& n): Number(0) {
        // accumulate the digits of n, skipping the dashes;
        // throw an exception if n does not represent a valid SSN
        for (std::string::size_type i = 0; i < n.size(); ++i)
            if (n[i] != '-')
                Number = 10*Number + (n[i] - '0');
    }
    operator unsigned long int() const {
        return Number;
    }
    operator std::string() const {
        // zero-fill each field so leading zeros are not lost
        std::ostringstream oss;
        oss << std::setfill('0')
            << std::setw(3) << Number/1000000 << '-'
            << std::setw(2) << (Number/10000)%100 << '-'
            << std::setw(4) << Number%10000;
        return oss.str();
    }
};

You can't do much with a social security number except set and get it.
In this case, the setter is a constructor and the getter is a type cast.
There is no "invariant" because a social security number can't change
except via the assignment operator

SocialSecurityNumber& operator=(const SocialSecurityNumber&);

All this shows is that even if all you have is getters and setters,
they needn't be named

SocialSecurityNumber::get(/* arguments */) const;
SocialSecurityNumber::set(/* arguments */);
Jul 22 '05 #12

P: n/a
E. Robert Tisdale wrote:
Steven T. Hatton wrote:
I'm questioning your interpretation of "Stroustrup's view on classes".
I'm assuming that you read something that Stroustrup wrote and published
and that you aren't communicating with him privately
and that you are not reading his mind.
It would help if you could cite publication and, perhaps,
quote the passage that he wrote.
TC++PL(SE) 24.3.7.1 Invariants

"The values of the members and the objects referred to by members are
collectively called the /state/ of the object (or simply, its /value/). A
major concern of a class design is to get an object into a well-defined
state (initialization/construction), to maintain a well-defined state as
operations are performed, and finally to destroy the object gracefully.
The property that makes the state of an object well-defined is called
its /invariant/."
....
"Much of the skill in class design involves making a class simple enough to
make it possible to implement it so that it has a useful invariant that can
be expressed simply. It is easy enough to state that every class needs an
invariant. The hard part is to come up with a useful invariant that is
easy to comprehend and that doesn't impose unacceptable constraints on the
implementer or on the efficiency of the operations."

I'll offer a simpler example:

#include <iomanip>   // std::setw, std::setfill
#include <sstream>   // std::ostringstream
#include <string>

class SocialSecurityNumber {
private:
    unsigned long int Number;
public:
    explicit
    SocialSecurityNumber(unsigned long int n): Number(n) {
        // throw an exception if n does not represent a valid SSN
    }
    explicit
    SocialSecurityNumber(const std::string& n): Number(0) {
        // accumulate the digits of n, skipping the dashes;
        // throw an exception if n does not represent a valid SSN
        for (std::string::size_type i = 0; i < n.size(); ++i)
            if (n[i] != '-')
                Number = 10*Number + (n[i] - '0');
    }
    operator unsigned long int() const {
        return Number;
    }
    operator std::string() const {
        // zero-fill each field so leading zeros are not lost
        std::ostringstream oss;
        oss << std::setfill('0')
            << std::setw(3) << Number/1000000 << '-'
            << std::setw(2) << (Number/10000)%100 << '-'
            << std::setw(4) << Number%10000;
        return oss.str();
    }
};

You can't do much with a social security number except set and get it.
In this case, the setter is a constructor and the getter is a type cast.
There is no "invariant" because a social security number can't change
except via the assignment operator

SocialSecurityNumber& operator=(const SocialSecurityNumber&);

All this shows is that even if all you have is getters and setters,
they needn't be named

SocialSecurityNumber::get(/* arguments */) const;
SocialSecurityNumber::set(/* arguments */);


There is an invariant in the SocialSecurityNumber. It is established by
checking that it is valid when constructed, and maintained by checking that
it is valid when assigned to.
--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #13

P: n/a
On Sat, 28 Aug 2004 21:18:57 -0400, "Steven T. Hatton"
<su******@setidava.kushan.aa> wrote:

[snip]
As for trailing underscores, what is your alternative?


I like to prefix the names of member data with "m_". It's only one
more character to type, and so much easier to read.
I find it much easier
to avoid superfluous comma errors like this because I don't have to
read to the end of the line to find it.


Agreed. What about situations such as:

bigLongLeftHandSide
= bigLongIdentifier
->bigLongMemberName
->yourGonnaWrapSucker
->bigLongFunctionName();
?


I like the 80 column rule. It helps to use indentation to add
structure, as in the example above:

bigLongLeftHandSide =
        bigLongIdentifier
            ->bigLongMemberName
            ->yourGonnaWrapSucker
            ->bigLongFunctionName();

I like to move the lines after the first one far enough over to the
right to make it obvious that they are on the RHS. And I try to avoid
overly long names, too.

My compiler usually catches missing semicolons, but often has problems
reporting a missing comma ... i.e., there will be an error message,
but it will be somewhat cryptic.

--
Bob Hairgrove
No**********@Home.com
Jul 22 '05 #14

P: n/a
Bob Hairgrove wrote:
On Sat, 28 Aug 2004 21:18:57 -0400, "Steven T. Hatton"
<su******@setidava.kushan.aa> wrote:

[snip]
As for trailing underscores, what is your alternative?
I like to prefix the names of member data with "m_". It's only one
more character to type, and so much easier to read.


I refuse! I will not be assimilated! I will never do that! ;) Actually,
after bashing my knuckles a few times on the alternative, that approach
does have its appeal. I just find it redundant. That's what /this/ is
for. And I do use /this/ religiously. The one place where it doesn't
serve me well is in the member initialization list.

I seem to recall reading that /this/ should be available there, but it has
never worked there for me when I tried to use it. I should try to look it
up in the Standard to see what is actually specified, but not right now.
bigLongLeftHandSide =
        bigLongIdentifier
            ->bigLongMemberName
            ->yourGonnaWrapSucker
            ->bigLongFunctionName();

I like to move the lines after the first one far enough over to the
right to make it obvious that they are on the RHS. And I try to avoid
overly long names, too.
I am slowly adjusting to using shorter names in C++. The Java dogma is to
spell out everything, and follow strict rules of capitalization. The
advantage is that it is very predictable - so long as no one decides color
should have a 'u' in it. The disadvantage is that it tends to obscure the
logic a bit, and leads to the problem we are discussing.

The place where I encounter the need to wrap my member access strings is
when working with XML DOM.

As for indenting the second and subsequent '->' or '.', my editors don't do
that by default, and I find the meaning pretty clear without the additional
indentation.
My compiler usually catches missing semicolons, but often has problems
reporting a missing comma ... i.e., there will be an error message,
but it will be somewhat cryptic.


I prefer the editor to catch such mistakes. It took me some time to get
used to Emacs, but once I caught on, I found the syntax-sensitive
indentation invaluable. Regardless of whether I'm writing DocBook, EBNF, XML,
or C++, if I mess up the syntax, the indentation stops working after the
point where I made the mistake. KDevelop also supports a very similar kind
of behavior.

With JBuilder, the editor actually highlights syntax errors, and even
catches what I consider to be semantic errors such as using an identifier
which is not in scope. KDevelop does some of this, but I don't believe it
is even possible with C++ to provide the level of edit-time error detection
I've seen done with Java. See my signature for the reason.
--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #15

P: n/a
Steven T. Hatton wrote:
Use the Cpp for:

- token pasting
Why? What can that give me that I cannot achieve using the internal
features of the language?
The TEST_() macro uses it.
- stringerization


I'm not familiar with the term. Care to explain?


My #define db() sample.
- conditional compilation (oh, yeah. Everyone likes that one...)


I am aware that the Cpp is used for this. There have, as yet, been no
viable alternatives introduced, and accepted into the C++ standard.


That's my point. Those who bust on the CPP overlook CC, and CC works fine to
make large systems manageable, such as Linux.

Another great thing about the CPP is it's language-agnostic (so long as you
don't find a language that does not balance "" and () operators, or use
commas for something silly). So the same macros can influence, for example,
your .cpp, .rc, and .idl files, etc.
Putting those inside a C language would make it not a C language. And they
permit techniques that more "modern" languages need but can't do.

A repost:

Visual Studio surfs to errors using <F8>: Go To Output Window Next
Location. The Windows SDK function to write text into the output panel,
for this feature to read it and surf to an error, is OutputDebugString().
Putting them all together yields this killer trace macro:

// requires <sstream>, <iostream>, and <windows.h> for OutputDebugStringA()
#define db(x_) do { std::stringstream z; \
    z << __FILE__ << "(" << __LINE__ << ") : " \
         #x_ " = " << x_ << std::endl; \
    std::cout << z.str() << std::flush; \
    OutputDebugStringA(z.str().c_str()); \
    } while (false)

That takes any argument, including expressions, that supports operator<<.
We will return to these techniques while exploring more Fault Navigation
issues in C++.

db(q) pushes "C:\path\source.cpp(99) : q = 5\n" into the Output Debug
panel. <F8> parses the file name and line number and navigates your editor
directly to the line containing the db(q).

Those are major wins. Tracing with db() is very low cost for very high
feedback.


I guess I'm not getting what that does for me. Can you explain how I
would gain from the use of such a macro? I don't use Microsoft products very
often, so I really have no idea of what you are talking about. Can you
explain how I could use this macro without using a specific IDE?
It is a trace macro, like TRACE(). You put an expression in, and the macro
inserts its value into both the console and the Output panel (which all MS
Windows IDEs support).

Read my verbiage again.
C++ has flaws. But those of you inclined to dismiss it entirely are
invited to write db(), with all these features, in your favorite

language.
Who has dismissed C++?


I copied that repost from an essay, where, as a rhetorical technique, I
hypothesized that someone dismissed C++.

--
Phlip
http://industrialxp.org/community/bi...UserInterfaces
Jul 22 '05 #16

P: n/a
Phlip wrote:
Steven T. Hatton wrote:
> Use the Cpp for:
>
> - token pasting
Why? What can that give me that I cannot achieve using the internal
features of the language?
The TEST_() macro uses it.


What is the TEST_() macro? I searched the ISO/IEC 14882:2003 for the string
"TEST" using acrobat, and got no hits. I assume it is not part of standard
C++?
> - stringerization


I'm not familiar with the term. Care to explain?


My #define db() sample.


That doesn't explain what the word means.
> - conditional compilation (oh, yeah. Everyone likes that one...)


I am aware that the Cpp is used for this. There have, as yet, been no
viable alternatives introduced, and accepted into the C++ standard.


That's my point. Those who bust on the CPP overlook CC, and CC works fine
to make large systems manageable, such as Linux.


Linux is written in C. It is a very controlled development process that
requires a great deal of specialized expertise to work on. Working on any
given component does not require knowledge of a wide variety of rapidly
changing interfaces.

I really don't know what you mean by the "CC". To me, CC is a somewhat
antiquated alias for the C Compiler.

Now if you want to talk about program suites such as the KDE, /that/ is
written in C++ using Qt. I've been working with the KDE since spring of
1997. It is an extremely impressive project, with many gifted
contributors. The component of the KDE which I have spent a good deal of
the past several months working with is KDevelop. That is the IDE
distributed with the KDE. I am acutely aware of its capabilities and
limitations.

I've also worked with J2EE fairly extensively. For purposes of developing
enterprise applications such as the web based interface for the US Army's
personnel database, BEA's WebLogic is easier to use, and facilitates faster
development than anything I know of based on C++.
Another great thing about the CPP is it's language-agnostic (so long as
you don't find a language that does not balance "" and () operators, or
use commas for something silly). So the same macros can influence, for
example, your .cpp, .rc, and .idl files, etc.
That doesn't address the problems created by using the CPP.

http://www.freshsources.com/bjarne/ALLISON.HTM

//--------------excerpt-----------------------------
CUJ: What do you do for your day job now?

BS: I'm trying to build up a research group to focus on large-scale
programming -- that is, to do research on the use of programming in large
programs rather than just language design, just the study of small
(student) programs, or the exclusive focus on design and/or process. I
think that programming technique, programming language, and the individual
programmer have a central role in the development of large systems. Too
often, either the scale of industrial projects or the role of programming
is ignored. This research will involve work on libraries and tools.
....

CUJ: What is the next step in the evolution of C++?

BS: Tools/environments and library design. I'd like to see incremental
compilers and linkers for C++. Something like two seconds is a suitable
time for re-compiling and re-linking a medium-sized C++ program after a
change that is localized to a few functions. I'd like to see browsers and
analysis tools that know not only syntax but also the type of every entity
of a program. I'd like to see optimizers that actually take notice of C++
constructs and do a decent job of optimizing them, rather than simply
throwing most of the useful information away and giving the rest to an
optimizer that basically understands only C. I'd like to see debuggers
integrated with the incremental compiler so that the result approximates a
C++ interpreter. (I'd also like to see a good portable C++ interpreter.)
None of this is science fiction; in fact, I have seen experimental versions
of most of what I suggest -- and more. We still suffer from
first-generation C++ environments and tools.
....

The preprocessor is one of the main factors that has led to the lack of more
sophisticated C program development environments: the fact that the source
text seen by the programmer isn't the text seen by the compiler is a
crippling handicap. I think the time has come to be serious about
macro-free C++ programming.
//--------------end-excerpt-----------------------------
It is a trace macro, like TRACE(). You put an expression in, and the macro
inserts its value into both the console and the Output panel (which all MS
Windows IDEs support).

Read my verbiage again.


Either I'm missing something, or that is far from impressive. The average
Java IDE can trace code and show me the value of any variable in detail. I
can also browse through the activation stack which is presented as a
clickable tree with all the classes and class members available for
inspection.

--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #17

P: n/a
Steven T. Hatton wrote:
Phlip wrote:
> Use the Cpp for:
>
> - token pasting
The TEST_() macro uses it.


What is the TEST_() macro? I searched the ISO/IEC 14882:2003 for the string
"TEST" using Acrobat, and got no hits. I assume it is not part of standard
C++?


Steve, people are allowed to write other macros than those that appear in
The Standard. Google this newsgroup for my street name, TEST_, and
Steinbach.
> - stringerization

I'm not familiar with the term. Care to explain?


My #define db() sample.


That doesn't explain what the word means.


Look inside my macro for the # operator.
> - conditional compilation (oh, yeah. Everyone likes that one...)

I am aware that the Cpp is used for this. There have, as yet, been no
viable alternatives introduced, and accepted into the C++ standard.


That's my point. Those who bust on the CPP overlook CC, and CC works fine to make large systems manageable, such as Linux.


Linux is written in C.


Snip from here down. I can't find evidence you are reading my posts.

Please take a deep breath, and entertain the idea that passionately decrying
a handful of keywords is a kind of zealotry.

--
Phlip
http://industrialxp.org/community/bi...UserInterfaces
Jul 22 '05 #18

P: n/a
Phlip wrote:
Steven T. Hatton wrote:
Phlip wrote:
>> > Use the Cpp for:
>> >
>> > - token pasting
> The TEST_() macro uses it.


What is the TEST_() macro? I searched the ISO/IEC 14882:2003 for the string
"TEST" using Acrobat, and got no hits. I assume it is not part of
standard C++?


Steve, people are allowed to write other macros than those that appear in
The Standard. Google this newsgroup for my street name, TEST_, and
Steinbach.


This is silly. You used a macro that you wrote as an example that I am
expected to be familiar with, without even mentioning that you wrote it.
>> > - stringerization
>>
>> I'm not familiar with the term. Care to explain?
>
> My #define db() sample.


That doesn't explain what the word means.


Look inside my macro for the # operator.


Perhaps you could simply provide a definition for the term. My attempting
to extract a definition from an example has the significant potential for
my arriving at a different definition than the one you intend.
Snip from here down. I can't find evidence you are reading my posts.

Please take a deep breath, and entertain the idea that passionately
decrying a handful of keywords is a kind of zealotry.


Sorry if you were confused by my use of words I learned in college when I
studied computer science. I can't say I will refrain from doing so in the
future, but I will do my best to restrict my vocabulary when addressing you
directly.

Now, back to the topic at hand. I changed the subject field in the heading
to reflect the fork this thread has taken. For the moment, forget any of
my own comments regarding the Cpp, and explain to the news group where
Bjarne Stroustrup is in error regarding the opinions expressed in the
following:
http://www.freshsources.com/bjarne/ALLISON.HTM

//--------------excerpt-----------------------------
CUJ: What do you do for your day job now?

BS: I'm trying to build up a research group to focus on large-scale
programming -- that is, to do research on the use of programming in large
programs rather than just language design, just the study of small
(student) programs, or the exclusive focus on design and/or process. I
think that programming technique, programming language, and the individual
programmer have a central role in the development of large systems. Too
often, either the scale of industrial projects or the role of programming
is ignored. This research will involve work on libraries and tools.
....

CUJ: What is the next step in the evolution of C++?

BS: Tools/environments and library design. I'd like to see incremental
compilers and linkers for C++. Something like two seconds is a suitable
time for re-compiling and re-linking a medium-sized C++ program after a
change that is localized to a few functions. I'd like to see browsers and
analysis tools that know not only syntax but also the type of every entity
of a program. I'd like to see optimizers that actually take notice of C++
constructs and do a decent job of optimizing them, rather than simply
throwing most of the useful information away and giving the rest to an
optimizer that basically understands only C. I'd like to see debuggers
integrated with the incremental compiler so that the result approximates a
C++ interpreter. (I'd also like to see a good portable C++ interpreter.)
None of this is science fiction; in fact, I have seen experimental versions
of most of what I suggest -- and more. We still suffer from
first-generation C++ environments and tools.
....

The preprocessor is one of the main factors that has led to the lack of more
sophisticated C program development environments: the fact that the source
text seen by the programmer isn't the text seen by the compiler is a
crippling handicap. I think the time has come to be serious about
macro-free C++ programming.
//--------------end-excerpt-----------------------------

--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #19

P: n/a
Steven T. Hatton wrote:
E. Robert Tisdale wrote:
Steven T. Hatton wrote:

I'm questioning your interpretation of "Stroustrup's view on classes".
I'm assuming that you read something that Stroustrup wrote and published
and that you aren't communicating with him privately
and that you are not reading his mind.
It would help if you could cite publication and, perhaps,
quote the passage that he wrote.


TC++PL(SE) 24.3.7.1 Invariants

"The values of the members and the objects referred to by members are
collectively called the /state/ of the object (or simply, its /value/). A
major concern of a class design is to get an object into a well-defined
state (initialization/construction), to maintain a well-defined state as
operations are performed, and finally to destroy the object gracefully.
The property that makes the state of an object well-defined is called
its /invariant/."
...
"Much of the skill in class design involves making a class simple enough to
make it possible to implement it so that it has a useful invariant that can
be expressed simply. It is easy enough to state that every class needs an
invariant. The hard part is to come up with a useful invariant that is
easy to comprehend and that doesn't impose unacceptable constraints on the
implementer or on the efficiency of the operations."
I'll offer a simpler example:

#include <iomanip>   // std::setw, std::setfill
#include <sstream>   // std::ostringstream
#include <string>

class SocialSecurityNumber {
private:
    unsigned long int Number;
public:
    explicit
    SocialSecurityNumber(unsigned long int n): Number(n) {
        // throw an exception if n does not represent a valid SSN
    }
    explicit
    SocialSecurityNumber(const std::string& n): Number(0) {
        // accumulate the digits of n, skipping the dashes;
        // throw an exception if n does not represent a valid SSN
        for (std::string::size_type i = 0; i < n.size(); ++i)
            if (n[i] != '-')
                Number = 10*Number + (n[i] - '0');
    }
    operator unsigned long int() const {
        return Number;
    }
    operator std::string() const {
        // zero-fill each field so leading zeros are not lost
        std::ostringstream oss;
        oss << std::setfill('0')
            << std::setw(3) << Number/1000000 << '-'
            << std::setw(2) << (Number/10000)%100 << '-'
            << std::setw(4) << Number%10000;
        return oss.str();
    }
};

You can't do much with a social security number except set and get it.
In this case, the setter is a constructor and the getter is a type cast.
There is no "invariant" because a social security number can't change
except via the assignment operator

SocialSecurityNumber& operator=(const SocialSecurityNumber&);

All this shows is that even if all you have is getters and setters,
they needn't be named

SocialSecurityNumber::get(/* arguments */) const;
SocialSecurityNumber::set(/* arguments */);


There is an invariant in the SocialSecurityNumber.
It is established by checking that it is valid when constructed
and maintained by checking that it is valid when assigned to.


Yes. The invariant is trivial.
And I think that your understanding of invariance is consistent
with the way that Stroustrup uses the term.
Notice that, in the example above, there is no way to construct
an invalid SocialSecurityNumber and no assignment is defined
except the default assignment from another valid SocialSecurityNumber.
Jul 22 '05 #20

P: n/a
E. Robert Tisdale wrote:
Steven T. Hatton wrote:

There is an invariant in the SocialSecurityNumber.
It is established by checking that it is valid when constructed
and maintained by checking that it is valid when assigned to.


Yes. The invariant is trivial.
And I think that your understanding of invariance is consistent
with the way that Stroustrup uses the term.
Notice that, in the example above, there is no way to construct
an invalid SocialSecurityNumber and no assignment is defined
except the default assignment from another valid SocialSecurityNumber.


Ironically, your SSN is very close to a class I created while playing around
with developing an OO parser/validator for C++. My class is Identifier.

/***************************************************************************
* Copyright (C) 2004 by Steven T. Hatton *
* ha*****@globalsymmetry.com *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
* This program is distributed in the hope that it will be useful, *
* but WITHOUT ANY WARRANTY; without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
* GNU General Public License for more details. *
* *
* You should have received a copy of the GNU General Public License *
* along with this program; if not, write to the *
* Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *

***************************************************************************/
#ifndef STH_CLASSBUILDERIDENTIFIER_H
#define STH_CLASSBUILDERIDENTIFIER_H

#include <string>
#include <cctype>

#include "InvalidIdentifier_Exception.h"

namespace sth
{
namespace ClassBuilder
{
/**
@author Steven T. Hatton
*/
class Identifier
{
public:

    Identifier();
    Identifier(const std::string& value_)
        throw(InvalidIdentifier_Exception);
    ~Identifier();

    /**
     * Convenience function to support input validation.
     * @param c
     * @return
     */
    static bool is_valid_char(const char& c)
    {
        // cast to unsigned char: passing a plain char that may be
        // negative to isalnum() is undefined behavior
        return isalnum(static_cast<unsigned char>(c)) || c == '_';
    }

    /**
     * Convenience function to support input validation.
     * @param c
     * @return
     */
    static bool is_valid_first_char(const char& c)
    {
        return Identifier::is_valid_char(c)
            && !isdigit(static_cast<unsigned char>(c));
    }

    static bool is_valid(const std::string& s);

    operator std::string() const
    {
        return this->value;
    }

private:
    std::string value;
};
}
}

#endif

#include "Identifier.h"

namespace sth
{

namespace ClassBuilder
{

Identifier::Identifier()
{}

Identifier::Identifier(const std::string& value_)
    throw(InvalidIdentifier_Exception)
    : value(value_)
{
    if(!is_valid(this->value))
    {
        throw InvalidIdentifier_Exception();
    }
}

Identifier::~Identifier()
{}

}

}

/*!
\fn sth::ClassBuilder::Identifier::is_valid(const std::string& s)
*/
bool sth::ClassBuilder::Identifier::is_valid(const std::string& s)
{
    if((s.length() == 0) || !is_valid_first_char(s[0]))
    {
        return false;
    }
    for(size_t i = 1; i < s.length(); i++)
    {
        if(!is_valid_char(s[i]))
        {
            return false;
        }
    }
    return true;
}
I finally realized what I am really questioning is the relationship between
things like property lists and the concept of invariants. I had already
come to the conclusion that it is reasonable for a class to have some
invariants which are preserved under a given set of operations. For
example, a rectangle should preserve its right angles an side lengths under
translations and rotations. But the length of the sides would likely be
adjustable under other operations. A subset of these operations could be
defined so that the ratios of the side lengths are preserved.

I don't know exactly what Stroustrup's position is on that aspect of
invariance. This is the article that really helped me understand what he
was getting at with invariants.

http://www.research.att.com/~bs/eh_brief.pdf

Consider this list of properties describing a simple box that displays text
with specified colors, size, shape and location, as well as some other
information about its role in a collection of other objects of this class:
bool is_leaf;
bool is_vertical;
std::string text;
RgbColor bg_color;
RgbColor text_color;
RgbColor edge_color;
double h;
double w;
double border_w;
double x;
double y;
int bg_z;
int text_z;
The data members were not chosen because I recognized an invariant and
identified them as essential in maintaining that invariant. They were
chosen because they are what I need in order to implement my design, and
they are all essential to the one aspect of the program which involves
presenting the computed results. Members of class types such as RgbColor
are designed to preserve their own invariants, and that invariance
is a result of using unsigned char for the member data. RgbColor is a
struct.

One thing I've been wondering about is the notion of distributed invariants.
That is, in the case of my property objects, it has become reasonable to
employ an observer pattern that involves notifying some objects that the
properties have changed. Looking at the data I have listed above, it seems
fairly orthogonal. Any one member can be changed without modifying the
others. The one exception is that the text position needs to be calculated
when any of the other geometric data is modified.

There is, however a requirement that different objects in the system remain
synchronized with the state of the properties object. I'm wondering how
useful it is to extend the notion of class invariants to a notion of
distributed state with multiple participants.

--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #21

P: n/a
"Steven T. Hatton" <su******@setidava.kushan.aa> wrote in message
news:ss********************@speakeasy.net...
Perhaps you could simply provide a definition for the term. My attempting
to extract a definition from an example has the significant potential for
my arriving at a different definition than the one you intend.
Stringizing is a macro operator that converts a macro argument to a string
literal. E.g.

#define STR(x) #x
STR(abc) // "abc"
Now, back to the topic at hand. I changed the subject field in the heading
to reflect the fork this thread has taken. For the moment, forget any of
my own comments regarding the Cpp, and explain to the news group where
Bjarne Stroustrup is in error regarding the opinions expressed in the
following:


Sure. The main factors, by far, that have led to the lack of more sophisticated
development environments (in C++) are templates (or more specifically,
second-phase lookup) and (in C and C++) context-dependent parsing. For a tool
to do an analysis that is anywhere near complete, it has to do a nearly complete
parse of the language, *and* it has to be able to deal with intermediate stages
of construct completion (i.e. as you type in an IDE; it can't just bail like a
compiler can). The preprocessor is almost insignificant in comparison.

Regards,
Paul Mensonides
Jul 22 '05 #22

P: n/a
Paul Mensonides wrote:
"Steven T. Hatton" <su******@setidava.kushan.aa> wrote in message
news:ss********************@speakeasy.net...
Perhaps you could simply provide a definition for the term. My
attempting to extract a definition from an example has the significant
potential for my arriving at a different definition than the one you
intend.


Stringizing is a macro operator that converts a macro argument to a string
literal. E.g.

#define STR(x) #x
STR(abc) // "abc"


And that buys you what? I don't see the profound usefulness of this trick.
Now, back to the topic at hand. I changed the subject field in the
heading
to reflect the fork this thread has taken. For the moment, forget any of
my own comments regarding the Cpp, and explain to the news group where
Bjarne Stroustrup is in error regarding the opinions expressed in the
following:


Sure. The main factors, by far, that have led to the lack of more
sophisticated development environments (in C++) are templates (or more
specifically,
second-phase lookup) and (in C and C++) context-dependent parsing. For a
tool to do an analysis that is anywhere near complete, it has to do a
nearly complete parse of the language, *and* it has to be able to deal
with intermediate stages of construct completion (i.e. as you type in an
IDE; it can't just bail like a
compiler can). The preprocessor is almost insignificant in comparison.

Regards,
Paul Mensonides


I've given this issue some consideration, and I've also observed the
behavior of KDevelop. I agree that templates throw the IDE a curve ball.
I suspect this is (in part) what Stroustrup was addressing when he
mentioned incremental compilation and a C++ interpreter. There are at
least two options for dealing with templates and edit-time error detection
and code completion. They are not mutually exclusive.

One is to present the 'raw' interface of the template to the user. That is,
you just parse the template and show the user code completion options in
the 'raw' form, e.g., if the user is creating a deque the following
completion options could be presented:
....
void push_front(const T& x);
void push_back(const T& x);
iterator insert(iterator position, const T& x);
void insert(iterator position, size_type n, const T& x);
....

Clearly some level of error detection could be implemented at that level as
well. For example, if the user attempted to invoke push_font(const T& x),
the checker could detect that error.

Your observation regarding two phase lookup, if I understand your meaning,
is relevant, but not a show-stopper. This relates back to my early comment
about KDevelop. I don't know how they do it, but they are providing code
completion for classes instantiated from templates. That requires that the
template code has been compiled. To that I say 'big deal'! It's that way
with JBuilder and Java. JBuilder balks at providing error detection and
code completion in some cases in which the code has not been compiled, and
it gets it wrong sometimes when the code has changed, and has not been
recompiled.

One might argue that the same could be done with the Cpp. My response is
that the Cpp is far less structured, and trying to predict what someone
might have done to his code with it is beyond what I would expect a
reasonable programmer to care to address.

This is a perfect example of how the CPP undermines the integrity of C++.
It's from the xerces C++ samples:
CreateDOMDocument.cpp
http://tinyurl.com/3wt4x

#include <xercesc/util/PlatformUtils.hpp>
#include <xercesc/util/XMLString.hpp>
#include <xercesc/dom/DOM.hpp>
#if defined(XERCES_NEW_IOSTREAMS)
#include <iostream>
#else
#include <iostream.h>
#endif

XERCES_CPP_NAMESPACE_USE
....

#define X(str) XStr(str).unicodeForm()

....

unsigned int elementCount = doc->getElementsByTagName(X("*"))->getLength();
XERCES_STD_QUALIFIER cout << "The tree just created contains: "
<< elementCount << " elements." << XERCES_STD_QUALIFIER endl;
....

Using JBuilder, and the Java counterpart, I can fly through XML coding.
It's a pleasure to work with. The above excerpt is a mangled and tangled
maze of inelegant kludgery. There are historical reasons for the code
being the way it is. But there is no justification for this kind of
anti-programming in any code written with the future in mind.

I pretty much figured out what all the macro mangling does in the above
sample. But by that time I was so disgusted, I lost interest in using the
library. There are countless examples of such anti-code in most of the C++
libraries I've seen.

I should be able to write "using xercesc::XMLString" and the compiler should
pull in what it needs, and ONLY WHAT IT NEEDS, to provide that class. I
should not need the #include a file that #includes a file that #includes a
file that provides the definition of the namespace macro, and another three
or more levels of copy and paste to get the bloody class declaration, and
then pray that it can find the corresponding definition.

For anybody who wants to tell me that the reason that C++ code is this hard
to understand is that C++ is more powerful, please explain to me why
IBM-the company that wrote the Xerces C++ code-is using the Java
counterpart to serve out IBM's latest online C++ language and library
documentation.

http://publib.boulder.ibm.com/infoce...elp/index.jsp\
?topic=/com.ibm.vacpp7a.doc/language/ref/clrc00about_this_reference.htm

Years ago I read Kernighan and Ritchie's TCPL(2E). I've read all of
TC++PL(SE) except for the appendices, from which I've read selections. I
read the core language sections twice. I've also read much other material
on the language, and I've been writing a good deal of code. Additionally,
I've read The Java Programming language 3rd Edition, and have considerable
experience working with Java. From that foundation I have formed the
opinion, shared by the creator of C++, that the preprocessor undermines the
integrity of the core C++ programming language and is the main reason for
the comparative ease of use Java provides. The Cpp is not a strength, it is
a liability.


Jul 22 '05 #23

P: n/a
Steven T. Hatton wrote:
Paul Mensonides wrote:
"Steven T. Hatton" <su******@setidava.kushan.aa> wrote in message
news:ss********************@speakeasy.net...
Perhaps you could simply provide a definition for the term. My
attempting to extract a definition from an example has the
significant potential for my arriving at a different definition
than the one you intend.
Stringizing is a macro operator that converts a macro argument to a
string literal. E.g.

#define STR(x) #x
STR(abc) // "abc"


And that buys you what? I don't see the profound usefulness of this
trick.


It buys you meaningful assertions without replicating code, among other things.
I've given this issue some consideration, and I've also observed the
behavior of KDevelop. I agree that templates throw the IDE a curve
ball.
I suspect this is (in part) what Stroustrup was addressing when he
mentioned incremental compilation and a C++ interpreter. There are at
least two options for dealing with templates and edit-time error
detection and code completion. They are not mutually exclusive.

One is to present the 'raw' interface of the template to the user.
That is, you just parse the template and show the user code
completion options in the 'raw' form, e.g., if the user is creating a
deque the following completion options could be presented:
The 'raw' interface of which specialization of the template? The point is that
a tool, to be at all effective in modern programming, has to do a near-full parse
of the code and semantic analysis. With C++ especially, that is *far* from
trivial.
Your observation regarding two phase lookup, if I understand your
meaning, is relevant, but not a show-stopper.
Actually, it pretty much is a show-stopper, more so than anything else. A tool
can trivially preprocess a file, but a tool cannot do any meaningful code
completion inside a template. E.g. what options might a tool give me at this
point:

template<class T> void f(T) {
X<T>:: /* here */
}

The answer is "nothing" because it cannot know what T is, and therefore cannot
tell what specialization of X is chosen, nor can it even tell what X
specializations there might be. In generic programming, there is very little
code in a template body that is not dependent on template parameters. This is
second phase lookup (a very useful feature) at work. What it boils down to is
that code analysis such as completion is fundamentally useless inside template
code, and that is precisely the place where it would be most useful.
This relates back to my
early comment about KDevelop. I don't know how they do it, but they
are providing code completion for classes instantiated from
templates. That requires that the template code has been compiled.
To that I say 'big deal'!
Okay, I'll follow this line (even though "partially" compiling C++ as you type
would be incredibly expensive). If you say 'big deal' to that, then what is the
problem with the preprocessor? Tools can trivially preprocess source code much
easier than they can parse the underlying language itself.
It's that way with JBuilder and Java.
JBuilder balks at providing error detection and code completion in
some cases in which the code has not been compiled, and it gets it
wrong sometimes when the code has changed, and has not been
recompiled.

One might argue that the same could be done with the Cpp. My response
is that the Cpp is far less structured, and trying to predict what
someone might have done to his code with it, is beyond what I would
expect a reasonable programmer to care to address.
What on earth are you talking about? Cpp *is* well-structured, and more
importantly, it is a straightline process. If a tool cares to look at the code
resulting from the preprocessor, it merely has to preprocess it. To that I say
'big deal'.
This is a perfect example of how the CPP undermines the integrity of
C++.
No, it is an example of code that *may* be misusing the preprocessor. Even if
it is, so what? If I wanted to produce asinine and unreadable code, use of the
preprocessor is not required. In fact, it is possible in *any* language with
*any* feature-set.
It's from the xerces C++ samples:
CreateDOMDocument.cpp
http://tinyurl.com/3wt4x

#include <xercesc/util/PlatformUtils.hpp>
#include <xercesc/util/XMLString.hpp>
#include <xercesc/dom/DOM.hpp>
#if defined(XERCES_NEW_IOSTREAMS)
#include <iostream>
#else
#include <iostream.h>
#endif

XERCES_CPP_NAMESPACE_USE
...

#define X(str) XStr(str).unicodeForm()

...

unsigned int elementCount = doc->getElementsByTagName(X("*"))->getLength();
XERCES_STD_QUALIFIER cout << "The tree just created contains: "
    << elementCount << " elements." << XERCES_STD_QUALIFIER endl;
...

Using JBuilder, and the Java counterpart, I can fly through XML
coding. It's a pleasure to work with. The above excerpt is a mangled
and tangled maze of inelegant kludgery. There are historical reasons
for the code being the way it is. But there is no justification for
this kind of anti-programming in any code written with the future in
mind.
Actually, there is. Because the underlying language is so complex, there are
numerous bugs and missing features in every single existing C++ compiler (more
so for some than others). Even looking at the tiny snippet of code above, which
I know nothing about as a whole, I can tell that it is working around that exact
issue. In this example, the preprocessor merely makes it possible to do what
otherwise could not be done.
I pretty much figured out what all the macro mangling does in the
above sample. But by that time I was so disgusted, I lost interest
in using the library. There are countless examples of such anti-code
in most of the C++ libraries I've seen.
There are plenty of ways to misuse the preprocessor, just as there are plenty of
ways to misuse any language feature in any language. So what? Further, the
above code is 'anti-code' only in the sense that it is working around flaws in
the compiler, standard library, or isolating inherent platform-related
dependencies.
I should be able to write "using xercesc::XMLString" and the compiler
should pull in what it needs, and ONLY WHAT IT NEEDS, to provide that
class. I should not need the #include a file that #includes a file
that #includes a file that provides the definition of the namespace
macro, and another three or more levels of copy and paste to get the
bloody class declaration, and then pray that it can find the
corresponding definition.
What does this have to do with the C or C++ preprocessor? C++ does not have a
module system. You're talking about fundamental changes to the language, not
problems with the preprocessor. Further, there are things called "good
organization" and "good code structure" that make dealing with these issues
trivial (e.g. to use interface X include file Y and link to Z--well designed
headers can be blackboxes).
For anybody who wants to tell me that the reason that C++ code is
this hard to understand is that C++ is more powerful, please explain
to me why IBM-the company that wrote the Xerces C++ code-is using the
Java counterpart to serve out IBM's latest online C++ language and
library documentation.
Because they felt like it? Honestly, who cares? In my opinion, which is what
matters to me, Java sucks on so many levels it is unusable.

Regarding C++ (in general)... C++ is more powerful--that is unquestionable.
Whether all that power is necessary to accomplish some specific task is another
question altogether.
http://publib.boulder.ibm.com/infoce...elp/index.jsp\
?topic=/com.ibm.vacpp7a.doc/language/ref/clrc00about_this_reference.htm

Years ago I read Kernighan and Ritchie's TCPL(2E). I've read all of
TC++PL(SE) except for the appendices, from which I've read
selections. I read the core language sections twice. I've also read
much other material on the language, and I've been writing a good
deal of code. Additionally, I've read The Java Programming language
3rd Edition, and have considerable experience working with Java.
From that foundation I have formed the opinion, shared by the creator
of C++,
So what? Bjarne is not the ultimate authority on what is good and bad.
that the preprocessor undermines the integrity of the core
C++ programming language and is the main reason for the comparative
ease of use Java provides. The Cpp is not a strength, it is a
liability.


Your opinion, which you are entitled to, is simply wrong. The preprocessor is
not to blame, the complexity of the language as a whole is. For an experienced
C++ programmer, using Java is like tying your hands behind your back. The
language (Java) not only promotes, but enforces bad design as a result of
seriously lacking feature set--with the excuse that it is protecting you from
yourself. C++ doesn't make such decisions for us, instead it gives us the tools
to make abstractions that do it for us. In other words, it isn't the result of
arrogance propelled by limited vision.

Regards,
Paul Mensonides
Jul 22 '05 #24

Paul Mensonides wrote:
Steven T. Hatton wrote:
#define STR(x) #x
STR(abc) // "abc"


And that buys you what? I don't see the profound usefullness of this
trick.


It buys you meaningful assertions without replicating code, among other
things.


Perhaps this is useful. I have never tried using assertions. When I read
about them in TC++PL(SE) it basically went: 'Check this assertion thing
out. Pretty cool, eh? They're macros. They suck!'

There are places where retaining some of the Cpp for the implementation to
use seems to make sense. For example the various __LINE__, __FILE__,
__DATE__, etc. are clearly worth having around so debuggers and other tools
can use them. They should not be considered part of the core language to
be used by application programmers.

One is to present the 'raw' interface of the template to the user.
That is, you just parse the template and show the user code
completion options in the 'raw' form, e.g., if the user is creating a
deque the following completion options could be presented:


The 'raw' interface of which specialization of the template?


That seems like an ambiguity that could be resolved by examining the
parameter when necessary.
The point is
that a tool, to be at all effective in modern programming, has to do near
full parse
of the code and semantic analysis. With C++ especially, that is *far*
from trivial.
I agree. However, the presence of the CPP complicates this issue to the
point where it seems to degenerate into an exercise in absurdity. I will
observe that many Java IDEs do this rather successfully.
Your observation regarding two phase lookup, if I understand your
meaning, is relevant, but not a show-stopper.


Actually, it pretty much is a show-stopper, more so than anything else. A
tool can trivially preprocess a file, but a tool cannot do any meaningful
code
completion inside a template. E.g. what options might a tool give me at
this point:

template<class T> void f(T) {
X<T>:: /* here */
}


Whatever can be extracted from X. If there are specializations of X, then it
seems reasonable to provide the superset of options with an indication of
which are specific to a given specialization.
The answer is "nothing" because it cannot know what T is, and therefore
cannot tell what specialization of X is chosen, nor can it even tell what
X specializations there might be.
Why can't it tell what specialization for X exist?
In generic programming, there is very little
code in a template body that is not dependent on template parameters.
This is second phase lookup (a very useful feature) at work. What it
boils down to is that code analysis such as completion is fundamentally
useless inside template code, and that is precisely the place where it
would be most useful.
I'm not convinced that either of these assertions are correct. It may be
more difficult to provide meaningful code completion inside a template, but
I believe it is fundamentally possible to provide a significant amount of
information in that context. Furthermore, I don't accept the notion that
code completion inside a template is where it would be most useful. By
their nature, templates are abstractions, and their use implies a certain
amount of selective ignorance that would preclude your knowing the details
of how specific template parameters would affect the context.
This relates back to my
early comment about KDevelop. I don't know how they do it, but they
are providing code completion for classes instantiated from
templates. That requires that the template code has been compiled.
To that I say 'big deal'!


Okay, I'll follow this line (even though "partially" compiling C++ as you
type
would be incredibly expensive). If you say 'big deal' to that, then what
is the
problem with the preprocessor? Tools can trivially preprocess source code
much easier than they can parse the underlying language itself.


Well, to some extent there is syntax checking going on with KDevelop. It's
not at the level I would like, but it continues to improve. As for the
cost of compiling C++, I'm not convinced that the preprocessor and the
compilation mechanisms it supports and encourages aren't a significant part
of the problem. The current approach seems rather monolithic. I suspect a
more compartmentalized strategy would prove far more efficient. I have to
admit that g++ can take an absurd amount of time to compile fairly simple
programs. My impression is that this is due to the recompilation of units that
invariably produce identical results.
One might argue that the same could be done with the Cpp. My response
is that the Cpp is far less structured, and trying to predict what
someone might have done to his code with it, is beyond what I would
expect a reasonable programmer to care to address.


What on earth are you talking about? Cpp *is* well-structured, and more
importantly, it is a straightline process. If a tool cares to look at the
code
resulting from the preprocessor, it merely has to preprocess it. To that
I say 'big deal'.


But now you are talking about something editing your code at the same time
you are, but not displaying the results. What I should have said is that
the result of using the Cpp is far less structured than the result of using
a programming language.
But there is no justification for
this kind of anti-programming in any code written with the future in
mind.


Actually, there is. Because the underlying language is so complex, there
are numerous bugs and missing features in every single existing C++
compiler (more
so for some than others). Even looking at the tiny snippet of code above,
which I know nothing about as a whole, I can tell that it is working
around that exact
issue. In this example, the preprocessor merely makes it possible to do
what otherwise could not be done.


I see no reason to try to support a compiler that doesn't understand
namespaces. What significant platform is restricted to using such a
compiler?
I pretty much figured out what all the macro mangling does in the
above sample. But by that time I was so disgusted, I lost interest
in using the library. There are countless examples of such anti-code
in most of the C++ libraries I've seen.


There are plenty of ways to misuse the preprocessor, just as there are
plenty of
ways to misuse any language feature in any language. So what? Further,
the above code is 'anti-code' only in the sense that it is working around
flaws in the compiler, standard library, or isolating inherent
platform-related dependencies.


As I said, there are historical reasons for the code to be that way. That is
far from the only place where people used the CPP to rewrite code that
should be straightforward C++. There are arguments for generating code in
ways that currently rely on the Cpp. I use them all the time.
http://doc.trolltech.com/3.3/moc.html
I should be able to write "using xercesc::XMLString" and the compiler
should pull in what it needs, and ONLY WHAT IT NEEDS, to provide that
class. I should not need the #include a file that #includes a file
that #includes a file that provides the definition of the namespace
macro, and another three or more levels of copy and paste to get the
bloody class declaration, and then pray that it can find the
corresponding definition.


What does this have to do with the C or C++ preprocessor? C++ does not
have a module system. You're talking about fundamental changes to the
language, not problems with the preprocessor.


"I suspect my dislike for the preprocessor is well known. Cpp is essential
in C programming, and still important in conventional C++ implementations,
but it is a hack, and so are most of the techniques that rely on it.
*_It_has_been_my_long-term_aim_to_make_Cpp_redundant_*."
Further, there are things called "good
organization" and "good code structure" that make dealing with these
issues trivial (e.g. to use interface X include file Y and link to Z--well
designed headers can be blackboxes).
I try hard to get things right. There are some aspects of Xerces which do
show a better face than what I previously introduced:
http://cvs.apache.org/viewcvs.cgi/xm...c/xercesc/dom/

The separation of interface and implementation is textbook proper C++. I
still find the added level of complexity involved in using headers
unnecessary, and a significant burden. I'm currently working on a tool that
will mitigate this drudgery by treating the class as a unit, rather than a
composite.
For anybody who wants to tell me that the reason that C++ code is
this hard to understand is that C++ is more powerful, please explain
to me why IBM-the company that wrote the Xerces C++ code-is using the
Java counterpart to serve out IBM's latest online C++ language and
library documentation.


Because they felt like it? Honestly, who cares? In my opinion, which is
what matters to me, Java sucks on so many levels it is unusable.

That's a silly statement. I've seen it used to do lots of useful things.
This may be a fairly useless program, but I believe it demonstrates that
Java is capable of supporting the development of fairly sophisticated
programs. It's my second Java3d project. It was pretty much a limbering
up exercise.
http://baldur.globalsymmetry.com//pr...ing-basis.html
Regarding C++ (in general)... C++ is more powerful--that is
unquestionable.
What do you mean by powerful? It seems clear that major players in the
industry do not consider C++ to be appropriate for many major applications.
Whether all that power is necessary to accomplish some
specific task is another question altogether.
http://publib.boulder.ibm.com/infoce...elp/index.jsp\
?topic=/com.ibm.vacpp7a.doc/language/ref/clrc00about_this_reference.htm

Years ago I read Kernighan and Ritchie's TCPL(2E). I've read all of
TC++PL(SE) except for the appendices, from which I've read
selections. I read the core language sections twice. I've also read
much other material on the language, and I've been writing a good
deal of code. Additionally, I've read The Java Programming language
3rd Edition, and have considerable experience working with Java.
From that foundation I have formed the opinion, shared by the creator
of C++,
So what? Bjarne is not the ultimate authority on what is good and bad.


No, but I find it interesting that his thoughts on this matter seem to
reflect my own - mostly independently formed - assessment. I believe it is
also significant that he does hold such a strong opinion about the matter.

He may not agree with me here, and even if he does, he may be unwilling to
say as much for the sake of polity. The way the standard headers are used
is a logical mess. It causes the programmer to waste valuable time
searching for things that should be immediately accessible by name. The same is
true of most other libraries as well. There is no need for this lack of
structure other than the simple fact that no one has been able to move the
mindset of some C++ programmers out of the Nixon era.
that the preprocessor undermines the integrity of the core
C++ programming language and is the main reason for the comparative
ease of use Java provides. The Cpp is not a strength, it is a
liability.


You're opinion, which you are entitled to, is simply wrong. The
preprocessor is
not to blame, the complexity of the language as a whole is. For an
experienced
C++ programmer, using Java is like tying your hands behind your back. The
language (Java) not only promotes, but enforces bad design as a result of
seriously lacking feature set--with the excuse that it is protecting you
from
yourself.


In the case of the CPP, it isn't so much me I want to be protected from.
It's the people who think it's a good idea to use it for anything beyond
simple conditional compilation, some debugging, and perhaps for #including
header files. I will admit that I still find the use of header files
bothersome. They represented one of the biggest obstacles I encountered
when first learning C++.
C++ doesn't make such decisions for us, instead it gives us the tools
to make abstractions that do it for us. In other words, it isn't the
result of arrogance propelled by limited vision.


Leaving some things to choice serves no useful purpose beyond pleasing
multiple constituents. There are places where a lack of rules is not
empowering, it is restricting. I hear driving in Rome is not what one could
properly call a civilized affair. Personally, I like the idea that people
stop at stoplights, use turn signals appropriately, stay in one lane under
normal circumstances, etc.

Jul 22 '05 #25

Steven T. Hatton wrote:
It buys you meaningful assertions without replicating code, among other
things.
Perhaps this is useful. I have never tried using assertions. When I
read about them in TC++PL(SE) it basically went. 'Check this
assertion thing out. Pretty cool, eh? They're macros. They suck!'


Assertions are invaluable tools.
There are places where retaining some of the Cpp for the
implementation to use seems to make sense. For example the various
__LINE__, __FILE__, __DATE__, etc. are clearly worth having around so
debuggers and other tools can use them.
Debuggers? Macros don't exist after compilation, so I'm not sure that __DATE__
and __TIME__ would have any useful meaning at all, and debuggers (if source
information is available) already know the line and file.
They should not be
considered part of the core language to be used by application
programmers.
An immediate example comes to mind. What if I write a program that, when
executed with a --version option, outputs a copyright notice, version number,
and the build date and time? (This is a fairly common scenario, BTW.)
One is to present the 'raw' interface of the template to the user.
That is, you just parse the template and show the user code
completion options in the 'raw' form, e.g., if the user is creating
a deque the following completion options could be presented:


The 'raw' interface of which specialization of the template?


That seems like an ambiguity that could be resolved by examining the
parameter when necessary.


Yes, but that means that the IDE has to be able to parse and semantically
analyze the entire source code--including doing overload resolution, partial
ordering, template declaration instantiation, etc..
The point is
that a tool, to be at all effective in modern programming, has to do
near full parse
of the code and semantic analysis. With C++ especially, that is
*far* from trivial.


I agree. However, the presence of the CPP complicates this issue to
the point where it seems to degenerate into an exercise in absurdity.


I think you have a serious misunderstanding of the preprocessor. How does CPP
complicate this issue?
I will observe that many Java IDEs do this rather successfully.
Parsing Java is quite a bit simpler than parsing C++.
Your observation regarding two phase lookup, if I understand your
meaning, is relevant, but not a show-stopper.


Actually, it pretty much is a show-stopper, more so than anything
else. A tool can trivially preprocess a file, but a tool cannot do
any meaningful code
completion inside a template. E.g. what options might a tool give
me at this point:

template<class T> void f(T) {
X<T>:: /* here */
}


Whatever can be extracted from X. If there are specializations of X
the it seems reasonable to provide the superset of options with an
indication of which are specific to a given specialization.


It doesn't know all of the specializations. As a general rule, it only knows
about a few general specializations.
The answer is "nothing" because it cannot know what T is, and
therefore cannot tell what specialization of X is chosen, nor can it
even tell what X specializations there might be.


Why can't it tell what specialization for X exist?


Because they might not exist yet.
In generic programming, there is very little
code in a template body that is not dependent on template parameters.
This is second phase lookup (a very useful feature) at work. What it
boils down to is that code analysis such as completion is
fundamentally useless inside template code, and that is precisely
the place where it would be most useful.


I'm not convinced that either of these assertions are correct. It
may be more difficult to provide meaningful code completion inside a
template, but I believe it is fundamentally possible to provide a
significant amount of information in that context. Furthermore, I
don't accept the notion that code completion inside a template is
where it would be most useful. By their nature, templates are
abstractions, and their use implies a certain amount of selective
ignorance that would preclude your knowing the details of how
specific template parameters would affect the context.


That's true, but (as a general rule) well-designed template code is also the
most complex code. Code completion outside of template code, while useful, is
only a small benefit.
Okay, I'll follow this line (even though "partially" compiling C++
as you type would be incredibly expensive). If you say 'big deal' to
that, then what is the problem with the preprocessor? Tools can
trivially preprocess source code much more easily than they can parse
the underlying language itself.


Well, to some extent there is syntax checking going on with KDevelop.
It's not at the level I would like, but it continues to improve. As
for the cost of compiling C++, I'm not convinced that the
preprocessor and the compilation mechanisms it supports and
encourages aren't a significant part of the problem.


Look, if a tool author is willing to fully parse the underlying language,
preprocessing the source as it does so is trivial in comparison. If you
disagree, tell me why.
One might argue that the same could be done with the Cpp. My
response
is that the Cpp is far less structured, and trying to predict what
someone might have done to his code with it, is beyond what I would
expect a reasonable programmer to care to address.


What on earth are you talking about? Cpp *is* well-structured, and
more importantly, it is a straightline process. If a tool cares to
look at the code
resulting from the preprocessor, it merely has to preprocess it. To
that I say 'big deal'.


But now you are talking about something editing your code at the same
time you are, but not displaying the results. What I should have
said is that the result of using the Cpp is far less structured than
the result of using a programming language.


No, I'm not. C and C++ code can be preprocessed as it is parsed into a syntax
tree in a single pass. It's not like when you type something, the IDE goes and
tries to find it in the source code--that would be *incredibly* inefficient. In
essence, it is already rewriting the code into an internal format that is
designed for algorithmic navigation that also discards information that it
doesn't care about.
But there is no justification for
this kind of anti-programming in any code written with the future in
mind.


Actually, there is. Because the underlying language is so complex,
there are numerous bugs and missing features in every single
existing C++ compiler (more
so for some than others). Even looking at the tiny snippet of code
above, which I know nothing about as a whole, I can tell that it is
working around that exact
issue. In this example, the preprocessor merely makes it possible
to do what otherwise could not be done.


I see no reason to try to support a compiler that doesn't understand
namespaces. What significant platform is restricted to using such a
compiler?


I wish it was that simple. In many companies, there is a lot of inertia from a
compiler version. I.e. it often takes years to upgrade to new compilers (if at
all)--simply because the time required to make older code compatible with the
new compiler can be massively expensive. Thus, new code gets written for older
compilers all the time.
I pretty much figured out what all the macro mangling does in the
above sample. But by that time I was so disgusted, I lost interest
in using the library. There are countless examples of such
anti-code
in most of the C++ libraries I've seen.


There are plenty of ways to misuse the preprocessor, just as there
are plenty of
ways to misuse any language feature in any language. So what?
Further, the above code is 'anti-code' only in the sense that it is
working around flaws in the compiler, standard library, or isolating
inherent platform-related dependencies.


As I said, there are historical reasons for the code to be that way.


There are current reasons as well.
That is far from the only place where people used the Cpp to
rewrite code that should be straightforward C++.
The preprocessor does not 'rewrite' code--it expands macros which are *part* of
the code. In doing so, it can reap major benefits in readability and
maintenance.
What does this have to do with the C or C++ preprocessor? C++ does
not have a module system. You're talking about fundamental changes
to the language, not problems with the preprocessor.


"I suspect my dislike for the preprocessor is well known. Cpp is
essential in C programming, and still important in conventional C++
implementations, but it is a hack, and so are most of the techniques
that rely on it.
*_It_has_been_my_long-term_aim_to_make_Cpp_redundant_*."


More than anything else, Bjarne hates #define, not #include, BTW. The
preprocessor is not a hack, it is an incredibly useful tool that can, like every
other tool, be misused.
Further, there are things called "good
organization" and "good code structure" that make dealing with these
issues trivial (e.g. to use interface X include file Y and link to
Z--well designed headers can be blackboxes).


I try hard to get things right. There are some aspects of Xerces
which do show a better face than what I previously introduced:
http://cvs.apache.org/viewcvs.cgi/xm...c/xercesc/dom/

The separation of interface and implementation is textbook proper
C++. I still find the added level of complexity involved in using
headers unnecessary, and a significant burden.


What complexity are you talking about exactly? Separation of interface and
implementation is a cornerstone of separate compilation--which is one of the
fundamental reasons that C++ is able to scale to very large projects without
dropping efficiency.
I'm currently working
on a tool that will mitigate this drudgery by treating the class as a
unit, rather than a composite.
What do you mean by 'composite', separation of interface and implementation?
For anybody who wants to tell me that the reason that C++ code is
this hard to understand is that C++ is more powerful, please explain
to me why IBM-the company that wrote the Xerces C++ code-is using
the Java counterpart to serve out IBM's latest online C++ language
and library documentation.


Because they felt like it? Honestly, who cares? In my opinion,
which is what matters to me, Java sucks on so many levels it is
unusable.

That's a silly statement. I've seen it used to do lots of useful
things.


That's not exactly what I meant. It is unusable to me because I am accustomed
to a much more complete toolset. With Java, you have to jump through a lot of
hoops to get around language-imposed limitations. Granted, there is a measure
of safety in some things, but then it is immediately lost by lack of generics
(real generics, not half-ass generics). Languages can be loosely classified by
how much safety they enforce by default. C++ is at one end. It allows many
things that could be unsafe because it allows access to the "details". Other
languages are at the other end, such as Scheme or Haskell. Such languages can
yield higher productivity because they rely much more on compiler optimization
of the details. Both are valid strategies. Java (and it isn't just Java) is
smack in the middle of the two approaches, which ends up providing only a small
fraction of the benefits of either. If I want control of details (for whatever
reason) I'll use a language (like C++) that doesn't actively work against me
controlling those details. If I want a higher-level language where I can
largely ignore many of those details, I'll use a real higher-level language
(like Haskell).
This may be a fairly useless program, but I believe it
demonstrates that Java is capable of supporting the development of
fairly sophisticated programs.
I agree that Java is capable.
It's my second Java3d project. It
was pretty much a limbering up exercise.
http://baldur.globalsymmetry.com//pr...ing-basis.html
Regarding C++ (in general)... C++ is more powerful--that is
unquestionable.
What do you mean by powerful?


I mean that it gives you access to lower-level details while simultaneously
giving you tools to create higher-level abstractions.
It seems clear that major players in
the industry do not consider C++ to be appropriate for many major
applications.
Ha. The opposite is true.
So what? Bjarne is not the ultimate authority on what is good and
bad.


No, but I find it interesting that his thoughts on this matter seem to
reflect my own - mostly independently formed - assessment. I believe
it is also significant that he does hold such a strong opinion about
the matter.


There are other people, also actively involved in the design of C++ and just as
authoritative, that have opposing viewpoints.
He may not agree with me here, and even if he does, he may be
unwilling to say as much for the sake of polity. The way the standard
headers are used is a logical mess. It causes the programmer to
waste valuable time searching for things that should be immediately
accessible by name.
That is what documentation is for, which you need anyway. (Further, good
documentation is significantly more involved than what can be trivially put in
header file comments.) As an aside, the standard library header structure is
not that well organized--mostly because some headers contain too many things
(e.g. <algorithm>).
The same is true of most other libraries as well.
There is no need for this lack of structure other than the simple
fact that no one has been able to move the mindset of some C++
programmers out of the Nixon era.
An interface is specified in some file, which you include to access the
interface. The documentation tells you what file you need to include and what
you need to link to (if anything) to use the interface. That's pretty simple
and well-structured.
that the preprocessor undermines the integrity of the core
C++ programming language and is the main reason for the comparative
ease of use Java provides. The Cpp is not a strength, it is a
liability.


Your opinion, which you are entitled to, is simply wrong. The
preprocessor is
not to blame, the complexity of the language as a whole is. For an
experienced
C++ programmer, using Java is like tying your hands behind your
back. The language (Java) not only promotes, but enforces bad
design as a result of seriously lacking feature set--with the excuse
that it is protecting you from
yourself.


In the case of the CPP, it isn't so much me I want to be protected
from. It's the people who think it's a good idea to use it for
anything beyond simple conditional compilation, some debugging, and
perhaps for #including header files.


Give an example of why this protection is necessary. There is only one: name
conflicts introduced by unprefixed or lowercase macro definitions. That is just
bad design, and you need a whole lot more changes to the language to prevent
someone else's bad design from affecting you.
I will admit that I still find
the use of header files bothersome. They represented one of the
biggest obsticles I encountered when first learning C++.
They are different than Java, but they aren't fundamentally complex.
C++ doesn't make such decisions for us, instead it gives us the tools
to make abstractions that do it for us. In other words, it isn't the
result of arrogance propelled by limited vision.


Leaving some things to choice serves no useful purpose beyond pleasing
multiple constituents.


I'm not talking about providing two or more near identical features that all
have the same tradeoffs. Fundamentally, it's about being able to pick which
tradeoffs are worthwhile for a particular thing--and that serves an incredibly
useful purpose.
There are places where a lack of rule is not
empowering, it is restricting.
Example?
I hear driving in Rome is not what one
could properly call a civilized affair. Personally, I like the idea
that people stop at stoplights, use turn signals appropriately, stay in
one lane under normal circumstances, etc.


I really dislike analogies because they are so readily available to support any
argument, but there are some key concepts in this one. First, what you describe
is a system that is enforced by convention rather than by the streets,
stoplights, signs, vehicles, etc. (i.e. the "language"). Second, the ability to
deviate from the conventions is vital because there are unforeseeable variables
(or foreseeable variables that are too costly to handle) that enter the
system--such as a power outage or severe weather. Under the Java model, when
those circumstances occur (i.e. when you know better than the system enforced by
the language) you can't do anything about it. In C++, when you know better than
the system enforced by conventions, you can. (Note that both of these
situations occur in both Java and in C++, it isn't nearly as black-and-white as
this generalization. However, as a generalization, it is true. C++ gives you
more flexibility by moving the system from the language to convention moreso
than Java does.)

Regards,
Paul Mensonides
Jul 22 '05 #26

P: n/a
Paul Mensonides wrote:
Steven T. Hatton wrote:
It buys you meaningful assertions without replicating code, among other
things.
Perhaps this is useful. I have never tried using assertions. When I
read about them in TC++PL(SE) it basically went. 'Check this
assertion thing out. Pretty cool, eh? They're macros. They suck!'


Assertions are invaluable tools.


Some people seem to think so. I read up on them in both Java and C++, and
was also aware of them in C. They never seemed to be much use. I'll grant
you, with the weak exception handling of C++ such a thing might be a bit
handy. Too bad C++ doesn't have printStackTrace. I can't even think of a
problem I've had where such a mechanism would be of much use.
There are places where retaining some of the Cpp for the
implementation to use seems to make sense. For example the various
__LINE__, __FILE__, __DATE__, etc. are clearly worth having around so
debuggers and other tools can use them.


Debuggers? Macros don't exist after compilation, so I'm not sure that
__DATE__ and __TIME__ would have any useful meaning at all, and debuggers
(if source information is available) already know the line and file.


Oh well, one less argument in favor of keeping the preprocessor.
They should not be
considered part of the core language to be used by application
programmers.


An immediate example comes to mind. What if I write a program that, when
executed with a --version option, outputs a copyright notice, version
number,
and the build date and time? (This is a fairly common scenario, BTW.)


This kind of thing?

"GNU Emacs 21.3.50.2 (i686-pc-linux-gnu, X toolkit, Xaw3d scroll bars) of
2004-08-30 on ljosalfr"

Nothin' a bit of sed and date in the Makefile won't do for you.
That seems like an ambiguity that could be resolved by examining the
parameter when necessary.


Yes, but that means that the IDE has to be able to parse and semantically
analyze the entire source code--including doing overload resolution,
partial ordering, template declaration instantiation, etc..


No it doesn't. Even if it were necessary to do all you said, I don't
believe it is beyond the capabilities of existing technology. But, as I've
already explained, it isn't necessary to parse everything at edit time in
order for such a tool to be useful. They can often rely on the results of
a previous compilation.

Oh, and I just checked. KDevelop is giving me code completion on templates.
I agree. However, the presence of the CPP complicates this issue to
the point where it seems to degenerate into an exercise in absurdity.


I think you have a serious misunderstanding of the preprocessor. How does
CPP complicate this issue?


Because it supports the antiquated technique of pasting together a
translation unit out of a bunch of different files, and it modifies the
source code between the time the program is edited and the time it is
actually compiled.
I will observe that many Java IDEs do this rather successfully.


Parsing Java is quite a bit simpler than parsing C++.


Some of that is due to a simpler grammar, and some (much) of it is due to
the fact that Java uses a superior mechanism for locating resources
external to the actual file containing the source under development.
It doesn't know all of the specializations. As a general rule, it only
knows about a few general specializations.
What is "it"? I'm not following you here. The specializations are defined
somewhere in the code base, so they can be cached like anything else.
The answer is "nothing" because it cannot know what T is, and
therefore cannot tell what specialization of X is chosen, nor can it
even tell what X specializations there might be.


Why can't it tell what specialization for X exist?


Because they might not exist yet.


What are you talking about? Either they do or they don't exist. It would
be damn hard for any tool to provide code completion based on code you have
yet to write.

That's true, but (as a general rule) well-designed template code is also
the
most complex code. Code completion outside of template code, while
useful, is only a small benefit.
There are a lot of darn simple templates in the Standard Library.
Look, if a tool author is willing to fully parse the underlying language,
preprocessing the source as it does so is trivial in comparison. If you
disagree, tell me why.
The problem is knowing what needs to be preprocessed. There is also the
problem that the preprocessor does not adhere to scoping rules, so the tool
cannot limit the scope under consideration without ignoring the potential
impact of the preprocessor.
But now you are talking about something editing your code at the same
time you are, but not displaying the results. What I should have
said is that the result of using the Cpp is far less structured than
the result of using a programming language.


No, I'm not. C and C++ code can be preprocessed as it is parsed into a
syntax
tree in a single pass. It's not like when you type something, the IDE
goes and
tries to find it in the source code--that would be *incredibly*
inefficient. In essence, it is already rewriting the code into an
internal format that is designed for algorithmic navigation that also
discards information that it doesn't care about.


Nonetheless, the AST is not going to directly coincide with what is in the
edit buffer. Adding one preprocessor directive can add dozens of source
files to the translation unit. That is unstructured, and unpredictable.
I see no reason to try to support a compiler that doesn't understand
namespaces. What significant platform is restricted to using such a
compiler?


I wish it was that simple. In many companies, there is a lot of inertia
from a
compiler version. I.e. it often takes years to upgrade to new compilers
(if at all)--simply because the time required to make older code
compatible with the
new compiler can be massively expensive. Thus, new code gets written for
older compilers all the time.


I don't believe there is a compelling reason to introduce new libraries
intended for general use in forward-looking technology filled with ugly
kludges in order to try to be compatible with the least common denominator.
There are better ways of dealing with such issues. By bending over
backward to try to remain compatible with obsolete technology, you
compromise your own product and encourage the survival of technology that
was outdated for a reason.
As I said, there are historical reasons for the code to be that way.


There are current reasons as well.


And the result is that people don't use the product nearly as much as they
otherwise would.

The preprocessor does not 'rewrite' code--it expands macros which are
*part* of
the code. In doing so, it can reap major benefits in readability and
maintenance.
I've more often seen the opposite effect. Most use of macros makes code
less comprehensible, and trying to track down the point of definition can
be exasperating.
What does this have to do with the C or C++ preprocessor? C++ does
not have a module system. You're talking about fundamental changes
to the language, not problems with the preprocessor.


"I suspect my dislike for the preprocessor is well known. Cpp is
essential in C programming, and still important in conventional C++
implementations, but it is a hack, and so are most of the techniques
that rely on it.
*_It_has_been_my_long-term_aim_to_make_Cpp_redundant_*."


More than anything else, Bjarne hates #define, not #include, BTW. The
preprocessor is not a hack, it is an incredibly useful tool that can, like
every other tool, be misused.


I think its primary use is as a crutch that C++ can't do without, because
doing without it was never seriously attempted.

The separation of interface and implementation is textbook proper
C++. I still find the added level of complexity involved in using
headers unnecessary, and a significant burden.


What complexity are you talking about exactly? Separation of interface
and implementation is a cornerstone of separate compilation--which is one
of the fundamental reasons that C++ is able to scale to very large
projects without dropping efficiency.


Headers should not be the means of achieving the separation of
implementation and interface. The only place the ISO/IEC 14882:2003 even
mentions a header file is in the C compatibility appendix. The language
specification /should/ address this issue by providing a better solution
than that which currently exists.
I'm currently working
on a tool that will mitigate this drudgery by treating the class as a
unit, rather than a composite.


What do you mean by 'composite', separation of interface and
implementation?


My actually having to maintain redundant constructs between these files.
Changing one member in a class can result in having to edit the location
where the member is defined, the parameter list in the constructor
declaration, the parameter list in the constructor definition, the member
initialization list, and perhaps parameter lists in both the header and
source file of any functions involved. Additionally, I am likely to have
to remove a forward declaration and a #include. There may also be a
requirement to modify the destructor.

A header doesn't really give you a genuine interface, and the tradeoff in
trying to make a header devoid of anything but pointer definitions in order
to prevent dependencies can be a nuisance as well. Some of this is
inevitable no matter what. Some of it is beyond repair. Could have,
perhaps should have, been done differently at the outset. Some of the
relative complexity compared to Java is due to the fact that C++ has
pointers, references, and (regular) variables. So my tool, if I ever get
it completed, is intended to mitigate more than could be solved by
eliminating the #include. I favor the falsely advertised feature described
here:

http://gcc.gnu.org/onlinedocs/gcc-3....++%20Interface

It doesn't work!
Java (and it isn't just Java)
is smack in the middle of the two approaches, which ends up providing only
a small
fraction of the benefits of either. If I want control of details (for
whatever reason) I'll use a language (like C++) that doesn't actively work
against me
controlling those details. If I want a higher-level language where I can
largely ignore many of those details, I'll use a real higher-level
language (like Haskell).
It's not the safety that makes Java useful. It is that fact that it
facilitates the location and leveraging of resources. Much of that has to
do with the superior mechanism of importing declarations. Java more
effectively separates interface from implementation than does C++. I
believe some of what Java does could be done in C++ without negatively
impacting the language.
Regarding C++ (in general)... C++ is more powerful--that is
unquestionable.


What do you mean by powerful?


I mean that it gives you access to lower-level details while
simultaneously giving you tools to create higher-level abstractions.


I agree that C++ does that. What it doesn't do well is facilitate the
location of resources, and the isolation of components.
It seems clear that major players in
the industry do not consider C++ to be appropriate for many major
applications.


Ha. The opposite is true.


I'm not saying no one is using C++. I am saying that there are major areas
where C++ is not the language of choice, and it is due to the problems I've
been discussing. It's a combination of many issues which individually seem
trivial, but when they are combined become genuine obstacles to making
progress. Sure, with a few years experience people can learn to compensate
for these defects. But a lot of people won't get the luxury of being only
moderately productive for that amount of time.

He may not agree with me here, and even if he does, he may be
unwilling to say as much for the sake of polity. The way the standard
headers are used is a logical mess. It causes the programmer to
waste valuable time searching for things that should be immediately
accessible by name.


That is what documentation is for, which you need anyway.


No /that/ is what _interfaces_ are for! The programming language should be
the primary means of communication between the author and the reader. API
documentation can be very useful, and much of it can be generated by tools
such as JavaDoc and Doxygen.
(Further, good
documentation is significantly more involved than what can be trivially
put in header file comments.)
Sometimes it's nice to have more than just the autogenerated html, but even
that can be quite useful:
http://www.kdevelop.org/HEAD/doc/api...ameterAST.html

Trolltech generates all their API documentation from the source code:

http://doc.trolltech.com/3.3/qstring.html

And of course the highly successful Java API documentation is likewise
generated directly from the source files:
http://java.sun.com/j2ee/1.4/docs/api/index.html
As an aside, the standard library header structure
is not that well organized--mostly because some headers contain too many
things (e.g. <algorithm>).
And the namespace is flat. There should be one mechanism for determining
the subset of the library responsible for any particular area of
functionality. As it stands you take the intersection of the namespace and
the header name. Since some headers include other headers, you often end
up with more than you specified. That is bad. It can introduce hidden
dependencies.
The same is true of most other libraries as well.
There is no need for this lack of structure other than the simple
fact that no one has been able to move the mindset of some C++
programmers out of the Nixon era.


An interface is specified in some file, which you include to access the
interface. The documentation tells you what file you need to include and
what
you need to link to (if anything) to use the interface. That's pretty
simple and well-structured.


An interface should be self-describing, and should not introduce more into
an environment than is essential to serve the immediate purpose.
In the case of the CPP, it isn't so much me I want to be protected
from. It's the people who think it's a good idea to use it for
anything beyond simple conditional compilation, some debugging, and
perhaps for #including header files.


Give an example of why this protection is necessary. There is only one:
name
conflicts introduced by unprefixed or lowercase macro definitions. That
is just bad design, and you need a whole lot more changes to the language
to prevent someone else's bad design from affecting you.


I've already provided an example in the Xerces code. That's a friggin slap
in the face to a person who wants to read that code.
I will admit that I still find
the use of header files bothersome. They represented one of the
biggest obstacles I encountered when first learning C++.


They are different than Java, but they aren't fundamentally complex.


No, but they can combine to create very unstructured complexity.
C++ doesn't make such decisions for us, instead it gives us the tools
to make abstractions that do it for us. In other words, it isn't the
result of arrogance propelled by limited vision.


Leaving some things to choice serves no useful purpose beyond pleasing
multiple constituents.


I'm not talking about providing two or more near identical features that
all
have the same tradeoffs. Fundamentally, it's about being able to pick
which tradeoffs are worthwhile for a particular thing--and that serves an
incredibly useful purpose.


I'm talking about the simple things like not specifying a file name
extension. Sure, it seems trivial, but it can be a PITA when switching
between tools which default to different conventions, or mixing libraries
that use different conventions. Some tools think .c files are C files and
go into C mode, not C++ mode unless you punch it a couple of times. And
worse is the .h file, because more people are likely to name their C++
header files that way. Others want to call everything .cpp which I find
annoying, but quite common. Others correctly prefer the .cc and .hh
extensions.
There are places where a lack of rule is not
empowering, it is restricting.


Example?
I hear driving in Rome is not what one
could properly call a civilized affair. Personally, I like the idea
that people stop at stoplights, use turn signals appropriately, stay in
one lane under normal circumstances, etc.

... However, as a generalization, it is true. C++ gives
you more flexibility by moving the system from the language to convention
moreso than Java does.)


The problem as I see it is that the lack of specification of certain things
such as name resolution based on fully qualified identifiers rather than
relying on the programmer to #include the file containing the declaration
is a major structural deficiency in C++. I wish the standard simply said
'given a fully qualified identifier the implementation shall resolve that
name and make the declaration and/or definition available to the compiler
as needed'.
--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #27

Steven T. Hatton wrote:
Paul Mensonides wrote:
Steven T. Hatton wrote:
It buys you meaningful assertions without replicating code, among other
things.

Perhaps this is useful. I have never tried using assertions. When I
read about them in TC++PL(SE) it basically went. 'Check this
assertion thing out. Pretty cool, eh? They're macros. They suck!'
Assertions are invaluable tools.


Some people seem to think so.


Some people, including me, *do* think so.
I read up on them in both Java and C++, and was also aware of them in C.
They never seemed to be much use.
"Read up on them"? Have you ever used this feature? Sometimes you realize
that something is useful only by using it, not just by reading about it.
I'll grant you, with the weak exception handling of C++ such a thing
might be a bit handy.
Assertions and exceptions are completely different beasts. Assertions are
about aborting the program and printing a useful statement (useful
predominantly in debugging). Exceptions provide a flow construct to escape
from arbitrary levels of nesting (useful predominantly in postponing the
handling of error conditions).
Too bad C++ doesn't have printStackTrace. I can't even think of a
problem I've had where such a mechanism would be of much use.


My debugger prints the stacktrace just fine.


I will just snip the rest of your post, as I have a more fundamental issue
with the direction of your reasoning. I have been following your posts for
quite some time now, starting with the rant about how the standard headers
are not faithfully represented in the header files of the implementation
you are using and about how that poses a difficulty for the IDE of your
dreams.

The overall structure of your approach to C++, as I understand it from the
collection of your posts, seems to be something like this:

a) Let us consider feature / mechanism X. It has the following uses that I
do approve of: <some list>.

b) Unfortunately, X also allows for the following that I do not approve of:
<some list>. These uses are bad because:

* I do not see how that could be useful to anybody.
* It will not allow me to have my IDE.
* If others can do something like that, I will have to adjust.
* Bjarne Stroustrup seems to say so.
* In Java you cannot do that; and Java rocks.
* ...

c) May I suggest to replace X by some feature set / mechanism that would
only allow for the uses listed in (a). [Specifics about the proposed
replacement are unfortunately missing at this time.]
In short, you want to enforce coding policies by language design. I,
however, like C++ precisely because it does not enforce policies but
provides mechanisms, and a lot. E.g., [try, throw, catch] to me is not
about error handling but about stack unwinding; and your suggestion that
throw() should only accept arguments derived from std::exception would
break some of my code. I like to explore the possibilites, and every once
in a while, I am really awestruck at how something can be done elegantly in
C++ in a way that I could not have fathomed.

I do not think that C++ is perfect, but I dislike the direction in which
you want to push it. To me, it looks as though you are about to cripple the
language.
Best

Kai-Uwe Bux
Jul 22 '05 #28

P: n/a
Kai-Uwe Bux wrote:
Steven T. Hatton wrote:
Paul Mensonides wrote:
Assertions are invaluable tools.
Some people seem to think so.


Some people, including me, *do* think so.

I read up on them in both Java and C++, and was also aware of them in C.
They never seemed to be much use.


"Read up on them"? Have you ever used this feature? Sometimes you realize
that something is useful not from reading about it.


There has to be some need I have before I look for something to fill it.
The idea of aborting a program on failure is simply not something I believe
to be a good practice.
I'll grant you, with the weak exception handling of C++ such a thing
might be a bit handy.


Assertions and exceptions are completely different beasts. Assertions are
about aborting the program and printing a useful statement (useful
predominantly in debugging). Exceptions provide a flow construct to escape
from arbitrary levels of nesting (useful predominantly in postponing the
handling of error conditions).


Actually Stroustrup goes on to demonstrate an alternative form of assertion
which also failed to appeal to me. He uses a template that takes an
invariant as a parameter. And get this. It throws an exception rather
than aborting the program. In general that is the kind of thing I was
talking about, but I simply don't find cluttering my programs with
debugging code a good idea, nor, in general do I find it useful. I've
noticed C and C++ programmers new to Java tend to use stuff like if(DEBUG)
{/*...*/} until they realize it really isn't all that useful in that
context.
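The template-based alternative described above, a condition plus an exception object in place of abort(), can be sketched roughly as follows; the names `Assert` and `Bad_range` are illustrative, not the book's exact code:

```cpp
#include <cassert>
#include <stdexcept>

// Sketch of a TC++PL-style Assert: a function template that takes a
// condition and throws a supplied exception object instead of aborting.
template<class A, class E>
inline void Assert(A assertion, E except)
{
    if (!assertion) throw except;
}

struct Bad_range : std::runtime_error {
    Bad_range() : std::runtime_error("index out of range") {}
};

int checked_get(const int* v, int size, int i)
{
    Assert(i >= 0 && i < size, Bad_range());  // throws; never calls abort()
    return v[i];
}
```

Because it throws, a caller can catch `Bad_range` and recover instead of having the program terminate.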
Too bad C++ doesn't have printStackTrace. I can't even think of a
problem I've had where such a mechanism would be of much use.


My debugger prints the stacktrace just fine.


When the code crashes? Will it print it to a web browser when your web
application aborts? How do you use something like abort in a loadable
module hosted by an application server?
* I do not see how that could be useful to anybody.
That is not something I really care about, as long as the feature doesn't
impact anything else.
* It will not allow me to have my IDE.
This is a very important issue. The availability of superior IDEs for C++
was one of the key features that distinguished it from the pack in the
early days. What I've seen done with Java IDEs has shown me that they can
be extremely powerful. Perhaps the more recent Microsoft IDEs for C++ are
similar in their capabilities to what JBuilder, Eclipse, NetBeans, etc
provide for Java. Something tells me that is not the case.
* If others can do something like that, I will have to adjust.
Some things I accept as unlikely to change. For example, I don't believe
there will ever be a file naming specification for C++. Some things seem
more significant. For example, the dependence on header files to include
resources. I've seen better ways of handling that situation, and I think
it is a serious impediment to providing far more powerful support to the
language.
* Bjarne Stroustrup seems to say so.
You may care to note that this thread was forked from a thread I started
with the purpose of questioning one of his more strongly voiced opinions.
* In Java you cannot do that; and Java rocks.
For the most part, I've pointed to things you /can/ do with Java and
you /can't/ do with C++. There are good ideas in Java. The people who
created the language are not stupid, and they had solid accomplishments
using C (and perhaps C++) before they started working on Java. To ignore
Java, as many C++ programmers would like to do, is an unwise approach to a
legitimately successful competitor to C++.
* ...

c) May I suggest to replace X by some feature set / mechanism that would
only allow for the uses listed in (a). [Specifics about the proposed
replacement are unfortunately missing at this time.]
Well, then you haven't read all of what I've posted. I've been very
specific about what I would like the exception handling to do. I also
posted a fairly extensive explanation of how I believe the Java library
model would work for C++. I also proposed the addition of a feature to
allow for user defined infix operators.

You may also care to notice that I have been persuaded on several occasions
that my proposals were not as viable as I originally thought.

In short, you want to enforce coding policies by language design. I,
however, like C++ precisely because it does not enforce policies but
provides mechanisms, and a lot. E.g., [try, throw, catch] to me is not
about error handling but about stack unwinding; and your suggestion that
throw() should only accept arguments derived from std::exception would
break some of my code.
Then you didn't read all of what I wrote about the topic. Either that, or
you chose to ignore it.
I like to explore the possibilites, and every once
in a while, I am really awestruck at how something can be done elegantly
in C++ in a way that I could not have fathomed.

I do not think that C++ is perfect, but I dislike the direction in which
you want to push it. To me, it looks as though you are about to cripple
the language.


How so? By introducing a more elegant, efficient, and effective means of
managing libraries? I know damn good and well that the Cpp is going to be
around for the foreseeable future. That doesn't mean I can't criticize its
use, and propose solutions for supporting the same functionality without
resorting to using it.
--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #29

P: n/a
Steven T. Hatton wrote:
Kai-Uwe Bux wrote:
"Read up on them"? Have you ever used this feature? Sometimes you realize that something is useful not from reading about it.
There has to be some need I have before I look for something to fill it.
The idea of aborting a program on failure is simply not something I

believe to be a good practice.
In a softer language, array[x] throws an exception for you if x is out of
bounds. In a C language, you have the option to either cross your fingers
and do nothing, override [] and throw an exception, or override [] and
provide an assertion that only compiles without NDEBUG activated.

The C languages need the ability to do nothing, or conditionally compile an
exception, to compete with assembly language.

(BTW, if you are not competing with assembly language, _don't use C++_...)

On the test side, here's an assertion:

#define CPPUNIT_ASSERT_EQUAL(sample, result) \
    if ((sample) != (result)) { stringstream out; \
        out << __FILE__ << "(" << __LINE__ << ") : "; \
        out << #sample << "(" << (sample) << ") != "; \
        out << #result << "(" << (result) << ")"; \
        cout << out.str() << endl; \
        OutputDebugStringA(out.str().c_str()); \
        OutputDebugStringA("\n"); \
        __asm { int 3 } }

The stringerizer, #, converts an expression into a string, and operator<<
formats the expression's value as a string. The macro inserts both these
strings into a stringstream object. Both cout and OutputDebugStringA() reuse
this object's value.

The result, at test failure time, is this line:

c:\...\project.cpp(56) : "Ignatz"(Ignatz) != name(Ignotz)

<F8> takes us directly to the failing test assert statement. The assertion
provides...

- fault navigation - the editor takes you to the failing line
- expression source reflected into the output
- expression values reflected into the output
- a breakpoint

On a platform descended from the Intel x86 architecture, if you run these
tests from a command line, not the editor, the OS will treat __asm { int 3 }
as a hard error, and it will offer to raise the default debugger.

When NDEBUG is off (what some folk call "Debug Mode"), tests can
aggressively exercise production code, making its internal assertions safer
to turn off.
Actually Stroustrup goes on to demonstrate an alternative form of assertion which also failed to appeal to me. He uses a template that takes an
invariant as a parameter. And get this. It throws an exception rather
than aborting the program. In general that is the kind of thing I was
talking about, but I simply don't find cluttering my programs with
debugging code a good idea, nor, in general do I find it useful. I've
noticed C and C++ programmers new to Java tend to use stuff like if(DEBUG)
{/*...*/} until they realize it really isn't all that useful in that
context.
Aggressive testing tends to help code right-size its assertions, and
exceptions, and hide them behind minimal interfaces. For example, this loads
a library, pulls a function pointer out of it, and asserts it found the
function:

HMODULE libHandle = LoadLibrary("opengl32.dll");
void (APIENTRY *glBegin) (GLenum mode);
FARPROC farPoint = GetProcAddress(libHandle, "glBegin");
assert(farPoint);
glBegin = reinterpret_cast</something/ *> (farPoint);
// use glBegin() like a function
FreeLibrary(libHandle);

The assert is not the problem here - the /something/ is. It should be this:

glBegin = reinterpret_cast
    <
        void (APIENTRY *) (GLenum mode)
    > (farPoint);
To reduce the risk, and avoid writing the complete type of glBegin() and
every other function we must spoof more than once, we upgrade the situation
to use a template:

template<class funk>
void get(funk *& pointer, char const * name)
{
    FARPROC farPoint = GetProcAddress(libHandle, name);
    assert(farPoint);
    pointer = reinterpret_cast<funk *> (farPoint);
}  // return the pointer by reference

That template resolves the high-risk /something/ for us. It collects
glBegin's target's type automatically, puts it into funk, and reinterprets
the return value of GetProcAddress() into funk's pointer's type. And it
calls assert() (which could be your exception) only once, behind the get()
interface.

Pushing the risk down into a relatively typesafe method permits a very clean
list of constructions for all our required function pointers:

void (APIENTRY *glBegin) (GLenum mode);
void (APIENTRY *glEnd) (void);
void (APIENTRY *glVertex3f) (GLfloat x, GLfloat y, GLfloat z);

get(glBegin , "glBegin" );
get(glEnd , "glEnd" );
get(glVertex3f , "glVertex3f" );

That strategy expresses each function pointer's type (the high-risk part)
once and only once. (And note how the template keyword converts statically typed
C++ into a pseudo-dynamic language.)
For the most part, I've pointed to things you /can/ do with Java and
you /can't/ do with C++.
And the things we can't do in Java (stringerization, token pasting,
conditional compilation, etc.) you decry as "bad C++ style".

Java was invented because too many programmers not bright enough to use C or
C++ were forced into it by schools and managers. They did not write
'new'-free code, and did not use smart pointers where they needed a 'new'.
So Java's inventors said, "Hey, let's tell everyone we don't permit
pointers. Yeah, that's the ticket, pointers are bad. Oh, also we need to
pass primitives by reference, and we need to store heterogeneous arrays, and
we need to declare exceptions at all interfaces," and they ended up doing a
zillion extra things to the language specification that C++ lets you do
_with_ the language.
How so? By introducing a more elegant, efficient, and effective means of
managing libraries? I know damn good and well that the Cpp is going to be
around for the foreseeable future. That doesn't mean I can't criticize its use, and propose solutions for supporting the same functionality without
resorting to using it.


Please base that criticism on your direct personal experience, not on
quoting Bjarne.

--
Phlip
http://industrialxp.org/community/bi...UserInterfaces
Jul 22 '05 #30

P: n/a
Steven T. Hatton wrote:
Kai-Uwe Bux wrote:
Steven T. Hatton wrote:
Paul Mensonides wrote:

Assertions are invaluable tools.

Some people seem to think so.
Some people, including me, *do* think so.

I read up on them in both Java and C++, and was also aware of them in C.
They never seemed to be much use.


"Read up on them"? Have you ever used this feature? Sometimes you realize
that something is useful not from reading about it.


There has to be some need I have before I look for something to fill it.
The idea of aborting a program on failure is simply not something I
believe to be a good practice.


That is why assertions are preprocessed away in production code. And of
course, the preprocessor would allow you to write an assertion macro that
does not abort.
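Such a non-aborting assertion macro might look like the following minimal sketch; `SOFT_ASSERT` is a hypothetical name, not a standard facility. It throws an exception carrying the file, line, and condition text, and, like the standard `assert`, compiles away entirely when NDEBUG is defined:

```cpp
#include <cassert>
#include <sstream>
#include <stdexcept>

#ifndef NDEBUG
#define SOFT_ASSERT(cond)                                      \
    do {                                                       \
        if (!(cond)) {                                         \
            std::ostringstream msg;                            \
            msg << __FILE__ << ":" << __LINE__                 \
                << ": assertion failed: " << #cond;            \
            throw std::logic_error(msg.str());                 \
        }                                                      \
    } while (0)
#else
#define SOFT_ASSERT(cond) ((void)0)
#endif
```

A failing condition raises `std::logic_error` with a diagnostic message instead of calling abort(), so a server process can log the failure and keep running.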

I'll grant you, with the weak exception handling of C++ such a thing
might be a bit handy.


Assertions and exceptions are completely different beasts. Assertions are
about aborting the program and printing a useful statement (useful
predominantly in debugging). Exceptions provide a flow construct to
escape from arbitrary levels of nesting (useful predominantly in
postponing the handling of error conditions).


Actually Stroustrup goes on to demonstrate an alternative form of
assertion
which also failed to appeal to me. He uses a template that takes an
invariant as a parameter. And get this. It throws an exception rather
than aborting the program. In general that is the kind of thing I was
talking about, but I simply don't find cluttering my programs with
debugging code a good idea, nor, in general do I find it useful. I've
noticed C and C++ programmers new to Java tend to use stuff like if(DEBUG)
{/*...*/} until they realize it really isn't all that useful in that
context.


An assert( blah ) line is not cluttering your program with debug code but a
concise way of stating an invariant (e.g., within a loop, at the entry of a
block, or before a return statement).
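The three placements named above might look like this sketch; `largest` is an illustrative function, not code from the thread:

```cpp
#include <cassert>
#include <vector>

int largest(const std::vector<int>& v)
{
    assert(!v.empty());                 // precondition, at the entry of the block
    int best = v[0];
    for (std::size_t i = 1; i < v.size(); ++i) {
        assert(best >= v[0]);           // invariant, within the loop
        if (v[i] > best) best = v[i];
    }
    assert(best >= v.front());          // postcondition, before the return
    return best;
}
```

Each assert is a one-line statement of what must hold at that point; none of them is debugging scaffolding in the if(DEBUG) sense.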

Too bad C++ doesn't have printStackTrace. I can't even think of a
problem I've had where such a mechanism would be of much use.


My debugger prints the stacktrace just fine.


When the code crashes? Will it print it to a web browser when your web
application aborts? How do you use something like abort in a loadable
module hosted by an application server?


Admittedly not. I was thinking of a debugging session to get my code right
so that it would not crash in the field.

* I do not see how that could be useful to anybody.
That is not something I really care about, as long as the feature doesn't
impact anything else.
* It will not allow me to have my IDE.

This is a very important issue. The availability of superior IDEs for C++
was one of the key features that distinguished it from the pack in the
early days. What I've seen done with Java IDEs has shown me that they can
be extremely powerful. Perhaps the more recent Microsoft IDEs for C++ are
similar in their capabilities to what JBuilder, Eclipse, NetBeans, etc
provide for Java. Something tells me that is not the case.
* If others can do something like that, I will have to adjust.


Some things I accept as unlikely to change. For example, I don't believe
there will ever be a file naming specification for C++. Some things seem
more significant. For example, the dependence on header files to include
resources. I've seen better ways of handling that situation, and I think
it is a serious impediment to providing far more powerful support to the
language.
* Bjarne Stroustrup seems to say so.


You may care to note that this thread was forked from a thread I started
with the purpose of questioning one of his more strongly voiced opinions.
* In Java you cannot do that; and Java rocks.


For the most part, I've pointed to things you /can/ do with Java and
you /can't/ do with C++.


This is an interesting point. I would like to make a distinction. I think
you pointed to things you *can know* in Java but you *cannot know* in C++.
In C++, because of the preprocessor, you cannot really be sure that what
you read is not transformed into something entirely different. Because of
conditional compilation, you cannot know which headers are included in
which order. Because of the flexibility of throw(), you cannot be sure
about the type of the object thrown. All these things are things you cannot
know about, because of things you (or others) *can do*.

This is precisely what I was referring to farther down the post when I said
that you apparently want to enforce coding policy by language design. I am
not saying that that is necessarily a bad thing. I am, however, saying that
I like C++ because it does not do that.
There are good ideas in Java. The people who
created the language are not stupid, and they had solid accomplishments
using C (and perhaps C++) before they started working on Java. To ignore
Java, as many C++ programmers would like to do, is an unwise approach to a
legitimately successful competitor to C++.

I neither deny that Java is successful nor that Java has incorporated some
good ideas. And I never claimed Java to be designed by stupid people. I
just do not feel the need to make C++ more Java-like.
* ...

c) May I suggest to replace X by some feature set / mechanism that would
only allow for the uses listed in (a). [Specifics about the proposed
replacement are unfortunately missing at this time.]


Well, then you haven't read all of what I've posted.


I will note that it is very hard to read "all of what you've posted": you
are prolific. I am sure that I missed many of your points. I apologize.
I've been very
specific about what I would like the exception handling to do. I also
posted a fairly extensive explanation of how I believe the Java library
model would work for C++. I also proposed the addition of a feature to
allow for user defined infix operators.

You may also care to notice that I have been persuaded on several
occasions that my proposals were not as viable as I originally thought.

In short, you want to enforce coding policies by language design. I,
however, like C++ precisely because it does not enforce policies but
provides mechanisms, and a lot. E.g., [try, throw, catch] to me is not
about error handling but about stack unwinding; and your suggestion that
throw() should only accept arguments derived from std::exception would
break some of my code.


Then you didn't read all of what I wrote about the topic. Either that, or
you chose to ignore it.


It is a pity that you chose to pick on the example and did not address the
main point that I raised in the topic sentence. I apologize if I
mischaracterized your opinion on exceptions. I still feel that you prefer
the language to enforce policies rather than have it provide mechanisms.
I like to explore the possibilites, and every once
in a while, I am really awestruck at how something can be done elegantly
in C++ in a way that I could not have fathomed.

I do not think that C++ is perfect, but I dislike the direction in which
you want to push it. To me, it looks as though you are about to cripple
the language.


How so? By introducing a more elegant, efficient, and effective means of
managing libraries? I know damn good and well that the Cpp is going to be
around for the foreseeable future. That doesn't mean I can't criticize
its use, and proposed solutions for supporting the same functionality
without resorting to using it.


Sure you can criticize the cpp. I was not denying any rights of yours. I
was pointing toward an underlying philosophy in your way of criticizing
various features of C++, including the preprocessor. It is that philosophy
that makes me uneasy about the direction of your proposals.

To take your last paragraph as an example: You start by talking about
library management when finally addressing the preprocessor in the
negative. This creates the subtext: "library management is *the* legitimate
use of the preprocessor, let's find a better way to that (so that hopefully
nobody will use the preprocessor anymore)."

Again, I am not saying that policy enforcing is a bad idea. I just prefer
to have at least one really powerful language around that does not do that.
And I like C++ to be that language.

Best

Kai-Uwe Bux
Jul 22 '05 #31

P: n/a
Phlip wrote:
Steven T. Hatton wrote:
Kai-Uwe Bux wrote:
> "Read up on them"? Have you ever used this feature? Sometimes you realize
> that something is useful not from reading about it.
There has to be some need I have before I look for something to fill it.
The idea of aborting a program on failure is simply not something I
believe to be a good practice.


In a softer language, array[x] throws an exception for you if x is out of
bounds. In a C language, you have the option to either cross your fingers
and do nothing, override [] and throw an exception, or override [] and
provide an assertion that only compiles without NDEBUG activated.


Or you can use the library correctly to set the bounds of your index.
There's no need to override [] when using the standard library containers.
They offer both checked and unchecked indexing. You also don't need to use
NDEBUG to eliminate the debugging code from the release build. You can use
a static const just as easily. If the compiler can determine that an
expression evaluates to false at compile time, it should omit that code
from what it emits. I therefore don't need to use #ifdef...#endif
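Both points can be sketched in a few lines: `std::vector` already offers checked (`at`) and unchecked (`[]`) indexing, and a compile-time boolean constant lets the optimizer drop the dead branch without #ifdef. The `DEBUG_CHECKS` flag and `element` function here are illustrative names:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

static const bool DEBUG_CHECKS = true;  // flip to false for the release build

int element(const std::vector<int>& v, std::size_t i)
{
    if (DEBUG_CHECKS)
        return v.at(i);   // checked: throws std::out_of_range on a bad index
    return v[i];          // unchecked: undefined behavior on a bad index
}
```

With `DEBUG_CHECKS` set to false, a compiler that sees the constant condition can omit the checked branch entirely, so no preprocessor conditional is needed.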
The C languages need the ability to do nothing, or conditionally compile
an exception, to compete with assembly language.

(BTW, if you are not competing with assembly language, _don't use C++_...)

On the test side, here's an assertion:

#define CPPUNIT_ASSERT_EQUAL(sample, result) \
    if ((sample) != (result)) { stringstream out; \
        out << __FILE__ << "(" << __LINE__ << ") : "; \
        out << #sample << "(" << (sample) << ") != "; \
        out << #result << "(" << (result) << ")"; \
        cout << out.str() << endl; \
        OutputDebugStringA(out.str().c_str()); \
        OutputDebugStringA("\n"); \
        __asm { int 3 } }
Why do I need a macro here? As has already been observed, if I have the
source, I don't need __LINE__ and __FILE__ information in order to locate
the error. It is unlikely I will be debugging code without having the
source available to me. An exception to that might be when using an
embedded implementation.

If I throw an exception, the debugger will take me to the exact location of
the origin, and it shows me all the context variables, and their values.
The stringerizer, #, converts an expression into a string, and operator<<
formats the expression's value as a string. The macro inserts both these
strings into a stringstream object. Both cout and OutputDebugStringA()
reuse this object's value.
The one thing that would currently not be doable with a template is to
instantiate it using a string or char*. I can do that with an exception,
however.
The result, at test failure time, is this line:

c:\...\project.cpp(56) : "Ignatz"(Ignatz) != name(Ignotz)

<F8> takes us directly to the failing test assert statement. The assertion
provides...

- fault navigation - the editor takes you to the failing line
- expression source reflected into the output
- expression values reflected into the output
- a breakpoint

On a platform descended from the Intel x86 architecture, if you run these
tests from a command line, not the editor, the OS will treat __asm { int 3
} as a hard error, and it will offer to raise the default debugger.

When NDEBUG is off (what some folk call "Debug Mode"), tests can
aggressively exercise production code, making its internal assertions
safer to turn off.
But I get all that without using assert.
Actually Stroustrup goes on to demonstrate an alternative form of

assertion
which also failed to appeal to me. He uses a template that takes an
invariant as a parameter. And get this. It throws an exception rather
than aborting the program. In general that is the kind of thing I was
talking about, but I simply don't find cluttering my programs with
debugging code a good idea, nor, in general do I find it useful. I've
noticed C and C++ programmers new to Java tend to use stuff like if(DEBUG)
{/*...*/} until they realize it really isn't all that useful in that
context.
[snip]

Interesting, but I don't see how it relates to the discussion. I didn't say
I don't believe in testing my code, I just don't like leaving a lot of stuff
behind which more often than not was put there to find a specific problem.
There are typically certain points in a program at which one test can
verify that many parameters are correct. That is where I am likely to
leave some kind of test code. Depending on the situation, I may leave it
on or off.
For the most part, I've pointed to things you /can/ do with Java and
you /can't/ do with C++.
And the things we can't do in Java (stringerization, token pasting,
conditional compilation, etc.) you decry as "bad C++ style".


I never decried conditional compilation as bad. In many cases there are
better strategies than putting the conditional code in the main body of the
application.

http://www.mozilla.org/projects/nspr...tml/index.html
Java was invented because too many programmers not bright enough to use C
or C++ were forced into it by schools and managers. They did not write
'new'-free code, and did not use smart pointers where they needed a 'new'.
So Java's inventors said, "Hey, let's tell everyone we don't permit
pointers. Yeah, that's the ticket, pointers are bad. Oh, also we need to
pass primitives by reference, and we need to store heterogenous arrays,
and we need to declare exceptions at all interfaces," and they ended up
doing a zillion extra things to the language specification that C++ lets
you do _with_ the language.


And you get threading, Unicode, effortless portability, incredibly smooth
refactoring, highlevel abstraction with the tools to support it, great,
well organized documentation, easy to integrate libraries of all kinds, a
full suite of basic networking components, encryption, introspection,
remote method invocation, class loading, a reasonably functional GUI tool
kit, 2D graphics, XML support, a very nice I/O library, compression
libraries, an easy to use build system, etc., etc...., most of it out of
the box.
How so? By introducing a more elegant, efficient, and effective means of
managing libraries? I know damn good and well that the Cpp is going to
be
around for the foreseeable future. That doesn't mean I can't criticize

its
use, and proposed solutions for supporting the same functionality without
resorting to using it.


Please base that criticism on your direct personal experience, not on
quoting Bjarne.


I already have.

--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #32

P: n/a
Kai-Uwe Bux wrote:
Steven T. Hatton wrote:
There has to be some need I have before I look for something to fill it.
The idea of aborting a program on failure is simply not something I
believe to be a good practice.


That is why assertions are preprocessed away in production code. And of
course, the preprocessor would allow you to write an assertion macro that
does not abort.


It was really a philosophical point. I don't want my code to abort for any
but absolutely unrecoverable circumstances. I started working with
hardware in 1979. Failing to maintain an operational system was not merely
bad for my reputation, it was a threat to national security. My more
recent experience has been of the same nature. Add to that the fact that I
was developing code to run on servers, as services. Crashing the server
was just not a good development strategy. Especially because I was the
sysadmin.
An assert( blah ) line is not cluttering your program with debug code but
a concise way of stating an invariant (e.g., within a loop, at the entry
of a block, or before a return statement).

I believe I understand the strategy. In the kinds of development I've done
I haven't seem much need for that kind of thing. I'm more likely to want
to leave dumpers that will spit all the members of a class out to a stream.

Admittedly not. I was thinking of a debugging session to get my code right
so that it would not crash in the field.
But that's the nature of application servers. A single application should
not bring down the server. I'll admit I have very little experience with
C++ in that area, so there may be ways to call abort and only crash the
service, and not the server.
For the most part, I've pointed to things you /can/ do with Java and
you /can't/ do with C++.


This is an interesting point. I would like to make a distinction. I think
you pointed to things you *can know* in Java but you *cannot know* in C++.
In C++, because of the preprocessor, you cannot really be sure that what
you read is not transformed into something entirely different. Because of
conditional compilation, you cannot know which headers are included in
which order. Because of the flexibility of throw(), you cannot be sure
about the type of the object thrown. All these things are things you
cannot know about, because of things you (or others) *can do*.


Yes. You are correct. Information technology is about information. Good
solid easily obtainable information is vital to design, to implementation,
to trouble shooting and to security.

In short, you want to enforce coding policies by language design. I,
however, like C++ precisely because it does not enforce policies but
provides mechanisms, and a lot. E.g., [try, throw, catch] to me is not
about error handling but about stack unwinding; and your suggestion that
throw() should only accept arguments derived from std::exception would
break some of my code.


Then you didn't read all of what I wrote about the topic. Either that,
or you chose to ignore it.


It is a pity that you chose to pick on the example and did not address
the main point that I raised in the topic sentence. I apologize if I
mischaracterized your opinion on exceptions. I still feel that you prefer
the language to enforce policies rather than have it provide mechanisms.


In the case of exceptions, I have solid experience that supports the
opinions I hold. What I suggested simply works better as a default. And
that is what I was referring to when I suggested you hadn't read everything
I wrote. I explicitly said that the default should be configurable through some
mechanism similar to the one currently used to switch out handlers. I'm
not sure how that might be accomplished, but I suspect someone in the C++
community is capable of finding a means.

To take your last paragraph as an example: You start by talking about
library management when finally addressing the preprocessor in the
negative. This creates the subtext: "library management is *the*
legitimate use of the preprocessor, let's find a better way to that (so
that hopefully nobody will use the preprocessor anymore)."

Again, I am not saying that policy enforcing is a bad idea. I just prefer
to have at least one really powerful language around that does not do
that. And I like C++ to be that language.


Actually, to a large extent, it's the other way around. I believe the
library management in C++ stinks. If there were a better system - which I
believe is highly doable - I would have far less to complain about
regarding the CPP. The CPP does go against certain principles of
encapsulation that are generally regarded as good in computer science. A
better library system would go a long way toward addressing that as well.
If people aren't #including 15 megs of source in every file, there is less
(virtually no) opportunity for hidden variables entering the environment.
--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #33

P: n/a
Steven T. Hatton talking about assertions (via cpp) wrote:
There has to be some need I have before I look for something to fill
it. The idea of aborting a program on failure is simply not
something I believe to be a good practice.


Who says assertions have to abort your program? The assertions in our
code invoke a program specific interactive debugger that allows us to
trace what is going on at the level of user (and developer) visible
objects. Thus, if there is a problem connecting a "net" to a "gate"
via a "pin" (our code is used for circuit design) and an assertion
fires, the developer can inspect (graphically) the net, gate, and pin
and look at the diagram or user written source code that is involved,
and if necessary inspect the internal structure of each of the objects
(or look at other connected objects). The developer can also escape
into the normal C++ debugger if necessary. et cetera, et cetera, et
cetera.... And perhaps, most importantly, we can decide which
assertions represent errors that users can make (i.e. connecting a net
so that the net has two different drivers, which makes the net an
invalid circuit) or one that results from something unexpected
happening in the code (finding a net without any drivers after all the
nets have been checked to assure that they all have exactly one
driver), which means the code has some undetected flaw in it that
caused bad data to reach the point in question (or perhaps the code at
the current point is wrong).

Note, that nowhere did I say that our assertions abort the execution
of the program--assertions tell you where something went wrong, a good
assertion package helps you debug that point--a poor assertion
package simply "gives up" (with a message) at the point where the
code failed--however, even that is better than the code randomly doing
something further wrong and possibly silently producing incorrect
results or mysteriously crashing in some unrelated part of the
program.

However, in either case, assertions are invaluable in making certain
that the code does exactly what we say it does and that flaws are
caught as near to the source of the error as possible. As a result,
the team has been amazingly productive. It has really paid off in the
maintenance/enhancement phase, where we can quickly upgrade or change
the semantics of various pieces knowing that if we violate any
downstream assumptions, those problems will be caught by the developer
before finishing coding, "unit" testing, and checking in. If you
don't use assertions, your programs probably continue to do bad things
in the sections you haven't well tested and you have to infer the
problem from the tea-leaves at the point where you finally figure out
that the program has done something wrong (like crashed)--that doesn't
work so well on a 600K line program, where there are huge blocks of
code you have never read much less understand or wrote.

Now, to bring this back to the cpp. Our assertion code is a set of
cpp macros. Originally, they were just the macros supplied by the C++
compiler. However, because that C++ assertion support was written as
cpp macros, when we realized that we wanted something more
sophisticated than what our C++ vendor provided, we were able to
customize it for ourselves. Moreover, because we now have our own cpp
macros, we can use several different compilers and have our system
work the same way. (And, yes, just like the XERCES code you were
complaining about, not all of our target compilers are "modern" so we
have some macros that deal with broken compilers that we are REQUIRED
to support.)

(This by the way is a flaw in JAVA. If I go to a web-site that has
JAVA that is poorly written and only works with one version of the
spec and that isn't the version installed on my machine, the web-site
is broken. What does one do when one has two different web sites,
that both require different versions of the spec? In C++ the web site
author can write his code to be compatible not only with the newest
spec, but also with older versions (perhaps with less functionality),
so that the user can pick an old version of the compiler and have all
the code be happy--maybe not as "fancy" as one would like, but
working! BTW, I experience this problem on a daily basis, as I have
two web sites that I need to visit and they use incompatible JAVA
versions. I have two machines so that I can work around that
problem.)

And, what is the beauty of those macros (besides their portability),
is that our primary source code (the uses of those macros) is easy to
read. The syntax of the assert (and other macros) is quite simple and
obvious. That is only doable because they are macros. If they
weren't macros, some of the behind-the-scene-stuff like determining
the current object would have to be present in the source code of each
use. Without cpp, either the source code (at assertion use) would
have to be much uglier or the "assertion support" would have to be
built into the compiler. And if the assertion support were built into
the compiler, we would only get what the C++ compiler vendor provided
us, which would probably be a poor assertion package on most
platforms. Also note that the sophistication of our assertion support
has grown with our code. If it wasn't done via macros, the
behind-the-scenes stuff that cpp hides would have to have been changed
at every assertion use when we realized that there was "a better way" to
do things, as we have numerous times. In a language without cpp, we
simply wouldn't have bothered to do it, and we wouldn't have been
nearly as productive as a result.

So, that's the beauty of cpp, it lets one write simple obvious code
and put behind it something sophisticated that makes the code do the
right thing. It lets one do that in a compiler independent manner and
do so even in the presence of seriously broken compilers. And, it
lets us "the application developers" do it and does not make us
dependent on our compiler vendors.

Oh, and by the way, I think that by "cleverly" using function
prototypes that match our macros, we get nice code completion of the
macros in our IDE--the best of both worlds.

Note, somewhere you wrote that you can do some of the same things with
sed and make. If you don't like cpp, why would you want to impose an
external macro processor (that's what you are using sed for in that
scenario) on your code? At least with cpp, you can know that you have
something that is portable. For the longest time, I used systems that
didn't even have sed on them. I may still do so. I don't know. It
isn't something I would use. Moreover, with sed macros, your IDE has
no hope of knowing exactly how your code will be transformed before
being compiled. That is much worse than cpp. (I can just imagine how
our development environment would be obscure if to get assertions, we
had to preprocess all our code with sed scripts. Wouldn't that be a
joy???)

-Chris

******************************************************************************
Chris Clark Internet : co*****@world.std.com
Compiler Resources, Inc. Web Site : http://world.std.com/~compres
23 Bailey Rd voice : (508) 435-5016
Berlin, MA 01503 USA fax : (978) 838-0263 (24 hours)
------------------------------------------------------------------------------
Jul 22 '05 #34

Chris F Clark wrote:
Steven T. Hatton talking about assertions (via cpp) wrote:
There has to be some need I have before I look for something to fill
it. The idea of aborting a program on failure is simply not
something I believe to be a good practice.
Who says assertions have to abort your program?


IIRC, that was the context not the necessary behavior. And I was really
intending the official <cassert> which is said to be documented in the C
standard documentation. I don't have that, but K&R tell me it aborts the
program. They don't tell me I can change that behavior.
Note, that nowhere did I say that our assertions abort the execution
of the program--assertions tell you where something went wrong, a good
assertion package helps you debug that point--a poor assertion
package simply "gives up" (with a message) at the point where the
code failed--however, even that is better than the code randomly doing
something further wrong and possibly silently producing incorrect
results or mysteriously crashing in some unrelated part of the
program.

However, in either case, assertions are invaluable in making certain
that the code does exactly what we say it does and that flaws are
caught as near to the source of the error as possible. As a result,
the team has been amazingly productive. It has really paid off in the
maintenance/enhancement phase, where we can quickly upgrade or change
the semantics of various pieces knowing that if we violate any
downstream assumptions, those problems will be caught by the developer
before finishing coding, "unit" testing, and checking in.
In the sense of JUnit, I have used that approach. In that case you create a
harness, and some use cases to run against your code, testing that
preconditions produce correct post conditions. That typically stands
outside the actual program code.
If you
don't use assertions, your programs probably continue to do bad things
in the sections you haven't well tested and you have to infer the
problem from the tea-leaves at the point where you finally figure out
that the program has done something wrong (like crashed)--that doesn't
work so well on a 600K line program, where there are huge blocks of
code you have never read much less understand or wrote.
It probably also depends on the nature of the product whether such things
are generally useful. You seem to have something akin to a huge Karnaugh
map. Probably much more suited to such structured evaluation than the
kinds of systems I've worked on. Also bear in mind that I do use
exceptions in similar ways.
Now, to bring this back to the cpp. Our assertion code is a set of
cpp macros. Originally, they were just the macros supplied by the C++
compiler. However, because that C++ assertion support was written as
cpp macros, when we realized that we wanted something more
sophisticated than what our C++ vendor provided, we were able to
customize it for ourselves. Moreover, because we now have our own cpp
macros, we can use several different compilers and have our system
work the same way. (And, yes, just like the XERCES code you were
complaining about, not all of our target compilers are "modern" so we
have some macros that deal with broken compilers that we are REQUIRED
to support.)
I understand that such things are required in some cases. There also seems
to be a tendency to imply 'we would have done it like this anyway' in many
cases. I understand that it can add flexibility to your code base if you
can systematically change your namespace name throughout the code base by
modifying a macro somewhere. There are other ways of accomplishing such
things which preserve and indeed exploit the integrity of the language. If
the code base has a predictable and regular structure, it is quite easy to
perform systematic global manipulations. I've done it in various
circumstances. Such a task is an anathema to the sensibilities of many C
and C++ programmers, for understandable reasons.
(This by the way is a flaw in JAVA. If I go to a web-site that has
JAVA that is poorly written and only works with one version of the
spec and that isn't the version installed on my machine, the web-site
is broken.
It sounds ancient. There was a significant change between the 1.x and 2.x
Java versions that impacts the GUI.
What does one do when one has two different web sites,
that both require different versions of the spec?
I believe there are ways of addressing that problem. Solaris used to have
three versions of Java installed, and was able to pick the right one for
every app somehow.
In C++ the web site
author can write his code to be compatible not only with the newest
spec, but also with older versions (perhaps with less functionality),
so that the user can pick an old version of the compiler and have all
the code be happy--maybe not as "fancy" as one would like, but
working!
That's a completely different issue. With Java, you are running an applet
on your local JVM. With a page served out with C++ you are either getting
HTML served to you, or you are running some kind of pluggin that has to be
specifically compiled for your OS and hardware. If it's the former, Java
can do that in ways that are completely indistinguishable from what a C++
server will do. If you are talking about running a pluggin, then I can
assure you with the utmost confidence there are _more_ compatibility issues
with C++ than with Java.

And, what is the beauty of those macros (besides their portability),
is that our primary source code (the uses of those macros) is easy to
read. The syntax of the assert (and other macros) is quite simple and
obvious. That is only doable because they are macros. If they
weren't macros, some of the behind-the-scene-stuff like determining
the current object would have to be present in the source code of each
use.
I don't follow here. How can a macro determine the current object without
some kind of intentional intervention? I'm not saying it can't be done.
Perhaps you are talking about something similar to Qt's moc?
Without cpp, either the source code (at assertion use) would
have to be much uglier or the "assertion support" would have to be
built into the compiler. And if the assertion support were built into
the compiler, we would only get what the C++ compiler vendor provided
us, which would probably be a poor assertion package on most
platforms. Also note that the sophistication of our assertion support
has grown with our code. If it wasn't done via macros, the
behind-the-scenes stuff that cpp hides would have to have been changed
at every assertion use when we realized that there was "a better way" to
do things, as we have numerous times.
It sounds like macros are working well for you. And there certainly is
merit in 'if it ain't broke, don't fix it'. At the same time, I have to
wonder whether you would not have discovered an equally elegant approach
using other techniques. I also have to wonder what other internal features
of the language would have evolved to fill such needs. I can also
appreciate that macros may serve you better in supporting backward
compatibility. What I'm striving for is a way of doing things better if
you can afford a clean start.

Take careful note that I am not actually advocating abolishing the CPP, nor am
I advocating the abolishment of the #include. I simply want superior means
of doing some of the things the CPP is currently used for.
In a language without cpp, we
simply wouldn't have bothered to do it, and we wouldn't have been
nearly as productive as a result.
I'll have to take that as an opinion, not a conclusion. I've never used
them, but Java supports some kind of assertion. And as I mentioned earlier
there is JUnit as well.
So, that's the beauty of cpp, it lets one write simple obvious code
and put behind it something sophisticated that makes the code do the
right thing. It lets one do that in a compiler independent manner and
do so even in the presence of seriously broken compilers. And, it
lets us "the application developers" do it and does not make us
dependent on our compiler vendors.
Again, I have to wonder if the situation is a clear cut as you say it is.
I've seen 'pre-processing' done with Java, and I've seen some rather
inventive manipulations of Lisp which transcend the core language to add
functionality. I doubt it's likely to happen, but I suspect that if the
CPP were removed from the language, people would rediscover sed and awk in
a hurry.
Oh, and by the way, I think that by "cleverly" using function
prototypes that match our macros, we get nice code completion of the
macros in our IDE--the best of both worlds.
Simple code completion really isn't much of an issue. I am talking about
far more sophisticated features such as being able to locate any class and
import the fully qualified class name by typing a few characters of the
name, and hitting a key combo. Also virtually complete error detection
before you compile.
Note, somewhere you wrote that you can do some of the same things with
sed and make. If you don't like cpp, why would you want to impose an
external macro processor (that's what you are using sed for in that
scenario) on your code? At least with cpp, you can know that you have
something that is portable. For the longest time, I used systems that
didn't even have sed on them. I may still do so. I don't know. It
isn't something I would use. Moreover, with sed macros, your IDE has
no hope of knowing exactly how your code will be transformed before
being compiled. That is much worse than cpp. (I can just imagine how
our development environment would be obscure if to get assertions, we
had to preprocess all our code with sed scripts. Wouldn't that be a
joy???)


I wasn't fully serious when I said that. OTOH, over the years, I've
encountered a good deal of that kind of thing. Also worth mentioning is
that asserts are really not something I see as a significant problem. At
worst I think they are a bit tacky.

--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #35

Steven T. Hatton wrote:
And you get threadding, unicode, effortless portability, incredibly smooth
refactoring, highlevel abstraction with the tools to support it, great,
Threading is good??
The one thing that would currently not be doable with a template is to
instantiate it using a string or char*. I can do that with an exception,
however.

I can do it with a scalable cluster of furbies:

http://www.trygve.com/furbeowulf.html ;-)
#define CPPUNIT_ASSERT_EQUAL(sample, result) \
if ((sample) != (result)) { stringstream out; \
out << __FILE__ << "(" << __LINE__ << ") : "; \
out << #sample << "(" << (sample) << ") != "; \
out << #result << "(" << (result) << ")"; \
cout << out.str() << endl; \
OutputDebugStringA(out.str().c_str()); \
OutputDebugStringA("\n"); \
__asm { int 3 } }


Why do I need a macro here?


So the IDE can automatically take you to a failing line in a test.

(No, I don't want a test case to throw an exception at failure time.)
As has already been observed, if I have the
source, I don't need __LINE__ and __FILE__ information in order to locate
the error. It is unlikely I will be debugging code without having the
source available to me. An exception to that might be when using an
embedded implementation.

If I throw an exception, the debugger will take me to the exact location of the origin, and it shows me all the context variables, and their values.


There's a cycle of gain-saying going on here. I propose that the CPP offers

- stringization
- token pasting
- conditional compilation

Not only can't other C++ mechanisms supply those, other languages have a
very hard time supplying them either. However, whenever I give an example of
one of those solving a quite legitimate problem, you declaim how a similar
problem could be solved via a different technique.

We could have the same conversation regarding 'int'.

--
Phlip
http://industrialxp.org/community/bi...UserInterfaces
Jul 22 '05 #36

Phlip wrote:
Steven T. Hatton wrote:
And you get threadding, unicode, effortless portability, incredibly
smooth refactoring, highlevel abstraction with the tools to support it,
great,
Threading is good??


Huh? Even Microsoft eventually figured that out. Yes. Thread support is
crucial for creating any sophisticated application that can be expected to
do more than one thing at a time.
- stringization
- token pasting
- conditional compilation

Not only can't other C++ mechanisms supply those, other languages have a
very hard time supplying them either. However, whenever I give an example
of one of those solving a quite legitimate problem, you declaim how a
similar problem could be solved via a different technique.


You were presenting these example in support of your claim that the Cpp is
extremely valuable. You are also claiming these things can't be done in
other languages. Java code can be used to generate new classes at runtime,
and load them. Even JavaScript can preform self modification, and much
more powerfully than the CPP. But, the services these examples provide for
you, other than conditional compilation, are things that I have no need
for. There are other ways of achieving the same ends that don't require as
much effort on my part. Not to say your macros are difficult to write or
to use. It's just that I have solutions to these problems that require
virtually no effort.

If you want to impress me, do this in 50 keystrokes, or fewer:
/* original text */
RgbColor text_color;
bool is_leaf;
bool is_vertical;
std::string text;
RgbColor bg_color;
RgbColor edge_color;
double border_w;
double h;
double w;
double x;
double y;
int bg_z;
int text_z;
std::string text;

/* regexp search and replace results*/

<< "text_color" << text_color << "\n"
<< "is_leaf" << is_leaf << "\n"
<< "is_vertical" << is_vertical << "\n"
<< "text" << text << "\n"
<< "bg_color" << bg_color << "\n"
<< "edge_color" << edge_color << "\n"
<< "border_w" << border_w << "\n"
<< "h" << h << "\n"
<< "w" << w << "\n"
<< "x" << x << "\n"
<< "y" << y << "\n"
<< "bg_z" << bg_z << "\n"
<< "text_z" << text_z << "\n"
<< "text" << text << "\n"
--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #37

On Wed, 01 Sep 2004 03:52:53 -0400,
Steven T. Hatton <su******@setidava.kushan.aa> wrote:
Phlip wrote:
Steven T. Hatton wrote:
And you get threadding, unicode, effortless portability, incredibly
smooth refactoring, highlevel abstraction with the tools to support it,
great,


Threading is good??


Huh? Even Microsoft eventually figured that out. Yes. Thread support is
crucial for creating any sophisticated application that can be expected to
do more than one thing at a time.


Strangely enough I've been using non-threaded applications that
do more than one thing at a time and are reasonably sophisticated.

Where sophisticated means things like: makes multiple concurrent TCP
connections, encodes and decodes audio sending and receiving it over
UDP, displays GUI and so on - without a thread in sight (well one
thread for the pedants).

I even write them occasionally.

Threading throws away decades of work in creating systems with useful
protected memory spaces for processes. And lets the average programmer
meet all the problems and reinvent (poorly) all the solutions all over
again. Rather than using the solution implemented by the (hopefully
much more experienced and competent in the domain) OS authors.

Of course there's that vanishingly small percentage of problems that
are best solved with threads, but chances are you, me, and the
next guy aren't working on one of them.

And of course there's Windows and Java which make up a large chunk
of the platforms that are programmed for and need threads to do
anything more interesting than "Hello World" because the main
alternatives (non-blocking IO and processes) are brain damaged,
broken, slow, or all of the above.

But this has nothing to do with C++ :)

--
Sam Holden
Jul 22 '05 #38

Sam Holden wrote:
Steven T. Hatton wrote:
Phlip wrote:
Threading is good??
Hu? Even Microsoft eventually figured that out. Yes. Thread support is
crucial for creating any sophisticated application that can be expected to do more than one thing at a time.


Strangely enough I've been using non-threaded applications that
do more than one thing at a time and are reasonably sophisticated.

Where sophisticated means things like: makes multiple concurrent TCP
connections, encodes and decodes audio sending and receiving it over
UDP, displays GUI and so on - without a thread in sight (well one
thread for the pedants).

I even write them occasionally.

Threading throws away decades of work in creating systems with useful
protected memory spaces for processes. And lets the average programmer
meet all the problems and reinvent (poorly) all the solutions all over
again. Rather than using the solution implemented by the (hopefully
much more experienced and competent in the domain) OS authors.

Of course there's that vanishingly small percentage of problems that
are best solved with threads, but chances are you, me, and the
next guy aren't working on one of them.

And of course there's Windows and Java which make up a large chunk
of the platforms that are programmed for and need threads to do
anything more interesting than "Hello World" because the main
alternatives (non-blocking IO and processes) are brain damaged,
broken, slow, or all of the above.


I am aware that sometimes a program must eat a sandwich with one hand and
drive a car with the other.

I have never personally seen a situation improved by threads. If "Process A
takes too long, and we need Process B to run at the same time", this
indicates that A incorrectly couples to its event driver. Event-driven
programs should respond to events and update their object model. They should
not go into a loop and stay in it for a while.

People thread when they are unaware of how select() or
MsgWaitForMultipleObjects() work. Then if they need inter-thread
communication, they add back the semaphores that their kernel would have
provided.

Steven T. Hatton wrote:
- stringization
- token pasting
- conditional compilation

You were presenting these example in support of your claim that the Cpp is
extremely valuable.
Not extremely. Just more valuable than mimicking Bjarne's more advanced
opinion.
If you want to impress me, do this in 50 keystrokes, or fewer:
/* original text */
RgbColor text_color;
bool is_leaf;
bool is_vertical;
std::string text;
RgbColor bg_color;
RgbColor edge_color;
double border_w;
double h;
double w;
double x;
double y;
int bg_z;
int text_z;
std::string text;

/* regexp search and replace results*/

<< "text_color" << text_color << "\n"
<< "is_leaf" << is_leaf << "\n"
<< "is_vertical" << is_vertical << "\n"
<< "text" << text << "\n"
<< "bg_color" << bg_color << "\n"
<< "edge_color" << edge_color << "\n"
<< "border_w" << border_w << "\n"
<< "h" << h << "\n"
<< "w" << w << "\n"
<< "x" << x << "\n"
<< "y" << y << "\n"
<< "bg_z" << bg_z << "\n"
<< "text_z" << text_z << "\n"
<< "text" << text << "\n"


So far as I understand the question, here:

http://www.codeproject.com/macro/metamacros.asp

CPP strikes again!

--
Phlip
http://industrialxp.org/community/bi...UserInterfaces
Jul 22 '05 #39

Steven T. Hatton wrote:
Kai-Uwe Bux wrote:
Steven T. Hatton wrote: [snip]
For the most part, I've pointed to things you /can/ do with Java and
you /can't/ do with C++.


This is an interesting point. I would like to make a distinction. I think
you pointed to things you *can know* in Java but you *cannot know* in
C++. In C++, because of the preprocessor, you cannot really be sure that
what you read is not transformed into something entirely different.
Because of conditional compilation, you cannot know which headers are
included in which order. Because of the flexibility of throw(), you
cannot be sure about the type of the object thrown. All these things are
things you cannot know about, because of things you (or others) *can do*.


Yes. You are correct. Information technology is about information. Good
solid easily obtainable information is vital to design, to implementation,
to trouble shooting and to security.


a) This is rhetoric. The information in "Information technology is about
information." is the information, my program deals with. The information in
"Good solid easily obtainable information is vital to design, to
implementation, to trouble shooting and to security." is information about
the structure of my code. The second statement may be true, but it does not
follow from the first. You are using one term to reference two different
entities.

b) The information about my code is already available. After all, it must
be sufficient for the compiler to generate object code. However, I agree
that in C++ the information may be scattered around and is not as local as
the human mind (or some IDE) would like it to be.

c) I still see that there are trade offs. If you increase what can be known
at coding time at the cost of what can be done, then there are trade offs.
Since, apparently, I do not face the difficulties you are dealing with, I
would rather keep the power of what can be done.

In short, you want to enforce coding policies by language design. I,
however, like C++ precisely because it does not enforce policies but
provides mechanisms, and a lot. E.g., [try, throw, catch] to me is not
about error handling but about stack unwinding; and your suggestion
that throw() should only accept arguments derived from std::exception
would break some of my code.

Then you didn't read all of what I wrote about the topic. Either that,
or you chose to ignore it.


It is a pity that you chose to pick on the example and did not address
the main point that I raised in the topic sentence. I apologize if I
mischaracterized your opinion on exceptions. I still feel that you prefer
the language to enforce policies rather than have it provide mechanisms.


In the case of exceptions, I have solid experience that supports the
opinions I hold. What I suggested simply works better as a default. And
that is what I was referring to when I suggested you hadn't read
everything
I wrote. I explicitly said that this should be configurable through some
mechanism similar to the one currently used to switch out handlers. I'm
not sure how that might be accomplished, but I suspect someone in the C++
community is capable of finding a means.


I will drop exceptions. Obviously talking about them just gives you an
opportunity not to address the issue of enforcing policies versus providing
mechanisms.
To take your last paragraph as an example: You start by talking about
library management when finally addressing the preprocessor in the
negative. This creates the subtext: "library management is *the*
legitimate use of the preprocessor, let's find a better way to that (so
that hopefully nobody will use the preprocessor anymore)."

Again, I am not saying that policy enforcing is a bad idea. I just prefer
to have at least one really powerful language around that does not do
that. And I like C++ to be that language.


Actually, to a large extent, it's the other way around. I believe the
library management in C++ stinks. If there were a better system - which I
believe is highly doable - I would have far less to complain about
regarding the CPP. The CPP does go against certain principles of
encapsulation that are generally regarded as good in computer science. A
better library system would go a long way toward addressing that as well.
If people aren't #including 15 megs of source in every file, there is less
(virtually no) opportunity for hidden variables entering the environment.


Maybe, if cpp was even more powerful and more convenient, a superior
library management could be implemented using macros.
Best

Kai-Uwe Bux
Jul 22 '05 #40

P: n/a
Steven T. Hatton wrote:
Chris F Clark wrote:
Who says assertions have to abort your program?


IIRC, that was the context not the necessary behavior. And I was really
intending the official <cassert> which is said to be documented in the C
standard documentation. I don't have that, but K&R tell me it aborts the
program. They don't tell me I can change that behavior.


Absolutely, Steve. Any macro (or little function) you write, which has a
name or intent anything like "assert", must abort your program. It's the
rules!

--
Phlip
http://industrialxp.org/community/bi...UserInterfaces
Jul 22 '05 #41

Steve Hatton wrote:
Take careful note that I am not actually advocating abolishing the CPP, nor am
I advocating the abolishment of the #include. I simply want superior means
of doing some of the things the CPP is currently used for.


Then the title is wrong. I could care less about #include. The quote
from BS isn't about #include either. The part of cpp he doesn't like
is the textual manipulation stuff. That is the part that is
*NECESSARY* and powerful.

At best #include is a small implementation detail for doing imports.
Sure it is ugly in its implementation, but it harks back to the fairly
early days of C. I believe there are even C++ compilers that don't
use/need it. The current standard certainly implies that with the new
syntax of #include for the language specified libraries. Those
#include statements don't need to be implemented as files.

Of course, for backward compatibility, there will always be a way to
implement them using files--too many C++ compilers have and always
will have implemented them that way. And, backward compatibility is
truly important in C++--and actually in almost every programming
language.

If you want to break backward compatibility, it is better to design a
"new" language, i.e. Java, C#, D, than to try to change an old
language. Users are going to migrate to newer languages anyway, and
you are better off making your new language be one of the alternatives
they can try, than trying to get all the compiler vendors for your
current language to come on board with your backward incompatible
version.

The exception being when the language is very young. C++ history
proves that. They managed to change some fairly key things in a
non-backward compatible way between Cfront 1.x and Cfront 2.x and we
users migrated. (It took some CPP macros, so that we could have both
1.x and 2.x compatible code, but that was okay.) However, some of the
changes from Cfront 2.x to the approved standard are not yet
incorporated into the most widely used compilers. C++ is too old
*NOW*. Any other backward incompatible changes will be mostly like
the changes that went into FORTRAN 90--I bet there were a lot more
FORTRAN 77 compilers than there ever will be FORTRAN 90 ones. I doubt
that in 10 years from now I will be writing any C++, maybe not in 5.
On the other hand, I've written C++ code for 15 years already, so it
won't be like I didn't use it for a long time.

-Chris
Jul 22 '05 #42

I wrote:
And, what is the beauty of those macros (besides their portability),
is that our primary source code (the uses of those macros) is easy to
read. The syntax of the assert (and other macros) is quite simple and
obvious. That is only doable because they are macros. If they
weren't macros, some of the behind-the-scene-stuff like determining
the current object would have to be present in the source code of each
use.
Steve Hatton replied:

I don't follow here. How can a macro determine the current object without
some kind of intentional intervention? I'm not saying it can't be done.
Perhaps you are talking about something similar to Qt's moc?


It takes some "cooperation". In fact, recently we have been trying to
templatize some of the cooperative parts. Here is a post I wrote on
that particular topic.

------------------------------------------------------------------------

In our C++ project we have some internal bug reporting macros that we
use to get useful information when the program does something
unexpected. Essentially at the point of the error, we invoke an
internal interactive debugger that knows the [important] classes
within our system and allow us to walk around the objects that exist
at the time of the fault. It mostly works fairly well. The catch is
that we have to hand implement some of the code in each of the
classes (thus the "important class" caveat), which suggests that what
we really need is a template based approach, since it would be much
nicer to have the system automatically generate an implementation for
all classes, eliminating the important/unimportant distinction.

In addition, our current implementation does not work in static member
functions, which is a secondary problem (but a key one if we go to a
template based approach). Currently, we can sidestep this problem
because we have no important classes that also have static member
functions that use the bug reporting macro. (Actually, this topic
came up because I just made a class "important" that had a static
member function that used the bug report macro, and I had to remove
the static member function from the class and make it a global
function to work around the limitation of our current scheme.)

Essentially, we want an ErrorHere function that works anywhere in our
code. If it is in a non-static member function of the class, it calls
the ErrorHereInClass virtual member function of the class which allows
the developer to see the data of the object being operated upon.
Anywhere else, we want the code to default to "assert(false)" like
behaviour. Our current implementation pretty much achieves that. In
objects that we care about, we have a few things that we add to the
class source code and then have an appropriate ErrorHereInClass
function that we can specialize. In classes where we haven't done that,
we get the default implementation, which essentially prints out an
appropriate error.

Most of the functionality is implemented via an "ErrorHere" header
file. The ErrorHere header file defines a couple of classes and some
macros (to sugar things by hiding some of the internal mechanism).
The key macros appear to be a set of functions that work like
"assert(false)", i.e. you call one of them when the code is in trouble
and it figures out how to best report to the user the location of the
problem.

Here, tersely, is what the ErrorHere header looks like:

class ErrorHereDeveloper; // controls some internal features

class ErrorHereClass {
public:
ErrorHereClass( ErrorHereDeveloper *dev ) :
errorHereDeveloper( dev )
{}
virtual ~ErrorHereClass() {}
virtual void ErrorHereInClass( const char *textToReport ) // overridden
{ cout << textToReport << endl; }
void ErrorHereReport( const char *textToReport )
{ ErrorHereInClass( textToReport ); }
protected: // data is here for this class and derived classes to use
ErrorHereDeveloper *errorHereDeveloper;
};

// default version that returns above "base" class
ErrorHereClass ErrorHereClassFactory( ErrorHereDeveloper *dev )
{
ErrorHereClass mine( dev );
return mine;
}

// use this call when any developer can handle this bug
extern void ErrorHere( const char *textToReport, ... );
extern class ErrorHereDeveloper *anyone;
#define ErrorHere ErrorHereClassFactory( anyone ).ErrorHereReport

// I use this call when I want users to report the bug only to me
extern void ErrorHereForChris( const char *textToReport, ... );
extern class ErrorHereDeveloper *chris;
#define ErrorHereForChris ErrorHereClassFactory( chris ).ErrorHereReport

. . .

As you can see, a "call" to ErrorHere in the presence of just this
header file calls the "global" ErrorHereClassFactory, which returns an
object whose ErrorHereReport function takes the string and simply prints
it (using a virtual function). Of course, our real implementation
does something more complex, but this is the essential "base"
functionality.

Now, let's look at a customized class, say "Square":

class ErrorHereClassSquare;

class Square {
ErrorHereClassSquare ErrorHereClassFactory( ErrorHereDeveloper *dev )
{
ErrorHereClassSquare mine( this, dev );
return mine;
}
virtual void ErrorHereInClass( const char *textToReport )
{ cout << "Square[" << length << ", " << width << "]: " <<
textToReport << endl; }
// rest of Square
. . .
int length, width; // (ok, maybe I meant rectangle!)
};

class ErrorHereClassSquare : public ErrorHereClass {
public:
ErrorHereClassSquare( Square *square, ErrorHereDeveloper *dev )
: ErrorHereClass( dev ),
mySquare( square )
{}
~ErrorHereClassSquare() {}
virtual void ErrorHereInClass( const char *textToReport ) // override
{ if ( mySquare ) mySquare->ErrorHereInClass( textToReport );
else cout << textToReport << endl; }
private:
Square *mySquare;
};

So, as you can see, the special class ErrorHereClassSquare and the per
class function ErrorHereClassFactory allows us to customize the code
for our classes. However, this code is "boilerplate" and it would be
nice if we could somehow use a template to create them. I think I
understand how to write the template that will create the
ErrorHereClass<Square>. That seems relatively straightforward.

However, it would also be nice, if we could somehow get the
functionality to work in static member functions, i.e. to have
something that would "revert" to the default implementation if the
"this" pointer wasn't available. I have no idea how to make a
(function?) template that tests its current context and determines if
the function it is being called within has a "this" pointer or not.
My fear is that since we make ErrorHere calls throughout our code
(including in static member functions and non-member functions) that
if we attempt ErrorHere in a static member function with a template
solution in place, that this will result in a call to the template
class constructor for the corresponding ErrorHere<Class> without
having a this pointer to pass to the class. At least that's what
happens when I define a per class ErrorHereClassFactory method in a class
that makes ErrorHere calls from static member functions.

I apologize if this is trivial to accomplish, but it is just too
subtle for me and my meager template programming ability. In the end
I would like something that works within the limits of both Visual C++
6.x and g++ 3.2.3.

-Chris

*****************************************************************************
Chris Clark Internet : co*****@world.std.com
Compiler Resources, Inc. Web Site : http://world.std.com/~compres
23 Bailey Rd voice : (508) 435-5016
Berlin, MA 01503 USA fax : (978) 838-0263 (24 hours)
------------------------------------------------------------------------------

Jul 22 '05 #43

Steve T Hatton, talking about assertions, wrote:
In the sense of JUnit, I have used that approach. In that case you create a
harness, and some use cases to run against your code, testing that
preconditions produce correct post conditions. That typically stands
outside the actual program code.

That's the difference with assertions. They are inside the actual
program code. That's right. We ship our production code with the
assertions turned on. (Actually, the system has levels, so the
production code has the key assertions turned on, but the ones which
have n**2 and worse performance are turned off, as they are only for
extreme debugging cases.)

We want the assertions turned on in the production code, because our
clients/users are building chips that will be stamped into millions
(perhaps billions) of pieces of silicon that will sell for hundreds
of $ each. If our software does the wrong thing, and that causes them
to misdesign the chip, the cost is not even something one wants to
think about. I think you will find that the safety critical people
(people who make life sustaining software, banks, rocket control
systems) all take a similar point of view. We want to be able to
build tools that we can make certain are reliable and we are willing
to put great effort into assuring that those tools are reliable.
Running with assertions turned on in production code, is a small price
for us to pay in that regard.

The point is that we know that in 600K lines of code there are
likely to still be bugs. There is no way we could test all the
combinations of interactions. More importantly, we are adding new
features all the time. It is simply impossible to get each of those
new features to work in every case with all the current features for
all the subtle cases. Therefore, we have assertions that tell us when
something is broken, and they invoke a user and developer friendly
debugger.

We can't depend on simple per unit testing, because most of the
interesting invariants are not isolated to one piece of code. It's
not like we have simple loops on arrays and we want to make sure we
don't step off the end (well, we have those too). We have a model
that has subtle semantics. That is what we want to make sure we have
right.
It probably also depends on the nature of the product whether such things
are generally useful. You seem to have something akin to a huge Karnaugh
map. Probably much more suited to such structured evaluation than the
kinds of systems I've worked on. [more below]

Nothing at all like that. We have a graphics front end of about 100K
lines, a verilog compiler of another 100k lines, an internal model
(the nets and gates etc.), another 150K lines, an interpreter for
still another 100K lines, and numerous other "parts" that are much
smaller. However, all of these pieces are pretty much standard code
in their domain.

The key point is that they are all non-trivial parts and they all
interact. So, if I am fixing something in the compiler, I want to
know if I screw up and produce a model that doesn't make sense for
some inputs, because if it does, then the code which runs in the
interpreter is likely to perform some sort of non-sensical
calculation and return "5" to the user for some calculation that was
only 2 bits wide.
Also bear in mind that I do use exceptions in similar ways.


That's good. However, with a code base like ours it is hard to make
systemic changes if we haven't packaged the code into easily
identifiable pieces. Assertions are a nice tool for packaging up one
piece. If you want, consider the following:

#define assert(x) if (! x) throw(problemHere)

Now, all our assertions have just become exception based. Yes, one
probably wants something more sophisticated, but it illustrates the
point. Assertions are a way of indicating at the source level what we
expect to be true at run-time. The implementation of how the
assertion works is not important for that to be true.

And, that brings us back to the text processing power of CPP. The
point of CPP is to take something that looks like one kind of C++
statement and turn it into something much more sophisticated. And, to
do so in a systematic way throughout a large project.

I don't want to run an editor macro over the entire source tree just
because I have found a slight improvement in how we can debug
something. Because, if the code has been written by hand (e.g. "if
(!x) throw(problemHere);") there is a good chance that some copy of
that code is spaced differently or somehow not "right" for my editor
macro. (Gee, look line wrapping made it happen right here in my email
sample.) As a result, the code will be broken and I won't even know,
because I can't look reliably at all the code. However, a CPP macro
will reliably replace all the instances, and if it is broken I'll get
a compiler syntax error on the resulting code because it won't be
valid C++.

Now, are there ways to make CPP do that job better, probably yes. For
example, if you want to propose adding a scoping feature that when
used limited the scope of macros, so that they would have scope, just
like other constructs, I would be supportive, because if it were
available in the compilers I used it would make my life better. And
if it weren't, well I'd be no worse off. (Well, assuming that you
remembered the backward compatibility rule, that existing code not
using the feature should not be broken by the feature.)

-Chris

Jul 22 '05 #44

Chris F Clark wrote:
We want the assertions turned on in the production code, because our
clients/users are building chips that will be stamped into millions
(perhaps billions) of pieces of silicon that will sell for hundreds
of $ each. If our software does the wrong thing, and that causes them
to misdesign the chip, the cost is not even something one wants to
think about. I think you will find that the safety critical people
(people who make life sustaining software, banks, rocket control
systems) all take a similar point of view. We want to be able to
build tools that we can make certain are reliable and we are willing
to put great effort into assuring that those tools are reliable.
Running with assertions turned on in production code, is a small price
for us to pay in that regard.
Point. In a softer language that "takes care of assertions for us", control
over those assertions is much harder to exert.

The Ariane 5 rocket, less than a minute off the launch pad, slammed its
rockets to one side, stalled, broke up, and was destroyed. It slammed its
rockets over because the language, Ada, took care of an assertion for the
programmer, when the programmers ignored it. The controller with the
exception should not have thrown it. Instead it threw, and the rocket
controllers mistook the exception for a command.

(There are many, many other rationales for which bit of the Ariane mission
could have worked better. Flight did not require the controller involved,
and it could have been switched off.)

Returning control to programmers (and enabling programmers, within a
process, to use that control) can be better than a language that makes
those decisions for them.
The point is that we know that in 600K lines of code there are
likely to still be bugs. There is no way we could test all the
combinations of interactions. More importantly, we are adding new
features all the time. It is simply impossible to get each of those
new features to work in every case with all the current features for
all the subtle cases. Therefore, we have assertions that tell us when
something is broken, and they invoke a user and developer friendly
debugger.
Righteous.
That's good. However, with a code base like ours it is hard to make
systemic changes if we haven't packaged the code into easily
identifiable pieces. Assertions are a nice tool for packaging up one
piece. If you want, consider the following:

#define assert(x) if (! x) throw(problemHere)


Ahem.

#define assert_(x) if (! (x)) throw(problemHere(#x, __FILE__, __LINE__))

;-)

I could continue to add a Debug-mode version of that assertion which halts
the program with a breakpoint on the failing line.

(And re-writing a Standard C++ Library thing is against my religion.)

--
Phlip
http://industrialxp.org/community/bi...UserInterfaces
Jul 22 '05 #45

P: n/a
I wrote:

... If you want, consider the following:

#define assert(x) if (! x) throw(problemHere)

Phlip corrected:

Ahem.

#define assert_(x) if (! (x)) throw(problemHere(#x, __FILE__, __LINE__))

;-)

I could continue to add a Debug-mode version of that assertion which halts
the program with a breakpoint on the failing line.


I presume the correction is due to my overly simplistic assertion. In
this case, I was trying to point out that one could use exceptions to
implement assertions and trying to use the simplest code possible.
However, Phlip's point is accurate in that one probably wants
something more sophisticated than my strawman example. As I'm sure
Phlip knows (and this is just for other readers who aren't as C++
savvy), in fact, if one reads a typical assertion implementation, one
learns that CPP is just one of the tools in the bag-of-tricks to
getting a sophisticated assertion tool. One generally calls an
"assertion_failure" routine with parameters like #x, __FILE__,
etc. and that routine then decides how the assertion is to be
reported, e.g. by throwing an exception.

And, in fact, Phlip's correction nicely illustrates the broader
point. If I had coded the first exception throwing assertion model in
a real app, and a coworker had seen it, they could have fixed it just
as Philip did. As a result, all of my assertions in the code would
now work better!

That would not have happened if I had written the code inline in all
the places I might have desired the check. They might have found one
instance. If they were motivated and had the time, they might have
done a search for other places, but they might easily have missed some
place where my coding was slightly different and it wasn't obvious
that this was an exception for assertion error checking.

Thanks, Phlip!
-Chris

Jul 22 '05 #46

Chris F Clark wrote:
Steve T Hatton, talking about assertions, wrote:
In the sense of JUnit, I have used that approach. In that case you
create a harness, and some use cases to run against your code, testing
that
preconditions produce correct post conditions. That typically stands
outside the actual program code.
That's the difference with assertions. They are inside the actual
program code. That's right. We ship our production code with the
assertions turned on. (Actually, the system has levels, so the
production code has the key assertions turned on, but the ones which
have n**2 and worse performance are turned off, as they are only for
extreme debugging cases.)


To some extent we are discussing two different issues. One is assertions as
a software engineering practice, the other is how they are implemented.
I've taken no position on the former matter other than to say that
particular approaches did not appeal to me, and I have not found things
called assertions useful. On the latter point I have pointed out that
TC++PL(SE) suggests a native language alternative to the use of macros for
assertions.
We want the assertions turned on in the production code, because our
clients/users are building chips that will be stamped into millions
(perhaps billions) of pieces of silicon that will sell for hundreds
of $ each. If our software does the wrong thing, and that causes them
to misdesign the chip, the cost is not even something one wants to
think about. I think you will find that the safety critical people
(people who make life sustaining software, banks, rocket control
systems) all take a similar point of view. We want to be able to
build tools that we can make certain are reliable and we are willing
to put great effort into assuring that those tools are reliable.
Running with assertions turned on in production code, is a small price
for us to pay in that regard.
I'm beginning to understand that this is an issue of vocabulary. What you
are calling assertions are sometimes called consistency checks.
Stroustrup's presentation of assertions is in conjunction with the concept
of invariants. Invariants trace back to the original theories of database
management and ACID transactions. So it can probably be shown that the use
of assertions is logically similar, or identical, to the approaches used in
DBMS's. I haven't given this much thought, and the terminology describing
ACID transactions is not currently part of my daily vocabulary, so I
suspect there is much room for refinement of this notion.
The point is that we know that in 600K lines of code there are
likely to still be bugs. There is no way we could test all the
combinations of interactions. More importantly, we are adding new
features all the time. It is simply impossible to get each of those
new features to work in every case with all the current features for
all the subtle cases.
What I was saying about removing things had to do with removing them when
you know you have it right. If I have a bug in my software that I'm not
seeing right off, I will put in checks that test values in the areas where
I believe the problem exists. Often, when I isolate and correct the
problem, I can study that piece of code closely enough to convince myself
that similar problems are not present, e.g., suppose I have an index that's
going out of bounds on me. I put in some checks to detect where it's going
out of bounds. When I find that I wrote i < x when I should have written
i < f(x), and i and f(x) are clearly defined, the error trapping code,
which often functions in a way similar to assertions, becomes superfluous.

At that point, leaving it in would result in nothing but code bloat that
obscures the logic of the program. There are often boundary points between
modules in compartmentalized code at which the kinds of checks made by
assertions can be performed to verify that the data entering, and/or
leaving, the component is valid. I've often called these sanity checks.

My first use of C++ exceptions was for just such a situation. I have a
multidimensional grid consisting of an arbitrary number of conceptual
arrays each having an arbitrary number of indices. The values are defined
at runtime. I also have function objects which generate addresses into
this grid. It can either be addressed using an n-tuple of indices, or it
can be addressed using a single index. The actual data is stored linearly
in one valarray. The rest is just a conceptual indexing mechanism I
devised to study the algorithms involved in this kind of situation.

Now, this is a bit trickier than simply checking that I wrote i < x rather
than i <= x for my array bounds. In that situation I put in a try-catch
that tests the addresses generated by the addressor objects. Within 2
minutes of introducing the checks, I was finding problems I had not
foreseen. I could have called my checks assertions, but I wasn't thinking
along those lines.
Therefore, we have assertions that tell us when
something is broken, and they invoke a user and developer friendly
debugger.
This makes perfect sense to me. I will point out that soft-coded assertion
switches might provide more flexibility than traditional macro-based
assertions. And, certainly, we could devise a way of combining these ideas.
Assertions are a way of indicating at the source level what we
expect to be true at run-time. The implementation of how the
assertion works is not important for that to be true.
I agree. This is just a question of semantics. What I was calling
assertions at the time you entered this discussion were specifically the
ones in <cassert>. I will grant that I also said I didn't find
Stroustrup's alternative of using templates rather than macros overly
appealing. I may reconsider that in light of this discussion.
And, that brings us back to the text processing power of CPP. The
point of CPP is to take something that looks like one kind of C++
statement and turn it into something much more sophisticated. And, to
do so in a systematic way through-out a large project.
But a macro has what amounts to a signature just like a function, or a
template. I have yet to be convinced that same systematic replacement
can't be done from within the language by changing definitions. To some
extent, assertion macros seem to be providing a bit of compile-time
introspection for you.

In Java, since every object has a /class/ member describing the class of
that object, there is no need to use such a mechanism. No, I am not
advocating a UBC for C++, I'm already convinced it's not a good idea. But,
a Sub-Universal Base Class, might be worth considering.
However, a CPP macro
will reliably replace all the instances, and if it is broken I'll get
a compiler syntax error on the resulting code because it won't be
valid C++.
In this case, the real argument for retaining the CPP would be that it is
available on all conforming implementations. Certainly you can accomplish
all of this from outside the language. It's done all the time. Even
though Trolltech uses macros to identify their extensions to C++, they go
beyond simple macro replacement to provide the meta object support. (Just
to preempt the cries of foul: The result of moccing the code is again
standard C++).
Now, are there ways to make CPP do that job better, probably yes. For
example, if you want to propose adding a scoping feature that when
used limited the scope of macros, so that they would have scope, just
like other constructs, I would be supportive, because if it were
available in the compilers I used it would make my life better.
To some extent, it would no longer be the CPP, it would be the C++PP. If
the C++ preprocessor were to significantly diverge from the C preprocessor,
it would really throw a wrench into the C compatibility gears. I suspect
if the result of my starting this thread is the introduction of a C++PP, I
will be receiving death-threats from the FSF.
And
if it weren't, well I'd be no worse off. (Well, assuming that you
remembered the backward compatibility rule, that existing code not
using the feature should not be broken by the feature.)


There's also the deprecation rule.
--
"[M]y dislike for the preprocessor is well known. Cpp is essential in C
programming, and still important in conventional C++ implementations, but
it is a hack, and so are most of the techniques that rely on it. ...I think
the time has come to be serious about macro-free C++ programming." - B. S.

Jul 22 '05 #47

Chris F Clark wrote:
Steve Hatton wrote:
Take careful note that I am not actually advocating abolishing the CPP,
nor am I advocating the abolishment of the #include. I simply want superior
means of doing some of the things the CPP is currently used for.
Then the title is wrong.


I have to disagree. I borrowed the form from Edsger W. Dijkstra's title for
a reason. The goto was never abolished, but it is rarely used in most
code.

hattons@ljosalfr:/usr/src/linux/kernel/
Wed Sep 01 18:15:45:> grep goto *.c | wc -l
290

I could care less about #include. The quote
from BS isn't about #include either. The part of cpp he doesn't like
is the textual manipulation stuff. That is the part that is
*NECESSARY* and powerful.
I understand that his focus is on the use of defines, nonetheless, I believe
I am, to a large extent addressing the same problem. I don't want
unnecessary stuff introduced into my translation unit that can result in
unexpected behavior, or hidden dependencies. In addition, and this may go
beyond what Stroustrup is concerned with, I find the use of #include to
introduce resources messy, redundant, inefficient, often confusing, and
significantly annoying.

I see little gained by isolating a given #define to a namespace, if it can
still sneak into my translation unit in a #include. Sure, it narrows down
the possibilities, but the problem remains. When I import an identifier
into my file, that's _all_ I want. Sure, Koenig lookup might technically
bring in a bit more, but I don't believe that introduces anything like the
problem I am trying to solve.
At best #include is a small implementation detail for doing imports.
Sure it is ugly in its implementation, but it harks back to the fairly
early days of C. I believe there are even C++ compilers that don't
use/need it.
If you can provide an example, please share it with us. I really, really,
want to explore this area.
The current standard certainly implies that with the new
syntax of #include for the language specified libraries. Those
#include statements don't need to be implemented as files.
But that is not what I'm talking about. The Standard still specifies that
the entire contents of the header is included in the translation unit. I
believe it would be trickier in C++ than in Java to accomplish what I'm
describing, but I believe it is doable. The problem with C++ is that more
of the supporting infrastructure needs to be compiled per instance, than
does with Java. That means I cannot simply import a template in the way
Java imports a class. The template will (officially) have to be compiled.
(I will note that gcc 3.4.x is now advertising precompiled templates.) But,
even in Java, there is often a need to compile a class before it can be
used where it is imported, so this is simply a matter of degree.
Of course, for backward compatibility, there will always be a way to
implement them using files--too many C++ compilers have and always
will have implemented them that way. And, backward compatibility is
truly important in C++--and actually in almost every programming
language.
I've seen that issue dealt with from both approaches. Certainly, it is
easier to break version 1.x when going to 2.x, than it is to break 3.x when
going to 4.x. Both Java and Qt were willing to forgo backward
compatibility for the sake of progress. There are ways of grandfathering
in older software by lugging around antiquated libraries and compilers, and
creating glue-code to bridge the differences.
If you want to break backward compatibility, it is better to design a
"new" language, i.e. Java, C#, D, than to try to change an old
language. Users are going to migrate to newer languages anyway, and
you are better off making your new language be one of the alternatives
they can try, than trying to get all the compiler vendors for your
current language to come on board with your backward incompatible
version.
I agree, for the most part. However, backward compatibility can sometimes
be achieved as an either/or option: either you use the new feature or the
old one for a given translation unit or program.
C++ is too old
*NOW*. Any other backward incompatible changes will be mostly like
the changes that went into FORTRAN 90--I bet there were a lot more
FORTRAN 77 compilers than there ever will be FORTRAN 90 ones. I doubt
that in 10 years from now I will be writing any C++, maybe not in 5.
On the other hand, I've written C++ code for 15 years already, so it
won't be like I didn't use it for a long time.


Whatever happened to Claudio Puviani?

Jul 22 '05 #48

Kai-Uwe Bux wrote:
Steven T. Hatton wrote:
Yes. You are correct. Information technology is about information. Good
solid easily obtainable information is vital to design, to
implementation, to trouble shooting and to security.
a) This is rhetoric. The information in "Information technology is about
information." is the information, my program deals with. The information
in "Good solid easily obtainable information is vital to design, to
implementation, to trouble shooting and to security." is information about
the structure of my code. The second statement may be true, but it does
not follow from the first. You are using one term to reference two
different entities.


It's not contradictory, it is simply recursive. I am applying the same
argument a mechanical engineer would use to explain his use of computers to
do his job. The programs and tools a software engineer uses are the
programming language, the compiler, the IDE, the libraries, etc.
b) The information about my code is already available. After all, it must
be sufficient for the compiler to generate object code. However, I agree
that in C++ the information may be scattered around and is not as local as
the human mind (or some IDE) would like it to be.
And I believe this is a significant problem that can and should be
addressed.
c) I still see that there are trade offs. If you increase what can be
known at coding time at the cost of what can be done, then there are trade
offs. Since, apparently, I do not face the difficulties you are dealing
with, I would rather keep the power of what can be done.


But what power do you lose by being able to import a resource with a using
declaration, or to bring in (perhaps implicitly) an entire namespace with a
using directive? That really is what I am suggesting, more than
anything. WRT exceptions, the fact of the matter is that placing certain
requirements (restrictions) on their use improves their usability. If that
behavior can be configured at compile time in such a way that the
restrictions are not enforced, you have lost virtually nothing. The only
issue becomes the requirement that you do specify the alternative behavior.
Most compilers would probably provide a means of modifying the option
through a commandline switch, so compiling code that doesn't use the
feature would require no more than adding one command to your autoconf.ac,
or environment variables.
In the case of exceptions, I have solid experience that supports the
opinions I hold. What I suggested simply works better as a default. And
that is what I was referring to when I suggested you hadn't read
everything
I wrote. I explicitly said that this should be configurable through some
mechanism similar to the one currently used to switch out handlers. I'm
not sure how that might be accomplished, but I suspect someone in the C++
community is capable of finding a means.


I will drop exceptions. Obviously talking about them just gives you an
opportunity not to address the issue of enforcing policies versus
providing mechanisms.


It's your issue, not mine. The only reason I mentioned exceptions is that it
is the only place where I am advocating placing restrictions on what you
can do by default in C++. There are many restrictions placed on your use
of code. That's what type checking is all about. If you are suggesting
that I want to see changes in the way C++ is used in general, yes, that is
correct.
Actually, to a large extent, it's the other way around. I believe the
library management in C++ stinks. If there were a better system - which
I believe is highly doable - I would have far less to complain about
regarding the CPP. The CPP does go against certain principles of
encapsulation that are generally regarded as good in computer science. A
better library system would go a long way toward addressing that as well.
If people aren't #including 15 megs of source in every file, there is
less (virtually no) opportunity for hidden variables entering the
environment.


Maybe, if cpp was even more powerful and more convenient, a superior
library management could be implemented using macros.


I actually have a wishlist item in the KDevelop bug database that suggests
attempting such a thing using #pragma and a strategy of filename, resource
name consonance.


Jul 22 '05 #49

Sam Holden wrote:
On Wed, 01 Sep 2004 03:52:53 -0400,
Steven T. Hatton <su******@setidava.kushan.aa> wrote:
Phlip wrote:
Steven T. Hatton wrote:

And you get threading, unicode, effortless portability, incredibly
smooth refactoring, high-level abstraction with the tools to support it,
great,

Threading is good??
Huh? Even Microsoft eventually figured that out. Yes. Thread support is
crucial for creating any sophisticated application that can be expected
to do more than one thing at a time.


Strangely enough I've been using non-threaded applications that
do more than one thing at a time and are reasonably sophisticated.

Where sophisticated means things like: makes multiple concurrent TCP
connections, encodes and decodes audio sending and receiving it over
UDP, displays GUI and so on - without a thread in sight (well one
thread for the pedants).


OK. Let me be clear about what I really meant by threading. I really
intended concurrent programming with resource locking and synchronization.
Technically, threading means tracing the same executable image with
multiple instruction pointers. Its main advantages are the multiple use
of the same executable image by different 'processes', and the reduced need
for context switching between processes.
I even write them occasionally.

Threading throws away decades of work in creating systems with useful
protected memory spaces for processes. And lets the average programmer
meet all the problems and reinvent (poorly) all the solutions all over
again. Rather than using the solution implemented by the (hopefully
much more experienced and competent in the domain) OS authors.
What you seem to be suggesting is that threads are used in situations where
multiple processes would be better. Am I to also understand that all
concurrency is bad?
Of course there's that vanishingly small percentage of problems that
are best solved with threads, but chances are you, me, and the
next guy aren't working on one of them.


Please clarify what you mean by 'thread'. I suspect we aren't talking about
the same thing.

Jul 22 '05 #50
