
Alternatives to the C++ Standard Library?

Now that I have a better grasp of the scope and capabilities of the C++
Standard Library, I understand that products such as Qt actually provide
much of the same functionality through their own libraries. I'm not sure
if that's a good thing or not. AFAIK, most of Qt is compatible with the
Standard Library. That is, QTL can interoperate with STL, and you can
convert back and forth between std::string and QString, etc.
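
(As a concrete illustration - a minimal sketch using Qt's own conversion
members; QString::fromStdString() and QString::toStdString() are real Qt
API, though their availability depends on Qt being built with STL support:)

    #include <QString>
    #include <string>

    void example() {
        std::string s = "hello";
        QString q = QString::fromStdString(s); // std::string -> QString
        std::string t = q.toStdString();       // QString -> std::string
    }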

Are there any other libraries based on Standard C++, but not on the C++
Standard Library? What might these be? Why do people create these? Are
any significantly different from the C++ Standard Library? Is it good that
people create their own basic libraries instead of using the Standard
Library?
--
If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true.-Bertrand Russell
Jul 23 '05
Panjandrum wrote:
Calum Grant wrote:
But I'm sure these "easier" approaches have some disadvantages. Anyway
which did you have in mind?


We probably agree that a template parameter should be used for the type
of object in the container. Use your imagination for the rest of the
'design space'.


I'm probably just too dense and don't get it at all. Of course, my
problem is already that I think template parameters can and should
be used for other things than object types in containers. Maybe this
limits my imagination already too much. Can you help out the stupid,
like me, and point the blind in the right direction with a hint?
Thank you very much!
--
<mailto:di***********@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.eai-systems.com> - Efficient Artificial Intelligence
Jul 23 '05 #21
Dietmar Kuehl wrote:

/*
Of course, STL is not ideal (in fact, I have
a major problem with the basis STL is built upon: the iterator concept
should have been split into two distinct concepts, i.e. cursors and
property maps)...
*/

And later:

... cursors provide a position and some form
of a "key" and a property map provides values associated with the "key".
However, the "key" can be just an index into a vector or a reference to
the value itself. How the property map accesses a value related to a
key is entirely up to the property map. In particular, you should rather
envision an index lookup than a lookup in a map or a hash, i.e. the
property map access should be considered to be a cheap O(1) access
although it may be more involved.


Hi, Dietmar. All very interesting - it caught my attention partly because I
am in the early stages of designing a template-based sequence analysis
library (sequence as in bioinformatics rather than functional analysis) and
I was thinking of making iterators one of two or three central organizing
ideas. So my question is just: is your master's thesis still available
on-line? I went here:

http://www.boost.org/libs/graph/doc/publications.html

And found a link to "Design Pattern for the Implementation of Graph
Algorithms", but, alas, the link is broken.

Just as an aside - the inputs, and often the outputs, of sequence analysis
are sequences - strings - but intermediate data structures like Suffix
Trees have nodes and edges with properties that vary by algorithm. So the
property map idea looks interesting.

Regards, Michael

Jul 23 '05 #22
Michael Olea wrote:
Hi, Dietmar. All very interesting - it caught my attention partly because I
am in the early stages of designing a template-based sequence analysis
library (sequence as in bioinformatics rather than functional analysis)
and I was thinking of making iterators one of two or three central
organizing ideas.
Which is quite reasonable although, as stated, you will indeed want to
split the iterator concept into two distinct concepts: one for moving
(cursors) and one for accessing data (property maps).
So my question is just: is your master's thesis still
available on-line? I went here:

http://www.boost.org/libs/graph/doc/publications.html

And found a link to "Design Pattern for the Implementation of Graph
Algorithms", but, alas, the link is broken.
Yes, I know, but I don't think I can fix the link. My thesis is still
available at
<http://www.dietmar-kuehl.de/generic-graph-algorithms.pdf>. However,
there are a few things to note:

- The Boost Graph Library (BGL) by Jeremy Siek et al. is based on
very similar ideas but is completely available as a library you
can download and there is a book describing it written by Jeremy.
I'd recommend that you look at this instead.
- Since I wrote the thesis, some things evolved and the important
concepts have undergone some changes, not to mention renaming
(due to discussions with Jeremy and others). In particular, what
I originally called "Data Accessor" is now called "Property Map".
Also, to distinguish concepts, I still used the term iterator for
the traversal concept rather than using "Cursor".
- The currently favored concept for property maps looks quite
different, partially due to quite recent discussions with Dave
Abrahams and Doug Gregor. In short, the concept looks like this
("pm" is a property map; "c" is cursor; "v" is a value as obtained
by combining the type of "*c" with the type of "pm"):

- reading a value: v = pm(*c); // requires readable PM
- writing a value: pm(*c, v); // requires writable PM
- lvalue access: type& v = pm(*c); // requires lvalue PM

For more information on the current state of property maps, have
a look at
<http://boost-consulting.com/projects/mtl4/libs/sequence/doc/html/cursors_and_property_maps.html>
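
To make this concrete, here is a minimal sketch of a property map along
these lines (the names are illustrative, not taken from Boost or MTL):
the cursor yields a key - here simply an index - and the property map
maps that key to a value:

    #include <cstddef>
    #include <vector>

    // Hypothetical readable/writable property map over a std::vector;
    // the "key" is an index and access is a cheap O(1) lookup.
    struct vector_pm {
        std::vector<double>* store;

        // reading a value: v = pm(key)
        double operator()(std::size_t key) const { return (*store)[key]; }

        // writing a value: pm(key, v)
        void operator()(std::size_t key, double v) { (*store)[key] = v; }
    };

An algorithm written against this concept never needs to know whether
the key is an index, a node pointer, or something else entirely.
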
Just as an aside - the inputs, and often the outputs, of sequence analysis
are sequences - strings - but intermediate data structures like Suffix
Trees have nodes and edges with properties that vary by algorithm. So the
property map idea looks interesting.


The original idea of property maps (then called data accessors)
arose in the context where there are several problems related to
properties associated with nodes and edges:

- The set of used properties vary widely between different algorithms
and depending on the used algorithm you want to provide them
somehow.
- Different algorithms think of the same property in different terms.
For example, what is a length to one algorithm (e.g. a path find
algorithm) could be a capacity to another algorithm (e.g. a flow
algorithm). Still, the algorithms want to communicate with each
other. This was actually the primary reason why I introduced
property maps for the graph abstraction.
- The same properties are often differently represented and
interrelated. For example, in flow networks each directed edge
often has a total capacity, a current flow, and a free capacity.
Obviously, "free capacity == capacity - current flow" hold, i.e.
it is sufficient to represent only two of the three attributes
and depending on the context you don't know which is the best
combination.

From your description it sounds as if there are similar issues for
the analysis of sequences and I guess that property maps can be a
viable way to address these problems.
--
<mailto:di***********@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.eai-systems.com> - Efficient Artificial Intelligence
Jul 23 '05 #23
Steven T. Hatton wrote:
Dietmar Kuehl wrote:
... but then, the C++ committee got hurt at the [b]leading edge before
(although STL is a huge success, there were also many problems introduced
by standardizing features nobody had experience with; although this does
not apply that badly to STL itself but it clearly applies to features
which were necessary to make STL work).
Yes, but years later, C++ programmers are usually passionate advocates of
the STL.


Of course, they are. As far as I can tell, the majority of the C++
committee members (if not all of them) think that the introduction
of STL was the right thing to do. Still, it was and is a problematic
path in many areas. The STL is riddled with many problems, too, and
the specification is actually quite imprecise in many areas
- something which was only learned with the experience of using the
STL. Although many think that the STL was the right thing to happen
at that time, I have the impression that many also think that such
a process should not be repeated with the next feature. There is an
advantage of operating at the leading edge but lack of experience
can also be quite harmful if the result is cast into stone in the
form of an international industry standard.
STL is mostly about interoperability.


Many people will claim performance is also one of the primary advantages of
compile time polymorphism.


A certain form of performance was indeed a design rule for how the
STL works (actually, the rules were quite strict in that they kind
of required that the abstraction penalty is non-existent or at
least rather low). However, this is kind of a restriction on the
design space. The primary goal was to find an abstraction which
makes all algorithms applicable to all reasonable data structures
while writing each algorithm a small number of times. Of course,
"algorithm" is actually a misnomer in this context: something like
"solver" or "operation" would be better since the different
implementations of an "algorithm" are actually different algorithms
(in the conventional meaning of the term) solving the same problem
while taking advantage of different properties in the input (i.e.
they are based on different requirements; e.g. 'distance(beg, end)'
actually counts the number of elements for all iterator categories
unless the iterators are random access iterators in which case the
function just uses a subtraction: these are two different
algorithms solving the same problem).

Minimizing the abstraction penalty actually excludes the use of
run-time polymorphism for the general concept because run-time
polymorphism invariably has some abstraction penalty, e.g. because
the resulting blocks fed to the optimizer are chopped up at each
late-bound function call. Of course, where reasonable, run-time
polymorphism can still be used. On the other hand, the concepts
used by STL are rather small operations and the ratio between
computations done by a single variation point and a possible
function call overhead is typically rather bad, i.e. when using
STL concepts you are better off not also using run-time
polymorphism. When using run-time polymorphism you want your
variation points to do more computations, i.e. you would fold
multiple operations into one call. This can be witnessed by the
radically different strategies used in STL and sequence accesses
in "purely object-oriented languages" (like Java or C#). To access
a single element in a sequence STL [normally] uses three operations:

- a check whether the current position indicates that processing
has completed ("current == end");
- access to the current element ("*current")
- moving to the next element ("++current")

In Java and C# these three operations are folded into one operation
("Next()") which returns a null value if there is no next position
and otherwise returns the current value and moves the iterator to
the next position.
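
The contrast can be sketched as follows (the Enumerator type below is a
hypothetical stand-in for the Java/C# style, not an actual API):

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};

        // STL style: test, access, and advance are three separate
        // operations, each independently replaceable.
        for (auto current = v.begin(), end = v.end(); current != end; ++current)
            std::cout << *current << '\n';

        // Folded style: one call does all three and signals the end
        // with a null pointer.
        struct Enumerator {
            std::vector<int>::iterator cur, end;
            int* Next() { return cur != end ? &*cur++ : nullptr; }
        } e{v.begin(), v.end()};
        while (int* p = e.Next())
            std::cout << *p << '\n';
    }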

Now, which is better? If you don't pay for calling multiple
functions, the former is more desirable because it offers more
flexibility and allows for smaller points of variability (although
it is still too coarse-grained...). This approach also plays well
with having a hierarchy of concepts which may take advantage of
special features although only a minimalist requirement is made
(which in turn caters for some very powerful optimizations because
this can take advantage of special structures in the underlying
data structure).

On the other hand, if each operation incurs a function call or at
least chops up your optimization blocks, the extra benefit is
clearly dwarfed by additional costs. Thus, in "purely
object-oriented languages" (i.e. in languages which incur arbitrary
restrictions without any gain but with, IMO, huge losses)
an infe^H^H^H^H^H^H different approach is used which reduces the
costs (the pros) and the possibilities (the cons).
Thus it is only consistent to
provide facilities which allow interoperability with object oriented
features.


I need to get around to running some tests on algorithms using mem_fun
versus using for loops and static invocation.


You might want to also have a look at TR1's "mem_fn" which is
also available from Boost.
The hardest thing for me to deal with when it comes to generic programming
has been the fact that the concept is often not clear from the context.
Yes, this is indeed a recognized problem. There are proposals to
add some form of explicit concepts to the C++ core language. The
major discussion is currently between two competing views on how
the details should look. The basic idea is to somehow describe a
"concept" which is essentially a set of operations used by
algorithms and provided by the entities passed as arguments to
these algorithms. The biggest argument is whether the arguments
have to specify more or less explicitly that they implement a
concept or if this should be implicitly determined by the compiler
based on the presence of operations for the arguments. You might
want to search the papers of the last few meetings if you are
interested in details.
Apparently, you haven't understood what STL is about ;-) There are just
a few kinds of entities which are completely described by concepts with
the central concept (group) being the iterators.


I was considering the iterators to be integral to the containers.


It is necessary that a container is accessible via iterators to
be used with STL algorithms. However, iterators need not at all be
related with any container. For example, an iterator over the even
integers could compute its current state and operations without the
need of any container.
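
A minimal sketch of such a container-free iterator (a hypothetical type,
written against the standard input iterator requirements):

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <iterator>

    // An input iterator over the even integers: its "position" is just
    // a number it computes itself; no container is involved.
    class even_iterator {
        int value_;
    public:
        typedef std::input_iterator_tag iterator_category;
        typedef int value_type;
        typedef std::ptrdiff_t difference_type;
        typedef const int* pointer;
        typedef const int& reference;

        explicit even_iterator(int start) : value_(start) {}
        int operator*() const { return value_; }
        even_iterator& operator++() { value_ += 2; return *this; }
        even_iterator operator++(int) { even_iterator t(*this); ++*this; return t; }
        bool operator==(even_iterator o) const { return value_ == o.value_; }
        bool operator!=(even_iterator o) const { return value_ != o.value_; }
    };

    int main() {
        // Prints 0 2 4 ... 18 - a sequence without any container.
        std::copy(even_iterator(0), even_iterator(20),
                  std::ostream_iterator<int>(std::cout, " "));
    }
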
Actually, STL
essentially revolves around iterators: they are used by the algorithms
and provided by the containers which decouples the algorithms from the
containers and allows independent realization of algorithms and
containers. BTW, containers are entirely irrelevant to STL. STL
operates on sequences, not containers. There is a huge conceptual
difference between these two.


I almost used the word "collection" because, to me, the STL "containers"
really aren't containers. This is especially true of vector and deque. I
would not even use the term "sequence". To me, "sequence" suggests the
elements under discussion have some inherent ordering.


They have, indeed! In fact, the iteration over a sequence just
shows how the elements are ordered [currently]. However, the
element order may depend on other aspects than the element values.
For example, it may depend on the insertion order instead. I think,
this pretty accurately reflects typical definitions of "sequence".
See, for example, <http://en.wikipedia.org/wiki/Sequence>.
Actually, I didn't understand what you meant.
Well, I wasn't really precise at all. See the subthread about
cursors and property maps if you are interested in more details
on property maps.
I don't think it is but I'm not sure. I'd consider it more important
to work on some form of lambda expressions because this would remove
the need to name these types in the first place! The above looks
cunningly as if it were used as some part of a loop. There should be
no necessity to write a loop to process the elements in a sequence.


I was actually thinking in more general terms of being able to access the
typedefs inside a class through an instance.


I realized this. However, at least in the context of STL this need
should not arise at all...
I'm pretty sure the way lambda expressions will work is similar to the way
Mathematica works. That is significantly different than the way
traditional C++ works. Functional programming is not an intuitive art for
me. I've done some fun stuff with it, but it usually requires more
careful thought than writing out a procedural block of code.


Well, I consider the ability to inject portions of code into a
templatized function taking operations as arguments as an
application for lambda functions. It is a relatively primitive
use but it would drastically ease the use of STL algorithms. It is
related to functional programming in this use, although real
lambda expressions would probably allow at least some forms of
functional programming.
--
<mailto:di***********@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.eai-systems.com> - Efficient Artificial Intelligence
Jul 23 '05 #24
Dietmar Kuehl wrote:
As far as I can tell, the majority of the C++
committee members (if not all of them) think that the introduction
of STL was the right thing to do. Still, it was and is a problematic
path in many areas. The STL is riddled with many problems, too, and
the specification is actually quite imprecise in many areas
- something which was only learned with the experience of using the
STL. Although many think that the STL was the right thing to happen
at that time, I have the impression that many also think that such
a process should not be repeated with the next feature. There is an
advantage of operating at the leading edge but lack of experience
can also be quite harmful if the result is cast into stone in the
form of an international industry standard.
Read between the lines: 'We know that making STL the C++ Standard
library was a huge mistake. But we'll never admit it.'
STL is mostly about interoperability.


Many people will claim performance is also one of the primary advantages of
compile time polymorphism.


Minimizing the abstraction penalty actually excludes the use of
run-time polymorphism for the general concept because run-time
polymorphism invariably has some abstraction penalty, e.g. because
the resulting blocks fed to the optimizer are chopped up at each
late-bound function call.


Haha, the myth of the 'overhead' of a virtual function call has been
repeated for 20 years. It's usually spread by the same people who
recommend std::vector<std::string> or boost::shared_ptr (which even
unnecessarily allocate a lot of memory on the heap).
This can be witnessed by the
radically different strategies used in STL and sequence accesses
in "purely object-oriented languages" (like Java or C#). To access
a single element in a sequence STL [normally] uses three operations:

- a check whether the current position indicates that processing
has completed ("current == end");
- access to the current element ("*current")
- moving to the next element ("++current")

In Java and C# these three operations are folded into one operation
("Next()") which returns a null value if there is no next position
and otherwise returns the current value and moves the iterator to
the next position.

Now, which is better?
Wrong question. I'm not fond of Java collections and iterators. But the
right question is: 'Which is more usable?'. You find many STL-related
questions in C++ newsgroups but only few questions related to
collections and iterators in Java groups.
It is necessary that a container is accessible via iterators to
be used with STL algorithms. However, iterators need not at all be
related with any container. For example, an iterator over the even
integers could compute its current state and operations without the
need of any container.
Means, iterator and container are overlapping concepts in STL. You
could drop containers.
Well, I consider the ability to inject portions of code into a
templatized function taking operations as arguments as an
application for lambda functions.
... and a way to further complicate the C++ language to satisfy the
request of a few academics. Sorting and searching are the only
'algorithms' one needs. The rest is better done with iterators and an
'old-fashioned' for-loop.
It is a relatively primitive
use but it would drastically ease the use of STL algorithms. It is
related to functional programming in this use, although real
lambda expressions would probably allow at least some forms of
functional programming.


Take your time! Don't finish the next C++ Standard before 2034 after
having extended and 'enhanced' the language beyond recognizability and,
of course, after having included the whole Boost (5 times the size of
today) into the next Standard library. In the meantime let us work with
the 'manageable mess' called C++98 and reach a secure place before the
next 'big thing' in C++ happens.

Jul 23 '05 #25
Panjandrum wrote:
Dietmar Kuehl wrote:
As far as I can tell, the majority of the C++
committee members (if not all of them) think that the introduction
of STL was the right thing to do. Still, it was and is a problematic
path in many areas. The STL is riddled with many problems, too, and
the specification is actually quite imprecise in many areas
- something which was only learned with the experience of using the
STL. Although many think that the STL was the right thing to happen
at that time, I have the impression that many also think that such
a process should not be repeated with the next feature. There is an
advantage of operating at the leading edge but lack of experience
can also be quite harmful if the result is cast into stone in the
form of an international industry standard.


Read between the lines: 'We know that making STL the C++ Standard
library was a huge mistake. But we'll never admit it.'


I don't need to read between the lines because I know what I want
to express: Introducing the STL was the right thing to do. The
process by which it was done was suboptimal, though. Both things are
recognized and stated. I can't speak for other committee members,
of course, but my impression is that the majority of the committee
members agrees with this view. However, I'm 100% positive that the majority
of the committee does *NOT* consider the introduction of STL to be a
mistake.
Minimizing the abstraction penalty actually excludes the use of
run-time polymorphism for the general concept because run-time
polymorphism invariably has some abstraction penalty, e.g. because
the resulting blocks fed to the optimizer are chopped up at each
late-bound function call.


Haha, the myth of the 'overhead' of a virtual function call has been
repeated for 20 years. It's usually spread by the same people who
recommend std::vector<std::string> or boost::shared_ptr (which even
unnecessarily allocate a lot of memory on the heap).


Note that I didn't even mention any "'overhead' of a virtual function
call"! This is indeed neglectable, especially with modern processors.
From my experience the performance loss derives from reducing the
basic blocks for the optimizer which is what I said above. Of
course, virtual functions are just one reason why the basic blocks
are reduced.
Now, which is better?


Wrong question. I'm not fond of Java collections and iterators. But the
right question is: 'Which is more usable?'. You find many STL-related
questions in C++ newsgroups but only few questions related to
collections and iterators in Java groups.


More usable? This is quite obvious: since neither Java nor .Net ships
with a reasonable collection of algorithms at all, I wouldn't even
consider their approach to be some form of competition. Of course,
nobody asks how to use Java enumerators with algorithms. However, this
is not because it is easier to use with their algorithms but it is the
lack of corresponding algorithms in the first place! Also, enumerators
indeed have a simpler interface but also less capabilities. Nobody
would ask how to move three items back with an enumerator because it
is impossible anyway.

That said, it is recognized, too, that it would be desirable to have
a simpler interface to algorithms which is traded for less capabilities.
For example, it is desirable to have an algorithm "sort()" which takes
a container, i.e. getting rid of the need to obtain the iterators from
the container. Of course, the sort function taking a container would
itself just delegate to the sort function taking iterators. One reason
why such a mechanism is not part of the STL is that no one had any
experience with using STL at the time it was introduced and thus some
glaringly obvious things are missing. This is exactly the procedural
problem I mentioned before.
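
A minimal sketch of such a container-level wrapper (placed in its own
namespace here to avoid clashing with std::sort through argument-dependent
lookup; this is not standard library API):

    #include <algorithm>

    namespace ext {
        // Convenience overload: obtains the iterators itself and
        // delegates to the iterator-based algorithm.
        template <typename Container>
        void sort(Container& c) {
            std::sort(c.begin(), c.end());
        }
    }

    // usage: std::vector<int> v = ...; ext::sort(v);
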
It is necessary that a container is accessible via iterators to
be used with STL algorithms. However, iterators need not at all be
related with any container. For example, an iterator over the even
integers could compute its current state and operations without the
need of any container.


Means, iterator and container are overlapping concepts in STL. You
could drop containers.


No, they are not. Actually, they have nothing in common. The only
relation between the iterator and container concepts in STL is the
fact that containers provide some functions using iterators. In fact,
iterators and containers are related the same way enumerators and
containers are related in Java or .Net. Like iterators, enumerators
are also not necessarily related to a container but a container is
required to provide enumerators.
Well, I consider the ability to inject portions of code into a
templatized function taking operations as arguments as an
application for lambda functions.


... and a way to further complicate the C++ language to satisfy the
request of a few academics. Sorting and searching are the only
'algorithms' one needs. The rest is better done with iterators and an
'old-fashioned' for-loop.


Really? For resource allocations it is not that unusual to optimize
a flow network (max flow, min cost flow, etc.) and you just throw
the corresponding algorithms together on the fly by using loops?
That's pretty impressive! ... or, maybe, you have not been exposed
to non-trivial problems which I would consider more likely. Also,
even if you just use a trivial algorithm like copying data, it is
possible to improve the performance by using an algorithm which
takes advantage of the underlying data structures. Of course, this
is only important when working on performance critical software.
From your answer I would guess that you also were never exposed to
this area. Writing trivial, non-performance critical software is
quite easy to do, BTW. If you are creating non-trivial and
performance critical stuff, you will welcome the flexibility and
performance STL provides - assuming you bother to understand what
it is about which you obviously haven't yet done.
--
<mailto:di***********@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.eai-systems.com> - Efficient Artificial Intelligence
Jul 23 '05 #26
Panjandrum wrote:
Dietmar Kuehl wrote:
As far as I can tell, the majority of the C++
committee members (if not all of them) think that the introduction
of STL was the right thing to do. Still, it was and is a problematic
path in many areas. The STL is riddled with many problems, too, and
the specification is actually quite imprecise in many areas
- something which was only learned with the experience of using the
STL. Although many think that the STL was the right thing to happen
at that time, I have the impression that many also think that such
a process should not be repeated with the next feature. There is an
advantage of operating at the leading edge but lack of experience
can also be quite harmful if the result is cast into stone in the
form of an international industry standard.
Read between the lines: 'We know that making STL the C++ Standard
library was a huge mistake. But we'll never admit it.'


I don't believe that is what Dietmar meant, and I don't believe there are
many who would agree with your assessment. People such as Stroustrup, and
Josuttis are very candid in admitting there are shortcomings in C++. There
is room for improvement in both the core language, and the library. There
are also some things that should have been done differently, but are not
likely to change because they are now part of the foundation.

One big advantage to the STL is that it is highspeed and low overhead. It's
really not /that/ hard to understand, and it is not a requirement. Using it
requires a different kind of thinking than using comparable Java
constructs, but when you get the hang of it, you can write some very
concise and powerful code.

C++ and the Standard Library do not strive to be a complete SDK. They are
intended to be the foundation upon which a comprehensive SDK can be built.
There are many things I don't like about C++, and, in particular the lack
of standardization in practices such as file naming, and directory
organization. For the most part, I'd like to see the CPP get deep-sixed.

I would like to hear the opinion of anybody reading this who has extensive
experience working with Java as to whether Java provides a more coherent
means of locating program entities. In Java that basically means classes
and packages.

But I really don't have any major complaints about the Standard Library.
There are a lot of minor problems. For example, I would really like to see
the I/O flag mechanism wrapped up in classes, or even little namespaces. I
would like to see the entire library segmented into namespaces as well.
>> STL is mostly about interoperability.
>
> Many people will claim performance is also one of the primary advantages
> of compile time polymorphism.


Minimizing the abstraction penalty actually excludes the use of
run-time polymorphism for the general concept because run-time
polymorphism invariably has some abstraction penalty, e.g. because
the resulting blocks fed to the optimizer are chopped up at each
late-bound function call.


Haha, the myth of the 'overhead' of a virtual function call has been
repeated for 20 years. It's usually spread by the same people who
recommend std::vector<std::string> or boost::shared_ptr (which even
unnecessarily allocate a lot of memory on the heap).


I've read the test numbers for some cases. I don't know if they've changed,
or how much they've changed, but I've seen numbers ranging from twice to 8
times the time cost for a virtual function call compared to a static call.
Often this is negligible, and not worth taking into consideration. For
example, if it is a function that executes once when a window is created,
or a program is loaded, it probably doesn't make an observable difference.
OTOH, if you are trying to compute a large spreadsheet, or process complex
3D graphics, that time can start to really matter. And these are the areas
to which the STL is best suited.
Wrong question. I'm not fond of Java collections and iterators. But the
right question is: 'Which is more usable?'. You find many STL-related
questions in C++ newsgroups but only few questions related to
collections and iterators in Java groups.
I agree that Java-style iterators have some modest syntactic advantages
over C++ style iterators. I really hope the new for() semantics are
approved for C++0X. I know a lot of people have the hots for Boost.Lambda,
and see the new for() semantics as a challenge to its attractiveness. All
I can say is that the foreach construct (the new for() semantics) is
intuitively obvious to me, and will not take more than a few seconds to
learn.

There may be real expressive power here, but C++ already takes about a solid
year of very hard work to gain a basic understanding of the core language
and library. I would have to spend a fair amount of time playing with this
before I could use it well:
http://www.boost.org/doc/html/lambda/le_in_details.html

And even then, I wonder if the code will be as understandable as more
traditional code using a foreach construct.
It is necessary that a container is accessible via iterators to
be used with STL algorithms. However, iterators need not at all be
related with any container. For example, an iterator over the even
integers could compute its current state and operations without the
need of any container.


Means, iterator and container are overlapping concepts in STL. You
could drop containers.


No, containers provide more than what is associated with their corresponding
iterators. They provide allocation management, some safe access through
at(), information about their state such as size(), a means of
communicating standard structure and behavior, automatic sorting (in some
cases), and probably other advantages I can't think of right now.
... and a way to further complicate the C++ language to satisfy the
request of a few academics. Sorting and searching are the only
'algorithms' one needs. The rest is better done with iterators and an
'old-fashioned' for-loop.


Again, I disagree. Though most of the 53 (IIRC) STL algorithms are for
searching and sorting, set_union, set_intersection, set_difference,
generate, unique, fill, replace, count, transform, copy, merge, the heap
algorithms etc. are all potentially useful.
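
For instance, set_intersection composes directly with the standard
containers (a small self-contained example; note that both input ranges
must be sorted):

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <vector>

    int main() {
        std::vector<int> a{1, 2, 3, 4, 5};
        std::vector<int> b{3, 4, 5, 6, 7};
        std::vector<int> out;

        // Writes the common elements (3 4 5) into out.
        std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                              std::back_inserter(out));
        std::copy(out.begin(), out.end(),
                  std::ostream_iterator<int>(std::cout, " "));
    }
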
It is a relatively primitive
use but it would drastically ease the use of STL algorithms. It is
related to functional programming in this use, although real
lambda expressions would probably allow at least some forms of
functional programming.


Take your time! Don't finish the next C++ Standard before 2034 after
having extended and 'enhanced' the language beyond recognizability and,
of course, after having included the whole Boost (5 times the size of
today) into the next Standard library. In the meantime let us work with
the 'manageable mess' called C++98 and reach a secure place before the
next 'big thing' in C++ happens.


The changes I've seen which look like they're on the shortlist don't look
too bad to me. I'm pushing hard for your favorite item, reference counted
object support. One item that I really like, but which doesn't seem to
have a lot of support is this:

http://www.open-std.org/jtc1/sc22/wg...2004/n1671.pdf

It may seem completely esoteric to a lot of people, but to me, it addresses
a specific problem I've encountered. That is working with reference
counted function objects.
--
If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true.-Bertrand Russell
Jul 23 '05 #27
Steven T. Hatton wrote:
The changes I've seen which look like they're on the shortlist don't look
too bad to me. I'm pushing hard for your favorite item, reference counted
object support. One item that I really like, but which doesn't seem to
have a lot of support is this:

http://www.open-std.org/jtc1/sc22/wg...2004/n1671.pdf
[for those who don't look: this paper is about overloading operators
"." and ".*"]

My understanding is that it is generally seen as an unfortunate
omission much like template typedefs: although it was deliberately
not included the last time, there seems to be general consensus that
it should have been supported now that we have more experience with
various aspects of the language. The tricky part is probably getting
the semantics right...
It may seem completely esoteric to a lot of people, but to me, it
addresses a specific problem I've encountered. That is working with
reference counted function objects.


More generally, it provides the possibility to have transparent
proxy classes which is quite important in many cases.
--
<mailto:di***********@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.eai-systems.com> - Efficient Artificial Intelligence
Jul 23 '05 #28
Dietmar Kuehl wrote:
Steven T. Hatton wrote:
Yes, but years later, C++ programmers are usually passionate advocates of
the STL.
Of course, they are. As far as I can tell, the majority of the C++
committee members (if not all of them) think that the introduction
of STL was the right thing to do. Still, it was and is a problematic
path in many areas. The STL is riddled with many problems, too, and
the specification is actually quite imprecise in many areas
- something which was only learned with the experience of using the
STL.


IMO, the wording of the Standard is too lawyerly. I really believe it could
be translated into English if someone who understands the intended meaning
were motivated to do so.
Although many think that the STL was the right thing to happen
at that time, I have the impression that many also think that such
a process should not be repeated with the next feature. There is an
advantage of operating at the leading edge but lack of experience
can also be quite harmful if the result is cast into stone in the
form of an international industry standard.
Understood. That's why I favor a very limited, orthogonal collection of
features that really need to be standardized at the foundational level.
Some of the essentials of thread support fall into that category, and the
new move semantics and rvalue references may also be valuable additions.
Though I have not studied these closely. The unique_ptr<> looked pretty
handy when I saw it.
STL is mostly about interoperability.


Many people will claim performance is also one of the primary advantages
of compile time polymorphism.


A certain form of performance was indeed a design rule for how the
STL works (actually, the rules were quite strict in that they kind
of required that the abstraction penalty is non-existent or at
least rather low). However, this is kind of a restriction on the
design space.


I believe it may be the strongest justification for having the 'T' in STL.
The primary goal was to find an abstraction which
makes all algorithms applicable to all reasonable data structures
while writing each algorithm a small number of times.
There are some algorithms that are not applicable to associative
containers because they rearrange the elements, and would therefore break the
sorting order.
Of course,
"algorithm" is actually a misnomer in this context: something like
"solver" or "operation" would be better since the different
implementations of an "algorithm" are actually different algorithms
(in the conventional meaning of the term) solving the same problem
while taking advantage of different properties in the input (i.e.
they are based on different requirements; e.g. 'distance(beg, end)'
actually counts the number of elements for all iterator categories
unless the iterators are random access iterators in which case the
function just uses a subtraction: these are two different
algorithms solving the same problem).
I'd go with "operations"; "solver" sounds like something out of the
marketing department. :)
Minimizing the abstraction penalty actually excludes the use of
run-time polymorphism for the general concept because run-time
polymorphism invariably has some abstraction penalty, e.g. because
the resulting blocks fed to the optimizer are chopped up at each
late-bound function call. Of course, where reasonable, run-time
polymorphism can still be used. On the other hand, the concepts
used by STL are rather small operations and the ratio between
computations done by a single variation point
I'm not sure what you mean by variation point.
and a possible
function call overhead is typically rather bad, i.e. when using
STL concepts you are better off not also using run-time
polymorphism.
This is why I have serious questions about mem_fun. It is typically
implemented using a pointer to member function, which according to what I
was able to glean from thumbing through _Inside_The_C++_Object_Model_ has
fairly bad performance.
When using run-time polymorphism you want your
variation points to do more computations, i.e. you would fold
multiple operations into one call. This can be witnessed by the
radically different strategies used in STL and sequence accesses
in "purely object-oriented languages" (like Java or C#). To access
a single element in a sequence STL [normally] uses three operations:

- a check whether the current position indicates that processing
has completed ("current == end");
- access to the current element ("*current")
- moving to the next element ("++current")

In Java and C# these three operations are folded into one operation
("Next()") which returns a null value if there is no next position
and otherwise returns the current value and moves the iterator to
the next position.
But this is quite easy to implement in C++, I've even done it. The naive
approach uses postfix operators, so may be slightly less efficient than the
STL form, but I suspect the difference is negligible since I maintain an
internal temp object to return as a reference.

Now *_this_* is elegant:

http://www.open-std.org/JTC1/SC22/WG...005/n1796.html

for( int i : vec ) std::cout << i;
Now, which is better? If you don't pay for calling multiple
functions, the former is more desirable because it offers more
flexibility and allows for smaller points of variability (although
it is still too coarse-grained...).
With Java you can call toArray() on a collection, and get a random access
interface to iterate over with a for() loop using the same sequence of
checks you've stated above.
This approach also plays well
with having a hierarchy of concepts which may take advantage of
special features although only a minimalist requirement is made
(which in turn caters for some very powerful optimizations because
this can take advantage of special structures in the underlying
data structure).
I don't follow. What difference does it make whether I type the following,
or I leave it to the compiler to 'type' it for me:

using std::begin; // enable ADL
using std::end;   // ditto
for( auto __begin = begin(vec), __end = end(vec);
     __begin != __end; ++__begin )
{
    int i = *__begin;
    std::cout << i;
}?

On the other hand, if each operation incurs a function call or at
least chops up your optimization blocks, the extra benefit is
clearly dwarfed by additional costs. Thus, in "purely
object-oriented languages" (i.e. in languages which incur arbitrary
restrictions without any gain but with, IMO, huge losses)
an infe^H^H^H^H^H^H different approach is used which reduces the
costs (the pros) and the possibilities (the cons).


Again, if you are intending Java, this will not necessarily apply. Java
does have some low-level optimization for its array object which is not
implemented in the same way as its other collections.
Thus it is only consistent to
provide facilities which allow interoperability with object oriented
features.


I need to get around to running some tests on algorithms using mem_fun
versus using for loops and static invocation.


You might want to also have a look at TR1's "mem_fn" which is
also available from Boost.


I'd have to study it a bit more. I don't believe it addresses the
performance issue I suspect exists with std::mem_fun; If I understand
correctly, it is something of a combination of std::mem_fun,
std::mem_fun_ref, and an extension (not really a generalization) of
std::binary_function. It seems to me that having overloaded operator.()
and operator.*() would be more useful for the kinds of problems I've
recently encountered, and would seem to be more general:

http://www.open-std.org/jtc1/sc22/wg...2004/n1671.pdf
The hardest thing for me to deal with when it comes to generic
programming has been the fact that the concept is often not clear from
the context.


Yes, this is indeed a recognized problem. There are proposals to
add some form of explicit concepts to the C++ core language. The
major discussion is currently between two competing views on how
the details should look. The basic idea is to somehow describe a
"concept" which is essentially a set of operations used by
algorithms and provided by the entities passed as arguments to
these algorithms. The biggest argument is whether the arguments
have to specify more or less explicitly that they implement a
concept or if this should be implicitly determined by the compiler
based on the presence of operations for the arguments. You might
want to search the papers of the last few meetings if you are
interested in details.


So now we have meta-types to deal with the type checking of our "pure
abstractions". At some point people are going to realize the meta-types
can be grouped into useful categories, and will want to check whether they
fit into these categories. In that event we will have meta-meta-types. There
are two ways of dealing with Russell's paradox: accept that languages with
self-reference allow for the formulation of self-contradictory statements, or
insist that sets cannot contain other sets, only meta-sets can contain
sets, and only meta-meta-sets can contain sets...
Apparently, you haven't understood what STL is about ;-) There are just
a few kinds of entities which are completely described by concepts with
the central concept (group) being the iterators.


I was considering the iterators to be integral to the containers.


It is necessary that a container is accessible via iterators to
be used with STL algorithms. However, iterators need not at all be
related with any container. For example, an iterator over the even
integers could compute its current state and operations without the
need of any container.


Agreed, but that isn't very useful in understanding what's in the STL, and
how it works in general. Your example seems like the example of a function
object which is an example of a useful class type that has no data members
or state. I argue that the fundamental concept of OO is having user
defined types containing data and having functions bound to these types to
operate on the data. When I presented that argument, it was challenged by
using the example of a function object. My response is that a function
object is a degenerate case of the general construct I was describing.

I almost used the word "collection" because, to me, the STL "containers"
really aren't containers. This is especially true of vector and deque. I
would not even use the term "sequence". To me, "sequence" suggests the
elements under discussion have some inherent ordering.


They have, indeed! In fact, the iteration over a sequence just
shows how the elements are ordered [currently]. However, the
element order may depend on other aspects than the element values.
For example, it may depend on the insertion order instead. I think,
this pretty accurately reflects typical definitions of "sequence".
See, for example, <http://en.wikipedia.org/wiki/Sequence>.


But you can leave the collection unchanged, and iterate over it in a completely
different order. For example, you could create an iterator that traverses
a binary tree using depth-first traversal, and another using breadth-first
traversal. In this case we do have something of a container, but the order
of traversal is external to it.
I was actually thinking in more general terms of being able to access the
typedefs inside a class through an instance.


I realized this. However, at least in the context of STL this need
should not arise at all...


Are you suggesting I should never have to explicitly instantiate iterators?
I'm pretty sure the way lambda expressions will work is similar to the
way
Mathematica works. That is significantly different than the way
traditional C++ works. Functional programming is not an intuitive art
for
me. I've done some fun stuff with it, but it usually requires more
careful thought than writing out a procedural block of code.


Well, I consider the ability to inject portions of code into a
templatized function taking operations as arguments as an
application for lambda functions. It is a relatively primitive
use but it would drastically ease the use of STL algorithms. It is
related to functional programming in this use, although real
lambda expressions would probably allow at least some forms of
functional programming.


My problem with Boost.Lambda is that it introduces additional syntax into
the language, and doesn't appear to provide much in the way of useful
additional functionality. It seems the entire effort is to find a way of
shoving more complex functions into the third parameter of std::for_each.
Perhaps I'm wrong about this, but if I had to choose between for( int i :
vec ) std::cout << i; and Boost.Lambda for inclusion in C++0X, I'd go with
the foreach construct.
--
If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true.-Bertrand Russell
Jul 23 '05 #29
Steven T. Hatton wrote:
Dietmar Kuehl wrote:
IMO, the wording of the Standard is too lawyerly. I really believe it
could be translated into English if someone who understands the intended
meaning were motivated to do so.
The standard is part of a contract between the user of a C++ tool (e.g.
a compiler) and the implementer of said tool. Contracts tend to be
lawyerly. The problem is that the two parties agreeing on the contract
have radically different views of its content and the specification has
to cater for both. At the same time it should be minimalist, consistent,
and complete. I doubt that translating it into English would be a useful
enterprise, not to mention a futile one (the real standard will stay in
more or less the form it currently is).
A certain form of performance was indeed a design rule for how the
STL works (actually, the rules were quite strict in that they kind
of required that the abstraction penalty is non-existent or at
least rather low). However, this is kind of a restriction on the
design space.


I believe it may be the strongest justification for having the 'T' in STL.


Well, the strongest justification for using compile-time polymorphism
is that this allows the search for the best algorithm. Also, templates
provide a mechanism which allows dispatch based on multiple participants,
something only few systems support with run-time polymorphism. In any
case, the performance benefit is beyond merely avoiding a few virtual
functions: the fact that e.g. the result type of an operation is
statically known but can still vary with different instantiations is
rather powerful.
The primary goal was to find an abstraction which
makes all algorithms applicable to all reasonable data structures
while writing each algorithm a small number of times.


There are some algorithms that are not applicable to associative
containers because they rearrange the elements, and would therefore break
the sorting order.


Of course, but you would not expect them to be applicable anyway! This
is what the term "reasonable" refers to in the above statement: you
cannot apply all sequence algorithms to all sequences. E.g. you cannot
sort a sequence exposed by input iterators. In general, the required
concepts should be such that the compiler issues an error message. For
example, trying to sort an associative container will fail to compile
because the elements in the sequence are not assignable (their first
member is const-qualified).
Of course,
"algorithm" is actually a misnomer in this context: something like
"solver" or "operation" would be better since the different
implementations of an "algorithm" are actually different algorithms
(in the conventional meaning of the term) solving the same problem
while taking advantage of different properties in the input (i.e.
they are based on different requirements; e.g. 'distance(beg, end)'
actually counts the number of elements for all iterator categories
unless the iterators are random access iterators in which case the
function just uses a subtraction: these are two different
algorithms solving the same problem).


I'd go with "operations"; "solver" sounds like something out of the
marketing department. :)


I doubt that the term for algorithms is changed in this context, however.
"Operation" would be a better fit from my perspective but I'm not a
native English speaker.
Minimizing the abstraction penalty actually excludes the use of
run-time polymorphism for the general concept because run-time
polymorphism invariably has some abstraction penalty, e.g. because
the resulting blocks fed to the optimizer are chopped up at each
late-bound function call. Of course, where reasonable, run-time
polymorphism can still be used. On the other hand, the concepts
used by STL are rather small operations and the ratio between
computations done by a single variation point


I'm not sure what you mean by variation point.


Essentially, something which can be changed when using a polymorphic
implementation. In statically typed object-oriented languages the
virtual functions are the variation points. With static polymorphism
the operations implementing a concept are the variation points, e.g.
operator++(), operator*(), and operator==() for iterators (this is
an incomplete list, however).
and a possible
function call overhead is typically rather bad, i.e. when using
STL concepts you are better off not also using run-time
polymorphism.


This is why I have serious questions about mem_fun. It is typically
implemented using a pointer to member function, which according to what I
was able to glean from thumbing through _Inside_The_C++_Object_Model_ has
fairly bad performance.


Does it? When I measured using pointer to member functions I had
differing results depending on how the pointer was actually used.
However, it is indeed likely that mem_fun has no real chance to do
a good job. If the member [function] pointer is turned into a
template parameter, the function can even be inlined. On the other
hand, if it is turned into a variable, this chance is reduced
substantially. I haven't measured this effect lately, though. It is
well possible that compilers are smart enough to detect that the
pointer does not change at all.
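
The difference can be sketched as follows (hypothetical names; whether
the run-time variant is actually slower depends on how smart the
compiler is, as noted above):

    struct Widget { int size() const { return 42; } };

    // Member-function pointer as a template parameter: the call target
    // is a compile-time constant, so the call can be inlined.
    template <int (Widget::*F)() const>
    int call_static(const Widget& w) { return (w.*F)(); }

    // Member-function pointer as a run-time variable: the compiler can
    // usually not inline through it unless it proves the value constant.
    int call_dynamic(const Widget& w, int (Widget::*f)() const) {
        return (w.*f)();
    }

    // usage: call_static<&Widget::size>(w) vs. call_dynamic(w, &Widget::size)
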
In Java and C# these three operations are folded into one operation
("Next()") which returns a null value if there is no next position
and otherwise returns the current value and moves the iterator to
the next position.


But this is quite easy to implement in C++, I've even done it.


Many people have done so before STL was introduced! ... and they all
failed to deliver the performance and flexibility STL delivers.
Actually, they generally failed to deliver general algorithms in the
first place!

One of the key features of the STL approach is that algorithms are
accepting iterators with minimal requirements (typically iterators
are only required to be input or output iterators) but use an
algorithm best suited for the iterators actually used. For example,
'distance()' merely requires that the user passes a pair of input
iterators and it just moves through the sequence counting the number
of steps. However, if the user actually passes a pair of random
access iterators, the operation becomes a mere subtraction which
tends to be substantially faster.
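
The dispatch is typically done on the iterator category tag; a
simplified sketch (my_distance is a hypothetical name, but the technique
mirrors what real implementations do):

    #include <iterator>

    // O(n) fallback: count the steps for any input iterator.
    template <typename It>
    typename std::iterator_traits<It>::difference_type
    distance_impl(It first, It last, std::input_iterator_tag) {
        typename std::iterator_traits<It>::difference_type n = 0;
        for (; first != last; ++first)
            ++n;
        return n;
    }

    // O(1) version: a mere subtraction for random access iterators.
    template <typename It>
    typename std::iterator_traits<It>::difference_type
    distance_impl(It first, It last, std::random_access_iterator_tag) {
        return last - first;
    }

    template <typename It>
    typename std::iterator_traits<It>::difference_type
    my_distance(It first, It last) {
        return distance_impl(first, last,
            typename std::iterator_traits<It>::iterator_category());
    }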

You cannot do a similar thing with enumerators using run-time
polymorphism: the interface is only known to be an enumerator, not
some extended interface. Well, you could (at run-time) test for
more powerful derived interfaces but the whole point is to improve
the performance not to kill it.

There are other issues, too. For example, if you combine the three
operations (advance, access, and test) into one operation you need
to perform all three even if you only need a subset. For example,
'distance()' never accesses the value. An algorithm using loop
unrolling only tests relatively few steps, not every step, and does
not necessarily advance the iterator in each step but might step
forward multiple elements at once. From a practical point of view,
separating access and test also has the advantage that it is not
necessary to have a dedicated null value for each type. This in
turn allows for value semantics which is more general than reference
semantic (where you get an obvious null value): if you want
references you just use a proxy which behaves as if it is a value
(here overloading operator.() would be handy...). This is not
possible the other way around, i.e. you cannot simulate value
semantics if your basis is implemented to use reference semantics.
The naive
approach uses postfix operators, so may be slightly less efficient than
the STL form, but I suspect the difference is negligible since I maintain
an internal temp object to return as a reference.

Now *_this_* is elegant:

http://www.open-std.org/JTC1/SC22/WG...005/n1796.html

for( int i : vec ) std::cout << i;
I disagree. I would prefer something like this

for_each(vec, std::cout << _1);

Assuming a suitable implementation of 'for_each()' and a using
directive for an appropriate namespace, this is already achievable
with current C++. However, if the operation becomes more complex
than an expression, it kind of stops working. In this case
language-level lambda support becomes necessary. This yields a more
flexible and more general approach than the rather limited proposal
of the above paper. Of course, the above notation would introduce a
minor semantic difference to your statement: the element type of
'vec' is used in the print expression. This is easily fixed assuming
language-level support for lambdas:

for_each(vec, std::cout << int(_1))

if something akin to Boost::Lambda is built in, or

for_each(vec, { std::cout << int(_1); });

if braces are necessary to signal use of a lambda expression.
Now, which is better? If you don't pay for calling multiple
functions, the former is more desirable because it offers more
flexibility and allows for smaller points of variability (although
it is still too coarse-grained...).
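
Incidentally, the 'suitable implementation' needs little more than a
forwarding overload; a sketch, with Boost.Lambda assumed for the '_1'
placeholder:

#include <algorithm>

// container-level overload forwarding to the iterator-based algorithm
template <typename Container, typename Function>
Function for_each(Container& c, Function f) {
    return std::for_each(c.begin(), c.end(), f);
}

// usage, assuming 'using namespace boost::lambda;' is in effect:
//     for_each(vec, std::cout << _1);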


With Java you can call toArray() on a collection, and get a random access
interface to iterate over with a for() loop using the same sequence of
checks you've stated above.


I doubt that 'toArray()' has O(1) performance on doubly linked lists.
Thus, it is dead in the water: effectively, it either has to copy the
list contents into an array, possibly using some form of proxies to
avoid at least copying the whole content, or it has to use iteration
to move freely within an array. Both cases kill the performance,
especially when using only a small fraction of the sequence, as is
quite likely. If you don't care about performance, feel free to use
an approach akin to 'toArray()'. Or you may use vectors all the time.
In all projects I was involved in, performance *was* an issue. Not the
primary one, but I always had to tune the software to make it run fast
enough. Deliberately killing efficiency has never been an option.
This approach also plays well with a hierarchy of concepts: although
an algorithm makes only a minimalist requirement, it may still take
advantage of special features, which in turn caters for some very
powerful optimizations that exploit special structure in the
underlying data structure.


I don't follow. What difference does it make whether I type the following,
or I leave it to the compiler to 'type' it for me:

using std::begin; // enable ADL
using std::end;   // ditto
for( auto __begin = begin(vec),
          __end   = end(vec);
     __begin != __end; ++__begin )
{
    int i = *__begin;
    std::cout << i;
}?


Since formatting the output for 'std::cout' and buffering the
result is the bottleneck in this example, it does not make much of
a difference. However, the compiler might write something rather
different instead:

using std::begin; // this is a neat trick; I should remember it
using std::end;
auto _B = begin(vec);
auto _E = end(vec);
for (; 2 <= _E - _B; _B += 2)
    std::cout << int(_B[0]) << int(_B[1]);
for (; _B != _E; ++_B)
    std::cout << int(*_B);

Of course, instead of "2" the compiler would rather insert "16"
or something depending on the size of the type and provide a
corresponding amount of operations. There are other optimizations
possible, too, even for such a trivial operation! Of course,
things like 'std::copy()' provide even more alternatives.
On the other hand, if each operation incurs a function call or at
least chops up your optimization blocks, the extra benefit is
clearly dwarfed by additional costs. Thus, in "purely
object-oriented languages" (i.e. in languages which incur arbitrary
restrictions without any gain but with, IMO, huge losses)
an infe^H^H^H^H different approach is used which reduces the
costs (the pros) and the possibilities (the cons).


Again, if you are intending Java, this will not necessarily apply. Java
does have some low-level optimization for its array object which is not
implemented in the same way as its other collections.


I consider it rather unlikely that you are always using array
objects, and if you don't, you incur a fair performance hit by
copying the content of your container into an array, even
if only references to the actual objects are copied. STL always
operates on the actual sequence and does so quickly if the iterators
are reasonably implemented. It is not by accident that Java does not
ship with an extensive algorithms library.
So now we have meta-types to deal with the type checking of our "pure
abstractions". At some point people are going to realize the meta-types
can be grouped into useful categories, and will want to check if they fit
into these categories. In that event we will have meta-meta-types. There
are two ways of dealing with Russell's paradox: accept that languages with
self-reference allow for the formulation of self-contradictory statements,
or insist that sets cannot contain other sets, only meta-sets can contain
sets, and only meta-meta-sets can contain meta-sets....
I don't think you want concepts on meta types. On the other hand, people
doing meta programming might disagree... I still think that concepts can
provide some benefit when done correctly. They can also incur loads of
overhead with no benefit if done wrong. From my perspective it is still
open whether the correct or the wrong approach is chosen...
It is necessary that a container is accessible via iterators to
be used with STL algorithms. However, iterators need not at all be
related to any container. For example, an iterator over the even
integers could compute its current state and operations without the
need of any container.


Agreed, but that isn't very useful in understanding what's in the STL, and
how it works in general.


I think understanding the concepts is the only viable route to
understanding what STL is about in the first place. What's in there actually
becomes secondary once you have understood the basic principle. Applying
STL concepts to rather strange entities helps in understanding what STL
is about, IMO: who would bother to implement algorithms on more or less
arbitrary sequences? Well, STL readily provides them, and the user can
plug in more algorithms applicable to all sequences and, of course,
arbitrary new sequences. The only restriction on the definition of the
sequence is that depending on its capabilities you might get only certain
subsets of algorithms: again, this "reasonable"-thing mentioned above.
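
To make the "strange entities" point concrete, here is a minimal sketch of
an input iterator over the even integers; there is no container anywhere,
yet it plugs straight into the standard algorithms:

#include <algorithm>
#include <iostream>
#include <iterator>

// the "sequence" is computed entirely from the iterator's own state;
// begin and end values are assumed to be even
class even_iterator: public std::iterator<std::input_iterator_tag, int> {
    int value_;
public:
    explicit even_iterator(int start): value_(start) {}
    int operator*() const { return value_; }
    even_iterator& operator++() { value_ += 2; return *this; }
    even_iterator operator++(int) {
        even_iterator tmp(*this); ++*this; return tmp;
    }
    bool operator==(even_iterator const& other) const {
        return value_ == other.value_;
    }
    bool operator!=(even_iterator const& other) const {
        return !(*this == other);
    }
};

int main() {
    // prints 0 2 4 ... 18 using an ordinary STL algorithm
    std::copy(even_iterator(0), even_iterator(20),
              std::ostream_iterator<int>(std::cout, " "));
}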
They have, indeed! In fact, the iteration over a sequence just
shows how the elements are ordered [currently]. However, the
element order may depend on other aspects than the element values.
For example, it may depend on the insertion order instead. I think
this pretty accurately reflects typical definitions of "sequence".
See, for example, <http://en.wikipedia.org/wiki/Sequence>.


But you can leave the collection unchanged, and iterate over it in a
completely different order.


You will find that you have to iterate over a sequence in always the
same order unless you can iterate over it only once, i.e. if it is
accessed by an input or an output iterator. In all cases, the sequence
stays the same whenever you iterate over it (unless you change it
explicitly by means outside the iteration).
For example, you could create an iterator that traverses
a binary tree using depth-first traversal, and another using breadth-first
traversal. In this case we do have something of a container, but the
order of traversal is external to it.
Note that you got two different sequences here for the same container:
one for the DFS order and one for the BFS order. That is, a container
can have more than one sequence associated with it. Well, as stated
before, the container is actually quite irrelevant to STL anyway.
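
A rough sketch of the two orders, using a hypothetical minimal tree node
(a complete solution would package this traversal state into two iterator
classes, but the point is visible already): the same container yields two
different sequences depending only on whether a stack or a queue drives
the traversal.

#include <queue>
#include <stack>
#include <vector>

struct node {       // hypothetical minimal binary tree node
    int value;
    node* left;
    node* right;
};

// depth-first order: a stack drives the traversal
std::vector<int> dfs_order(node* root) {
    std::vector<int> out;
    std::stack<node*> pending;
    if (root) pending.push(root);
    while (!pending.empty()) {
        node* n = pending.top(); pending.pop();
        out.push_back(n->value);
        if (n->right) pending.push(n->right);
        if (n->left)  pending.push(n->left);
    }
    return out;
}

// breadth-first order: the same tree, a queue instead of a stack
std::vector<int> bfs_order(node* root) {
    std::vector<int> out;
    std::queue<node*> pending;
    if (root) pending.push(root);
    while (!pending.empty()) {
        node* n = pending.front(); pending.pop();
        out.push_back(n->value);
        if (n->left)  pending.push(n->left);
        if (n->right) pending.push(n->right);
    }
    return out;
}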
I was actually thinking in more general terms of being able to access
the typedefs inside a class through an instance.


I realized this. However, at least in the context of STL this need
should not arise at all...


Are you suggesting I should never have to explicitly instantiate iterators?


You should never have to explicitly access the iterator type. It is
often convenient to use subsequences and thus you might access the
iterators of a container (actually, with the current definition of
the STL you have to) but you should never need their type - unless
you are implementing a generic algorithm. But then, in this case you
either get the iterator types readily delivered or you should extract
the iterators from the object and pass them directly to another
generic algorithm operating on iterators. In generic programming you
rarely need the typedefs and if you do you normally want to infer
associated types, e.g. the value type of an iterator to define the
return type of an algorithm.
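
A small sketch of that kind of type inference (the function name is mine,
not part of any library): the caller never names the iterator type or its
typedefs, and the return type is deduced from the iterator's value type.

#include <iterator>

template <typename InIt>
typename std::iterator_traits<InIt>::value_type
sum(InIt first, InIt last) {
    // value-initialize the result from the inferred value type
    typename std::iterator_traits<InIt>::value_type result =
        typename std::iterator_traits<InIt>::value_type();
    for (; first != last; ++first)
        result += *first;
    return result;
}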
My problem with Boost.Lambda is that it introduces additional syntax into
the language,
Boost::Lambda is entirely implemented in terms of the current core
language with no changes made to support it. Where does it introduce
new syntax and, even more interesting, how?
and doesn't appear to provide much in the way of useful
additional functionality. It seems the entire effort is to find a way of
shoving more complex functions into the third parameter of std::for_each.
I thought you were in favor of n1796...!? *This* proposal is nothing
more than a glorified for_each! Lambdas can be used everywhere you
are using a functor. In fact, they would neatly solve your performance
problem with mem_fun(), too: instead of 'mem_fun(&foo::bar)' you would
write '_1.bar()', '_1.bar(_2)', ... You'd need to be consistent with
your desires and look from a more general perspective than just solving
a minor quibble with a special case. Solving many minor quibbles with
just one general solution is much more desirable than having many
special cases. Lambda support *would* solve many minor problems and
avoid the need for special solutions. In fact, Boost::Lambda already
does a very good job in this direction but it cannot solve all
problems. For example, the member function issue is not readily solved,
but I think it can be addressed using special classes. Of course, if
you need to implement a class for each member function, the benefit is
dwarfed by the needed effort.
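
For what it's worth, Boost.Lambda's bind() already covers the common case,
even though the '_1.bar()' notation itself is out of reach without language
support; a minimal sketch (class 'foo' is hypothetical):

#include <algorithm>
#include <vector>
#include <boost/lambda/bind.hpp>
#include <boost/lambda/lambda.hpp>

struct foo {
    void bar() const { /* ... */ }
};

int main() {
    std::vector<foo> v(3);
    using namespace boost::lambda;
    // '_1.bar()' is not legal C++, but bind() expresses the same call
    std::for_each(v.begin(), v.end(), bind(&foo::bar, _1));
}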
Perhaps I'm wrong about this, but if I had to choose between for( int i :
vec ) std::cout << i; and Boost.Lambda for inclusion in C++0X, I'd go with
the foreach construct.


You got a very limited view, IMO. I'd rather go with a general solution!
--
<mailto:di***** ******@yahoo.co m> <http://www.dietmar-kuehl.de/>
<http://www.eai-systems.com> - Efficient Artificial Intelligence
Jul 23 '05 #30
