Alf P. Steinbach wrote:

* Kai-Uwe Bux: Axter wrote:

The following is the recommended approach to overloading arithmetic and
assignment operators:

T& T::operator+=(const T& rhs) {
    // ... impl
    return *this;
}

T operator+(const T& lhs, const T& rhs) {
    T temp(lhs);
    return temp += rhs;
}

Recommended for which reason?

Axter's example shows two independent guidelines:

1) Implement operator+ in terms of operator+=.
2) Implement operator+ as a non-member function.

(1) is a somewhat contested guideline. E.g., I seem to recall that
Microsoft has published the exact opposite guideline, and here you are
also arguing against it. As I see it, using (1) you can do no worse
than the opposite approach, and will mostly do better (allowing
operator+= to be as efficient as possible, with no temporary involved
where possible).

That works if (and I would contest only if) you can implement operator*=
in-place. I am not arguing against using that idiom where it works. I am
arguing against recommending this as a general rule.

(2) has a simple rationale: to treat both arguments on an equal footing.
For example, to allow the same implicit conversions.

Ok.

I would like to see the "recommended" approach carried out for matrix
multiplication or arbitrary-precision integer multiplication. In both
cases, an in-place implementation is far from obvious (if possible at
all); and implementing * in terms of *= for matrices or BigInt is likely
to create more temporaries internally than implementing *= in terms of *.

Yes, there are cases where you can't do in-place operations, but how can
you get fewer temporaries by implementing operator+= in terms of
operator+? As I see it, for operator+= you can take advantage of access
to internals, in particular reusing an outer encapsulation or already
allocated internal memory. That seems to me to generally imply fewer
temporaries and such.

In matrix multiplication, you cannot overwrite the coefficients because
you still need them. Thus, you will end up allocating a temporary matrix
for *= anyway. If then, on top of this, you implement * in terms of *=,
you may end up with more temporaries.

[snip]

Also note that, if profiling shows the need for using expression
templates, an implementation of *= in terms of * is more natural than
the other way around -- in fact, I do not see how I would write an
expression template for * in terms of *=; but that could be a lack of
imagination on my part.

I'm not sure, it may be that you have a good point why expression
templates are simply different, but consider

template< typename T >
struct Multiply
{
    static inline T apply( T const& a, T const& b )
    {
        T result( a );
        return (result *= b);
    }
};

as a kind of generic implementation, and then e.g. for double, if
necessary for efficiency,

template<> struct Multiply<double>
{
    static inline double apply( double a, double b ) { return a*b; }
};

and so on (disclaimer: I haven't tried this!).

Probably, I need to be more specific on the expression-template business.
Here is a simple expression-template implementation for vector addition --
something that I would implement using += as the primitive (since it can
be done in place) and then defining + in terms of +=. However, with
expression templates, it appears that the other way around is more
natural:

#include <cstddef>
#include <iostream>

std::size_t const length = 4;

typedef double Number;

class VectorStoragePolicy {
    Number data [length];
public:
    VectorStoragePolicy ( Number e = 0 )
    {
        for ( std::size_t i = 0; i < length; ++i ) {
            data[i] = e;
        }
    }

    VectorStoragePolicy ( VectorStoragePolicy const & other )
    {
        for ( std::size_t i = 0; i < length; ++i ) {
            data[i] = other[i];
        }
    }

    Number operator[] ( std::size_t i ) const {
        return ( data[i] );
    }

    Number & operator[] ( std::size_t i ) {
        return ( data[i] );
    }
};

template < typename ExprA, typename ExprB >
class VectorPlusVector {
    ExprA a_ref;
    ExprB b_ref;
public:
    VectorPlusVector ( ExprA const & a, ExprB const & b )
        : a_ref ( a )
        , b_ref ( b )
    {}

    Number operator[] ( std::size_t i ) const {
        return ( a_ref[i] + b_ref[i] );
    }
};

template < typename Expr >
struct VectorTag : public Expr {
    VectorTag ( void )
        : Expr()
    {}

    template < typename A >
    VectorTag ( A const & a )
        : Expr( a )
    {}
};

template < typename ExprA, typename ExprB >
VectorTag< VectorPlusVector< ExprA, ExprB > >
operator+ ( VectorTag< ExprA > const & a,
            VectorTag< ExprB > const & b ) {
    return ( VectorPlusVector< ExprA, ExprB >( a, b ) );
}

struct Vector : public VectorTag< VectorStoragePolicy > {
    Vector ( Number a = 0 )
        : VectorTag< VectorStoragePolicy >( a )
    {}

    template < typename Expr >
    Vector & operator= ( VectorTag< Expr > const & other )
    {
        for ( std::size_t i = 0; i < length; ++i ) {
            (*this)[i] = other[i];
        }
        return ( *this );
    }

    template < typename Expr >
    Vector & operator+= ( VectorTag< Expr > const & other ) {
        *this = *this + other;
        return ( *this );
    }
};

template < typename Expr >
std::ostream & operator<< ( std::ostream & o_str,
                            VectorTag< Expr > const & v ) {
    for ( std::size_t i = 0; i < length; ++i ) {
        o_str << v[i] << ' ';
    }
    return ( o_str );
}

int main ( void ) {
    Vector a ( 1.0 );
    Vector b ( 2.3 );
    a += b;
    std::cout << a+b << '\n';
}

As you can see, there is the VectorPlusVector template that postpones
evaluation of sums. So if you write

    (a + b + c + d)[2];

there is no additional temporary vector; the expression gets translated
into:

    a[2] + b[2] + c[2] + d[2]

The challenge is to define a VectorIncrement expression template
(representing +=) and then define + in terms of that. I just don't see
how to do that without losing the advantage of expression templates
(elimination of temporaries).

Best

Kai-Uwe Bux