Bytes IT Community

Exception safe at what cost

I read Stroustrup's article of the day:
http://www.research.att.com/~bs/C++.html

Programming with Exceptions. InformIt.com. April 2001.
http://www.research.att.com/~bs/eh_brief.pdf

Some of these ideas are finally beginning to sink in. I believe I looked at
the same article a while back and decided I wasn't quite ready for it. If I
understood things correctly, there seems to be a slight problem with the
design of his exception safe assignment operator. I believe I have
correctly extracted all the relevant code, and appropriately commented it
to describe the various issues discussed in the article. My questions are
in the final comment block.

class Vector { // v points to an array of sz ints
int sz;
int* v;
public: explicit Vector(int n); // create vector of n ints
Vector(const Vector&);
~Vector(); // destroy vector
Vector& operator=(const Vector&); // assignment
int size() const;
void resize(int n); // change the size to n
int& operator[](int i); // subscripting
const int& operator[](int i) const; // subscripting
};

Vector::Vector(int i) //constructor
:sz(i) ,v(new int[i]) { } // establishes class invariant

Vector::~Vector() { delete[] v; }

int Vector::size() const { return sz; } //no effect on class representation

struct Bad_range { }; // I want an Über-exception to derive from!

int& Vector::operator[](int i)
{
if (0<=i && i<sz) return v[i];
throw Bad_range(); // no effect on class representation if thrown
}
/*
This is a bad assignment operator implementation because
it can lead to bad things like double deletion when exceptions
are thrown. That's because it fails to maintain the class invariant
that says a Vector holds an array.
*/
Vector& Vector::operator=(const Vector& a)
{
sz = a.sz; // get new size
delete[] v; // free old memory
v = new int[n]; // get new memory
copy(a.v,a.v+a.sz,v); // copy to new memory
return *this;
}

/*
This is a better version because it maintains the invariant
even if the memory allocation fails. We can probably trust
copy() not to throw an exception, or to otherwise fail,
so we don't worry about losing *p if it fails.
*/
Vector& Vector::operator=(const Vector& a)
{
int* p = new int[a.sz]; // get new memory [or throw bad_alloc?]
copy(a.v,a.v+a.sz,p); // copy to new memory

/* invariant is violated in the next statement*/
sz = a.sz; // get new size
delete[] v; // free old memory

/* invariant is reestablished in the next statement*/
v = p;
return *this;
}

/*
This is the example of where the problem with the first form
of Vector::operator=(const Vector& a) might arise. Vector::v is
deleted once by the assignment operator, and then again when the
destructor Vector::~Vector() is called upon exit of the containing
block.
*/
int main() {
try
{
Vector vec(10) ;
cout << vec.size() << '\n'; // so far, so good
Vector v2(40*1000000) ; // ask for 160 megabytes
vec = v2; // use another 160 megabytes
}
catch(Bad_range) {
cerr << "Oops: Range error!\n";
}
catch(bad_alloc) {
cerr << "Oops: memory exhausted!\n";
}
}

/*
Now my observation is this:

If I don't free the memory in Vector::v until the penultimate statement
in the assignment operator function, then I will not be able to
allocate as much memory as I would with the original (bad)
implementation.

Earlier in the article Stroustrup provides this example of a
destructor: ~File_ptr() { if (p) fclose(p); }

If I were to use the comparable ~Vector() { if(v) delete[] v; }
in conjunction with the first form of the assignment operator
function that would seem to solve the problem of not freeing
potentially usable memory before allocating the new array.

A problem with that approach seems to be that it violates the
guarantee of class invariance. Is this a correct observation on my
part? Is there a way to accomplish both the goal of freeing available
memory prior to allocating more, and preserving the class invariant?
I can think of some approaches, but they all seem to add overhead
to the assignment operator function.

*/
--
STH
Hatton's Law: "There is only One inviolable Law"
KDevelop: http://www.kdevelop.org SuSE: http://www.suse.com
Mozilla: http://www.mozilla.org
Jul 22 '05 #1
10 Replies


Steven T. Hatton wrote:
I read Stroustrup's article of the day:
http://www.research.att.com/~bs/C++.html

Programming with Exceptions. InformIt.com. April 2001.
http://www.research.att.com/~bs/eh_brief.pdf

Some of these ideas are finally beginning to sink in. I believe I looked at
the same article a while back and decided I wasn't quite ready for it. If I
understood things correctly, there seems to be a slight problem with the
design of his exception safe assignment operator. I believe I have
correctly extracted all the relevant code, and appropriately commented it
to describe the various issues discussed in the article. My questions are
in the final comment block.

class Vector { // v points to an array of sz ints
int sz;
int* v;
public: explicit Vector(int n); // create vector of n ints
Vector(const Vector&);
~Vector(); // destroy vector
Vector& operator=(const Vector&); // assignment
int size() const;
void resize(int n); // change the size to n
int& operator[](int i); // subscripting
const int& operator[](int i) const; // subscripting
};

Vector::Vector(int i) //constructor
:sz(i) ,v(new int[i]) { } // establishes class invariant

Vector::~Vector() { delete[] v; }

int Vector::size() const { return sz; } //no effect on class representation

struct Bad_range { }; // I want an Über-exception to derive from!

int& Vector::operator[](int i)
{
if (0<=i && i<sz) return v[i];
throw Bad_range(); // no effect on class representation if thrown
}
/*
This is a bad assignment operator implementation because
it can lead to bad things like double deletion when exceptions
are thrown. That's because it fails to maintain the class invariant
that says a Vector holds an array

*/
Vector& Vector::operator=(const Vector& a)
{
sz = a.sz; // get new size
delete[] v; // free old memory
v = new int[n]; // get new memory
What 'n'? Don't you mean

v = new int[a.sz];

???
copy(a.v,a.v+a.sz,v); // copy to new memory
}

/*
This is a better version because it maintains the invariant
even if the memory allocation fails. We can probably trust
copy() not to throw an exception, or to otherwise fail,
so we don't worry about losing *p if it fails.
*/
Vector& Vector::operator=(const Vector& a)
{
int* p = new int[a.sz]; // get new memory [or throw bad_alloc?]
copy(a.v,a.v+a.sz,p); // copy to new memory

/* invariant is violated in the next statement*/
sz = a.sz; // get new size
delete[] v; // free old memory

/* invariant is reestablished in the next statement*/
v = p;
}

/*
This is the example of where the problem with the first form
of Vector::operator=(const Vector& a) might arise. Vector::v is
deleted once by the assignment operator, and then again when the
destructor Vector::~Vector() is called upon exit of the containing
block.
*/
int main() {
try
{
Vector vec(10) ;
cout << vec.size() << '\n'; // so far, so good
Vector v2(40*1000000) ; // ask for 160 megabytes
vec = v2; // use another 160 megabytes
}
catch(Bad_range) {
cerr << "Oops: Range error!\n";
}
catch(bad_alloc) {
cerr << "Oops: memory exhausted!\n";
}
}

/*
Now my observation is this:

If I don't free the memory in Vector::v until the penultimate statement
in the assignment operator function, then I will not be able to
allocate as much memory as I would with the original (bad)
implementation.

Earlier in the article Stroustrup provides this example of a
destructor: ~File_ptr() { if (p) fclose(p); }

If I were to use the comparable ~Vector() { if(v) delete[] v; }
Checking 'v' before deletion is useless. delete[] NULL does nothing.
But if you don't clear 'v' after first deletion (in the operator=),
then you are bound to try to delete it again here.
in conjunction with the first form of the assignment operator
function that would seem to solve the problem of not freeing
potentially usable memory before allocating the new array.
No, it doesn't. Only if you fix the assignment operator to do this:

Vector& Vector::operator=(const Vector& a)
{
delete[] v; // free old memory
v = 0; /// important: do not keep pointers that point to nothing
sz = 0; // we got no storage -- indicate that in the size
// at this point the state of the object is new, clean and consistent.
// it's not as good as "allocate first, then assign" because you still
// can lose data this way (if new allocation fails), but at least you
// don't have any problems with double deallocation
v = new int[a.sz]; // get new memory
sz = a.sz; // successful allocation -- store new size
copy(a.v,a.v+a.sz,v); // copy to new memory
return *this;
}

Since the 'new int[a.sz]' can throw, 'v' will remain 0 in that case.

A problem with that approach seems to be that it violates the
guarantee of class invariance. Is this a correct observation on my
part? Is there a way to accomplish both the goal of freeing available
memory prior to allocating more, and preserving the class invariant?
I can think of some approaches, but they all seem to add overhead
to the assignment operator function.
Really?

*/


Victor
Jul 22 '05 #2

Victor Bazarov wrote:
Steven T. Hatton wrote:
I read Stroustrup's article of the day:
http://www.research.att.com/~bs/C++.html

Programming with Exceptions. InformIt.com. April 2001.
http://www.research.att.com/~bs/eh_brief.pdf
[...]
Vector& Vector::operator=(const Vector& a)
{
sz = a.sz; // get new size
delete[] v; // free old memory
v = new int[n]; // get new memory
What 'n'? Don't you mean

v = new int[a.sz];

???

:D

There's actually another coding error in the article. There's no open
parenthesis in main().
Checking 'v' before deletion is useless. delete[] NULL does nothing.
But if you don't clear 'v' after first deletion (in the operator=),
then you are bound to try to delete it again here.
I wasn't sure what happened to 'v'. I should have looked it up before
asking the question. § 5.3.5 informs me that the pointer is not nullified
by a call to delete.
in conjunction with the first form of the assignment operator
function that would seem to solve the problem of not freeing
potentially usable memory before allocating the new array.


No, it doesn't. Only if you fix the assignment operator to do this:

Vector& Vector::operator=(const Vector& a)
{
delete[] v; // free old memory
v = 0; /// important: do not keep pointers that point to nothing
sz = 0; // we got no storage -- indicate that in the size
// at this point the state of the object is new, clean and consistent.
// it's not as good as "allocate first, then assign" because you still
// can lose data this way (if new allocation fails),


I didn't think about that consequence of a failed alloc. Good point!
// but at least you
// don't have any problems with double deallocation
v = new int[a.sz]; // get new memory
sz = a.sz; // successful allocation -- store new size
copy(a.v,a.v+a.sz,v); // copy to new memory
}

Since the 'new int[a.sz]' can throw, 'v' will remain 0 in that case.

A problem with that approach seems to be that it violates the
guarantee of class invariance. Is this a correct observation on my
part? Is there a way to accomplish both the goal of freeing available
memory prior to allocating more, and preserving the class invariant?
I can think of some approaches, but they all seem to add overhead
to the assignment operator function.


Really?


Well, it may not be (or seem like) a lot, but you did add two assignment
statements to the function. I really don't know what the cost of that
might be. If I am doing something like calculating the mutual
gravitational attraction between 10,000 point masses during the generation
of a display frame, those assignment operations might add up.
--
STH
Hatton's Law: "There is only One inviolable Law"
KDevelop: http://www.kdevelop.org SuSE: http://www.suse.com
Mozilla: http://www.mozilla.org
Jul 22 '05 #3

* Steven T. Hatton:

Is there a way to accomplish both the goal of freeing available
memory prior to allocating more, and preserving the class invariant?


You can preserve the class invariant but not the contents.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jul 22 '05 #4

Victor Bazarov wrote:
Steven T. Hatton wrote:
I read Stroustrup's article of the day:
http://www.research.att.com/~bs/C++.html

Programming with Exceptions. InformIt.com. April 2001.
http://www.research.att.com/~bs/eh_brief.pdf
[see previous reply for my comments on your observations.]
Victor


BTW, it _may_ be possible to get real fancy with placement new() and legally
recover the deallocated data upon bad_alloc. At what cost, I don't know.
There's some discussion in §3.8 ¶7 of the Standard which suggests this may
be possible.
--
STH
Hatton's Law: "There is only One inviolable Law"
KDevelop: http://www.kdevelop.org SuSE: http://www.suse.com
Mozilla: http://www.mozilla.org
Jul 22 '05 #5

On Thu, 05 Aug 2004 10:08:59 -0700, Steven T. Hatton wrote:
If I understood things correctly, there seems to be a slight problem
with the design of his exception safe assignment operator.
[...]
/*
This is a better version because it maintains the invariant even if the
memory allocation fails. We can probably trust copy() not to throw an
exception, or to otherwise fail, so we don't worry about losing *p if
it fails.
*/
Vector& Vector::operator=(const Vector& a) {
int* p = new int[a.sz]; // get new memory [or throw bad_alloc?]
copy(a.v,a.v+a.sz,p); // copy to new memory

/* invariant is violated in the next statement */
sz = a.sz; // get new size
delete[] v; // free old memory

/* invariant is reestablished in the next statement */
v = p;
}
This is the application of the principle of doing things that could
potentially throw, on the side, before touching the state of the
object; and then changing the state by using non-throwing statements.

The equivalent of the above is first copying and then swapping, which
is more idiomatically done by separate functions that do the same:

// does not throw
void Vector::swap(Vector & other)
{
std::swap(sz, other.sz);
std::swap(v, other.v);
}

Vector::Vector(Vector const & other)
:
sz(other.sz),
v(new int[other.sz])
{}

Now the assignment operator is very easy and simple:

Vector & Vector::operator= (Vector const & other)
{
Vector tmp(other);
this->swap(tmp);
return *this;
}

Herb Sutter covers exception safety issues in his Exceptional C++
book. Highly recommended...
If I don't free the memory in Vector::v until the penultimate
statement in the assignment operator function, then I will not be able
to allocate as much memory as I would with the original (bad)
implementation.


Is it because the system memory is so limited that there is no room
for another object? In that case we are dealing with a very special
situation where the strong exception safety may not be provided. Then
the version of the assignment operator with Victor Bazarov's
modifications is suitable.

Ali
Jul 22 '05 #6

On Thu, 05 Aug 2004 12:02:28 -0700, Ali Cehreli wrote:
Vector::Vector(Vector const & other)
:
sz(other.sz),
v(new int[other.sz])
{}


Oops! The body is missing the actual copy operation :( Should be:

{
std::copy(/* ... */);
}

Ali
Jul 22 '05 #7

Ali Cehreli wrote:
On Thu, 05 Aug 2004 10:08:59 -0700, Steven T. Hatton wrote:

If I understood things correctly, there seems to be a slight problem
with the design of his exception safe assignment operator.

[...]

/*
This is a better version because it maintains the invariant even if the
memory allocation fails. We can probably trust copy() not to throw an
exception, or to otherwise fail, so we don't worry about losing *p if
it fails.
*/
Vector& Vector::operator=(const Vector& a) {
int* p = new int[a.sz]; // get new memory [or throw bad_alloc?]
copy(a.v,a.v+a.sz,p); // copy to new memory

/* invariant is violated in the next statement */
sz = a.sz; // get new size
delete[] v; // free old memory

/* invariant is reestablished in the next statement */
v = p;
}

This is the application of the principle of doing things that could
potentially throw, on the side, before touching the state of the
object; and then changing the state by using non-throwing statements.

The equivalent of the above is first copying and then swapping, which
is more idiomatically done by separate functions that do the same:

// does not throw
void Vector::swap(Vector & other)
{
std::swap(sz, other.sz);
std::swap(v, other.v);
}

Vector::Vector(Vector const & other)
:
sz(other.sz),
v(new int[other.sz])
{}

Now the assignment operator is very easy and simple:

Vector & Vector::operator= (Vector const & other)
{
Vector tmp(other);
this->swap(tmp);
return *this;


You could just write

Vector(other).swap(*this);
return *this;
}
And if you make 'swap' return its argument, you could even write

return Vector(other).swap(*this);

!

Herb Sutter covers exception safety issues in his Exceptional C++
book. Highly recommended...

If I don't free the memory in Vector::v until the penultimate
statement in the assignment operator function, then I will not be able
to allocate as much memory as I would with the original (bad)
implementation.

Is it because the system memory is so limited that there is no room
for another object? In that case we are dealing with a very special
situation where the strong exception safety may not be provided. Then
the version of the assignment operator with Victor Bazarov's
modifications is suitable.


I thought that the limited amount of available memory was basically the
motivation for all the attempts to improve the implementation presented
by Bjarne in his article... Anyway, good points and recommendations!

Victor
Jul 22 '05 #8

"Steven T. Hatton" <su******@setidava.kushan.aa> wrote in message
news:mb********************@speakeasy.net...
[snip]
If I don't free the memory in Vector::v until the penultimate statement
in the assignment operator function, then I will not be able to
allocate as much memory as I would with the original (bad)
implementation.


If you're really worried about this, it might be good to switch to something
similar to std::deque, which allocates multiple blocks instead of one single
block. There's a little more overhead, but it's still pretty darn good.

--
David Hilsee
Jul 22 '05 #9

"Steven T. Hatton" <su******@setidava.kushan.aa> wrote:
class Vector { // v points to an array of sz ints
int sz;
int* v;
public: explicit Vector(int n); // create vector of n ints
Vector(const Vector&);
~Vector(); // destroy vector
Vector& operator=(const Vector&); // assignment
int size() const;
void resize(int n); // change the size to n
int& operator[](int i); // subscripting
const int& operator[](int i) const; // subscripting
};


void Vector::clear() {
// I would make this an inline function
delete [] v;
v = 0;
sz = 0;
}

Vector& Vector::operator=(const Vector& other) {
if ( sz < other.sz ) {
clear();
v = new int[other.sz];
}
copy( other.v, other.v + other.sz, v );
sz = other.sz;
return *this;
}

This solves the problems you had with the Vector class but introduces a
few wrinkles:
1) If new throws in the op=, the data that was in the vector may be lost.
2) The invariant has changed; now it's more like: if v != NULL then it's
valid to dereference v[sz-1].

We could extend this concept and include a variable 'cap', which makes
it just that much more like std::vector.
Jul 22 '05 #10

Some days I wish I were a Smalltalk programmer . . . or Prolog (my first
love, I guess).

Ryan
Jul 22 '05 #11
