Bytes IT Community

Exception unwinding base destructor called - why?

Take a look at this code; it looks funny because it's written to be as short as
possible:

-- code --
struct Base
{
    ~Base() { *((char*)0) = 0; }
};

struct Derived : public Base
{
    Derived() { throw 1; }
};

int main(int,char**)
{
    try { new Derived; }
    catch (int) { }
    return 0;
}
-- code --

Now don't get excited about the *((char*)0) = 0; I'm just using that as a
kind of compiler-independent, hard-coded breakpoint. I want to verify that
the destructor is in fact getting called without relying on a debugger-set
breakpoint, which can sometimes fail to trigger, or whose target code can be
optimized out so that there is no point to break on.

So what are we looking at?

We are creating an instance of Derived inside an exception handler. During
construction Derived throws an exception and the handler's unwinding
mechanism is calling the destructor for the base class.

My question is: Why?

Where is the logic in partially deconstructing an aggregate object that's
signaling, by way of an exception, that it cannot be constructed? If you move
the ...ahem... breakpoint to Derived's destructor, that never gets called. It
makes sense not to deconstruct an object that's not fully constructed, so
again, why partially deconstruct it?

The other thing that doesn't make sense to me is that this object is not
being created on the stack, so why would *any* destructor be called by the
unwinding mechanism? There is no call to delete anywhere in the code, so
what makes the compiler think it should be deconstructed at all?

You may have guessed that I'm having chicken-and-egg problems, and you'd be
correct. Now I *think* I can fix the problem by rearranging the order of
operations in my constructors (although I have not at this point worked out
the details, and of course that adds a degree of complexity to the code I'd
rather avoid), but I'd really like to understand the reasoning behind this.
Perhaps I'm doing something fundamentally 'wrong' and need to rethink my
design (I sure hope not!).
Jul 22 '05 #1
7 Replies



"Douglas Peterson" <Te******@nospam.msn.com> wrote in message
news:Ur********************@comcast.com...
> [quoted code snipped]
>
> We are creating an instance of Derived inside an exception handler. During
> construction Derived throws an exception and the handler's unwinding
> mechanism is calling the destructor for the base class.
>
> My question is: Why?

Because it's useful.

> Where is the logic in partially deconstructing an aggregate object that's
> signaling it cannot be constructed by way of an exception? If you move the
> ...ahem... breakpoint to Derived's destructor, that never gets called.
> Makes sense not to deconstruct an object that's not fully constructed, so
> again, why partially deconstruct it?

The Base object is fully constructed. Do you want Base objects to be handled
differently depending on whether they are part of a larger object or not?
All fully constructed objects should be destructed.

> The other thing that doesn't make sense to me is that this object is not
> being created on the stack, so why would *any* destructor be called by the
> unwinding mechanism?

It's not the unwinding mechanism; it's how constructors behave when an
exception occurs in a constructor. All this is happening before stack
unwinding starts.

> There is no call to delete anywhere in the code, so
> what makes the compiler think it should be deconstructed at all?

Because a Base object has been fully constructed. Suppose the Base
constructor had allocated some memory which was freed in the destructor. If
the destructor were not called there would be a memory leak.

> You may have guessed that I'm having chicken and egg problems and you'd be
> correct.

Actually I don't see why you have a problem at all.

> Now I *think* I can fix the problem by rearranging the order of operations
> in my constructors (although I have not at this point worked out the
> details, and of course that adds a degree of complexity to the code I'd
> rather avoid), but I'd really like to understand the reasoning behind
> this. Perhaps I'm doing something fundamentally 'wrong' and need to
> rethink my design (I sure hope not!).


I think so. Have a look at this article by Bjarne Stroustrup, which explains
how to use this language feature to write cleaner, more exception-safe code.

http://www.research.att.com/~bs/3rd_safe.pdf

john
Jul 22 '05 #2

On Thu, 24 Jun 2004 02:18:15 -0400 in comp.lang.c++, "Douglas Peterson"
<Te******@nospam.msn.com> wrote,
> Where is the logic in partially deconstructing an aggregate object

It has been partially constructed. The Base constructor has already
been called at that point, so the Base destructor needs calling.

Compare:

#include <iostream>
#include <memory>

int base_count = 0;

struct Base
{
    Base() { ++base_count; }
    ~Base() { --base_count; }
};

struct Derived : public Base
{
    Derived() { throw 1; }
};

int main(int,char**)
{
    try {
        std::auto_ptr<Derived> a(new Derived);
    }
    catch (int) { }
    std::cout << base_count; // should print 0
    return 0;
}
> The other thing that doesn't make sense to me is that this object is not
> being created on the stack, so why would *any* destructor be called by the
> unwinding mechanism?

The constructor doesn't know that.
> You may have guessed that I'm having chicken and egg problems and you'd be
> correct. Now I *think* I can fix the problem by rearranging the order of
> operations in my constructors

What are you trying to accomplish? Remember RAII - Resource Acquisition
Is Initialization. See the usage of std::auto_ptr above.

Jul 22 '05 #3

"Douglas Peterson" <Te******@nospam.msn.com> wrote in
news:Ur********************@comcast.com:
> Take a look at this code, it looks funny as it's written to be as short
> as possible:
>
> [quoted code snipped]
>
> Now don't get excited about the *((char*)0) = 0, I'm just using that
> as a kind of compiler independent, hard coded breakpoint. I want to
> verify that the destructor is in fact getting called without relying
> on a debugger set breakpoint which can sometimes not occur, or the
> code be optimized out so that there is no point to break on.
Not necessarily safe. Dereferencing a NULL (or 0) pointer invokes
"Undefined Behaviour". The compiler could theoretically "optimize" out
that line too... (the compiler could also, theoretically, format your hard
drive....)
> So what are we looking at?
>
> We are creating an instance of Derived inside an exception handler.
> During construction Derived throws an exception and the handler's
> unwinding mechanism is calling the destructor for the base class.
>
> My question is: Why?
>
> Where is the logic in partially deconstructing an aggregate object
> that's signaling it cannot be constructed by way of an exception? If
> you move the ...ahem... breakpoint to Derived's destructor, that never
> gets called. Makes sense not to deconstruct an object that's not fully
> constructed, so again, why partially deconstruct it?
Before the body of the Derived constructor can be invoked, the Base
"class" must have completed constructing (as well as all members of the
base class, and all members of Derived too...). In your Derived
constructor, you throw an exception. This causes all of the parts of
Derived that had been completely constructed (ie: the Base portion (which
also includes Base's members) and all members of Derived) to be
destructed, in the reverse order in which their constructions completed
(so it would be Derived members, Base destructor body, Base members).
> The other thing that doesn't make sense to me is that this object is
> not being created on the stack, so why would *any* destructor be
> called by the unwinding mechanism? There is no call to delete anywhere
> in the code, so what makes the compiler think it should be
> deconstructed at all?
See above... in C++, objects don't care whether they're on the stack,
heap, shared memory, or any other location (<pedantic mode on>there is no
such thing as a stack in Standard C++<pedantic mode off>). When the
object's lifespan begins, the constructors are invoked. When the
object's lifespan ends, the destructors are invoked.
Jul 22 '05 #4

Thanks to Andre, John, and David for taking the time to reply.

After your explanations and giving it more thought, I accept that a fully
constructed object needs to be deconstructed. Makes perfect sense when you
look at it from outside the problem you're having :)

Here's what the issue was and how I'm going to change it:

The objects in my system are all contained in lists. There is a root item
with a list of objects, those objects contain a list of objects and so on,
and so on.

I wanted two properties for objects that are contained in this system:

1) Objects could be created externally so that they can be further derived
by the user. That is to say you don't call Container.CreateObject() to
create one, you can 'new Object'.
2) Objects would automagically be added and removed to/from the containers
they belong to.

To accomplish #2, the constructors of the object's deepest base class were
adding and removing themselves to their owner's lists. When an exception
occurs during object construction, one of two possibilities occurs depending
on what point the exception is thrown:

1) The object has been added to the container's list, but doesn't (can't)
remove itself when it's not getting fully constructed. This leads to an
attempt by the container to delete an already deleted object.
2) The object isn't yet added, but its destructor tries to unlist itself.

Number 1 can be solved by placing the 'insert myself into my owner's
container' line of code dead last in the constructor. That way, if an
exception occurs during its construction, it's not added. If the exception
occurs in a derived class, we get number 2, because the base is fully
constructed and will be destructed.
Number 2 is benign; however, my container list code asserts, because in most
cases the programmer wants to know that his code is trying to remove
something that isn't there (a potential bug).

So I'm left with a choice:

1) Remove the automagic and require all objects to be inserted after they
are constructed.
2) Remove the assertion from my list class to allow for benign attempts at
removal.

I'll elicit some responses before making a decision.
Jul 22 '05 #5


"Douglas Peterson" <Te******@nospam.msn.com> wrote in message
news:P9********************@comcast.com...
> [snip]
>
> So I'm left with a choice:
>
> 1) Remove the automagic and require all objects to be inserted after they
> are constructed.
> 2) Remove the assertion from my list class to allow for benign attempts
> at removal.
>
> I'll elicit some responses before making a decision.


I don't understand the problem; it seems to me that C++'s rules are helping
you, but you seem to think that they hinder you.

Here's how I see it, the deepest base class ctor adds to the list, the
deepest base class removes from the list. Nothing else happens in the
deepest base class ctor and dtor.

So when an object is constructed the first thing to happen is that the
deepest base class ctor is called. If it constructs successfully then the
object is on a list and the deepest base class dtor will remove it from that
list should an exception be thrown later during construction. If by chance
an exception happens while adding to the list, then the object isn't in the
list but the dtor won't be called because the deepest base class wasn't
fully constructed. Isn't that exactly what you want?

Seems straightforward to me, maybe you could make the problem clearer with
some sample code.

john
Jul 22 '05 #6


"John Harrison" <jo*************@hotmail.com> wrote in message
news:2k*************@uni-berlin.de...
> [snip]
>
> Here's how I see it, the deepest base class ctor adds to the list, the
> deepest base class removes from the list. Nothing else happens in the
> deepest base class ctor and dtor.
>
> [snip]
>
> Seems straightforward to me, maybe you could make the problem clearer
> with some sample code.

The problem with that is the deepest base can't insert itself without
knowledge of the container. It has to call some kind of insert/remove
method. Here is some pseudo code of the system:

class ItemOne : public LinkedItem<ItemOne>
{
    Owner* owner; // cached so the destructor can find the list again
    ItemOne(Owner* owner) : owner(owner)
    {
        owner->itemOneList.Insert(this);
    }
    ~ItemOne()
    {
        owner->itemOneList.Remove(this);
    }
};

class Owner
{
    // notice that the owner can contain lists of more than one
    // different kind of object
    LinkedItemList<ItemOne> itemOneList;
    LinkedItemList<ItemTwo> itemTwoList;
};

What you're saying is to have (for example) the LinkedItem<> class do the
insert/remove, and that would be OK, except LinkedItem<> would have to be
passed a pointer to the LinkedItemList<> it will get inserted into. Not so
bad, but it has to cache that pointer in order to call Remove on
destruction. That's yet another 4 bytes per item wasted. I say wasted
because the derived class is going to cache a pointer to its container, and
the derived class is aware of which list it belongs in, so that's 4 (or 8)
redundant, wasted bytes: one pointing to the container and one pointing to
a member of the container, all in the same aggregate object.

It's a very good idea; I just have to convince myself that it's OK to waste
bytes for the benefit of the programmer (as opposed to for the benefit of
the program). I could remove the automagic insert/remove code and leave the
onus on the programmer to do it properly. I suppose for most people this is
a non-issue, but I come from the world of embedded systems programming, and
old habits die hard.
Jul 22 '05 #7

> The problem with that is the deepest base can't insert itself without
> knowledge of the container. It has to call some kind of insert/remove
> method.
>
> [snip]


'Course I could just insert an additional class...

class ItemOneAuto : public LinkedItem<ItemOne>
...

class ItemOne : public ItemOneAuto

Sometimes I get too narrowly focused!
Jul 22 '05 #8
