Take a look at this code; it looks funny because it's written to be as short
as possible:
-- code --
struct Base
{
~Base() { *((char*)0) = 0; }
};
struct Derived : public Base
{
Derived() { throw 1; }
};
int main(int,char**)
{
try { new Derived; }
catch (int) { }
return 0;
}
-- code --
Now don't get excited about the *((char*)0) = 0; I'm just using that as a
kind of compiler-independent, hard-coded breakpoint. I want to verify that
the destructor is in fact getting called without relying on a debugger-set
breakpoint, which can sometimes fail to trigger, or on code that might be
optimized out so that there is no point left to break on.
So what are we looking at?
We are creating an instance of Derived inside a try block. During
construction, Derived throws an exception, and the unwinding
mechanism calls the destructor for the base class.
My question is: Why?
Where is the logic in partially deconstructing an aggregate object that's
signaling, by way of an exception, that it cannot be constructed? If you
move the ...ahem... breakpoint to Derived's destructor, it never gets
called. It makes sense not to deconstruct an object that's not fully
constructed, so again, why partially deconstruct it?
The other thing that doesn't make sense to me is that this object is not
being created on the stack, so why would *any* destructor be called by the
unwinding mechanism? There is no call to delete anywhere in the code, so
what makes the compiler think it should be deconstructed at all?
You may have guessed that I'm having chicken-and-egg problems, and you'd be
correct. Now I *think* I can fix the problem by rearranging the order of
operations in my constructors (although I have not at this point worked out
the details, and of course that adds a degree of complexity to the code I'd
rather avoid), but I'd really like to understand the reasoning behind this.
Perhaps I'm doing something fundamentally 'wrong' and need to rethink my
design (I sure hope not!).
"Douglas Peterson" <Te******@nospam.msn.com> wrote in message
news:Ur********************@comcast.com... Take a look at this code, it looks funny as its written to be as short as possible:
-- code -- struct Base { ~Base() { *((char*)0) = 0; } };
struct Derived : public Base { Derived() { throw 1; } };
int main(int,char**) { try { new Derived; } catch (int) { } return 0; } -- code --
Now don't get excited about the *((char*)0) = 0, I'm just using that as a kind of compiler independent, hard coded breakpoint. I want to verify that the destructor is in fact getting called without relying on a debugger set breakpoint which can sometimes not occur, or the code be optimized out so that there is no point to break on.
So what are we looking at?
We are creating an instance of Derived inside an exception handler. During construction Derived throws an exception and the handler's unwinding mechanism is calling the destructor for the base class.
My question is: Why?
Because its useful. Where is the logic in partially deconstructing an aggregate object that's signaling it cannot be constructed by way of an exception? If you move the ...ahem... breakpoint to Derived's destructor, that never gets called.
Makes sense not to deconstruct an object that's not fully constructed, so again, why partially deconstruct it?
The Base object is fully constructed, do you want Base objects to be handled
differently depending on whether they are part of a larger object or not?
All fully constructed objects should be destructed. The other thing that doesn't make sense to me is that this object is not being created on the stack, so why would *any* destructor be called by the unwinding mechanism?
It's not the unwinding mechanism. Its how constructors behave when an
exception occurs in a constructor. All this is happening before stack
unwinding starts.
There is no call to delete anywhere in the code, so what makes the compiler think it should be deconstructed at all?
Because a Base object has been fully constructed. Suppose the Base object
constructor had allocated some memory which was freed in the destructor. If
the destructor were not called there would be a memory leak. You may have guessed that I'm having chicken and egg problems and you'd be correct.
Actually, I don't see why you have a problem at all.
> Now I *think* I can fix the problem by rearranging order of operations in
> my constructors ... Perhaps I'm doing something fundamentally 'wrong' and
> need to rethink my design (I sure hope not!).
I think so. Have a look at this article by Bjarne Stroustrup, which explains
how to use this language feature to write cleaner, more exception-safe code:
http://www.research.att.com/~bs/3rd_safe.pdf
john
On Thu, 24 Jun 2004 02:18:15 -0400 in comp.lang.c++, "Douglas Peterson"
<Te******@nospam.msn.com> wrote,
> Where is the logic in partially deconstructing an aggregate object
It has been partially constructed. The Base constructor has already
been called at that point, so the Base destructor needs calling.
Compare:
#include <iostream>
#include <memory>
int base_count = 0;
struct Base
{
Base() { ++base_count; }
~Base() { --base_count; }
};
struct Derived : public Base
{
Derived() { throw 1; }
};
int main(int,char**)
{
try {
std::auto_ptr<Derived> a(new Derived);
}
catch (int) { }
std::cout << base_count; // should print 0
return 0;
}
> The other thing that doesn't make sense to me is that this object is not
> being created on the stack, so why would *any* destructor be called by
> the unwinding mechanism?
The constructor doesn't know that.
> You may have guessed that I'm having chicken and egg problems and you'd
> be correct. Now I *think* I can fix the problem by rearranging order of
> operations in my constructors
What are you trying to accomplish? Remember RAII - Resource Acquisition
Is Initialization. See the usage of std::auto_ptr above.
"Douglas Peterson" <Te******@nospam.msn.com> wrote in
news:Ur********************@comcast.com: Take a look at this code, it looks funny as its written to be as short as possible:
-- code -- struct Base { ~Base() { *((char*)0) = 0; } };
struct Derived : public Base { Derived() { throw 1; } };
int main(int,char**) { try { new Derived; } catch (int) { } return 0; } -- code --
Now don't get excited about the *((char*)0) = 0, I'm just using that as a kind of compiler independent, hard coded breakpoint. I want to verify that the destructor is in fact getting called without relying on a debugger set breakpoint which can sometimes not occur, or the code be optimized out so that there is no point to break on.
Not necessarily safe. Dereferencing a NULL (or 0) pointer invokes
"Undefined Behaviour". The compiler could theoretically "optimize" out
that line too... (the compiler could also theoretically format your hard
drive too....)
So what are we looking at?
We are creating an instance of Derived inside an exception handler. During construction Derived throws an exception and the handler's unwinding mechanism is calling the destructor for the base class.
My question is: Why?
Where is the logic in partially deconstructing an aggregate object that's signaling it cannot be constructed by way of an exception? If you move the ...ahem... breakpoint to Derived's destructor, that never gets called. Makes sense not to deconstruct an object that's not fully constructed, so again, why partially deconstruct it?
Before the body of the Derived constructor can be invoked, the Base
"class" must have completed constructing (as well as all members of the
base class, as well as all members of Dervied too...). During your
Derived constructor, you throw an exception. This will cause all of the
parts of Derived that had been completely constructed (ie: the Base
portion (which also includes Base's members) and all members of Derived)
will be destructed, in the reverse order in which their constructions
completed (so it would be Derived members, Base destructor body, Base
members).
The other thing that doesn't make sense to me is that this object is not being created on the stack, so why would *any* destructor be called by the unwinding mechanism? There is no call to delete anywhere in the code, so what makes the compiler think it should be deconstructed at all?
See above.... in C++, objects don't care whether they're on the stack,
heap, shared memory, or any other location (<pedantic mode on>There is no
such thing as a stack in Standard C++</pedantic mode off>). When the
object's lifespan begins, the constructors are invoked. When the
object's lifespan ends, the destructors are invoked.
Thanks to Andre, John, and David for taking the time to reply.
After your explanations and giving it more thought, I accept that a fully
constructed object needs to be destructed. Makes perfect sense when you
look at it from outside of the problem you're having :)
Here's what the issue was and how I'm going to change it:
The objects in my system are all contained in lists. There is a root item
with a list of objects, those objects contain a list of objects and so on,
and so on.
I wanted two properties for objects that are contained in this system:
1) Objects could be created externally so that they can be further derived
by the user. That is to say you don't call Container.CreateObject() to
create one, you can 'new Object'.
2) Objects would automagically be added to and removed from the containers
they belong to.
To accomplish #2, the constructor and destructor of the object's deepest
base class were adding and removing it to/from its owner's lists. When an
exception occurs during object construction, one of two possibilities
arises, depending on the point at which the exception is thrown:
1) The object has been added to the container's list, but doesn't (can't)
remove itself when it's not fully constructed. This leads to an
attempt by the container to delete an already deleted object.
2) The object isn't yet added, but its destructor tries to unlist itself.
Number 1 can be solved by placing the 'insert myself into my owner's
container' line of code dead last in the constructor. That way, if an
exception occurs during its construction, it's not added. If the exception
occurs in a derived class, we get number 2, because the base is fully
constructed and will be destructed.
Number 2 is benign; however, my container list code asserts, because in most
cases the programmer wants to know that his code is trying to remove
something that isn't there (a potential bug).
So I'm left with a choice:
1) Remove the automagic and require all objects to be inserted after they
are constructed.
2) Remove the assertion from my list class to allow for benign attempts at
removal.
I'll elicit some responses before making a decision.
"Douglas Peterson" <Te******@nospam.msn.com> wrote in message
news:P9********************@comcast.com... Thanks to Andre, John, and David for taking the time to reply.
After your explanations and giving it more thought, I accept that a fully constucted object needs to be deconstructed. Makes perfect sense when you look at it outside of the problem your having :)
Here's what the issue was and how I'm going to change it:
The objects in my system are all contained in lists. There is a root item with a list of objects, those objects contain a list of objects and so on, and so on.
I wanted two properties for objects that are contained in this system:
1) Objects could be created externally so that they can be further derived by the user. That is to say you don't call Container.CreateObject() to create one, you can 'new Object'. 2) Objects would automagically be added and removed to/from the containers they belong to.
To accomplish #2, the constructors of the object's deepest base class were adding and removing themselves to their owner's lists. When an exception occurs during object construction, one of two possibilities occurs
depending on what point the exception is thrown:
1) The object has been added to the container's list, but doesn't (can't) remove itself when its not getting fully constructed. This leads to an attempt by the container to delete an already deleted object. 2) The object isn't yet added, but its destructor tries to unlist itself.
Number 1 can be solved by placing the 'insert myself into my owner's container' line of code dead last in the constructor. That way if an exception occurs during its construction, its not added. If the exception occurs in a derived class, we get number 2 because the base is fully constructed and will be destructed. Number 2 is benign, however, my container list code asserts because in
most cases the programmer wants to know that his code is trying to remove something that isn't there (a potential bug).
So I'm left with a choice:
1) Remove the automagic and require all objects to be inserted after they are constructed. 2) Remove the assertion from my list class to allow for benign attempts at removal.
I'll elicit some respones before making a descision.
I don't understand the problem; it seems to me that C++'s rules are helping
you, but you seem to think that they hinder you.
Here's how I see it: the deepest base class ctor adds to the list, the
deepest base class dtor removes from the list. Nothing else happens in the
deepest base class ctor and dtor.
So when an object is constructed, the first thing to happen is that the
deepest base class ctor is called. If it constructs successfully, then the
object is on a list, and the deepest base class dtor will remove it from
that list should an exception be thrown later during construction. If by
chance an exception happens while adding to the list, then the object isn't
in the list, but the dtor won't be called, because the deepest base class
wasn't fully constructed. Isn't that exactly what you want?
Seems straightforward to me; maybe you could make the problem clearer with
some sample code.
john
"John Harrison" <jo*************@hotmail.com> wrote in message
news:2k*************@uni-berlin.de... "Douglas Peterson" <Te******@nospam.msn.com> wrote in message news:P9********************@comcast.com... Thanks to Andre, John, and David for taking the time to reply.
After your explanations and giving it more thought, I accept that a
fully constucted object needs to be deconstructed. Makes perfect sense when
you look at it outside of the problem your having :)
Here's what the issue was and how I'm going to change it:
The objects in my system are all contained in lists. There is a root
item with a list of objects, those objects contain a list of objects and so
on, and so on.
I wanted two properties for objects that are contained in this system:
1) Objects could be created externally so that they can be further
derived by the user. That is to say you don't call Container.CreateObject() to create one, you can 'new Object'. 2) Objects would automagically be added and removed to/from the
containers they belong to.
To accomplish #2, the constructors of the object's deepest base class
were adding and removing themselves to their owner's lists. When an exception occurs during object construction, one of two possibilities occurs depending on what point the exception is thrown:
1) The object has been added to the container's list, but doesn't
(can't) remove itself when its not getting fully constructed. This leads to an attempt by the container to delete an already deleted object. 2) The object isn't yet added, but its destructor tries to unlist
itself. Number 1 can be solved by placing the 'insert myself into my owner's container' line of code dead last in the constructor. That way if an exception occurs during its construction, its not added. If the
exception occurs in a derived class, we get number 2 because the base is fully constructed and will be destructed. Number 2 is benign, however, my container list code asserts because in most cases the programmer wants to know that his code is trying to remove something that isn't there (a potential bug).
So I'm left with a choice:
1) Remove the automagic and require all objects to be inserted after
they are constructed. 2) Remove the assertion from my list class to allow for benign attempts
at removal.
I'll elicit some respones before making a descision.
I don't understand the problem, its seems to me that C++'s rules are
helping you, but you seem to think that they hinder you.
Here's how I see it, the deepest base class ctor adds to the list, the deepest base class removes from the list. Nothing else happens in the deepest base class ctor and dtor.
So when an object is constructed the first thing to happen is that the deepest base class ctor is called. If it constructs successfully then the object is on a list and the deepest base class dtor will remove it from
that list should an exception be thrown later during construction. If by chance an exception happens while adding to the list, then the object isn't in
the list but the dtor won't be called because the deepest base class wasn't fully constructed. Isn't that exactly what you want?
Seems straightforward to me, maybe you could make the problem clearer with some sample code.
john
The problem with that is the deepest base can't insert itself without
knowledge of the container. It has to call some kind of insert/remove
method. Here is some pseudo code of the system:
class ItemOne : public LinkedItem<ItemOne>
{
Owner* owner;
ItemOne(Owner* owner) : owner(owner)
{
owner->itemOneList.Insert(this);
}
~ItemOne()
{
owner->itemOneList.Remove(this);
}
};
class Owner
{
// notice that the owner can contain lists of more than one kind of object
LinkedItemList<ItemOne> itemOneList;
LinkedItemList<ItemTwo> itemTwoList;
};
What you're saying is: have (for example) the LinkedItem<> class do the
insert/remove, and that would be OK, except LinkedItem<> would have to be
passed a pointer to the LinkedItemList<> it will get inserted into. Not so
bad, but it has to cache that pointer in order to call Remove on
destruction. That's yet another 4 bytes per item wasted. I say wasted
because the derived class is going to cache a pointer to its container, and
the derived class is aware of which list it belongs in, so that's 4 (or 8)
redundant, wasted bytes: one pointing to the container and one pointing to a
member of the container, all in the same aggregate object.
It's a very good idea; I just have to convince myself that it's OK to waste
bytes for the benefit of the programmer (as opposed to for the benefit of
the program). I could remove the automagic insert/remove code and leave the
onus on the programmer to do it properly. I suppose for most people this is
a non-issue, but I come from the world of embedded systems programming and
old habits die hard.
> What your saying, is have (for example) the LinkedItem<> class do the
> insert/remove ... I suppose for most people this is a non-issue, but I
> come from the world of embedded systems programming and old habits die
> hard.
Course I could just insert an additional class...
class ItemOneAuto : public LinkedItem<ItemOne>
...
class ItemOne : public ItemOneAuto
Sometimes I get too narrowly focused!