Bytes IT Community

Dispose / Destructor Guidance (long)

I am in search of a comprehensive methodology for using these two
object cleanup approaches to get rid of a number of bugs,
unpleasantries, and cleanup-ordering issues we currently have in our
4-month-old C#/MC++ .NET project.

I'd like to thank in advance anyone who takes the time to read and/or
respond to this message. At a couple of points, it may seem like a rant
against C# / .NET, but we are pretty firmly stuck with this approach
and I am simply trying to find solutions which balance all the
different goals and criteria we have for project quality.

First, a little background:
a.)
We're all previously C++ programmers with lots of experience with
ref-counted, synchronized auto-ptrs and things like that. We bought
into .NET because of its remoting advantages over COM, not because of
the notion that it makes resource management easier. We _knew_ it made
non-memory resource management harder, but decided it was worth it.

b.)
We happen to be doing a lot of non-memory resource management in C#.

Some of this is implemented in MC++ classes which contain unmanaged
3rd party resources. These 3rd party handles have ordering constraints
requiring all resources of one type to be released before resources of
a different type can be released. For example, all image handles must
be released before the API handle can be released.

We're also doing things which IMHO make a lot of sense, but puts our
face right in the door of .NET so to speak; like, for example, having
an object contain a thread as a member variable which it stops as part
of its destructor. (More on why this blows up further down)

c.)
We're currently using the following IDisposable pattern for any object
which we think needs to have its lifetime managed explicitly (because
it contains large resources, for example).

class A : IDisposable
{
    ~A()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose( bool bExplicitDispose )
    {
        if( bExplicitDispose )
        {
            // Cleanup managed resources...
            // This means, call Dispose() on all your "owned" members which
            // implement it. Ownership is kind of a touchy subject in C#, but
            // IDisposable forces it to be considered.
        }

        // Cleanup unmanaged resources.
        // This means, delete any _truly_ unmanaged resources you have
        // (like __nogc pointers) for a MC++ class. This also means performing
        // actions on managed objects other than calling Dispose, though.
        // For example, a Thread is a managed object, but this is an
        // appropriate time to do a controlled stop on it.
    }
}

Hopefully everyone will be familiar with this basic pattern as IMO it
is pretty common and popularly recommended by the so-called experts.
I've tried to document our interpretation of it pretty well, however,
because most of the examples I saw looked more like this:

protected virtual void Dispose( bool bDisposing )
{
    if( bDisposing )
    {
        // Cleanup managed resources
    }

    // Unmanaged resources
}

Not only was the guidance poor on what to actually do in these blocks,
but the variable name bDisposing, used in 90% of the example code, is
fairly confusing given that you are inside a function called
Dispose().

The initial guidance from literature we had read was that only certain
objects needed to have an IDisposable, and that IDisposable was really
against the .NET way of doing things. The above pattern supported this
by providing a client-side optimization choice to Dispose of an object
or not, but the code would basically work either way. The destructor
is implemented using Dispose so that clients not concerned with the
optimization get the same effect at some unspecified time later.

Since then, I've read articles discussing some of the problems with
properly implementing destructors which argue that every object
should have an IDisposable interface on it, and that clients should
_always_ call this when they are done using an object. This changes
the intent of IDisposable from an optimization for special
circumstances to the everyday-preferred technique; in this case, the
role of the destructor is to guard against a careless programmer
forgetting to call Dispose.

The difference may be subtle, but it changes the roles of the two
functions fairly significantly IMO.

d.)
We designed a singleton pattern around our understanding of the way
the GC worked using the following concept.

class ASingleton
{
    public static ASingleton Instance
    {
        get
        {
            // Assorted locking, and allocation of the singleton variable.
            // ...

            return m_singleton;
        }
    }

    public static void Release()
    {
        m_singleton = null;
    }

    ~ASingleton()
    {
        // Stuff
        // ...
    }

    private static ASingleton m_singleton = null;
}

The implementation of the Release method was the key to this concept.
Rather than disposing the singleton, we simply set the singleton
reference to null. From that point on, any client requesting the
singleton would get a null reference and would be expected to handle
that situation accordingly. We thought this would resolve an issue
which is fairly hard to address in C++ (without the support of
ref-counted auto-ptrs, at least): singleton lifetimes near the end of
the program lifetime.

For example... Consider the following code:
void Cleanup()
{
    ASingleton aSingleton = ASingleton.Instance;

    // If the singleton is still alive, then use it... otherwise
    // the program must be going down and we can't use it
    if( aSingleton != null )
    {
        aSingleton.DoSomething();
    }
}

The singleton pattern we developed guaranteed that if the if-test
succeeded, the call to DoSomething() would have a valid object to work
on. There were no timing issues introducing the possibility that the
singleton could be destructed between the if-check and the function
call, because the local reference inside the function prevented that
problem.

Of course, it's still possible for clients to abuse this power:

void Cleanup()
{
    // If the singleton is still alive, then use it... otherwise
    // the program must be going down and we can't use it
    if( ASingleton.Instance != null )
    {
        ASingleton.Instance.DoSomething();
    }
}

This was a potential bug we believed code reviews would catch.

e.)
For the 3rd party resources I mentioned before, we bundled them into
MC++ classes which derive from C# classes that are the primary
currency of the application. Say we have two classes, ImageAPI and
Image. The OEM Imaging Library requires that all OEM image handles are
released before releasing the ImageAPI. To enforce this, each Image
object contains a reference to the ImageAPI. It happens to need this
_anyway_ so it can perform error checking on operations, including
the operations it needs to do inside of its destructor / IDisposable
interface.

The idea was that as long as Images were in the system which hadn't
been destructed, the ImageAPI couldn't be destructed. As each Image
was destructed, its reference to the ImageAPI would be removed and
eventually no Image object would contain a reference to the ImageAPI.
If the ImageAPI Singleton reference was also gone, then the ImageAPI
would also be disposed.
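A minimal sketch of that keep-alive idea, expressed with explicit counting rather than GC references (the ImageAPI/Image names are from above; the counting logic and the Released flag are illustrative assumptions, standing in for the real handle release):

```csharp
using System;

class ImageAPI : IDisposable
{
    private readonly object m_lock = new object();
    private int m_liveImages = 0;          // Image objects not yet disposed
    private bool m_releaseRequested = false;
    private bool m_released = false;

    public bool Released { get { return m_released; } }

    internal void AddImage()    { lock (m_lock) { m_liveImages++; } }

    internal void RemoveImage()
    {
        lock (m_lock) { m_liveImages--; TryRelease(); }
    }

    public void Dispose()
    {
        lock (m_lock) { m_releaseRequested = true; TryRelease(); }
    }

    private void TryRelease()
    {
        // Enforces the OEM rule: the API handle goes away only after
        // every image handle has been released.
        if (m_releaseRequested && m_liveImages == 0 && !m_released)
            m_released = true;   // the real code would free the API handle here
    }
}

class Image : IDisposable
{
    private readonly ImageAPI m_api;
    public Image(ImageAPI api) { m_api = api; m_api.AddImage(); }
    public void Dispose()      { m_api.RemoveImage(); }
}
```

The point of counting explicitly is that the ordering no longer depends on the GC: disposing the ImageAPI before the last Image merely records the request, and the last Image to go triggers the actual release.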

Problems:
a.)
The IDisposable pattern I wrote up earlier has a huge hole in it. The
hole is that because order of destructor calls is indeterminate, it is
not possible for an object to safely reference any of its own member
variables which are references.

Just to be clear on this, I'll provide a short example:

using System.Threading;

class A
{
    ~A()
    {
        m_event.Set();
    }

    ManualResetEvent m_event = new ManualResetEvent(false);
}

If possible, I'd prefer to not delve into a discussion of whether this
code is necessary, or go to the trouble of creating a realistic
example where it is necessary and/or useful to access member variables
in an object's destructor call.

It is my understanding based on a reading of the standard and from
debugging existing problems, that there is no guarantee that this code
will execute correctly. In fact, the problem I frequently see is an
ObjectDisposedException coming from the member variable when I try to
access it.

My initial understanding was that once an object could not be
referenced anymore, it was available for Finalization (the call of its
destructor) and Collection (the release of its memory). As a subtle
point, this is different from saying the object has no more references
to it. This is to support the cyclical dependency case where two
objects hold back-references to each other; if those two objects
are the only path to each other, then both are available for
Finalization and Collection.

However, this is not what is indicated by the standard. The standard
explicitly says that objects are available for Finalization once it is
not possible to access them via any mechanism ___other than destructor
calls___. This means, in the cyclical back-reference example I
mentioned a second ago, one object can still reference the other in
its destructor and there is no way to ensure or check that the other
object has not been destructed already.

CS Spec (10.9.2):
"If the object, or any part of it, cannot be accessed by any possible
continuation of execution, ___other than the running of
destructors___, the object is considered no longer in use, and it
becomes eligible for destruction."

Right after this specification, an example is given of how an object
(B) can bring another object (A) back to life by taking its stored
reference to the second object and placing it in some global location
(Test) as part of its destructor logic. This prevents the second
object (A) from being Collected, but since there is no ordering
constraint on when the two objects (A & B) are destructed, there is no
way to ensure that the object which is now alive and has not been
collected is actually useful. All of its members may have already been
destructed. The language specification even says (p. 87):

"In the above program, if the garbage collector chooses to run the
destructor of A before the destructor of B, then the output of this
program might be:"

"To avoid confusion and unexpected behavior, it is generally a good
idea for destructors to only perform cleanup on data stored in their
object's own fields, and not to perform any actions on referenced
objects or static fields."
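One way to live within that restriction, sketched below, is to give each raw handle its own tiny finalizable wrapper: the finalizer then only ever touches the object's own IntPtr field, which is exactly what the spec says is safe. (The NativeImageHandle name and the commented-out native call are illustrative assumptions; .NET 2.0 later formalized this shape as SafeHandle.)

```csharp
using System;

sealed class NativeImageHandle : IDisposable
{
    private IntPtr m_handle;

    public NativeImageHandle(IntPtr handle) { m_handle = handle; }

    public bool IsClosed { get { return m_handle == IntPtr.Zero; } }

    // Safe per the spec's advice: reads and writes only this object's
    // own value-type field, never another managed object.
    ~NativeImageHandle() { ReleaseHandle(); }

    public void Dispose()
    {
        ReleaseHandle();
        GC.SuppressFinalize(this);
    }

    private void ReleaseHandle()
    {
        if (m_handle != IntPtr.Zero)
        {
            // OemReleaseImage(m_handle);   // hypothetical native call
            m_handle = IntPtr.Zero;
        }
    }
}
```

A composite object then owns several such wrappers and disposes them in the right order from its own Dispose(bool); its finalizer can be empty, because each wrapper's finalizer covers the leak case on its own.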

This seems like a colossal restriction on the role of destructors
which has been glossed over by every C# book I've read (up to about 4
by this point). Furthermore, I'm surprised it's not a bigger
issue for large, complex projects with demanding requirements for
object interaction. I would have expected to see significantly more
griping about this problem than I have; the last thread to
significantly address the issue was back in December of 2002 and was
primarily talking about limitations of beta 1.

b.)
The same problem that caused the Dispose pattern to fall on its face
caused the Singleton pattern to fall on its face. If we can't depend
on member variables to be valid inside of a destructor call, then we
can't depend on object references inside of an Image class to prevent
destruction of an ImageAPI class.

Conclusions:
I honestly don't have very many. Two elegant designs which seemed
like they would have worked perfectly in a large-scale software
environment, and avoided some of the problems I've encountered in C++
development in the past, are apparently impossible. I'm not
necessarily saying the design of the Garbage Collector is incorrect
(it's certainly not a bug; it appears to meet all the specifications
of the language spec)... but I'm not convinced it's correct either.
The ability to access member references would have been pretty high on
my list of requirements for a GC implementation.

One of the points I've seen some argue is to proliferate the
IDisposable interface on every single object and require clients to be
extremely diligent in calling this function. I personally think
this obviates most of the advantage of having a GC implementation to
begin with; it is painfully similar to having to explicitly delete
every object as in C++. Of course, in modern C++ we have auto-ptrs
which manage this for us, and deletes are a rare occurrence.

The using keyword (another keyword I still believe was an afterthought,
added when they realized how painful non-memory resource management
was going to be) probably makes this easier for most people, but we
have severe exception-safety requirements which require us to handle
the case where object Dispose functions throw... please don't make me
go into this, but the using keyword is off-limits for us until MS
makes some other enhancements to the language.
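For the record, the expansion that using performs can be written by hand with the throwing-Dispose case handled explicitly; a sketch (MemoryStream is just a stand-in for any IDisposable):

```csharp
using System;
using System.IO;

static class ResourceScope
{
    public static bool UseResource()
    {
        MemoryStream res = new MemoryStream();   // stand-in IDisposable
        try
        {
            res.WriteByte(1);                    // ... use the resource ...
            return true;
        }
        finally
        {
            try
            {
                res.Dispose();
            }
            catch (Exception ex)
            {
                // A throwing Dispose must not mask an exception that may
                // already be unwinding the stack; log it instead.
                Console.Error.WriteLine("Dispose failed: " + ex.Message);
            }
        }
    }
}
```

The inner try/catch in the finally block is exactly what the compiler-generated using expansion does not give you, which is the gap described above.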

Another problem with propagating the IDisposable interface to every
object is that it becomes increasingly hard to decide when it's safe
to call it. Explicitly calling Dispose, or a using statement, works
fine when an object is allocated inside a function, used, and
destroyed in that same function. In a more complex environment where
the object may be stored in multiple places, or a multi-threaded
environment where multiple threads must get their chance to access it,
the problem becomes much more difficult.

If we have objects floating around the system being used in an
asynchronous fashion by multiple threads, we want the objects to be
released sometime after all threads are done with them. We understand
that it doesn't have to be done ASAP, and that the GC will do it
whenever it pleases unless we call GC.Collect (which we do when
necessary).

If you remove the ability to do useful work in a destructor, and move
the cleanup logic from the destructor to the IDisposable interface,
then every object in the system needs an explicit refcount so we can
keep track of _when_ to call Dispose on it. Again, in C++ we'd have a
templatized auto-ptr to do the dirty work; in C#, we have to write
tons of agonizing code to increment and decrement reference counts,
and we have to do it in a way that is 100% exception-safe, or our
object will never be properly cleaned up.
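A sketch of the kind of wrapper that auto-ptr habit suggests (assuming C# 2.0 generics; on 1.x it would hold a plain IDisposable). The last holder to call Release disposes the target:

```csharp
using System;
using System.Threading;

sealed class RefCounted<T> where T : class, IDisposable
{
    private readonly T m_target;
    private int m_count = 1;              // the creator holds the first reference

    public RefCounted(T target) { m_target = target; }

    public T Target { get { return m_target; } }

    public void AddRef() { Interlocked.Increment(ref m_count); }

    public void Release()
    {
        // Thread-safe: exactly one caller observes the transition to zero,
        // so Dispose runs exactly once, deterministically.
        if (Interlocked.Decrement(ref m_count) == 0)
            m_target.Dispose();
    }
}
```

This centralizes the "who is the final owner" question in one place, but every code path that stores or forwards the wrapper must still pair AddRef with Release in an exception-safe way, which is the agonizing part described above.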

The conclusion I came to within a couple of months of working with C#
is that it is only safe to call Dispose() on an object when you are
100% certain that you are the only code referencing it. This points
directly to the stupidity of using Dispose in this case to begin
with... if you have to be certain you're the only one referencing it,
then why call Dispose in the first place? Just release your reference
and let the GC collect it. That way, you avoid having thousands of
latent assumptions about when object lifetimes are over. Every time a
new piece of logic is added in one place, you don't have to check
every Dispose() call in the rest of the code to see if the new code is
now the final owner of the object.

The other conclusion I might be willing to draw is that
"ObjectDisposedException" may soon be the predominant software bug in
.NET, just like the memory access violation is in C++. In C++, you
get that error when you access a memory location you don't own;
usually (in my experience) because you're accessing a dangling
pointer which used to refer to a valid object. C# / .NET purports
to get rid of this very common problem (and all the reference count
management often needed to get rid of it), but in reality it just
changes it from a crash into an exception. Better, perhaps, but still
far from correct.

Also:
I'd honestly like to know if I'm extremely off-base in my usage of the
language or in the problems I'm running into. I may be a somewhat
special case because of the application domain (hint: it's not web
servers or business software) and the nature of resource ownership
that is sometimes associated with it. But I think nearly all of the
problems I've been running into would be applicable to all problem
domains to varying extents.

I'd also love to know that there's some future MS patch which will
allow me to be confident that an object's member references haven't
been destructed before said object... but I don't think that's gonna
happen.

Thanks,
-ken
Nov 15 '05 #1
11 Replies


I'm surprised someone hasn't responded yet... guess I can take a crack
at a little of this. I don't think I can solve your problem in full,
but I have a few ideas and a few questions that I'll put inline.
"Ken Durden" <cr*************@hotmail.com> wrote in message
news:18**************************@posting.google.com...
[snip: quoted background a.)]
This may be a tad bit of the problem. From what I've read, you seem to
be using a few C++'isms that don't translate well.
[snip: quoted background b.)]
Something that is important here is when the resources are allocated
and deallocated. I'm going to assume, for the moment anyway, that the
core of your app allocates your API handle. Does a given class or
method always allocate its own image handle for its purposes, or do
you have a pool or shared collection of image handles?
[snip: quoted background c.), including the Dispose pattern code]
As far as IDisposable goes, I recommend only implementing it when you
have non-managed resources that need to be cleaned up, such as your
image handle. The question of which classes need IDisposable is a
matter of design in your specific application, but I'm quite sure not
every class needs to implement it.
[snip: quoted background d.), the singleton pattern]
As you noted before, IDisposable breaks that. Similar problems exist,
in theory at least, with events and delegates. Although it's off
topic, this may be important for you as well.
When you request the delegate list from an event handler to make a
call, a pattern such as this is often used:

void RaiseEvent()
{
    if (myEvent != null)
        myEvent();
}

However, that is prone to myEvent becoming null between the check and
the call. So a pattern like this can be used instead:

void RaiseEvent()
{
    Delegate invoke = myEvent;
    if (invoke != null)
        invoke.DynamicInvoke(null);
}

However, you then risk calling into a disposed object.

That problem can be solved with event properties and locks, if need
be, though it's still not too pretty.
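The event-property-plus-lock variant mentioned above could look like this sketch (Publisher and Changed are illustrative names):

```csharp
using System;

class Publisher
{
    private EventHandler m_changed;
    private readonly object m_gate = new object();

    // Event property: add/remove are serialized against Raise's snapshot.
    public event EventHandler Changed
    {
        add    { lock (m_gate) { m_changed += value; } }
        remove { lock (m_gate) { m_changed -= value; } }
    }

    public void Raise()
    {
        EventHandler snapshot;
        lock (m_gate) { snapshot = m_changed; }   // stable copy under the lock
        if (snapshot != null)
            snapshot(this, EventArgs.Empty);      // may still reach a subscriber
                                                  // that was disposed after the copy
    }
}
```

Note the lock only protects the subscription list; as the comment says, it cannot stop a subscriber from being disposed between the snapshot and the invocation, which is the residual ugliness.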
[snip: quoted background e.) and Problems a.)]
Actually, if I recall properly, there is no guarantee your finalizer
will ever even be called. When and if a finalizer is called is totally
up to the GC.
[snip: quoted discussion of the finalization rules in the C# spec]
With careful design, it's not too much trouble. Actually, for business
apps at least, 90% of your code doesn't even need explicit cleanup.
What does (usually DB handles, file handles, perhaps business-specific
resources) can be handled with IDisposable, as those are usually only
used in a specific realm of the application. Objects that do require
unmanaged resource cleanup are expected to be capable of cleaning
themselves, and only themselves, up in their finalizers.
For things like your API handle, sometimes you have to rely on your
client code to have cleaned up properly.
To achieve this, at application shutdown time, I generally clean up
all client code, abort all threads, etc. before beginning to destroy
my system-provided singletons.
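That shutdown sequence can be sketched as a single routine (the worker/singleton lists and the 5-second join timeout are illustrative assumptions, not a prescribed API):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

static class AppShutdown
{
    public static void Run(List<Thread> workers, List<IDisposable> singletons)
    {
        // 1. Stop all client threads first, so nothing touches shared state
        //    while it is being torn down.
        foreach (Thread t in workers)
        {
            if (!t.Join(5000))   // cooperative stop assumed; timeout is arbitrary
                t.Abort();       // last resort, as described above
        }

        // 2. Dispose singletons in reverse creation order, so dependents
        //    (e.g. Images) go before the things they depend on (ImageAPI).
        for (int i = singletons.Count - 1; i >= 0; i--)
            singletons[i].Dispose();
    }
}
```

Because everything here is ordinary deterministic code running before the GC gets involved, none of the finalization-ordering problems from the original post apply to this path.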
[snip: quoted Problems b.), Conclusions, and the using-keyword discussion]
On this I agree; I don't like using the using keyword, but explicit
disposal of objects when they are no longer needed is ideal. However,
I doubt that MS will ever modify the using keyword in any manner that
changes exception behaviour; doing so would probably break far too
much code.
Another problem with propagating the IDisposable interface to every
object is that it becomes increasingly hard to decide when it's safe to
call it. Explicitly calling Dispose, or a using statement, works fine
when an object is allocated inside a function, used, and destroyed in
that same function. In a more complex environment where the object may
be stored in multiple places, or a multi-threaded environment where
multiple threads must get their chance to access it, the problem
becomes much more difficult.

If we have objects floating around the system being used in an
asynchronous fashion by multiple threads, we want the objects to be
released sometime after all threads are done with them. We understand
that it doesn't have to be done ASAP, and that the GC will do it
whenever it pleases unless we call GC.Collect (which we do when
necessary).

If you remove the ability to do useful work in a destructor, and move
the cleanup logic from the destructor to the IDisposable interface,
then every object in the system needs an explicit refcount so we can
keep track of _when_ to call Dispose on it. Again, in C++ we'd have a
templatized auto-ptr to do the dirty work; in C#, we have to write
tons of agonizing code to increment and decrement reference counts,
and we have to do it in a way that is 100% exception-safe, or our
object will never be properly cleaned up.
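The bookkeeping described above can be sketched as a hand-rolled, thread-safe reference count; this is a minimal illustration, not code from the project (class and member names are hypothetical):

```csharp
using System;
using System.Threading;

public class RefCountedResource : IDisposable
{
    private int refCount = 1;     // the creator holds the first reference
    private bool released = false;

    // Every additional holder must call AddRef before storing the reference.
    public void AddRef()
    {
        Interlocked.Increment(ref refCount);
    }

    // Dispose releases one reference; the final release frees the resource.
    public void Dispose()
    {
        if (Interlocked.Decrement(ref refCount) == 0)
        {
            released = true;
            // real cleanup of the non-memory resource would go here
        }
    }

    public bool IsReleased { get { return released; } }
}
```

Every holder would still need to pair AddRef with Dispose in a try/finally to stay exception-safe, which is exactly the agonizing bookkeeping being objected to here.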

The conclusion I came to within a couple of months of working in C# is
that it is only safe to call Dispose() on an object when you are 100%
certain that you are the only code referencing it. This points directly
to the stupidity of using Dispose in this case to begin with... if you
have to be certain you're the only one referencing it, then why call
Dispose at all? Release your reference to it and let the GC collect it.
That way, you avoid having thousands of latent assumptions about when
object lifetimes are over. Every time a new piece of logic is added in
one place, you don't have to check every Dispose() call in the rest of
the code to see if the new code is now the final owner of the object.

The other conclusion I might be willing to draw is that
ObjectDisposedException may soon be the predominant software bug in
.NET, just as the memory access violation is in C++. In C++, you get
that error when you access a memory location you don't own. Usually (in
my experience), this is because you're accessing a dangling pointer
which used to refer to a valid object. C# / .NET purports to get rid of
this very common problem (and all the reference count management often
needed to avoid it), but in reality it just changes it from a crash
into an exception. Better, perhaps, but still far from correct.
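For reference, the exception in question typically arises from the standard disposed-flag guard that disposable classes implement; a minimal sketch (the class name is hypothetical):

```csharp
using System;

public class GuardedResource : IDisposable
{
    private bool disposed = false;

    public void DoWork()
    {
        // The guard that turns a "dangling" disposed reference into
        // an exception instead of a crash.
        if (disposed)
            throw new ObjectDisposedException(GetType().FullName);
        // normal work here
    }

    public void Dispose()
    {
        disposed = true;
    }
}
```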

Also:
I'd honestly like to know if I'm extremely off-base in my usage of the
language or in the problems I'm running into. I think I may be a
somewhat special case because of the application domain (hint: it's not
web servers or business software) and the nature of resource ownership
that is sometimes associated with it. But I think nearly all of the
problems I've been running into would be applicable to all problem
domains to varying extents.

I'd also love to know that there's some future MS patch which will
allow me to be confident that an object's member references haven't
been destructed before said object... but I don't think that's gonna
happen.

Ok, now time for my current idea, assuming I understood everything you
said and overlooked nothing (you'll have to judge that; it is your
project after all). This doesn't exactly cover everything, but it
should allow you to be sure your subordinate classes have been cleared
before your controller class is. Using your image handle example again,
assuming:
1) Your ImageAPI handle class creates your Image handles (I suspect it
should).
2) Your Image handle class can destroy itself without accessing other
classes.
3) After each Image handle is created, there is no way to retrieve that
image directly from the ImageAPI class.

Now, before I describe this possible solution, I need to explain weak
references. There is a class, aptly named WeakReference, that can store
a reference to an object weakly; that is, the GC doesn't consider it a
valid reference to the object and will collect the object even if a
weak reference exists. The class provides three properties you need:
IsAlive, which returns a boolean specifying whether the object is alive
or not; Target, which contains the object reference itself; and
TrackResurrection, which specifies whether the object is tracked until
finalization or until the actual collection occurs.

What I would do is store a weak reference to every image handle I
create. Then, when I need to know if it's safe to tear down my ImageAPI
class, all I have to do is scan the weak references and see if any are
still alive. If any are, then they have not been finalized/collected
and must be considered live. If need be, the condition could be handled
by throwing an exception from the ImageAPI Dispose method saying that
the class cannot be disposed because Image handle classes still exist,
or by calling Image.Dispose via the weak references (you have to be
careful, though: the image may be unreachable but unfinalized and still
be considered alive). You could alternately provide an IsDisposable
property that scans the weak reference list and returns a boolean that
specifies whether it's safe to dispose the current class.
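A minimal sketch of the scheme just described, using the ImageAPI/Image split from earlier in the thread (the handle here is a placeholder object rather than a real OEM handle):

```csharp
using System;
using System.Collections;

public class ImageAPI : IDisposable
{
    // One weak reference per handle handed out; the GC is free to
    // collect a handle even while its WeakReference sits in this list.
    private ArrayList handleRefs = new ArrayList();

    public object CreateImageHandle()
    {
        object handle = new object(); // stands in for a real OEM image handle
        handleRefs.Add(new WeakReference(handle));
        return handle;
    }

    // Safe to tear down only when no handle is still alive.
    public bool IsDisposable
    {
        get
        {
            foreach (WeakReference wr in handleRefs)
                if (wr.IsAlive)
                    return false;
            return true;
        }
    }

    public void Dispose()
    {
        if (!IsDisposable)
            throw new InvalidOperationException(
                "Image handles still alive; dispose them first.");
        // release the underlying API handle here
    }
}
```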
In essence, I suspect that you will have to design your resource
management around .NET ideas rather than C++ ones. Finalizers aren't
worth very much in the C# world; they reduce performance and often
provide little benefit except to catch situations where Dispose wasn't
called. IDisposable, while not the prettiest thing in existence,
provides a rather simple pattern for unmanaged resource disposal.
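The pattern referred to here is the standard Dispose(bool) idiom from the .NET design guidelines; condensed, it looks like this (ResourceHolder is a placeholder name, and the IsDisposed property is an addition for illustration):

```csharp
using System;

public class ResourceHolder : IDisposable
{
    private bool disposed = false;

    public bool IsDisposed { get { return disposed; } }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // cleanup done; no finalizer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed)
            return;
        if (disposing)
        {
            // free other managed IDisposables here
        }
        // free unmanaged resources here
        disposed = true;
    }

    ~ResourceHolder()
    {
        // safety net: only unmanaged cleanup is safe from the finalizer
        Dispose(false);
    }
}
```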

I hope some of my comments at least helped you figure out some of your
problems, or at least shed some light on possible alternatives. Thanks,
-ken

Nov 15 '05 #2

Ken Durden <cr*************@hotmail.com> wrote:

<snip>

Rather than replying to each point, I have a few general points:

o I assume when you talk about implementing IDispose for every type,
you actually only mean those which have some reference to an unmanaged
resource. You certainly don't need to implement IDispose for a type
which only has fields of (say) some strings, ints, DateTimes and
references to similar types.

o You're right about the role of IDisposable - the idea is that clients
still *do* call Dispose rather than leaving it up to the GC. I suspect
it wouldn't be uncommon for the destructor to also log a debug message
saying that it has been run and that therefore a developer has missed
out a call to Dispose.

o A point about using references within a destructor: other objects may
have had their destructor called, but as far as I can see they won't
have been garbage collected. You can therefore use references to any
type which doesn't have a destructor. Whether this helps you or not, I
don't know.

o It sounds like your tree is acyclic, and would be fine with reference
counting. I suggest you might want to at least *consider* using a
reference-counting Dispose: basically if *all* clients *always* call
Dispose, then you can use Dispose to decrement a reference count. You'd
need to *increment* the reference count manually with another method,
so it wouldn't be pretty, but I *think* this would help to solve the
problems you're having. Basically I don't think you'll get a pretty
solution to this, because it sounds like you've got legacy stuff which
is assuming that the RAII idiom is available, and it's basically not in
.NET...

o I didn't quite follow why your singleton pattern is failing - could
you explain again?

o Another point about the singleton pattern: are you aware that double-
checked locking doesn't work in .NET? You should either use type
initialization to ensure the singleton nature, or explicitly lock on
every call to Instance. Have a look at
http://www.pobox.com/~skeet/csharp/singleton.html for more information
about this.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #3

Jon Skeet <sk***@pobox.com> wrote in message news:<MP************************@news.microsoft.com>...

o I assume when you talk about implementing IDispose for every type,
you actually only mean those which have some reference to an unmanaged
resource. You certainly don't need to implement IDispose for a type
which only has fields of (say) some strings, ints, DateTimes and
references to similar types.
Correct. I don't need to implement IDisposable for a class with
strings, ints, and DateTimes. However, I believe I still need to
implement IDisposable for objects containing other managed types which
I need to perform some type of cleanup on. See my other thread for
examples, but stopping a thread is a classic one.

o You're right about the role of IDisposable - the idea is that clients
still *do* call Dispose rather than leaving it up to the GC. I suspect
it wouldn't be uncommon for the destructor to also log a debug message
saying that it has been run and that therefore a developer has missed
out a call to Dispose.
Just for the record, I wasn't advocating this, but simply mentioning
it as an argument which I've seen on this ng.

I personally believe this is insane, but if this is truly the ".NET
way of doing things" which is accepted by the community of programmers,
then I'll have to change a lot of things about our development
practices.

o A point about using references within a destructor: other objects may
have had their destructor called, but as far as I can see they won't
have been garbage collected. You can therefore use references to any
type which doesn't have a destructor. Whether this helps you or not, I
don't know.
This may help to some extent, because it could allow me to write
Destructor logic which conditionally does the stuff it needs to do
based on whether the objects it needs to do that stuff are still
non-collected (which they should always be), non-destructed and
non-disposed. Since destructors usually call Dispose, I would simply
publicize IsDisposed as a property on every class which comes off of
IDisposable.

o It sounds like your tree is acyclic, and would be fine with reference
counting. I suggest you might want to at least *consider* using a
reference-counting Dispose: basically if *all* clients *always* call
Dispose, then you can use Dispose to decrement a reference count. You'd
need to *increment* the reference count manually with another method,
so it wouldn't be pretty, but I *think* this would help to solve the
problems you're having. Basically I don't think you'll get a pretty
solution to this, because it sounds like you've got legacy stuff which
is assuming that the RAII idiom is available, and it's basically not in
.NET...
We considered this technique a long time ago and dismissed it as being
too fragile. It's particularly hard to ensure that these increment and
decrement operations _always_ happen in an exceptional environment. It
looks like we'll likely have to reconsider this issue.

o I didn't quite follow why your singleton pattern is failing - could
you explain again?
The singleton pattern relies on the fact that every object which
_must_ have the singleton in order to do its work takes it in as a
constructor argument and stores it. This was based on the assumption
that the existence of the reference in the object (say, Image) would
prevent the Singleton (say, ImageAPI) from being destructed until all
Images had been destructed / disposed (the call to Dispose on Image
sets the ref. to ImageAPI to null in order to release it). The concept
is valid even if you ignore the fact that the 3rd party OEM library
has requirements about order of resource destruction; one class may
depend on another class in order to do its work. Ex: The logging
client should be one of the last things to be released in the system.

Once we discovered our assumption behind the design was wrong, we had
an explanation for why the design wasn't working. We frequently
(70-90%) see error messages reporting that the order of resource
destruction was incorrect coming out of the OEM library.

o Another point about the singleton pattern: are you aware that double-
checked locking doesn't work in .NET? You should either use type
initialization to ensure the singleton nature, or explicitly lock on
every call to Instance. Have a look at
http://www.pobox.com/~skeet/csharp/singleton.html for more information
about this.


We're using the third version described on this web page. We haven't
seen any problems with it yet (we did have some problems earlier on
until we moved from a poorer version to the third version).

The website says this doesn't work in Java, and that it may not be
quite as efficient as other techniques. I saw the comment that it may
not work in C#, but he's not really sure... has there been any
further conclusion on whether it works or not? I personally find
technique #3 much clearer than #4 and #5 (which rely on subtleties of
the CLR as opposed to basic locking and checking mechanisms which are
non-language-specific). Thanks for the tip, and I'll consider looking
at other implementations, but this isn't really a concern right now.

// For everyone's reference:
// Third version (double-checked locking):
public class Singleton
{
    static Singleton instance = null;
    static object padlock = new object();

    Singleton()
    {
    }

    public static Singleton GetInstance()
    {
        if (instance == null)
        {
            lock (padlock)
            {
                if (instance == null)
                    instance = new Singleton();
            }
        }
        return instance;
    }
}
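For comparison, the initializer-based variant from the same page sidesteps the memory-model question entirely, because the CLR guarantees a type's static initialization runs exactly once:

```csharp
public sealed class Singleton
{
    // The CLR runs this exactly once, before first use, with no
    // explicit locking required.
    static readonly Singleton instance = new Singleton();

    // An explicit static constructor keeps the compiler from marking
    // the type beforefieldinit, preserving lazy initialization.
    static Singleton() {}

    Singleton() {}

    public static Singleton Instance
    {
        get { return instance; }
    }
}
```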

Thanks,
-ken
Nov 15 '05 #4

Ken Durden <cr*************@hotmail.com> wrote:
o I assume when you talk about implementing IDispose for every type,
you actually only mean those which have some reference to an unmanaged
resource. You certainly don't need to implement IDispose for a type
which only has fields of (say) some strings, ints, DateTimes and
references to similar types.
Correct. I don't need to implement IDisposable for a class with
strings, ints, and DateTimes. However, I believe I still need to
implement IDisposable for objects containing other managed types which
I need to perform some type of cleanup on. See my other thread for
examples, but stopping a thread is a classic one.


Sure.
o You're right about the role of IDisposable - the idea is that clients
still *do* call Dispose rather than leaving it up to the GC. I suspect
it wouldn't be uncommon for the destructor to also log a debug message
saying that it has been run and that therefore a developer has missed
out a call to Dispose.


Just for the record, I wasn't advocating this, but simply mentioning
it as an argument which I've seen on this ng.

I personally believe this is insane, but if this is truly the ".NET
way of doing things" which is accepted by the community of programmers,
then I'll have to change a lot of things about our development
practices.


I think that's basically the bottom line - don't assume that any idioms
you're used to from C++ will work in .NET.
o A point about using references within a destructor: other objects may
have had their destructor called, but as far as I can see they won't
have been garbage collected. You can therefore use references to any
type which doesn't have a destructor. Whether this helps you or not, I
don't know.


This may help to some extent, because it could allow me to write
Destructor logic which conditionally does the stuff it needs to do
based on whether the objects it needs to do that stuff are still
non-collected (which they should always be), non-destructed and
non-disposed. Since destructors usually call Dispose, I would simply
publicize IsDisposed as a property on every class which comes off of
IDisposable.


Right - that should be fine.
o It sounds like your tree is acyclic, and would be fine with reference
counting. I suggest you might want to at least *consider* using a
reference-counting Dispose: basically if *all* clients *always* call
Dispose, then you can use Dispose to decrement a reference count. You'd
need to *increment* the reference count manually with another method,
so it wouldn't be pretty, but I *think* this would help to solve the
problems you're having. Basically I don't think you'll get a pretty
solution to this, because it sounds like you've got legacy stuff which
is assuming that the RAII idiom is available, and it's basically not in
.NET...


We considered this technique a long time ago and dismissed it as being
too fragile. It's particularly hard to ensure that these increment and
decrement operations _always_ happen in an exceptional environment. It
looks like we'll likely have to reconsider this issue.


The decrement is fairly straightforward with the using construct, but
you've said you can't use that for other reasons. The increment would
be easy to miss, though - you'd probably want some kind of check that
you never decrement more times in a single thread than you increment.
You could even (for testing purposes) make some kind of stack check, if
you could guarantee that the increment/decrement should always be
called in pairs from the same method.
o I didn't quite follow why your singleton pattern is failing - could
you explain again?


The singleton pattern relies on the fact that every object which
_must_ have the singleton in order to do its work takes it in as a
constructor argument and stores it.


Hmm... still not sure I follow you. The thing which fundamentally stops
the singleton instance from being collectible is the reference within
the Singleton class itself.

Or is the idea that someone can call Dispose on the Singleton class,
and the instance itself will still be valid for as long as the other
classes have a reference to it stored somewhere? If so, I *think* I see
what you mean...
This was based on the assumption
that the existence of the reference in the object (say, Image) would
prevent the Singleton (say, ImageAPI) from being destructed until all
Images had been destructed / disposed (the call to Dispose on Image
sets the ref. to ImageAPI to null in order to release it). The concept
is valid even if you ignore the fact that the 3rd party OEM library
has requirements about order of resource destruction; one class may
depend on another class in order to do its work. Ex: The logging
client should be one of the last things to be released in the system.

Once we discovered our assumption behind the design was wrong, we had
an explanation for why the design wasn't working. We frequently
(70-90%) see error messages reporting that the order of resource
destruction was incorrect coming out of the OEM library.
Ah, right. I'm with you now - the problem was that lots of things were
no longer reachable, all at the same time, and there's no guarantee
about which of their destructors will be called first, yes?
o Another point about the singleton pattern: are you aware that double-
checked locking doesn't work in .NET? You should either use type
initialization to ensure the singleton nature, or explicitly lock on
every call to Instance. Have a look at
http://www.pobox.com/~skeet/csharp/singleton.html for more information
about this.


We're using the Third version described on this web-page. We haven't
seen any problems with it yet (we did have some problems earlier on
until we moved from a poorer version to the Third version):


While you may well not have done *yet*, experts including Chris Brumme
have shown it to fail according to the .NET memory specification
itself. The current CLR implementation is much stronger than the .NET
memory specification, partly due to the strong model of x86 itself.

If you stick with x86 and the current CLR, you should be fine - but I
don't think it's a good idea, myself.
The website says this doesn't work in Java, and that it may not be
quite as efficient as other techniques. I saw the comment that it may
not work in C#, but he's not really sure...
I might have to amend that - I *am* pretty sure it doesn't work, but
there are pages which claim it does. However, they don't really explain
*why* they think it should work in .NET but not in Java, and they
certainly don't go into as much details as the articles which show why
it *doesn't* work in .NET (for basically the same reasons as it doesn't
work in Java).
has there been any
further conclusion on whether it works or not, I personally find
technique #3 much clearer than #4 and #5 (which rely on subtleties of
the CLR as opposed to basic locking and checking mechanisms which are
non-language specific).
Hang on - the subtleties of the CLR are non-language specific too.
Instead, you're relying on the subtleties of the CLR memory model which
are too subtle for the DCL algorithm to work.

The only language-specific bit of the simpler later implementation is
the laziness with which the singleton is created, making sure that the
C# compiler doesn't include the beforefieldinit flag on the type.
Thanks for the tip, and I'll consider looking
at other implementations, but this isn't really a concern right now.


Fair enough.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #5

"Jon Skeet" <sk***@pobox.com> wrote in message
news:MP************************@news.microsoft.com...
o Another point about the singleton pattern: are you aware that double-checked locking doesn't work in .NET? You should either use type
initialization to ensure the singleton nature, or explicitly lock on
every call to Instance. Have a look at
http://www.pobox.com/~skeet/csharp/singleton.html for more information
about this.


We're using the Third version described on this web-page. We haven't
seen any problems with it yet (we did have some problems earlier on
until we moved from a poorer version to the Third version):


While you may well not have done *yet*, experts including Chris Brumme
have shown it to fail according to the .NET memory specification
itself. The current CLR implementation is much stronger than the .NET
memory specification, partly due to the strong model of x86 itself.

If you stick with x86 and the current CLR, you should be fine - but I
don't think it's a good idea, myself.

Just FYI, here's Chris Brumme's article:
http://blogs.gotdotnet.com/cbrumme/P...b-c69f01d7ff2b

(For the OP) He really goes into the nuts & bolts of the CLR in his blogs.
If you read through his other articles, they may give you other insight for
your problems (that is, if you haven't seen Chris' stuff before).

--
Mike Mayer
http://www.mag37.com/csharp/
mi**@mag37.com
Nov 15 '05 #6

> > While you may well not have done *yet*, experts including Chris Brumme
have shown it to fail according to the .NET memory specification
itself. The current CLR implementation is much stronger than the .NET
memory specification, partly due to the strong model of x86 itself.

If you stick with x86 and the current CLR, you should be fine - but I
don't think it's a good idea, myself.

Just FYI, here's Chris Brumme's article:
http://blogs.gotdotnet.com/cbrumme/P...b-c69f01d7ff2b

(For the OP) He really goes into the nuts & bolts of the CLR in his blogs.
If you read through his other articles, they may give you other insight for
your problems (that is, if you haven't seen Chris' stuff before).


Thanks for the link, his site is indeed very detailed.

While it's kind of getting off-topic from the original post, I can't
imagine MS ever actually requiring programmers to code to those kinds
of requirements (this is basically what he says towards the end).

I've always been a rather pedantic programmer with regard to standards
conformance (in particular in my former C++ life) in regard to memory
alignment, number of bits in a char, etc. While I don't particularly
mind changing to a different singleton pattern, it seems that the
singleton pattern is just an example of one of several multithreaded
cases where what essentially looks like perfectly valid code in any MT
language (C, C++, C#, Java, Pascal, Eiffel, etc.) doesn't work due to a
_very_ subtle aspect of the language / CLR specification. I don't see
how it's humanly possible to detect these issues at coding time or in
code review.

-ken
Nov 15 '05 #7

Jon,

I think you may have figured out the answers to some of your own
questions as you read through the email, but just to be clear I
thought I'd answer them.

Jon Skeet <sk***@pobox.com> wrote in message
news:<MP************************@news.microsoft.com>...
o I didn't quite follow why your singleton pattern is failing - could
you explain again?
The singleton pattern relies on the fact that every object which
_must_ have the singleton in order to do its work takes it in as a
constructor argument and stores it.


Hmm... still not sure I follow you. The thing which fundamentally stops
the singleton instance from being collectible is the reference within
the Singleton class itself.

Or is the idea that someone can call Dispose on the Singleton class,
and the instance itself will still be valid for as long as the other
classes have a reference to it stored somewhere? If so, I *think* I see
what you mean...


In our original, current Singleton design, no one ever called Dispose
on it. Someone high up in application land calls Release on all
singletons (which may be static or non-static), but all that does is
release the m_singleton static variable. If no other references to
that object are held, it should be GC'ed at some point.

This allows code to guarantee that as long as it has a non-null
reference to a singleton object, it is assured that the object is
valid without the need for locks.

Once we discovered our assumption behind the design was wrong, we had
an explanation for why the design wasn't working. We frequently
(70-90%) see error messages reporting that the order of resource
destruction was incorrect coming out of the OEM library.


Ah, right. I'm with you now - the problem was that lots of things were
no longer reachable, all at the same time, and there's no guarantee
about which of their destructors will be called first, yes?


Exactly. Essentially, everything of importance becomes collectible all
at once near the end of program execution. There was no explicit code
to attempt to enforce an ordering constraint on destruction, except for
our particular usage of the GC, which is at the root of the problem: an
incorrect understanding of its specification.

While you may well not have done *yet*, experts including Chris Brumme
have shown it to fail according to the .NET memory specification
itself. The current CLR implementation is much stronger than the .NET
memory specification, partly due to the strong model of x86 itself.

If you stick with x86 and the current CLR, you should be fine - but I
don't think it's a good idea, myself.
We have the fortunate situation of deciding what hardware we run on,
so for the foreseeable future we do get to stick with x86... but
there's always talk of switching to Itanium, AMD Opteron (sp?),
etc....
has there been any
further conclusion on whether it works or not, I personally find
technique #3 much clearer than #4 and #5 (which rely on subleties of
the CLR as opposed to basic locking and checking mechanisms which are
non-language specific).


Hang on - the subtleties of the CLR are non-language specific too.
Instead, you're relying on the subtleties of the CLR memory model which
are too subtle for the DCL algorithm to work.

The only language-specific bit of the simpler later implementation is
the laziness with which the singleton is created, making sure that the
C# compiler doesn't include the beforefieldinit flag on the type.


Sorry. Yes, the subtleties of the CLR are non-language-specific, as
long as you're talking about a CLS-compliant language. However,
programming to specific properties of the CLR seems less intuitive to
me than the other version, which seems to me to be correct in any MT
language with basic MT constructs such as locking. Now, if version #3
isn't correct under C#, then it's a non-issue: who cares whether v3 is
more or less intuitive than v4 or v5 if it isn't correct.

Thanks,
-ken
Nov 15 '05 #8

Ken Durden <cr*************@hotmail.com> wrote:
While I've always been a rather pedantic programmer with regards to
standard conformance (in particular in my former C++ life) in regards
to memory alignment, # of bits in a char, etc... While I don't
particularly mind changing to a different Singleton pattern, it seems
that the singleton pattern is just an example of one of several
multithreaded cases where what essentially looks like perfectly valid
code in any MT language (C, C++, C#, Java, Pascal, Eiffel, etc)
doesn't work due to a _very_ subtle aspect of the language / CLR
specification. I don't see how its humanly possible to detect these
issues at coding time or code review.


It's actually quite simple though: if you want to read or write a value
which is shared between multiple threads, you must lock first, and
always lock on the same reference when accessing the value. I believe
that if you do that, you'll never have problems. The DCL trick is
trying to read data without doing any locking, which means some but not
all previous writes may have been noticed, and it's the ordering which
is the pain here.

Basically so long as you don't try too much trickery, it's not a
problem.
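That rule can be illustrated with a trivial sketch (the class is hypothetical): every read and write of the shared field goes through the same lock object, and nothing is read outside it.

```csharp
public class SharedCounter
{
    private readonly object padlock = new object();
    private int count;

    public void Increment()
    {
        lock (padlock) { count++; }
    }

    // Reads take the same lock, so a reader always sees completed writes.
    public int Value
    {
        get { lock (padlock) { return count; } }
    }
}
```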

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #9

Ken Durden <cr*************@hotmail.com> wrote:
Ah, right. I'm with you now - the problem was that lots of things were
no longer reachable, all at the same time, and there's no guarantee
about which of their destructors will be called first, yes?


Exactly. Essentially, everything of importance becomes available all
at once near the end of program execution. There was no explicit code
to attempt to enforce an ordering constraint on destruction except for
our particular usage of the GC which is at the root of the problem an
incorrect understanding of its specification.


I have an idea for you: have a static map from the "leaf" nodes to the
"branch" nodes (where the leaf nodes are the ones which should be
removed first). In the Dispose method for the leaf node class, remove
the entry from the map, which means the map will no longer be holding
onto that leaf node's reference to the branch node. That means the
branch node will only be scheduled for finalization when *all* the leaf
nodes have been disposed, which is what you want.

Now, the tricky bit is that the map (unless you're careful) would end
up making it so that the leaf nodes don't get collected. What you can
do is store some other reference in the map, but one which is unique to
the leaf. So, within the leaf class you'd have:

class Leaf : IDisposable
{
    object mapReference = new object();

    public Leaf(object branchNode)
    {
        // All the other stuff
        ResourceManagement.AddReference(mapReference, branchNode);
    }

    public void Dispose()
    {
        // All the other stuff
        ResourceManagement.RemoveReference(mapReference);
    }
}

Do you see what I mean? And if so, does it help you?

Note that it means there need to be *two* garbage collections before
the branch node is finalized, which may mean it doesn't happen before
the app ends...
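The ResourceManagement class in the sketch above isn't spelled out; a minimal version might look like this, assuming a synchronized Hashtable keyed on the token object (all names hypothetical, following the sketch):

```csharp
using System.Collections;

public class ResourceManagement
{
    // token -> branch node; the map's strong reference keeps the branch
    // reachable until every leaf has removed its token.
    static Hashtable map = Hashtable.Synchronized(new Hashtable());

    public static void AddReference(object token, object branch)
    {
        map[token] = branch;
    }

    public static void RemoveReference(object token)
    {
        map.Remove(token);
    }

    public static int Count { get { return map.Count; } }
}
```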

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #10

Jon Skeet <sk***@pobox.com> wrote in message news:<MP************************@news.microsoft.com>...

I have an idea for you: have a static map from the "leaf" nodes to the
"branch" nodes (where the leaf nodes are the ones which should be
removed first). In the Dispose method for the leaf node class, remove
the entry from the map, which means the map will no longer be holding
onto that leaf node's reference to the branch node. That means the
branch node will only be scheduled for finalization when *all* the leaf
nodes have been disposed, which is what you want.

Now, the tricky bit is that the map (unless you're careful) would end
up making it so that the leaf nodes don't get collected. What you can
do is store some other reference in the map, but one which is unique to
the leaf. So, within the leaf class you'd have:


I've actually already tried this technique (I called it the
GCHoldingPond). There are two main problems with it:

a.)
It is extremely difficult to code against. Simple cases are easy, but
hard cases are very hard. For example, any function or property which
changes the value of an owned variable must release using the
previous value and then set the new value.

The GCHoldingPond actually has to implement reference counting, because
you get multiple leaves putting the same branch object in there. I
implemented the holding pond as a Hashtable with the object as the
key and the ref count as the value.
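In case it helps, the core of what I implemented looked roughly like this (names simplified — GCHoldingPond is my own class, not framework API):

```csharp
using System.Collections;

// Sketch of a ref-counted holding pond: the branch object is the key
// and the ref count is the boxed value. Each leaf Adds the branch it
// depends on at construction and Removes it on Dispose; the branch
// stays reachable until the count drops to zero.
class GCHoldingPond
{
    readonly Hashtable counts = new Hashtable();

    // Number of distinct branch objects currently pinned.
    public int Count
    {
        get { lock (counts) { return counts.Count; } }
    }

    public void Add(object branch)
    {
        lock (counts)
        {
            object current = counts[branch];
            counts[branch] = (current == null) ? 1 : (int)current + 1;
        }
    }

    public void Remove(object branch)
    {
        lock (counts)
        {
            int current = (int)counts[branch];
            if (current == 1)
                counts.Remove(branch);  // last leaf released: branch may now be collected
            else
                counts[branch] = current - 1;
        }
    }
}
```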

We ran into some cute issues too where Leaf objects which are actually
owned by the branch object (Images owned by the ImageAPI, for example)
should _not_ put their reference to the branch object into the holding
pond. Of course, most Images do this, so we had to add a special
constructor flag to indicate whether to do this or not.

b.)
I could never get it to actually work. I worked on the holding pond
for about two days. I only used it in one area of the code, sort of a
testbed before distributing it to more objects with similar
requirements. What's really frustrating, of course, is that sometimes it
"works," meaning that I don't get any additional errors regarding
incorrect destruction of the ImageAPI object.

I still _think_ it could work in principle, but it's kind of a moot
point. After seeing the concept implemented and exercised on the
client side, technical management, project management, and I all
agreed that it was too complicated to require "average"
programmers to use it.

What's kind of sad is that, since the real problems don't happen until
application end-of-life, we're considering just abandoning correct
destruction. Certainly for now, I no longer have the schedule to
spend another week or so re-designing the cleanup idioms.

I haven't yet done a really careful analysis of the different
techniques which are possible, either in the specialized testbed domain
of ImageAPI and Image or for the application in general (singletons,
etc.). For the ImageAPI example, I'm leaning towards a ref-counted
Dispose interface. I may also use a type-specialized ref-counted
sentry object to increment and decrement the ref count. Or perhaps I
could create a RefCount interface, and then create a sentry which
increments and decrements through that interface.
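A rough sketch of the sentry idea — IRefCounted and RefCountSentry are just placeholder names, not anything we've actually built yet:

```csharp
using System;

// Hypothetical ref-counted Dispose interface: AddRef increments the
// count, Dispose decrements it, and the implementing class frees its
// unmanaged resources when the count reaches zero.
interface IRefCounted : IDisposable
{
    void AddRef();
}

// A sentry which holds exactly one reference for its lifetime:
// AddRef on construction, one matching Dispose when the sentry
// itself is disposed.
class RefCountSentry : IDisposable
{
    readonly IRefCounted target;

    public RefCountSentry(IRefCounted target)
    {
        this.target = target;
        target.AddRef();
    }

    public void Dispose()
    {
        target.Dispose();  // decrement; target frees itself at zero
    }
}
```

Used with the using-statement, e.g. `using (new RefCountSentry(imageApi)) { ... }`, the increment and decrement are guaranteed to pair up even when the block exits via an exception.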

Thanks for your help,
-ken
Nov 15 '05 #11

Ken Durden <cr*************@hotmail.com> wrote:

> I've actually already tried this technique (I called it the
> GCHoldingPond). There are two main problems with it:
>
> a.)
> It is extremely difficult to code against. Simple cases are easy, but
> hard cases are very hard. For example, any function or property which
> changes the value of an owned variable must release using the
> previous value and then set the new value.

Yes, that's nasty. It really depends on the type of client code you
use. For instance, the using{} construct effectively declares the
variable as readonly, and I've never had a problem with that - but I
can see that using a different type of resource I might.

> The GCHoldingPond actually has to implement reference counting, because
> you get multiple leaves putting the same branch object in there.

Why is that a problem? With a hashtable using the leaf as the key and
the branch as the value, you get effectively automatic ref counting, as
far as I can see.

> We ran into some cute issues too where Leaf objects which are actually
> owned by the branch object (Images owned by the ImageAPI, for example)
> should _not_ put their reference to the branch object into the holding
> pond. Of course, most Images do this, so we had to add a special
> constructor flag to indicate whether to do this or not.

Blech. That's nasty, yes.

> b.)
> I could never get it to actually work. I worked on the holding pond
> for about two days. I only used it in one area of the code, sort of a
> testbed before distributing it to more objects with similar
> requirements. What's really frustrating, of course, is that sometimes it
> "works," meaning that I don't get any additional errors regarding
> incorrect destruction of the ImageAPI object.
>
> I still _think_ it could work in principle, but it's kind of a moot
> point. After seeing the concept implemented and exercised on the
> client side, technical management, project management, and I all
> agreed that it was too complicated to require "average"
> programmers to use it.

Fair enough. If you ever *are* interested in trying it again, I'd be
interested in helping you to fix the implementation, if you like.

> What's kind of sad is that, since the real problems don't happen until
> application end-of-life, we're considering just abandoning correct
> destruction. Certainly for now, I no longer have the schedule to
> spend another week or so re-designing the cleanup idioms.

Right. Are there any actual potential problems caused by incorrect
destruction?

> I haven't yet done a really careful analysis of the different
> techniques which are possible, either in the specialized testbed domain
> of ImageAPI and Image or for the application in general (singletons,
> etc.). For the ImageAPI example, I'm leaning towards a ref-counted
> Dispose interface. I may also use a type-specialized ref-counted
> sentry object to increment and decrement the ref count. Or perhaps I
> could create a RefCount interface, and then create a sentry which
> increments and decrements through that interface.

Yup. All of those sound like possibilities. If you ever need another
pair of eyes to look over the implementation, let me know.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #12
