Bytes IT Community

Garbage Collector

Hello

I'm not sure what to think about the Garbage Collector.

I have a class OutlookObject. It has two private fields:
private Microsoft.Office.Interop.Outlook.Application _Application = null;
private Microsoft.Office.Interop.Outlook.NameSpace _Namespace = null;

The Constructor:
public OutlookObject()
{
_Application = new Microsoft.Office.Interop.Outlook.Application();
_Namespace = _Application.GetNamespace("MAPI");
}
... after this, a lot of members and properties

When I need to do something in Outlook, I instantiate the OutlookObject:
objOL = new OutlookObject();
And when I don't need Outlook anymore, I set objOL = null;

In my head this should destroy objOL, and thereafter also _Application and _Namespace via the Garbage Collector, but it doesn't happen every time.
So I looked for a way to force the destruction: the use of a finalizer. I put this into the OutlookObject class:

~OutlookObject()
{
    if (_Application == null)
        return;
    _Application = null;
}

And I also tried having the OutlookObject class implement the IDisposable interface and put in:

public void Dispose()
{
    if (_Application == null)
        return;
    _Application = null;
}

With both the same result: Outlook doesn't get destroyed before I close the application.

The problem is that I use Outlook a lot of times within my application, and with the Task Manager I sometimes see five or more Outlook instances. WHY?????
And if there are that many Outlook instances, they stay open even after I close my application. WHY???
Kind regards

Johnny E. Jensen



Oct 9 '07 #1
56 Replies


Johnny E. Jensen wrote:
Hello

I'm not sure what to think about the Garbage Collector.
You should love the GC. It's your friend.

If you don't love the GC, you probably just misunderstand it.

I don't know about the specific Outlook.Application object, but assuming
it has a Close() or Dispose() method, then you're not using it right.
If it doesn't have a Close() or Dispose() method, then your inability to
get it destroyed as you'd like is not the fault of the GC. It's the
fault of the Outlook.Application object.

As far as your well-intentioned-but-not-quite-right attempts to get this to
work go...
[...]
When I need to do something in Outlook, I instantiate the OutlookObject:
objOL = new OutlookObject();
And when I don't need Outlook anymore, I set objOL = null;
That is fine, as long as you don't need the Outlook.Application object
closed, freed, released, or otherwise discarded immediately. It sounds
as though you do though.
In my head this should destroy objOL, and thereafter also _Application and _Namespace via the Garbage Collector, but it doesn't happen every time.
Nor should it. Setting the reference to null simply disconnects the
reference and makes the object unreachable from your program's data. Nothing
will happen to it until the GC needs to actually release the memory
being used by the object, and that might not be for a while.

Setting a reference to null is _only_ for making the object it referred to
eligible for garbage collection. It essentially frees the managed
memory used by the object, but the GC isn't necessarily going to bother
to reclaim it until it needs to.

If you have other cleanup you need to do, you are required to do that
before you discard the reference. You would do this via some method the
object class provides, and that method is usually Close(), Dispose() or
both.
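In C#, the idiomatic way to make sure such a Close()/Dispose() call actually runs is a using statement, which calls Dispose() even if an exception is thrown. A minimal sketch, assuming the OP's OutlookObject wrapper implements IDisposable (as discussed later in this thread):

```csharp
// Assumes OutlookObject implements IDisposable and releases Outlook in Dispose().
using (OutlookObject objOL = new OutlookObject())
{
    // ... work with Outlook here ...
}   // objOL.Dispose() runs automatically here, even if an exception was thrown.
```

This removes the need to remember to null out the reference at all; the cleanup is tied to the scope of the block.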
So I looked for a way to force the destruction: the use of a finalizer. I put this into the OutlookObject class:

~OutlookObject()
{
    if (_Application == null)
        return;
    _Application = null;
}
Your finalizer should call the Dispose() method, rather than doing
"dispose-like behavior" itself. Not that your finalizer actually
implemented "dispose-like behavior" (see below), but still.

And implementing a finalizer for a class that implements IDisposable is
a good idea, but the finalizer exists for redundancy. It should not be
considered the front line of defense, as it usually won't get called
right away, and might not ever be called.
And I also tried having the OutlookObject class implement the IDisposable interface and put in:

public void Dispose()
{
    if (_Application == null)
        return;
    _Application = null;
}

With both the same result: Outlook doesn't get destroyed before I close the application.
No doubt. You didn't do anything that would cause the object to be
destroyed. All you did was release your reference to it, which as I
mentioned doesn't necessarily do anything right away.

If Outlook.Application implements IDisposable, then you'll want to call
that in your own Dispose() method. And you'll want your finalizer to
call Dispose().

If Outlook.Application has a Close() method, then you may also want to
implement a Close() method on your class that calls Close() on your
Outlook object.

If Outlook.Application has neither Dispose() nor Close(), nor any other
sort of "destroy this object" method, then you're out of luck. That's
something you'll need to take up with the Outlook folks. It's a bug for
them to implement a .NET class that has or represents unmanaged
resources (e.g. a process) but doesn't provide a method for explicitly
disposing those resources.

I suspect, however, that the Outlook.Application object does have a
Close() or Dispose() or similar, and if so then using that is the
solution to your problem.

Pete
Oct 9 '07 #2

Johnny E. Jensen wrote:
Hello

I'm not sure what to think about the Garbage Collector.
That's just because you expect it to do something that it's not supposed
to do. ;)
I have a class OutlookObject. It has two private fields:
private Microsoft.Office.Interop.Outlook.Application _Application = null;
private Microsoft.Office.Interop.Outlook.NameSpace _Namespace = null;

The Constructor:
public OutlookObject()
{
_Application = new Microsoft.Office.Interop.Outlook.Application();
_Namespace = _Application.GetNamespace("MAPI");
}
... after this, a lot of members and properties

When I need to do something in Outlook, I instantiate the OutlookObject:
objOL = new OutlookObject();
And when I don't need Outlook anymore, I set objOL = null;

In my head this should destroy the objOL
It doesn't. It only makes the object available for garbage collection.

Garbage collections happen at times that the garbage collector decides,
and not every time an object is available for garbage collection. Also,
when a garbage collection happens, that doesn't guarantee that all
available objects are collected. The garbage collector can decide that
it's not efficient to collect some of the objects at that time.
and thereafter also the
_Application and _Namespace in the Garbage Collector,
Actually, at the exact moment that the objOL can be garbage collected,
every object that it references (and isn't referenced somewhere else)
can also be garbage collected.
but it doesn't happen every time.
So I looked for a way to force the destruction: the use of a finalizer. I put this into the OutlookObject class:

~OutlookObject()
{
    if (_Application == null)
        return;
    _Application = null;
}
A finalizer is not useful if you want to control when the objects are
released. An object that has a finalizer will be put in the freachable
queue instead of being collected. A background thread then runs the
finalizer methods in the objects one after one, but you have no control
over when this happens.

If it takes too long to go through the freachable queue when the
application ends, the rest of the objects will just be discarded without
finalization, so it's not even guaranteed that the finalizer will ever run.
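The non-deterministic timing described above is easy to see in a tiny console experiment (a sketch; the exact interleaving of output depends on the runtime):

```csharp
using System;

class Noisy
{
    ~Noisy() { Console.WriteLine("finalizer ran"); }
}

class Program
{
    static void Main()
    {
        var n = new Noisy();
        n = null;                        // object is now eligible, but nothing happens yet
        Console.WriteLine("reference dropped");

        GC.Collect();                    // request a collection...
        GC.WaitForPendingFinalizers();   // ...and wait for the finalizer thread to catch up
        Console.WriteLine("after collect");
    }
}
```

Without the explicit GC.Collect()/GC.WaitForPendingFinalizers() pair, there is no guarantee "finalizer ran" appears at any particular point before the process exits, which is exactly why a finalizer cannot be used to release Outlook promptly.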
And I also tried having the OutlookObject class implement the IDisposable interface and put in:
public void Dispose()
{
if ( _Application == null)
Why the check? Setting the reference to null again doesn't do any harm.
return;
Why a return statement here? It's totally superfluous, as there is no more
code from this point to the end of the method.
else
{
_Application = null;
}
}

With both the same result: Outlook doesn't get destroyed before I close the application.
Did you actually call the Dispose method? It's not called automatically.
The problem is that I use Outlook a lot of times within my application, and with the Task Manager I sometimes see five or more Outlook instances. WHY?????
And if there are that many Outlook instances, they stay open even after I close my application. WHY???
--
Göran Andersson
_____
http://www.guffa.com
Oct 9 '07 #3

"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote:
You should love the GC. It's your friend.
I just signed up to do a session on the Garbage Collector at the Silicon
Valley code camp later this month.

So many people seem to have problems with it....

--
Chris Mullins
Oct 10 '07 #4

"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote:
>You should love the GC. It's your friend.

I just signed up to do a session on the Garbage Collector at the Silicon
Valley code camp later this month.

So many people seem to have problems with it....
I can't help but laugh at the irony of this (not at you). Unmanaged
languages like C/C++ have been scoffed at for years because of memory
issues. Now people can't even handle a system that takes care of memory for
them. It goes a long way to demonstrate the complexities of our field
(there's no free lunch).
Oct 10 '07 #5

Larry Smith wrote:
I can't help but laugh at the irony of this (not at you). Unmanaged
languages like C/C++ have been scoffed at for years because of memory
issues. Now people can't even handle a system that takes care of memory for
them. It goes a long way to demonstrate the complexities of our field
(there's no free lunch).
While there's probably some truth in your observation, I have to say
that my experience has been more the opposite. Languages or platforms
that manage memory with a garbage collector are often viewed as "lazy",
"inefficient", "toy-like", etc. with many people scoffing at them.
Those people assert that memory management is best done explicitly by
the program, even though it requires more code.

I have to admit that, while I don't think I was ever so rigid in my
thinking, even I held those beliefs to some degree. The memory
management of .NET was always a concern to me, until I actually started
using it.

But now, I love the garbage collector. :)

And yes, there's no such thing as a free lunch. However, there
definitely is a such thing as efficient engineering. The lunch isn't
technically "free", but it sure costs a lot less if you do things
efficiently. And I do feel that .NET as a platform offers great
opportunity for much-improved efficiency in the development process.
There are even at least a few examples of where using .NET results in an
_implementation_ that is more efficient than what most developers might
come up with.

That's about as close to getting a free lunch as you're likely to see. :)

Pete
Oct 10 '07 #6

"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote in message
news:13*************@corp.supernews.com...
<snip>
>And I also tried having the OutlookObject class implement the IDisposable interface
and put in:
public void Dispose()
{
    if (_Application == null)
        return;
    _Application = null;
}

With both the same result: Outlook doesn't get destroyed before I close the application.

No doubt. You didn't do anything that would cause the object to be
destroyed. All you did was release your reference to it, which as I
mentioned doesn't necessarily do anything right away.

If Outlook.Application implements IDisposable, then you'll want to call
that in your own Dispose() method. And you'll want your finalizer to call
Dispose().
You should not call Dispose in the finalizer. When the finalizer runs, the
referenced object will also be eligible for collection or finalization,
and its finalizer will run; maybe it has already run. But you should call
Dispose on the held objects in the Dispose method of the container.

Christof

Oct 10 '07 #7

On Oct 10, 4:44 am, "Christof Nordiek" <c...@nospam.de> wrote:
"Peter Duniho" <NpOeStPe...@NnOwSlPiAnMk.com> wrote in message news:13*************@corp.supernews.com...
<snip>


And I also tried having the OutlookObject class implement the IDisposable interface
and put in:
public void Dispose()
{
    if (_Application == null)
        return;
    _Application = null;
}
With both the same result: Outlook doesn't get destroyed before I close the application.
No doubt. You didn't do anything that would cause the object to be
destroyed. All you did was release your reference to it, which as I
mentioned doesn't necessarily do anything right away.
If Outlook.Application implements IDisposable, then you'll want to call
that in your own Dispose() method. And you'll want your finalizer to call
Dispose().

You should not call Dispose in the finalizer. When the finalizer runs, the
referenced object will also be eligible for collection or finalization,
and its finalizer will run; maybe it has already run. But you should call
Dispose on the held objects in the Dispose method of the container.

Christof
Just a point of clarification for other people who might misread your
post -- calling Dispose from the finalizer is a common pattern and
it's perfectly ok to do. You just need to make sure that when you do
it, your Dispose method doesn't try to touch other managed objects
when called from the finalizer. As you point out, these other managed
objects might have already been cleaned up, so calling them will lead
to errors. The pattern I've seen most often is to have Dispose() and
the finalizer both call Dispose(bool disposing). When calling from
Dispose(), pass disposing=true and call GC.SuppressFinalize(this).
When calling from the finalizer, pass disposing=false. Inside
Dispose(bool disposing), you should dispose other managed resources
only if disposing=true.
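The pattern described above looks like this in outline (a sketch of the standard dispose pattern; the resource comments mark where a real class would do its cleanup):

```csharp
using System;

class OutlookObject : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);  // cleanup already done deterministically;
                                    // no need for the finalizer to run
    }

    ~OutlookObject()
    {
        Dispose(false);             // GC-triggered: only unmanaged cleanup is safe here
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed)
            return;

        if (disposing)
        {
            // Called from Dispose(): safe to touch other managed objects here,
            // e.g. dispose managed members.
        }

        // Release unmanaged resources here in both cases.

        _disposed = true;
    }
}
```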

John
Oct 10 '07 #8

>I can't help but laugh at the irony of this (not at you). Unmanaged
>languages like C/C++ have been scoffed at for years because of memory
issues. Now people can't even handle a system that takes care of memory
for them. It goes a long way to demonstrate the complexities of our field
(there's no free lunch).

While there's probably some truth in your observation, I have to say that
my experience has been more the opposite. Languages or platforms that
manage memory with a garbage collector are often viewed as "lazy",
"inefficient", "toy-like", etc. with many people scoffing at them. Those
people assert that memory management is best done explicitly by the
program, even though it requires more code.
This is simply human nature at work. People who master any complex skill
will generally feel superior to those who master a "lesser" skill (one
that's less complicated). They then convince themselves that the lesser
skill itself must also be inferior in some way. It's a fallacious argument
but most are guilty of it to some extent.
I have to admit that, while I don't think I was ever so rigid in my
thinking, even I held those beliefs to some degree. The memory management
of .NET was always a concern to me, until I actually started using it.

But now, I love the garbage collector. :)

And yes, there's no such thing as a free lunch. However, there definitely
is a such thing as efficient engineering. The lunch isn't technically
"free", but it sure costs a lot less if you do things efficiently. And I
do feel that .NET as a platform offers great opportunity for much-improved
efficiency in the development process. There are even at least a few
examples of where using .NET results in an _implementation_ that is more
efficient than what most developers might come up with.

That's about as close to getting a free lunch as you're likely to see. :)
As far as handling resources is concerned however, I think C++ has always
been unfairly maligned. Not only is it much less error-prone than people
believe (IMHO), the RAII paradigm itself is a cleaner design compared to a
GC system (again, IMHO). While it would be very difficult for anyone to
technically [dis]prove it (I certainly can't), I base it solely on my own
experience.
Oct 10 '07 #9

Larry Smith wrote:
This is simply human nature at work. People who master any complex skill
will generally feel superior to those who master a "lesser" skill (one
that's less complicated). They then convince themselves that the lesser
skill itself must also be inferior in some way. It's a fallacious argument
but most are guilty of it to some extent.
I have the impression that you feel it's also a fallacious argument that
C/C++ is deserving of scorn and scoffing. Which is fine...I think
there's a lot of pointless scorning and scoffing going on in both
directions.

My point was simply that the scorn and scoffing isn't limited to the
C/C++ direction. It's also directed at garbage collection, and is just
as fallacious in that case as the case you mention.
As far as handling resources is concerned however, I think C++ has always
been unfairly maligned. Not only is it much less error-prone than people
believe (IMHO), the RAII paradigm itself is a cleaner design compared to a
GC system (again, IMHO). While it would be very difficult for anyone to
technically [dis]prove it (I certainly can't), I base it solely on my own
experience.
Well, my experience is the opposite. It is simply not possible to have
a memory leak in a garbage collection system. But memory leaks abound
in conventionally written C/C++ applications. I see it all the time.

There are ways to write code in a way that helps ensure against memory
leaks, but these aren't things that the language provide. They are
things that the developer must implement oneself (for example, code in
the debug build that actually tracks memory allocations and requires
idle-time consistency checks).

Now, does that mean that C++ is as error-prone as people believe? I
don't know. What do people believe? How do you measure that? But in
terms of C++ being more error-prone than a garbage collecting system
goes, I'd say that's easily observed and demonstrated.

IMHO the only real error likely to come up with respect to memory
management is, by definition, failing to manage memory correctly. That
is, either failing to allocate memory when you need it, or failing to
release it when you're done. The former is a trivial problem, easily
solved in any paradigm, while the latter simply doesn't exist in a
garbage collecting system.

So it's easily proven that, as long as one is looking only at those
kinds of errors, C/C++ is trivially more error-prone than a
garbage-collecting system.

Is the explicit allocation-and-release of C/C++ itself unwieldy and
excessively error-prone? No, I don't think so. I agree with you that
it's actually not hard to use, even while ensuring correct code. But it
is very unforgiving of carelessness, and unfortunately there are a lot
of programmers out there whose primary identifying characteristic is
carelessness.

I don't know how to evaluate "cleaner design" in this context. If
you're talking about the design of the memory manager, I'd have to agree
that a GC memory manager is more complicated, less "clean". On the
other hand, that only needs to be written once. I would definitely
disagree if you are trying to claim that the design of code written that
_uses_ the memory manager is "cleaner" in C/C++ than if using a GC
memory manager. What could be cleaner than not having to write the code
in the first place?

But really, the only thing I was pointing out is that just as you've
observed people scoffing at the C/C++ paradigm, I have observed people
scoffing at the garbage collection paradigm. It's not as one-sided as
your post seems to imply, and in fact (maybe as a result of the
environment in which I work) my experience has been that I see more
people criticizing the GC paradigm than the other way around.

Pete
Oct 10 '07 #10

Christof Nordiek wrote:
You should not call Dispose in the finalizer. When the finalizer runs,
the referenced object will also be eligible for collection or
finalization, and its finalizer will run; maybe it has already run. But
you should call Dispose on the held objects in the Dispose method of the
container.
As John says, that's not true. I may not have been clear enough in my
own post, but you should call your own Dispose() in the finalizer to
ensure that unmanaged resources in your own object are released.

But really, the situation here isn't about the finalizer at all. As I
said, that's a backup plan for when things aren't coded right.

The non-finalizing scenario (indicated by the boolean passed in) in the
Dispose() method of the class should release managed and unmanaged
resources, and should call the appropriate method on the
Outlook.Application object (that is apparently the Quit() method, as
Peter Bromberg has pointed out) to release that object.

And the OP's code should call his own class's Dispose() method directly,
rather than relying on the finalizer.

I don't know whether the Outlook.Application finalizer also calls
Quit(), but if it doesn't, that would mean it's essentially an unmanaged
resource and your own (that is, the OP's) Dispose() method should call
Quit() even in the finalizing case.
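Putting the advice in this thread together, the OP's wrapper might look like the following sketch. Quit() on the interop Application object is taken from the discussion above; the Marshal.ReleaseComObject calls are my own assumption about how to drop the COM references promptly, not something stated in this thread:

```csharp
using System;
using System.Runtime.InteropServices;
using Outlook = Microsoft.Office.Interop.Outlook;

class OutlookObject : IDisposable
{
    private Outlook.Application _application;
    private Outlook.NameSpace _namespace;

    public OutlookObject()
    {
        _application = new Outlook.Application();
        _namespace = _application.GetNamespace("MAPI");
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~OutlookObject() { Dispose(false); }

    private void Dispose(bool disposing)
    {
        if (_application == null)
            return;

        _application.Quit();                     // tell the Outlook process to exit
        Marshal.ReleaseComObject(_namespace);    // release the runtime-callable
        Marshal.ReleaseComObject(_application);  // wrappers so COM can let go
        _namespace = null;
        _application = null;
    }
}
```

Callers would then use this via a using statement (or an explicit Dispose() call) rather than just nulling the reference, which is what left the Outlook processes alive in the first place.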

Pete
Oct 10 '07 #11

"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote:
>
Well, my experience is the opposite. It is simply not possible to have a
memory leak in a garbage collection system. But memory leaks abound in
conventionally written C/C++ applications. I see it all the time.
Well, for what it's worth, in a 3 hour presentation on application Tuning,
about 1/2 that time is spent on Memory Leaks, and how to track them down.

Just like in C/C++ land, they're due to a user misunderstanding how GC works
and making a silly mistake. For some reason, the people who most often seem
to get bitten are novice devs who discover they really like static
variables...

.... so I spend quite a bit of time using a Memory Profiler, and showing them
how to read its output.

My favorite leaks involve making my laptop (with 2GB of memory) generate an
OOM while only actually allocating a few hundred KB of memory. (I have a
very nasty pinning / fragmentation demo that really drives home the
issue...)

--
Chris Mullins

Oct 10 '07 #12

Chris Mullins [MVP - C#] wrote:
Just like in C/C++ land, they're due to a user misunderstanding how GC works
and making a silly mistake. For some reason, the people who most often seem
to get bitten are novice devs who discover they really like static
variables...
Can you give us a small example when and how a static variable can cause
problems with the Garbage Collector?
Oct 10 '07 #13

"Martijn Mulder" <i@m> wrote in message
news:47***********************@news.wanadoo.nl...
Can you give us a small example when and how a static variable can cause
problems with the Garbage Collector?
public class MyWebService
{
    private static List<Byte[]> _myReceivedData = new List<Byte[]>();

    private void DataFromAWebService(Byte[] rawData)
    {
        _myReceivedData.Add(rawData);
        // Do operations
    }
}

I see "innocent" code that looks like this all the time. The List, because
it's static, is never cleaned up.

People add data into it, and forget the list is static. They expect that
when the class instance of MyWebService goes away, the list of received
data will as well.

It's a really simple error, but I see it again and again and again...
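One sketch of a fix, assuming the received data really only needs to live as long as the service instance, is simply to make the field an instance field, so the list becomes collectable together with its owner:

```csharp
using System.Collections.Generic;

public class MyWebService
{
    // Instance field: eligible for collection along with the MyWebService instance.
    private readonly List<byte[]> _myReceivedData = new List<byte[]>();

    private void DataFromAWebService(byte[] rawData)
    {
        _myReceivedData.Add(rawData);
        // Do operations; the data goes away when this instance does.
    }
}
```

If the field genuinely must be static, the equivalent fix is to clear or remove entries explicitly once they have been processed.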

--
Chris Mullins
Oct 10 '07 #14

>As far as handling resources is concerned however, I think C++ has always
>been unfairly maligned. Not only is it much less error-prone than people
believe (IMHO), the RAII paradigm itself is a cleaner design compared to
a GC system (again, IMHO). While it would be very difficult for anyone to
technically [dis]prove it (I certainly can't), I base it solely on my own
experience.

Well, my experience is the opposite. It is simply not possible to have a
memory leak in a garbage collection system. But memory leaks abound in
conventionally written C/C++ applications. I see it all the time.
It's a religious issue and both have their pros and cons. While C++ may be
more error-prone than a GC system however (for releasing resources), there
need not be a significant difference. The problem is almost entirely a human
one. It's extremely easy to handle resources in C++ as you stated but the
language's reputation has often suffered because of its practitioners (most
programmers being very poor at what they do). My own opinion however is that
in the hands of those who really know what they're doing (few and far
between), RAII is a cleaner approach than a GC system. By clean I mean it's
more natural to release your resources in a destructor than to wait for a GC
to run (not to be confused with easier or less error prone - it's clearly
not). The "using" statement for performing this in C# for instance (or even
worse, a "finally" block), is ugly compared to a C++ destructor. The
destructor is nicely symmetrical with its constructor. The latter
initializes the object and the former cleans it up. This occurs immediately
when an object goes out of scope so your resources only exist for as long as
they're needed. It's all very well controlled and understood. You know
exactly when cleanup is going to occur and need not worry about the timing of a
GC. In fact, a GC itself can even promote sloppy behaviour. People become
so used to it doing the clean up that they can neglect to explicitly clean
something up themselves when the situation calls for it (such as immediately
releasing a resource that might later cause something else to fail if it
hasn't been GC'd yet). Or people might always perform shallow copies of
their objects where a deep copy is required (since it's just so easy). In
C++ you have to think about these things more but that deeper thinking
process also sharpens your understanding of the issues IMO (and hopefully
the design of your app). Of course this is all a Utopian view of things. In
the real world most programmers require the handholding that a GC offers.
Oct 10 '07 #15

Chris Mullins [MVP - C#] wrote:
"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote:
>Well, my experience is the opposite. It is simply not possible to have a
memory leak in a garbage collection system. But memory leaks abound in
conventionally written C/C++ applications. I see it all the time.

Well, for what it's worth, in a 3 hour presentation on application Tuning,
about 1/2 that time is spent on Memory Leaks, and how to track them down.
I guess that depends on your definition of "memory leak". Mine is such
that a garbage collection system specifically precludes them.

I understand how you might call your example of a static variable that
isn't released a "memory leak", but the memory hasn't become orphaned or
anything. The application simply failed to release it, and the same
kind of "leak" exists regardless of the type of memory management (ie it
would be just the same error for a CRT-based program to fail to release
that memory).

To me, a true "memory leak" is a situation in which the memory is gone
forever for that process. There's simply no way for any code anywhere
to ever recover it. Whether or not such code exists isn't relevant to
my use of the term. It's whether it _could_ exist.

At the very least, it seems to me that in your presentation you should
make clear the differing definitions of "memory leak".

Pete
Oct 10 '07 #16

Göran Andersson wrote:
>
>return;

Why a return statement here? It's totally superfluous as there is no
more code from this point to the end of the method.
Some languages require it, and some people moving from those languages
would feel better with it there. Sure, they could do it the C# way and
leave it out, but it causes no problems, does it?

Also, more pertinently, if you should come along and modify the code, by
wrapping it in an 'if' for example and also put code after it, then you
have to remember to insert the 'return' if it is not already there.

If you modify the routine to return something, the return is already
there to be modified.

I'm not saying that these are particularly *good* reasons, mind you..

Cheers,

Cliff

--

Have you ever noticed that if something is advertised as 'amusing' or
'hilarious', it usually isn't?
Oct 10 '07 #17

Larry Smith wrote:
It's a religious issue and both have their pros and cons. While C++ may be
more error-prone than a GC system however (for releasing resources), there
need not be a significant difference.
I certainly agree there. But one system is definitely more resilient to
programmer error.
The problem is almost entirely a human
one. It's extremely easy to handle resources in C++ as you stated but the
language's reputation has often suffered because of its practioners (most
programmers being very poor at what they do). My own opinion however is that
in the hands of those who really know what they're doing (few and far
between),
I think we've found ourselves in vehement agreement on that point
previously. I'm still in vehement agreement with you on it. :)
RAII is a cleaner approach than a GC system. By clean I mean it's
more natural to release your resources in a destructor than to wait for a GC
to run (not to be confused with easier or less error prone - it's clearly
not).
Well, the thing is...once you no longer have a reference to a memory
resource, it _is_ released. Just because the GC hasn't run, that
doesn't mean the memory isn't available. It just means that the GC
hasn't gotten around to moving it into the collection of memory that is
_immediately_ available for use.

Logically speaking, the memory is still in fact available, the moment
you release your last reference to it.
The "using" statement for performing this in C# for instance (or even
worse, a "finally" block), is ugly compared to a C++ destructor.
IMHO, it's a mistake to think of the "using" statement, the finalizer,
or IDisposable as related to a C++ destructor. It's only "ugly"
compared to it if you are treating them the same.

The "using" statement and IDisposable exist for one purpose: to
explicitly release resources held by an object without releasing the
object itself. Within that one purpose, there are two sub-categories of
types of resources that may be released: managed, and unmanaged.

Obviously the only reason the unmanaged category even exists is that
.NET runs on top of a system that is not entirely managed code. If all
of Windows was based on a garbage collection system, that category
wouldn't exist.

So, let's consider only the managed category. In this case, it's
beneficial to be able to tell an object "let go of the resources you're
holding" without releasing that object itself. But this is again not
comparable to a destructor, because the object itself still exists. It
hasn't been released or destroyed and it is theoretically possible that
it could be reused. This is much more comparable to a C++ class that
hasn't been deleted, and thus hasn't been destroyed, but which has some
sort of "release your resources" function that has been called.
The destructor is nicely symmetrical with its constructor.
The latter initializes the object and the former cleans it up.
And in C# not having a reference to an object is nicely symmetrical to
creating a reference to an object. The latter initializes the object
and the former cleans it up. In a purely managed environment, releasing
a single reference to an object is exactly equivalent to the C++
paradigm of having to call a destructor where individual resources
within the object have to be explicitly cleaned up.

In fact, the garbage collection model is, at least for that particular
operation, much more efficient, because there's no need to go through
the entire object cleaning things up. Everything that object refers to
is automatically released, with a single nulling, or leaving scope, of
the last variable holding a reference to that object.

Overall, I suspect the efficiency is about the same. The extra work
that the C++ model has to do initially is balanced by the extra work the
garbage collector will have to do later.
This occurs immediately
when an object goes out of scope so your resources only exist for as long as
they're needed.
Likewise, with GC, as soon as an object is no longer referenced, any
managed resources it holds effectively no longer exist. They are
automatically released when the object referencing them is.
It's all very well controlled and understood. You know
exactly when clean-up is going to occur and need not worry about the
timing of a GC.
But why do you care when the GC is going to occur? It only happens when
it needs to, or when it gets the opportunity to, and there should be
nothing in your code that depends on or otherwise relies on when, if at
all, garbage collection happens.

In a multi-tasking operating system like Windows, there are a wide
variety of things that occur and which you have no control over. A
garbage collection system simply introduces a new instance to this
already very broad category of components.
In fact, a GC itself can even promote sloppy behaviour. People become
so used to it doing the clean up that they can neglect to explicitly clean
something up themselves when the situation calls for it
I don't understand that at all. A person who isn't used to releasing a
reference to an object when they are done with it isn't going to be used
to deleting a C++ object when they are done with it. Conversely, a person
who can remember to delete a C++ object when they are done with it can
remember to release a reference to a .NET object when they're done with it.
(such as immediately
releasing a resource that might later cause something else to fail if it
hasn't been GC'd yet).
What kind of resource? If you're talking about a managed resource, then
simply releasing the reference to the referencing object is sufficient
to release the resource.

If you're talking about an unmanaged resource, well...that's not a
problem inherent with garbage collection. It's a natural consequence of
mixing a garbage collection system with a traditional alloc/free system.
That problem _only_ exists because of the traditional alloc/free
system; it hardly seems fair to blame it on the garbage collection paradigm.
Or people might always perform shallow copies of
their objects where a deep copy is required (since it's just so easy).
This one I understand even less. If a deep copy is required but a
shallow copy is done, this is if anything more dangerous in the C++
model, because the referenced data can be freed by any one copy of the
instance. This just won't happen in a garbage collection system.

With a GC system, you still have the potential issue of having multiple
instances refer to the same data, but this issue exists regardless of
the memory management model. It's an implementation problem, not a
memory management problem.
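To make the shallow-versus-deep distinction concrete (the Scores type is invented for illustration): a shallow copy made with MemberwiseClone() aliases the referenced array, while a deep copy duplicates it.

```csharp
using System;

// Invented example: the shallow copy shares the int[] with the
// original, so a write through one is visible through the other;
// the deep copy duplicates the array.
class Scores
{
    public int[] Values = new int[] { 1, 2, 3 };

    public Scores ShallowCopy()
    {
        return (Scores)MemberwiseClone();    // shares Values
    }

    public Scores DeepCopy()
    {
        Scores copy = new Scores();
        copy.Values = (int[])Values.Clone(); // independent Values
        return copy;
    }
}
```

Under GC the shared array simply stays alive as long as any copy can reach it; in the C++ model that same aliasing is what makes a premature delete through one copy dangerous.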
In
C++ you have to think about these things more but that deeper thinking
process also sharpens your understanding of the issues IMO (and hopefully
the design of your app). Of course this is all a Utopian view of things. In
the real world most programmers require the handholding that a GC offers.
I generally agree that it is good to think more deeply about what is
going on. A person who understands better the lower levels is almost
always going to be able to use the higher level API more
effectively. But I don't see how that makes the C++ model necessarily
better; either model has some lower level implementation details that
are important to understand for most effective use, and C++ has just the
same potential for someone failing to bother to learn those lower level
implementation details as .NET does.

And I still think that people scoff at garbage collection at least as
much as they do the more traditional C++ model.

Pete
Oct 10 '07 #18

P: n/a
[What's a Memory Leak?]

"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote
I guess that depends on your definition of "memory leak". Mine is such
that a garbage collection system specifically precludes them.
The definition will vary, but even then, saying GC precludes them isn't
quite right.

There are all sorts of strange corner cases. Off the top of my head:
- Static variables are never collected off the high frequency heap
- The Large Object Heap isn't compacted

Think of the fun you could have allocating large byte arrays in static
constructors - memory would come off the LOH, and would likely never be able
to be reclaimed....
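As a sketch of that corner case (the class is invented for illustration; 85,000 bytes is the commonly documented large-object threshold for the desktop CLR of this era):

```csharp
using System;

// Sketch: any array over the large-object threshold (~85,000 bytes)
// is allocated on the Large Object Heap. Rooting it in a static field
// (initialized by the type's static constructor) means it can never
// be reclaimed while the AppDomain lives.
class BigCache
{
    public static readonly byte[] Data = new byte[1024 * 1024]; // ~1 MB, lands on the LOH
}
```

Because the array is rooted by a static field it survives until the AppDomain unloads, and because the LOH of that era was not compacted, churning through many such allocations of varying sizes could also fragment it.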

If we bring in Win32 & Interop, many corner cases spring up, the most common
one being fragmentation due to pinning.
To me, a true "memory leak" is a situation in which the memory is gone
forever for that process. There's simply no way for any code anywhere to
ever recover it. Whether or not such code exists isn't relevant to my use
of the term. It's whether it _could_ exist.
I think that's a misleading definition.

A leak, even in C/C++ land, is generally characterized by an application
bug. Sometimes it's as simple to fix as, "use an auto pointer", and other
times it's very complex. The same seems to hold true in .Net.
At the very least, it seems to me that in your presentation you should
make clear the differing definitions of "memory leak".
Nah. These are usually people new to .Net, and the level for them is just
right. More detail just becomes confusing...

--
Chris Mullins
Oct 10 '07 #19

P: n/a
Chris Mullins [MVP - C#] wrote:
[What's a Memory Leak?]

"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote
>I guess that depends on your definition of "memory leak". Mine is such
that a garbage collection system specifically precludes them.

The definition will vary, but even then, saying GC precludes them isn't
quite right.
As I said, my definition does.
There are all sorts of strange corner cases. Off the top of my head:
- Static variables are never collected off the high frequency heap
- The Large Object Heap isn't compacted
Do you mean there is some programming error that would cause that to
happen? Can you be more specific?
Think of the fun you could have allocating large byte arrays in static
constructors - memory would come off the LOH, and would likely never be able
to be reclaimed....
But those would be arrays the application still holds a reference to,
no? If not, why wouldn't they be able to be reclaimed?
If we bring in Win32 & Interop, many corner cases spring up, the most common
one being fragmentation due to pinning.
I'm specifically talking about memory leaks. There are, of course,
other ways to interfere with memory allocations, such as fragmenting the
heap. That's outside the scope of what I'm talking about.
>To me, a true "memory leak" is a situation in which the memory is gone
forever for that process. There's simply no way for any code anywhere to
ever recover it. Whether or not such code exists isn't relevant to my use
of the term. It's whether it _could_ exist.

I think that's a misleading definition.
Well, I'm happy to agree to disagree. But just as I suspect there's at
least one person who agrees with your viewpoint, I think it's likely
there's at least one person who agree with mine. We're talking about a
semantic issue here, and those are almost never black & white.

Even if you don't agree with a particular viewpoint, you should at least
take it into account.
A leak, even in C/C++ land, is generally characterized by an application
bug.
Agreed. But I don't agree that all memory-related bugs are examples of
"leaks". A leak is a bug, but not all bugs are leaks.
[...]
>At the very least, it seems to me that in your presentation you should
make clear the differing definitions of "memory leak".

Nah. These are usually people new to .Net, and the level for them is just
right. More detail just becomes confusing...
Your choice, of course. However, from my own personal point of view,
the fact that I might be new to .NET does not negate any previous
experience I might have, nor does it change how I view the definition of
a "memory leak".

Whether that's an issue in your presentation depends more on how many,
if any, of your audience shares my viewpoint than on how much, if
anything, they already know about .NET.

Pete
Oct 10 '07 #20

P: n/a
Enkidu wrote:
Göran Andersson wrote:
>>
>>return;

Why a return statement here? It's totally superflous as there is no
more code from this point to the end of the method.
Some languages require it and some people moving from those languages
would feel better with it there. Sure they could do it the C# way and
leave it out, but it causes no problems, does it?

Also, more pertinently, if you should come along and modify the code, by
wrapping it in an 'if' for example and also put code after it, then you
have to remember to insert the 'return' if it is not already there.
On the other hand, you could just as well add code that you actually
want to be executed in both cases, so then you would have to remember to
remove the return statement. :)
If you modify the routine to return something, the return is already
there to be modified.

I'm not saying that these are particularly *good* reasons, mind you.

Cheers,

Cliff

--
Göran Andersson
_____
http://www.guffa.com
Oct 10 '07 #21

P: n/a
This is what I mean by "ugly":

void CSharpFunc()
{
using (MyExpensiveObject obj = new MyExpensiveObject())
{
// ...
}
}

This OTOH achieves "deterministic finalization" automatically and it's
syntactically cleaner:

void CPlusPlusFunc()
{
MyExpensiveObject obj;

// ...
}
Oct 10 '07 #22

P: n/a
Larry Smith wrote:
This is what I mean by "ugly":

void CSharpFunc()
{
using (MyExpensiveObject obj = new MyExpensiveObject())
{
// ...
}
}

This OTOH achieves "deterministic finalization" automatically and it's
syntactically cleaner:

void CPlusPlusFunc()
{
MyExpensiveObject obj;

// ...
}
But those two functions aren't doing the same thing.

The C# equivalent to the CPlusPlusFunc() you posted is this:

void CSharpFunc()
{
MyExpensiveObject obj = new MyExpensiveObject();

// ...
}

As I pointed out, "using" is used for a completely different purpose.
It's a mistake to think of it as the same as a C++ destructor.

In fact, in .NET the runtime is smart enough to recognize when a
reference is not actually used throughout a function, and will in that
case release the reference _earlier_ than would be the case for C++.

So not only is the code no "uglier", the lifetime of the object in .NET
much more exactly matches its actual use than it does in C++.

Pete
Oct 10 '07 #23

P: n/a
But those two functions aren't doing the same thing.
>
The C# equivalent to the CPlusPlusFunc() you posted is this:

void CSharpFunc()
{
MyExpensiveObject obj = new MyExpensiveObject();

// ...
}
In fact they are doing the same thing for all intents and purposes. I have
no reason to allocate my C++ object on the free-store (heap) in this
scenario, but even if I did, I can always assign the pointer to an
"auto_ptr" or some other smart-pointer object. It's still syntactically
cleaner than the C# version. In practice however you need not rely on local
pointers most of the time (or even "new" for that matter) so it's usually a
non-issue.
As I pointed out, "using" is used for a completely different purpose. It's
a mistake to think of it as the same as a C++ destructor.
I realize it's not the same thing. It's just syntactic sugar for a call to
"Dispose()". It's intended to "destroy" the object however and once called,
you shouldn't access the object again.
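That sugar can be shown directly; MyExpensiveObject here is a stand-in whose Dispose() merely counts calls, and the second method is roughly the expansion the compiler produces for the first:

```csharp
using System;

// Stand-in type whose Dispose() merely records that it was called.
class MyExpensiveObject : IDisposable
{
    public static int DisposeCount;
    public void Dispose() { DisposeCount++; }
}

static class Demo
{
    public static void WithUsing()
    {
        using (MyExpensiveObject obj = new MyExpensiveObject())
        {
            // ... use obj ...
        }
    }

    // Roughly what the compiler generates for WithUsing():
    public static void Desugared()
    {
        MyExpensiveObject obj = new MyExpensiveObject();
        try
        {
            // ... use obj ...
        }
        finally
        {
            if (obj != null)
                obj.Dispose();
        }
    }
}
```

Either way Dispose() runs exactly once, even if the body throws, which is what the try/finally buys you.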
In fact, in .NET the runtime is smart enough to recognize when a reference
is not actually used throughout a function, and will in that case release
the reference _earlier_ than would be the case for C++.
When an object's reference is no longer accessible, the call to "Finalize()"
is non-deterministic. It normally happens when memory exhaustion occurs
which triggers the GC. In theory however it might never occur so
"Finalize()" itself might never be called. GC is also a very expensive
process. The GC even suspends all threads in the application to carry out
its work by injecting the code to do this at strategic locations.
Implementing a proper clean-up routine also raises a host of other
housekeeping chores such as calling "SuppressFinalize()" in your "Dispose()"
method, handling multiple calls to "Dispose()", etc (note that both
"Dispose()" and "Finalize()" should also be routed through the same common
cleanup function). The bottom line in any case is that you have no choice
but to release your resources explicitly if you can't wait for the GC to
invoke "Finalize()". The syntax for that is unsightly however (IMO).
So not only is the code no "uglier", the lifetime of the object in .NET
much more exactly matches its actual use than it does in C++.
How do you arrive at that conclusion? The call to "Finalize()" is
non-deterministic but the call to a C++ destructor isn't. That occurs as
soon as it exits its scope which you have complete control over. The
destructor is also syntactically cleaner than relying on a "using" statement
or calling "Dispose()" directly.
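The housekeeping chores just listed are conventionally written as the standard dispose pattern; this is a generic sketch (not code from any particular class), with both Dispose() and the finalizer funnelling into one common cleanup routine:

```csharp
using System;

// Generic sketch of the standard Dispose()/finalizer pattern:
// Dispose() and the finalizer both funnel into Dispose(bool),
// Dispose() suppresses finalization, and repeated calls are harmless.
class ResourceHolder : IDisposable
{
    private bool _disposed;

    public bool IsDisposed
    {
        get { return _disposed; }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // finalizer no longer needed
    }

    ~ResourceHolder()
    {
        Dispose(false);            // GC path: unmanaged cleanup only
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed)
            return;                // tolerate multiple Dispose() calls
        if (disposing)
        {
            // release managed resources here
        }
        // release unmanaged resources here
        _disposed = true;
    }
}
```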
Oct 11 '07 #24

P: n/a
So what? The finalizer isn't part of the behavior of a correctly written
program. You should forget all about the finalizer. It only is relevant
when you have a bug.
You're resting the crux of your argument on some theoretical notion that
we live in a world without unmanaged resources. Well, explain to me what
resources you think the .NET classes are handling behind the scenes. This is
.NET for "Windows", not .NET for "Peter's purely managed OS X". In the world
we both live in the finalizer is a fact of life. What do you think the
"Note" section here means for instance:

http://msdn2.microsoft.com/en-us/lib...t.dispose.aspx

I'm not aware of any "bug" so should I still "forget all about the
finalizer". What if I'm holding an object that stores a network resource of
some type, possibly using a native .NET class that holds this. If I don't
explicitly release it then it might never get released and my app might
eventually fail somewhere (after creating enough instances). In fact, even
your own objects should implement "Dispose()" whenever they store references
to other objects that implement "Dispose()". The focus of my previous posts
have been on this very issue. I'm talking about the (cleaner) syntax of
using a C++ destructor versus finalize/dispose for releasing *unmanaged*
resources (not objects that live entirely on the managed heap and are
therefore cleaned up automatically).
Oct 11 '07 #25

P: n/a
On Oct 10, 6:11 pm, Peter Duniho <NpOeStPe...@NnOwSlPiAnMk.com> wrote:
[full quote of post #20 snipped]
Chris,
For what it's worth, I'm with Pete on this one. It's probably due to
the fact that I have C++ background, but to me a memory leak means a
very specific thing, which is that the memory has been orphaned and
there is no reference anywhere that can be used to recover it. Not
all increases in memory usage are memory leaks.

Even if I were new to .NET, I think it would be helpful to be a little
more specific about your definition of memory leak. Even people who
are new to .NET have heard that one of the big benefits is garbage
collection which prevents memory leaks.
John

Oct 11 '07 #26

P: n/a
Neglecting to call "Dispose()" isn't a bug unto itself unless
absolutely mandated by a particular object.
IMO, implementing IDisposable is just such a mandate - by the
encapsulation principle, i.e. you shouldn't know or care what goes on
under the covers.
It would be nice if the language had some way of enforcing this, and
transferring ownership in the case of factory methods, or handing
ownership to a wrapper object that was itself IDisposable. Oh well.

Marc
Oct 11 '07 #27

P: n/a
IMO, implementing IDisposable is just such a mandate

Probably the most reasonable interpretation anyway.
Oct 11 '07 #28

P: n/a
I don't understand that at all. A person who isn't used to releasing a
reference to an object when they are done with it isn't going to be used
to deleting a C++ object when they are done with. Conversely, a person
who can remember to delete a C++ object when they are done with it can
remember to release a reference to a .NET object when they're done with
it.
Not "a person who can remember to delete a C++ object". A good programmer
knows how to use a reference-counting smart pointer, where cleanup is just
as automatic as with a garbage collector (and sometimes moreso). A smart
pointer as a static member is just as bad as with GC, but at least C++ smart
pointers do the right thing with local variables and exceptions or early
exits -- automatically.
Oct 11 '07 #29

P: n/a
Chris,
For what it's worth, I'm with Pete on this one. It's probably due to
the fact that I have C++ background, but to me a memory leak means a
very specific thing, which is that the memory has been orphaned and
there is no reference anywhere that can be used to recover it. Not
all increases in memory usage are memory leaks.
I know exactly what you're getting at, but you and Peter are both wrong.

A memory leak is memory that remains unavailable for reuse after it is no
longer needed. This is the only definition of memory leak that makes sense.
After all, by your definition, this isn't a memory leak (C++):

void f(void)
{
int* p = new int[1024];
}

Of course that is a memory leak. And so is this (C#):

class C
{
private static int[] a;
static C() { a = new int[1024]; }
}

C c = new C();

These cases are *identical*. Both int arrays are still accessible (in
native code, via HeapWalk, in managed, via reflection on the Type object
which, once loaded, is never freed until the AppDomain unloads), but are
also totally useless in the context given.
Oct 11 '07 #30

P: n/a

"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote in message
news:13*************@corp.supernews.com...
Larry Smith wrote:
>This is what I mean by "ugly":

void CSharpFunc()
{
using (MyExpensiveObject obj = new MyExpensiveObject())
{
// ...
}
}

This OTOH achieves "deterministic finalization" automatically and it's
syntactically cleaner:

void CPlusPlusFunc()
{
MyExpensiveObject obj;

// ...
}

But those two functions aren't doing the same thing.

The C# equivalent to the CPlusPlusFunc() you posted is this:

void CSharpFunc()
{
MyExpensiveObject obj = new MyExpensiveObject();

// ...
}
No, because Larry's is exception-safe.
Oct 11 '07 #31

P: n/a

"Marc Gravell" <ma**********@gmail.com> wrote in message
news:%2****************@TK2MSFTNGP03.phx.gbl...
>Neglecting to call "Dispose()" isn't a bug unto itself unless absolutely
mandated by a particular object.

IMO, implementing IDisposable is just such a mandate - by the
encapsulation principle, i.e. you shouldn't know or care what goes on
under the covers.
It would be nice if the language had some way of enforcing this, and
transferring ownership in the case of factory methods, or handing
ownership to a wrapper object that was itself IDisposable. Oh well.
The language does. Well, C++/CLI does at least. A C++/CLI class
automatically implements IDisposable to call Dispose on every member object
that implements IDisposable.
>
Marc

Oct 11 '07 #32

P: n/a
Ben Voigt [C++ MVP] wrote:
>Chris,
For what it's worth, I'm with Pete on this one. It's probably due to
the fact that I have C++ background, but to me a memory leak means a
very specific thing, which is that the memory has been orphaned and
there is no reference anywhere that can be used to recover it. Not
all increases in memory usage are memory leaks.

I know exactly what you're getting at, but you and Peter are both wrong.
It must be satisfying to know that, even in a disagreement of semantics
(which is itself almost always a matter of subjective interpretation),
you are always right, and the other person is always wrong.
A memory leak is memory that remains unavailable for reuse after it is no
longer needed. This is the only definition of memory leak that makes sense.
But you haven't defined "remains unavailable".

This is a semantic issue, and we can follow the chain of terminology as
deep as you like. You can't go around saying that your interpretation
is patently obvious while mine is obviously wrong. When dealing with
human language, hardly anything is ever truly obvious.
After all, by your definition, this isn't a memory leak (C++):

void f(void)
{
int* p = new int[1024];
}

Of course that is a memory leak.
That most certainly is a leak by my definition. If you think otherwise,
I obviously haven't explained my definition very well. But that's an
error of communication, not of the definition itself.
And so is this (C#):

class C
{
private static int[] a;
static C() { a = new int[1024]; }
}

C c = new C();
I disagree that that's a leak, sort of.

You still have a variable referencing the memory. But your class is
degenerate, providing no code whatsoever that uses the variable and so
literally speaking I suppose I'd say that's a leak. And it is so by my
definition, since the variable is not accessible by any code in the program.

But assuming you put the private static variable "a" there for a reason,
and assuming you actually wrote code somewhere in the class that uses
"a", then the failure to release the array later once you no longer need
it isn't what I'd call a leak. It's certainly a programmer error, and
it certainly does result in the application using more memory than it
should. But the memory is still accessible by an actual variable
storing the reference to the memory.
These cases are *identical*. Both int arrays are still accessible (in
native code, via HeapWalk, in managed, via reflection on the Type object
which, once loaded, is never freed until the AppDomain unloads), but are
also totally useless in the context given.
I wouldn't call the allocated memory "accessible" in your f() case.
Using HeapWalk or reflection doesn't count as "accessible" to the
program that actually allocated the memory. Or put another way, if you
choose to define memory that can be found via those means as
"accessible", IMHO you've just made the word "accessible" a completely
pointless word, as with a definition that broad (assuming you take it to
its logical conclusion) there is no such thing as memory that is NOT
"accessible".

Pete
Oct 12 '07 #33

P: n/a
MikeP wrote:
The other paradigm you're talking about is the very same paradigm we're all
working with. And these "compromises" you're referring to are fundamental
necessities required to support the unmanaged world we all live in. You need
to stop focusing on your hypothetical model that doesn't exist.
Why? I don't feel such a need. Who else has the authority to tell me
that in spite of that, I do have that need?
When a purely managed OS is available then you can have your cake.
I'm enjoying my cake right now, thank you. I don't actually have a
problem with the syntax required to mix .NET with the pre-existing
Windows OS behaviors.
[...]
In reality .NET has no choice but to support unmanaged resources so
why not address that instead of focusing on think-tank ideals.
In reality, why make any attempt to judge .NET on this arbitrary measure
of "cleanness" anyway?

I personally think the comparison is silly. But if one is going to make
the comparison, I don't think it's fair to describe .NET as "unclean"
just because it has to make compromises because the unmanaged paradigm
was here first.

You might as well say that the metric system is bad just because all
those metric wrenches don't fit your English bolt heads.

And if you think that's a fair way to evaluate the metric system,
well...that's the crux of the disagreement right there, and we will
never get past that.

Pete
Oct 12 '07 #34

P: n/a
On Oct 11, 5:14 pm, "Ben Voigt [C++ MVP]" <r...@nospam.nospam> wrote:
A memory leak is memory that remains unavailable for reuse after it is no
longer needed. This is the only definition of memory leak that makes sense.
After all, by your definition, this isn't a memory leak (C++):

void f(void)
{
int* p = new int[1024];

}
I don't follow what you mean that my definition says this is not a
memory leak. Once f() completes and p goes out of scope, there is no
reference that can be used to recover the memory.
Of course that is a memory leak. And so is this (C#):

class C
{
private static int[] a;
static C() { a = new int[1024]; }

}

C c = new C();

These cases are *identical*.
I disagree that these are identical, for a couple of reasons. First,
if you execute "new C()" N times, the memory will be allocated for "a"
just once, whereas executing "f()" N times will allocate the memory N
times.

Second, there is still a reference to the memory in the case of class
C. I understand your argument that you can get a reference using
things like HeapWalk & reflection, but I think most people would agree
there is a qualitative difference between the two cases.

For example, it is *possible* to write a simple method for class C
that will let the GC reclaim the memory:

public static void CleanUp()
{
a = null;
}

There is no simple equivalent for the C++ case because you no longer
have a reference to the memory. And if there is a simple equivalent,
I stand corrected. In that case, I would love to see some sample code
-- I'm always happy to learn some new tricks.

John

Oct 12 '07 #35

P: n/a
I'm logged back onto my original machine (and account name):
> In reality .NET has no choice but to support unmanaged resources so why
not address that instead of focusing on think-tank ideals.

In reality, why make any attempt to judge .NET on this arbitrary measure
of "cleanness" anyway?
Because it affects your code. It's only a trivial issue, however, and hardly a
scathing indictment of .NET (which is an excellent system overall). I've
certainly never lost any sleep over it. Code should nevertheless be as clean
and concise as possible and the C++ destructor better promotes this than the
"using" statement. Relying on programmers to explicitly call "Dispose()"
isn't as safe either (though this is a topic for another day).
I personally think the comparison is silly. But if one is going to make
the comparison, I don't think it's fair to describe .NET as "unclean" just
because it has to make compromises because the unmanaged paradigm was here
first.
How it makes those compromises *is* significant. It's also a specious
argument IMO to suggest that any system (A) is inferior in some way only
because of the tradeoffs it must make to support another system (B). System
B is not inferior here but only different so system A's shortcomings are its
own. More to the point, part of system A's job *is* to support system B so
if it does it in a way that itself is substandard (even if you consider
system B inferior which would only be an opinion here), then system A is
still at fault.
You might as well say that the metric system is bad just because all those
metric wrenches don't fit your English bolt heads.
And if you think that's a fair way to evaluate the metric system,
well...that's the crux of the disagreement right there, and we will never
get past that.
If everyone was still working in imperial units then what good would it do
me?
Oct 12 '07 #36

P: n/a
Larry Smith wrote:
[...] Code should nevertheless be as clean
and concise as possible and the C++ destructor better promotes this than the
"using" statement.
As I've already noted, the C++ destructor and the "using" statement or
Dispose() method don't do the same things. It's not sensible to compare
them.

If and when C++ has a way for me to call a destructor multiple times,
then perhaps we can revisit the question. Until then, they just aren't
comparable.
How it makes those compromises *is* significant. It's also a specious
argument IMO to suggest that any system (A) is inferior in some way only
because of the tradeoffs it must make to support another system (B).
I agree. That's my point. So why are you suggesting that system A is
inferior only because of the tradeoffs is must make to support system B?
System
B is not inferior here but only different so system A's shortcomings are its
own.
You keep asserting that "different == shortcomings". That's not true.

Different paradigms will always have to go outside their normal mode of
operation to support other paradigms. You can't judge a paradigm by the
concessions it needs to make to support other paradigms. Otherwise, the
paradigm that shows up first always wins.
If everyone was still working in imperial units then what good would it do
me.
What good would metric do you? You'd gain all of the benefits from the
use of metric that people do every day already. There's a reason that
most of the world uses metric now, and that reason isn't just that
"everyone else is using it".

Are you claiming that the only benefit the metric system has is that
other people use it? If not, then why is the question of what everyone
else is using relevant? (And if so, then how is that in this day and
age a person can be unaware of all of the other benefits that the metric
system offers?)

Pete
Oct 12 '07 #37

P: n/a
Peter, this is a very simple matter and perhaps we're misunderstanding
each other. A C++ programmer doesn't have to do anything to clean up
resources except write the destructor itself. Once written, you never have
to bother with it again since the destructor runs *automatically*. Even if
an exception is thrown, resources are cleaned up with no work on the part of
the developer (and no waiting until the GC runs which might never happen -
oops). By contrast, if I use a .NET object with a "Dispose()" method I now
have the following issues:

1) Clients have to explicitly call "Dispose()" or rely on the "using"
statement, which just wraps the call to "Dispose()"
2) Step 1 is an extra requirement/burden
3) Step 1 may be ignored by clients or an exception may be thrown before
"Dispose()" is called (really forcing clients to adopt the "using" statement
instead of calling "Dispose()" directly). "Finalize()" should then clean
things up. You've said ample times now that this is irrelevant and to just
ignore "Finalize()". It's *not* irrelevant because "Finalize()" may never be
called which could cause your app to break at some point. Just because you
may not be writing "Finalize()" itself doesn't mean you don't have to think
about the process or understand the issues. And if you do have to write it
on occasion then the details of juggling both "Finalize()" and "Dispose()"
aren't trivial. We haven't even touched upon a host of other issues that
Juval Lowy spells out in his book "Programming .NET Components" (and he's
recognized by MSFT as one of the world's top experts)
4) Step 1 is more verbose than a C++ destructor (read "uglier"). The C++
destructor has no (visible) usage footprint which makes for cleaner code.

It's that simple. No amount of debating that the GC is at the mercy of step
1 (through its own fault or not) changes the reality of steps 2-4.
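A short sketch of point 3 above (the `Guard` class is invented for illustration): in C++, cleanup happens during stack unwinding even when an exception is thrown, with no "using"-style statement at the call site.

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical RAII guard: its destructor runs during stack unwinding,
// so cleanup happens even on the exceptional path, with no extra
// syntax at the point of use.
struct Guard {
    bool* cleaned;
    explicit Guard(bool* c) : cleaned(c) {}
    ~Guard() { *cleaned = true; }
};

bool cleaned_up_despite_exception() {
    bool cleaned = false;
    try {
        Guard g(&cleaned);
        throw std::runtime_error("boom");
    } catch (const std::runtime_error&) {
        // g's destructor already ran before control reached here
    }
    return cleaned;
}
```
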
Oct 12 '07 #38

P: n/a
Larry Smith <no_spam@_nospam.com> wrote:
Peter, this is a very simple matter and perhaps we're misunderstanding
eachother. A C++ programmer doesn't have to do anything to clean up
resources except write the destructor itself.
So long as the ownership of the object is clear, of course. You can use
reference counting for "shared" objects, but that then runs into the
problem of circular references.
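The circular-reference problem mentioned here can be sketched with standard reference counting (`std::shared_ptr`; the `Node` struct is illustrative, not from the thread). Two objects that strongly reference each other never reach a zero count and leak; breaking one link with `std::weak_ptr` fixes it.

```cpp
#include <cassert>
#include <memory>

// Illustrative node type: "next" is a strong (counted) reference,
// "prev" is a weak reference that does not keep its target alive.
struct Node {
    std::shared_ptr<Node> next;
    std::weak_ptr<Node> prev;
};

bool cycle_is_broken_by_weak_ptr() {
    std::weak_ptr<Node> probe;
    {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;   // a strongly owns b
        b->prev = a;   // b only weakly refers back to a
        probe = a;
    }
    // Both nodes were destroyed. Had we written "b->next = a" instead
    // (a strong cycle), neither count would reach zero and both leak.
    return probe.expired();
}
```
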

If C++ had been that straightforward and always foolproof, I think
it's fairly safe to bet that MS would have taken it when creating .NET.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Oct 12 '07 #39

P: n/a
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote

[Garbage Collection - Memory Leaks]
So long as the ownership of the object is clear, of course.
I think that statement, right there, is really the key. In any environment
I've ever used, managing memory has been pretty easy so long as ownership of
something is clear.

The problems don't start to arise until some base class, somewhere deep
down, allocates some memory and hands it off. At that point, ownership
becomes very ambiguous, and it's often impossible for the caller to tell
"Do I free this memory or not?".

.NET has just as many problems here as any other language. How many DALs have
you seen that look like:
DataSet GetUser(int employeeId){} ?

This has some complicated questions associated with it - the DataSet has a
dispose method on it, but who is responsible for calling it? What if you
keep it around for a while, sitting in a cache? As systems get complex, this
question is often very difficult to answer.

For example, in the most basic sense, the GetUser method would go out to the
database, build a dataset, and the caller is responsible for Disposing the
dataset.

... but after a round of performance tuning, the DAL could be sticking the
DataSet into a cache. When the method is called, it just returns the cached
copy. If the caller disposes the dataset now, the overall system could be
compromised.
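One way to make this ownership question explicit in C++ is shared ownership: the cache and the caller each hold a counted reference, and the data is released only when the last holder lets go. A sketch (`get_user` and `UserData` are invented stand-ins for the `GetUser`/`DataSet` example above, not real API):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Invented stand-in for the DataSet returned by the DAL.
struct UserData { std::string name; };

// The cache holds one counted reference per entry.
std::map<int, std::shared_ptr<UserData>> cache;

// Both the cache and the caller share ownership; neither needs to know
// whether the other still holds the object, so "who disposes it?" never
// arises -- the last reference to go releases the data.
std::shared_ptr<UserData> get_user(int employee_id) {
    auto it = cache.find(employee_id);
    if (it != cache.end()) return it->second;   // cached copy
    auto user = std::make_shared<UserData>();   // "database" fetch
    cache[employee_id] = user;
    return user;
}
```
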

This stuff gets complicated quickly. GC seems the best overall solution
I've seen to date, but there are certainly aspects of the C++ destructor
pattern, and the related std::auto_ptr<T> pattern, that I really, really
liked.

--
Chris Mullins
Oct 12 '07 #40

P: n/a
Chris Mullins [MVP - C#] <cm******@yahoo.com> wrote:

<snip>
This stuff gets complicated quickly. GC seems the best overall solution
I've seen to date, but there are certainly aspects of the C++ destructor
pattern, and the related std::auto_ptr<T> pattern, that I really, really
liked.
Absolutely. It's going to be interesting to see whether the successor
to .NET, whenever it comes, has a better story around this. It's not
the only aspect of development which appears to be slightly lacking
(and not for want of effort) - there are fundamentally difficult
problems, but whether they turn out to be fundamentally "impossible to
solve" problems remains to be seen :)

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Oct 12 '07 #41

P: n/a

"John Duval" <Jo********@gmail.comwrote in message
news:11**********************@z24g2000prh.googlegroups.com...
On Oct 11, 5:14 pm, "Ben Voigt [C++ MVP]" <r...@nospam.nospam> wrote:
>A memory leak is memory that remains unavailable for reuse after it is no
longer needed. This is the only definition of memory leak that makes
sense.
After all, by your definition, this isn't a memory leak (C++):

void f(void)
{
    int* p = new int[1024];
}

I don't follow what you mean that my definition says this is not a
memory leak. Once f() completes and p goes out of scope, there is no
reference that can be used to recover the memory.
I quote you: "to me a memory leak means a very specific thing, which is that
the memory has been orphaned and there is no reference anywhere that can be
used to recover it". Such a reference certainly does exist, and HeapWalk
can be used to obtain it.
Oct 13 '07 #42

P: n/a
I wouldn't call the allocated memory "accessible" in your f() case. Using
HeapWalk or reflection doesn't count as "accessible" to the program that
actually allocated the memory. Or put another way, if you choose to
define memory that can be found via those means as "accessible", IMHO
you've just made the word "accessible" a completely pointless word, as
with a definition that broad (assuming you take it to its logical
conclusion) there is no such thing as memory that is NOT "accessible".
If it counts for .NET, it should also count for C++. With a definition of
accessible which includes HeapWalk, neither C++ nor .NET can leak memory.
With a narrower definition of accessible, the .NET GC can now leak memory
(the object is "reachable" according to the garbage collector, but not in
any way that is useful to the programmer, hence a leak). For example, in
.NET registering an object to a static event would leak that object. Here:

class AutoCleanup : IDisposable
{
    private void Clean(object sender, EventArgs e) {}
    public AutoCleanup() { AppDomain.CurrentDomain.ProcessExit += Clean; }
    // bug: forgot "AppDomain.CurrentDomain.ProcessExit -= Clean;",
    // therefore we leak
    public void Dispose() { }
}

foreach (/* many many objects */) {
    using (AutoCleanup ac = new AutoCleanup()) {
    }
}

.NET can leak objects just like native C++ can.
Oct 13 '07 #43

P: n/a
Furthermore, C++ does not actually fix the exception issue. If you put a
C++ class on the stack as a local variable, it works fine. But not all
C++ classes can be instantiated as a local variable; some MUST be
allocated via new, just like .NET classes. Or in some cases, it's just
preferable to use new. Either way, you still have to catch an exception
and clean them up explicitly.
This isn't true. First up, show me a C++ class that can't be instantiated
locally. Secondly, even when you do instantiate from the heap, you can use
a smart pointer and again avoid explicit cleanup. Third, this isn't C++ vs
.NET, because C++/CLI provides automatic cleanup for garbage collected,
IDisposable objects as well. It's actually C++ vs C#, because C# places
extra unnecessary burden on the programmer (remembering to apply a "using"
block).
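The smart-pointer point above can be sketched with `std::unique_ptr` (the modern replacement for `std::auto_ptr`; the `Widget` class is invented for illustration): even heap-allocated objects need no explicit cleanup at the call site.

```cpp
#include <cassert>
#include <memory>

// Illustrative class that tracks how many instances are alive.
struct Widget {
    static inline int live = 0;        // requires C++17
    Widget() { ++live; }
    ~Widget() { --live; }
};

int live_after_scope() {
    {
        auto w = std::make_unique<Widget>();  // heap allocation
        assert(Widget::live == 1);
    }                                  // deleted automatically here
    return Widget::live;
}
```
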
Oct 14 '07 #44

P: n/a
Ben Voigt [C++ MVP] wrote:
If it counts for .NET, it should also count for C++.
Agreed.
With a definition of
accessible which includes HeapWalk, neither C++ nor .NET can leak memory.
Agreed.
With a narrower definition of accessible, the .NET GC can now leak memory
(the object is "reachable" according to the garbage collector, but not in
any way that is useful to the programmer, hence a leak).
Huh?
For example, in
.NET registering an object to a static event would leak that object. Here:

class AutoCleanup : IDisposable
{
    private void Clean(object sender, EventArgs e) {}
    public AutoCleanup() { AppDomain.CurrentDomain.ProcessExit += Clean; }
    // bug: forgot "AppDomain.CurrentDomain.ProcessExit -= Clean;",
    // therefore we leak
    public void Dispose() { }
}

foreach (/* many many objects */) {
using (AutoCleanup ac = new AutoCleanup()) {
}
}
I don't understand your example. Is your concern the difficulty in
retrieving the event handler reference from the Shutdown event? It
seems as though you are trying to say that the list of event handlers is
not accessible by the code, but that's simply false. It may not be
accessible by the code that added the handler, but that's not the same
thing.

This example seems to be basically a more complicated version of your
degenerate private class member. And just like that one, it doesn't
prove the point you seem to be trying to prove. In particular, the only
thing preventing the reference from being accessible is the protection
level of the variable. There still is a variable referencing the data,
and there can easily be some code somewhere that can get at that data.
.NET can leak objects just like native C++ can.
I disagree.

Pete
Oct 14 '07 #45

P: n/a
Ben Voigt [C++ MVP] wrote:
>Furthermore, C++ does not actually fix the exception issue. If you put a
C++ class on the stack as a local variable, it works fine. But not all
C++ classes can be instantiated as a local variable; some MUST be
allocated via new, just like .NET classes. Or in some cases, it's just
preferable to use new. Either way, you still have to catch an exception
and clean them up explicitly.

This isn't true. First up, show me a C++ class that can't be instantiated
locally.
I have seen code designed such that it relied on the behavior of the
heap allocation as part of the object. The base class knew about the
heap, and if you tried to declare the class as a local variable, it
wouldn't work right.
Secondly, even when you do instantiate from the heap, you can use
a smart pointer and again avoid explicit cleanup.
That's not an inherent part of C++ though and on top of that it's no
more "clean" than the "using" statement.
Third, this isn't C++ vs
.NET, because C++/CLI provides automatic cleanup for garbage collected,
That's fine. I don't use managed C++ enough to be aware of the special
differences that exist. It's not really C# or .NET that I'm defending
anyway; it's the basic paradigm of using garbage collection.

Pete
Oct 14 '07 #46

P: n/a
On Oct 13, 7:41 pm, "Ben Voigt [C++ MVP]" <r...@nospam.nospam> wrote:
"John Duval" <JohnMDu...@gmail.comwrote in message

news:11**********************@z24g2000prh.googlegroups.com...


On Oct 11, 5:14 pm, "Ben Voigt [C++ MVP]" <r...@nospam.nospam> wrote:
A memory leak is memory that remains unavailable for reuse after it is no
longer needed. This is the only definition of memory leak that makes
sense.
After all, by your definition, this isn't a memory leak (C++):
void f(void)
{
    int* p = new int[1024];
}
I don't follow what you mean that my definition says this is not a
memory leak. Once f() completes and p goes out of scope, there is no
reference that can be used to recover the memory.

I quote you: "to me a memory leak means a very specific thing, which is that
the memory has been orphaned and there is no reference anywhere that can be
used to recover it". Such a reference certainly does exist, and HeapWalk
can be used to obtain it.
Hi Ben,
I guess I just don't see how HeapWalk could be used to identify which
blocks of memory to be recover. I see how you can use HeapWalk to
iterate through the heap entries, but how would you know which ones
need to be recovered?

John

Oct 15 '07 #47

P: n/a
This example seems to be basically a more complicated version of your
degenerate private class member. And just like that one, it doesn't prove
the point you seem to be trying to prove. In particular, the only thing
preventing the reference from being accessible is the protection level of
the variable. There still is a variable referencing the data, and there
can easily be some code somewhere that can get at that data.
Oh, so now even HeapWalk isn't necessary. As long as the internal heap
manager has internal variables that can get at that data, it isn't leaked?

That, in my opinion, is not a very useful definition of a memory leak.
Oct 15 '07 #48

P: n/a
>I quote you: "to me a memory leak means a very specific thing, which is
>that
the memory has been orphaned and there is no reference anywhere that can
be
used to recover it". Such a reference certainly does exist, and HeapWalk
can be used to obtain it.

Hi Ben,
I guess I just don't see how HeapWalk could be used to identify which
blocks of memory to be recover. I see how you can use HeapWalk to
iterate through the heap entries, but how would you know which ones
need to be recovered?
By this new definition of memory leak, .NET can leak memory as well. Say I
have a Form that's disposed but being kept alive by some event handler
somewhere. .NET doesn't provide any better means of cleaning that up and
letting the Form and child controls be reclaimed than HeapWalk provides to
native code.
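The "disposed Form kept alive by an event handler" situation has a direct analogue in reference-counted C++; a sketch (all names invented): the listener list holds a strong reference, so the object survives even after everything else has dropped it and it has been "disposed".

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Invented stand-in for a Form that has been logically disposed.
struct Form { bool disposed = false; };

// Stands in for the event's handler list: it holds strong references.
std::vector<std::shared_ptr<Form>> handlers;

bool form_kept_alive_by_handler() {
    std::weak_ptr<Form> probe;
    {
        auto form = std::make_shared<Form>();
        probe = form;
        handlers.push_back(form);   // "subscribing" pins the form
        form->disposed = true;      // "Dispose" forgot to unsubscribe
    }
    // The form is still alive, reachable only through the handler
    // list -- the same shape of leak in either memory model.
    return !probe.expired();
}
```
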
>
John

Oct 15 '07 #49

P: n/a
Ben Voigt [C++ MVP] wrote:
Oh, so now even HeapWalk isn't necessary. As long as the internal heap
manager has internal variables that can get at that data, it isn't leaked?
No. That's not what I'm talking about, and IMHO a person who infers
that is intentionally misinterpreting what I'm trying to write just to
make a point.
Oct 15 '07 #50

56 Replies