
A re-announce on GC's defects

GC is really garbage itself

Reason 1:

There is a delay between the desired destruction and the actual destruction.

Negative effects of the destruction delay:

1) Efficiency issue

It's bad for CPU- or resource-intensive but memory-cheap objects.

CPU-intensive objects are objects that own internal threads.

Resource-intensive objects are objects that own unmanaged resources like
file handles, network connections, etc.

Don't tell me these objects are rare. Anything can happen, and a
general-purpose language should not make excuses for failing to cover
some situations.

2) Logic issue

The need for weak references makes the destruction delay logically incorrect.

Weak references (or you can call them handles) are references that do not
require the referenced target to stay alive, while strong references do.

When all strong references to a target reach the end of their lifetimes, the
target also comes to the end of its lifetime. Right at this point, weak
references to the target become invalid. However, the destruction delay
caused by the GC violates this logic: weak references will continue to
consider the target alive until the GC actually collects it.
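
A minimal C# sketch of that delay (type names are illustrative; the exact
timing depends on when a collection actually runs):

using System;

class Target { }

class Program
{
    static void Main()
    {
        Target target = new Target();
        WeakReference weak = new WeakReference(target);

        target = null;                    // last strong reference is gone
        Console.WriteLine(weak.IsAlive);  // typically still True: no collection has run

        GC.Collect();                     // force a collection (demonstration only)
        Console.WriteLine(weak.IsAlive);  // False: the target has now been collected
    }
}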

Don't tell me the IDispose pattern is for that. There may be more than one
strong reference, and you don't know when and where to call Dispose.

Don't tell me WeakReference (C#) is for that. If Dispose is not called
properly, WeakReference still gives the wrong logic.

Don't tell me this can be solved by adding a method like IsDestroyed to the
target. It holds a candle to the sun, and it only adds to the complexity of
the logic.

An example:

Suppose we're doing a 3D game. A radar is monitoring a target. Obviously,
the radar should hold a weak reference to the target. When the target is
killed, logical confusion is immediately brought to the radar watcher (the
gamer). Is the target destroyed or not? You cannot tell him: hey, it's
killed but still shown on the radar, because you've got to wait for the GC
to catch up.

Reason 2:

Poor scalability

This is a theoretical issue.

GC is global, which means its work grows as the application's memory use
grows. Theoretically, this indicates poor scalability.

Don't tell me an application should not use too many concurrent objects.
Again, anything can happen. Restrictions only prove the defects of the
language.

Fairly speaking, GC itself is not garbage. However, when Java and C#
integrate it and prevent the user from manually managing memory, it becomes
garbage. Note that GC in Java and C# is not really an additive, as someone
would argue, since there is no way to do real memory management like
delete obj in C++.

Better memory management, in my mind, is reference counting + smart
pointers, which makes things automatic and correct. You have deterministic
destruction with no need to manually call Dispose. You need not manually
change the reference count, as smart pointers do that for you. The only
problem with this approach is cyclic references. However, even if not
theoretically proven, the problem can generally be solved by replacing some
strong references with weak references.

I believe the restriction imposed by GC is one of the main reasons why, in
some fields (the gaming industry, for example), Java or C# is rarely used
in serious products that face real computing challenges.

Solution

1) The ideal solution is to convince the language providers to give us back
the ability to manage memory on our own. GC can still be there, and it
becomes a real additive in that situation.

2) Transfer the burden to the user. We can ask the user to always take
special precautions (for example, always use "using" in C# to have Dispose
called correctly even when an exception occurs). Things can work if the
user does them right. However, that is inherently risky and not a robust
solution.
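
To illustrate option 2, a minimal C# sketch ("using" expands to
try/finally, so Dispose runs even if an exception is thrown; the file name
is made up):

using System.IO;

class Program
{
    static void Main()
    {
        // Equivalent to try { ... } finally { writer.Dispose(); }
        using (StreamWriter writer = new StreamWriter("log.txt"))
        {
            writer.WriteLine("hello");  // even if this throws, the handle is released
        }
    }
}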
Jan 18 '07 #1
62 Replies


Born wrote:
GC is really garbage itself
[snip]
Better memory management in my mind is reference counting + smart pointer,
So stick with C++ ...
which makes things automatic and correct.
... because after all, C++ is famous for its programs never having any
kind of memory-allocation-related bugs.

Oh wait.
I believe the restriction by GC is one of the main reasons why in some field
(the gaming industry, for example), java or C# is rarely used in serious
products that face real computing challenges.
How nice for you. Any evidence for this belief, or is it blind faith?
--
Larry Lard
la*******@googlemail.com
The address is real, but unread - please reply to the group
For VB and C# questions - tell us which version
Jan 18 '07 #2

I think you have actually missed many very important reasons why the GC
is bad. I hate it all the way. But we have very little choice: either we
use .NET, and there is a GC, or we use a Win32 compiled app written in C++
or Delphi. Nothing obliges us to use .NET.

--
Michael
----
http://michael.moreno.free.fr/
http://port.cogolin.free.fr/
Jan 18 '07 #3

Hello Born,

> GC is really garbage itself
>
> Reason 1:
>
> There is a delay between the desired destruction and the actual
> destruction.
>
> Negative effects of the destruction delay:
>
> 1) Efficiency issue

Is it really bad? :) That depends only on your app context. If it doesn't
meet your requirements, welcome back to the unmanaged world with manual
memory management.

> 2) Logic issue
>
> The need for weak references makes the destruction delay logically
> incorrect.

Is caching logically incorrect too?

> The only problem with this approach is cyclic
> references. However, even if not theoretically proven, the problem
> generally can be solved by replacing some strong references with weak
> references.

But it's logically incorrect, as mentioned before :)

> I believe the restriction imposed by GC is one of the main reasons why
> in some fields (the gaming industry, for example), Java or C# is rarely
> used in serious products that face real computing challenges.

lol
What do "serious product" and "real computing challenge" mean to you?

The FPS? :)

BTW, the latest DirectX samples are in C#. The real challenge for C# in
the game industry is performance.
In 2-3 years it will be solved.
> Solution
>
> 1) The ideal solution is to convince the language providers to give
> us back the ability to manage memory on our own. GC can still be
> there, and it becomes a real additive in that situation.

Nobody prohibits you from using C++.
---
WBR,
Michael Nemtsev [C# MVP] :: blog: http://spaces.live.com/laflour

"The greatest danger for most of us is not that our aim is too high and we
miss it, but that it is too low and we reach it" (c) Michelangelo
Jan 18 '07 #4

Born wrote:
GC is really garbage itself
[...]
Negative effects by the destruction delay:
1) Efficiency issue
It's bad for CPU/Resource intensive but memory cheap objects.
It's also bad to be permanently allocating/deallocating small objects on a
native heap. It's rather time-consuming compared to the managed heap, and
it doesn't scale that well if you have a multi-core CPU.
>[...]
2) Logic issue
[...]
Don't tell me the IDispose pattern is for that. There may be more than one
strong references and you don't know when and where to call Dispose.
You could use reference counting as well for a managed object. Instead
of calling the destructor, you call Dispose when the reference counter
reaches 0.
[...]
An example:

Suppose we're doing a 3D game. A radar is monitoring a target. Obviously,
the radar should hold a weak reference to the target. When the target is
killed, logical confusion is immediately brought to the radar watcher (the
gamer). Is the target destroyed or not? You can not tell him, hey, it's
killed but still shown on the radar because you've got to wait for the GC to
make it.
What has the state of an object (resource) to do with the memory it has
allocated? That's the only thing GC is supposed to do - manage memory.
Reason 2:
[...]
Fairly speaking, GC itself is not garbage. However, when Java and C#
integrate it and prevent the user from manually managing memory, it becomes
garbage. Note that GC in Java and C# is not really an additive, as someone
would argue, since there is no way to do real memory management like
delete obj in C++.
In C++ you have many classes which handle memory themselves, e.g. most STL
classes, to ensure that memory is not permanently allocated or that the
memory is contiguous. GC handles this automatically.

Better memory management in my mind is reference counting + smart pointer,
which makes things automatic and correct. You have deterministic
destructions while no need to manually call Dispose.
As I wrote, why not also implement reference counting for managed objects,
calling Dispose when the reference count reaches 0? There is a performance
impact, though, because for thread-safe reference counting you have to use
Interlocked functions.
You need not
manually change the reference count, as smart pointers do that for you.
Agreed, you would have to call them manually in C#, because there's no
RAII. Which I'm really missing in C#.
The only problem with this approach is cyclic reference. However, even if
not theoretically proven, the problem generally can be solved by replacing
some strong references with weak references.
Yes, but it's sometimes tricky and in complex object hierarchies you
have a very high chance to build cyclic references, which are hard to
deal with.
I believe the restriction by GC is one of the main reasons why in some field
(the gaming industry, for example), java or C# is rarely used in serious
products that face real computing challenges.
Hm, perhaps because one can't see that Java or C# is used. E.g. the game
Chrome is written in Java (not 100% but many parts of it)
Solution
1) The ideal solution is to convince the language providers to give us back
the ability of managing memory by our own. GC can still be there, and it
becomes a real additive in that situation.
GC doesn't solve resource allocation problems. They are different than in
C++, and so are the problems you have to face. It's the same with memory
handling. In C++ you still have to think over and over again about how the
memory is handled and whether it's better to use an object cache. Otherwise
you will face performance problems too.

2) Transfer the burden to the user. We can ask the user to always take
special cautions (for example, always use "using" in C# to have Dispose
correctly called even exception occurs). Things can work around if the user
do them right. However, that's at risk in nature and not a robust solution.
Isn't that the case? The developer has to use "using", e.g. for file
objects, which should release the file handles directly after use.

I admit that sometimes I'm missing reference counting when I'm dealing
with objects stored in multiple lists. How shall I know when to call
Dispose?

E.g. if a file object is stored in 2 or more lists and has to be removed
from one of the lists, how do I know whether I have to call Dispose? The
only performant solution for me would be to use reference counting.
Though you can't have smart pointers, which are automatically destroyed
and automatically decrease the reference count of an object. You have to
do it manually in C#. :-( Perhaps there's a better solution in C# that I
don't know yet? (Any comments and solutions would be highly appreciated.)

Andre

Jan 18 '07 #5

Born wrote:
GC is really garbage itself
Hey, I have a lot of patience, but I'm sorry... when someone starts a
post with a patently moronic statement like that, well, I conclude that
that person is a moron.

GC is worthless garbage? That must be why Java was a flash in the pan.
Hardly anyone uses Java. As one poster said, "Oh wait."

Not that I consider one study a definitive conclusion, but the one
cited here

http://www.eweek.com/article2/0,1895,2065392,00.asp

says,

"Java now holds the market penetration lead at 45 percent, followed by
C/C++ at 40 percent, and C# at 32 percent."

(Yes, I realize that the numbers add up to more than 100%... I assume
that that is because some shops, like ours, use more than one
language.)

I'm not game to draw any solid conclusions from these numbers, but a
general conclusion is fair game: for a memory management method that is
"garbage itself" it's doing very well. I don't think it's out of line
to say that more than half of all new software is being written using a
garbage-collected language.

Given this, I think that there are only two possible conclusions that
you can draw:

1. Half of the programmers in the world are benighted idiots who have
not yet attained the lofty heights of intellectual superiority that you
now enjoy. They would be using C++ if only they would realize The
Truth.

2. You're missing something.

Personally, I vote for door #2.

Jan 18 '07 #6


Born wrote:
Suppose we're doing a 3D game. A radar is monitoring a target. Obviously,
the radar should hold a weak reference to the target. When the target is
killed, logical confusion is immediately brought to the radar watcher (the
gamer). Is the target destroyed or not? You can not tell him, hey, it's
killed but still shown on the radar because you've got to wait for the GC to
make it.
Oh. My. God. You can't seriously tell me that you would use the memory
allocation state of an object to determine whether it is "alive" or not
in a game? Are you really that bad a designer that you would mix the
concepts of a target's relationship to the game and its memory
allocation status? Even when I programmed in C++ I didn't mix ideas
like that: an object is dead when I flag it dead. It may be destructed
immediately, or some time later. What if I decide to keep a list of the
player's kills? Does it then become impossible to kill anything,
because it's in the kill list?
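
A minimal sketch of keeping the domain state separate from the memory
state (all names are illustrative):

class Target
{
    private bool isDead;                  // domain state: "killed in the game"
    public bool IsDead { get { return isDead; } }
    public void Kill() { isDead = true; }
}

// The radar asks the domain question, never the allocator:
//   if (!target.IsDead) DrawBlip(target);
// The object can sit in a kill list long after IsDead becomes true.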

If you're going to trumpet the benefits of deterministic destruction,
please, please don't offer terrible designs as evidence.
I believe the restriction by GC is one of the main reasons why in some field
(the gaming industry, for example), java or C# is rarely used in serious
products that face real computing challenges.
This made me laugh. I love the little caveat, "in some field" You had
to add that because in _many_ fields garbage collected languages do
just fine.

So what is your complaint, really? That a GC language can't handle
every problem in every domain? No shit, Sherlock. There _are_ people
using C# and Java for gaming (the latter mostly for portability to
micro platforms, but that aside...) but I wouldn't use it for that. I
would use C++ to write games for precisely the reasons that you are so
ham-handedly outlining here, which can be summed up in one sentence:
more control. GC is a two-edged sword: it removes a lot of picky
housekeeping that has caused more bugs than anything I know of. (Larry
Lard pointed this out rather sardonically in his post: C++ is infamous
for memory allocation bugs.) The other edge of the sword is that GC
removes control from the programmer, which is what you're complaining
about. For _most_ applications, the loss of control doesn't matter. For
some, such as gaming, it does. So, in those domains, use C++.

I no longer consider C++ a serious language for business development.
You know, the meat-and-potatoes, number-crunching applications that
make up most of the world's software. Why? Because C++ adds a lot of
picky housekeeping (memory management) that gains me nothing. I don't
need the additional control it offers, and the cost of that control is
bugs. For some domains (such as gaming), the tradeoff is warranted. For
most, it's not, which is why Java and C# are doing so well.

If you're so close to the metal that the tiny delays introduced by
.NET's GC will screw up your app, then the answer is simple: DON'T USE
IT. USE C++.

In the end, what are you trying to prove here? That C# can't tackle
every problem under the sun? We all know that. Move on. Nothing to see
here. Or are you trying to demonstrate that C# isn't suited to any
problem domain at all? If so then I refer you to my previous post:
you're missing something.
1) The ideal solution is to convince the language providers to give us back
the ability of managing memory by our own. GC can still be there, and it
becomes a real additive in that situation.
And this buys us... what? The ability to use C# / Java in a few domains
where it isn't working well right now? Why not just use a better-suited
language? What is it about programmers, all of us, that makes us want
to invent the super-duper maxed-out Swiss Army Knife of programming
languages that can do anything? They already tried that. It was called
Ada. How many people still use Ada? So what if game development is
mostly done using C++ and not C#? Would you do brain surgery with a
Swiss Army Knife? Of course not: you would use specialized tools. What,
then, is so wrong about using the best language suited to a particular
domain, rather than trying to create the language that can do anything?

Jan 18 '07 #7


Born wrote:
GC is really garbage itself

Reason 1:

There is delay between the wanted destruction and the actual destruction.

Negative effects by the destruction delay:

1) Efficiency issue

It's bad for CPU/Resource intensive but memory cheap objects.

CPU-intensive objects are objects that own internal threads.

Resource-intensive objects are objects that own unmanaged resources like
file handles, network connections, etc.

Don't tell me these objects are rare. Everything is possible to happen and a
general purpose language should not find any excuse not to apply to some
situations.
They're not rare. In fact, they're all over the place. That's why
IDisposable was created. It provides a mechanism for doing
deterministic finalization. That leaves managed memory as the only
resource whose cleanup is necessarily delayed. But, I think the
decrease in efficiency here is more than offset by the increase in
efficiency during memory allocation.
2) Logic issue

The need for weak reference makes the destruction delay logically incorrect.

Weak references (or you can call them handles) are references that do
not require the referenced target to stay alive, while strong references
do.

When all strong references to a target go out of their lifetime, the target
also comes to the end of its lifetime. Right at this point, weak references
to this target become invalid. However, the destruction delay caused by the
GC violates this logic. Weak references will continue to think the target
alive until the GC really collects the target object.
That's one of the advantages of a WeakReference, though. You can
resurrect a WeakReference target. What GC logic does that violate?
>
Don't tell me the IDispose pattern is for that. There may be more than one
strong reference and you don't know when and where to call Dispose.
It depends on whose perspective you're considering. As the developer
of the object you have little control over when or if Dispose gets
called, but as the caller of the object you have absolute control.
Isn't it the caller's responsibility to decide when the object isn't
needed any longer? Maybe it's just me, but I'd hate it if the
FileStream decided for me when I was done writing to a file. Now, if
the FileStream reference falls out of scope (and assuming that I don't
have references to it elsewhere) before I've called Dispose then it's
my fault for not using it correctly. Still, the GC is nice enough to
call Dispose for me as a last resort. So, I think I understand your
point. I just don't see it as a big problem.
>

Don't tell me WeakReference (C#) is for that. If you don't have Dispose
called properly, WeakReference still gives a wrong logic.
A WeakReference isn't supposed to be used as a counter-measure to a
missing Dispose call. That's what the class' destructor (or Finalize
method) is used for.
>

Don't tell me this can be solved by adding method like IsDestroyed to the
target. It holds a candle to the sun and it only adds to the complexity of
logics.

An example:

Suppose we're doing a 3D game. A radar is monitoring a target. Obviously,
the radar should hold a weak reference to the target.
I'd hardly call that an obvious conclusion. I would not use a
WeakReference in this situation. In fact, I rarely use a WeakReference
at all.
When the target is
killed, logical confusion is immediately brought to the radar watcher (the
gamer). Is the target destroyed or not? You can not tell him, hey, it's
killed but still shown on the radar because you've got to wait for the GC to
make it.
So don't rely on the GC to propagate that information to the radar.
Code it deterministically.
>


Reason 2:

Poor scalability

This is a Theory issue.

GC is global, which means it scales as the application's memory use scales.
Theoretically, this indicates a bad scalability.

Don't tell me an application should not use too many concurrent objects.
Again, everything is possible. Restrictions only prove the defects of the
language.
I disagree with your poor scalability argument. Creating a new object
in .NET is blindingly fast. Do some benchmarks. This speed is
*because* of the GC not in *spite* of it. What might be problematic is
that the GC will suspend all threads in the application while it is
cleaning up memory.
>

Fairly speaking, GC itself is not garbage. However, when java and C#
integrate it and prevent the user from manually managing memory, it becomes
garbage. Note that GC in Java and C# is not really an additive as someone would
argue since there is no way to do real memory management like delete obj in
C++.

Better memory management in my mind is reference counting + smart pointer,
which makes things automatic and correct. You have deterministic
destructions while no need to manually call Dispose. You need not to
manually change the reference count as smart pointers help you achieve it.
The only problem with this approach is cyclic reference. However, even if
not theoretically proven, the problem generally can be solved by replacing
some strong references with weak references.

I believe the restriction by GC is one of the main reasons why in some field
(the gaming industry, for example), java or C# is rarely used in serious
products that face real computing challenges.
I think it has more to do with the GC suspending threads at
unpredictable times. That would cause the display to appear jerky.
This may be one scenario where a strategically placed call to
GC.Collect would be appropriate. Regardless, I don't think C# was ever
intended to be a tool for game developers.
>


Solution

1) The ideal solution is to convince the language providers to give us back
the ability of managing memory by our own. GC can still be there, and it
becomes a real additive in that situation.

2) Transfer the burden to the user. We can ask the user to always take
special cautions (for example, always use "using" in C# to have Dispose
correctly called even exception occurs). Things can work around if the user
do them right. However, that's at risk in nature and not a robust solution.
Jan 18 '07 #8


"Andre Kaufmann" <an*********************@t-online.dewrote in message
news:uw**************@TK2MSFTNGP02.phx.gbl...
Born wrote:
>GC is really garbage itself
[...]
Negative effects by the destruction delay:
1) Efficiency issue
It's bad for CPU/Resource intensive but memory cheap objects.

It's also bad allocating / deallocating permanently small objects on a
native heap. It's rather time consuming, compared to the managed one and
it doesn't scale that well if you have a multi core CPU.
The framework GC is actually one of the more efficient garbage collectors
out there. Properly done, reference counting can take up to 50% of your
program's CPU cycles. Reference counting will also leave objects that refer
to each other in memory, even when no other objects refer to them. Google
for my GC posts in late 2005 for sample code in VB 6 and VB 2005 for a
simple program that demonstrates the difference between the mark/sweep
algorithm in dotNet vs the reference counting algorithm in the VB 6/COM
model.

The dotNet GC is actually a three generation GC. What this means is that
when the Gen(0) heap fills up, all accessible objects are marked and their
size is computed. If there is insufficient space in the Gen(1) heap, then
and only then is the Gen(1) heap cleaned up to the Gen(2) heap. Once there
is sufficient space in the Gen(1) heap, the marked objects in the Gen(0)
heap are copied to the Gen(1) heap and the heap pointer in Gen(0) is reset
to the base address. The only time a full mark/sweep/compact garbage
collection is done is when the Gen(2) heap is full. This appears to be done
before the framework requests additional memory from the OS.
[...]
2) Logic issue
[...]
Don't tell me the IDispose pattern is for that. There may be more than
one strong reference and you don't know when and where to call Dispose.

You could use reference counting as well for a managed object. Instead of
calling the destructor you call Dispose if the reference counter is 0.

You never have to call the IDispose interface. The GC will do this for you.
Yes, it does result in those objects possibly being left in memory for one
more GC cycle, but the benefit is that you, as the programmer, never need
worry about dangling pointers and memory leaks. As long as your objects
themselves clean up properly in the dispose method, memory leaks and
dangling pointers simply cannot occur. The GC class even provides
interfaces so that if your object allocates a lot of unmanaged system
memory, you can tell the framework how much is being allocated in its
constructor. You then tell the framework about the release of this memory
in your Dispose method.

>
>[...]
An example:

Suppose we're doing a 3D game. A radar is monitoring a target. Obviously,
the radar should hold a weak reference to the target. When the target is
killed, logical confusion is immediately brought to the radar watcher
(the gamer). Is the target destroyed or not? You can not tell him, hey,
it's killed but still shown on the radar because you've got to wait for
the GC to make it.

What has the state of an object (resource) to do with the memory it has
allocated ? That's the only thing GC is supposed to do - manage memory.
Your thinking is backwards. You are letting your resource management
determine the scope of an object's lifetime. You need to use the
application domain logic (in this case, the object going off the edge of
the radar or being destroyed by a missile) to remove the references to the
object. Yes, the object's resources will still be allocated for an
indeterminate time, but the object will never again be able to appear on
your radar anyway. The runtime will eventually get around to releasing
those resources via the object's IDispose interface - you don't have to do
it yourself. In C++ you must explicitly handle the release of the memory
at some point in your program execution. In C#, the runtime will execute
the destruction code for you when it either has idle time or needs the
memory to satisfy an allocation request.
>Reason 2:
[...]
Fairly speaking, GC itself is not garbage. However, when java and C#
integrate it and prevent the user from manually managing memory, it
becomes garbage. Note that GC in Java and C# is not really an additive as
someone would argue since there is no way to do real memory management
like delete obj in C++.

In C++ you have many classes which handle memory themselves, e.g. most
STL classes, to ensure that memory is not permanently allocated or that
the memory is contiguous. GC handles this automatically.

>Better memory management in my mind is reference counting + smart
pointer, which makes things automatic and correct. You have deterministic
destructions while no need to manually call Dispose.

As I wrote, why not also implement reference counting for managed
objects, calling Dispose when the reference count reaches 0?
Though there is a performance impact, because for thread-safe reference
counting you have to use Interlocked functions.
The performance impact can be huge. Early Smalltalk implementations spent
50% of their time reference counting.
>You need not to manually change the reference count as smart pointers
help you achieve it.

Agreed, you would have to call them manually in C#, because there's no
RAII. Which I'm really missing in C#.
>The only problem with this approach is cyclic reference. However, even if
not theoretically proven, the problem generally can be solved by
replacing some strong references with weak references.
When using weak references, you still need to handle dangling pointer
exceptions.
Yes, but it's sometimes tricky and in complex object hierarchies you have
a very high chance to build cyclic references, which are hard to deal
with.
>I believe the restriction by GC is one of the main reasons why in some
field (the gaming industry, for example), java or C# is rarely used in
serious products that face real computing challenges.

Hm, perhaps because one can't see that Java or C# is used. E.g. the game
Chrome is written in Java (not 100% but many parts of it)
>Solution
>1) The ideal solution is to convince the language providers to give us
back the ability of managing memory by our own. GC can still be there,
and it becomes a real additive in that situation.
It's there in C# - you can mix C++ modules with C#. The penalty is that you
now spend your time managing memory.
GC doesn't solve resource allocation problems. They are different as in
C++ and so are the problems you have to face. It's the same with memory
handling. In C++ you still have to think over and over again, how the
memory is handled and if it's better to use an object cache. Otherwise you
will face performance problems too.
This is what the IDispose and Finalize model is for. Yes, it takes an
additional GC cycle to free the memory, but the object has a chance to clean
up after itself first.
>
>2) Transfer the burden to the user. We can ask the user to always take
special cautions (for example, always use "using" in C# to have Dispose
correctly called even exception occurs). Things can work around if the
user do them right. However, that's at risk in nature and not a robust
solution.

Isn't that the case ? The developer has to use "using" e.g. for file
objects, which shall release the file handles directly after their usage.

I admit that sometimes I'm missing reference counting, when I'm dealing
with objects stored in multiple lists. How shall I know when to call
dispose ?

E.g. if a file object is stored in 2 or more lists and has to be removed
from one of the lists. How do I know if I have to call Dispose ? Only
performant solution for me would be to use reference counting.
Though you can't have smart pointers, which are automatically destroyed
and will decrease the reference count of an object automatically. You have
to do it manually in C#. :-( - perhaps there's a better solution in C#
that I don't know yet ? (any comments and solutions would be highly
appreciated)
You never need to call Dispose. By telling the compiler that an object
class implements IDisposable, the GC will call Dispose for you before it
actually deallocates memory.
Andre
I have researched various allocation/deallocation schemes over the years.
Here's a quick summary:

Explicit allocation/deallocation
- Example: C malloc/dealloc and C++ new/delete
- Benefits: Very easy to implement in the runtime, and initial
allocations/deallocations are fast.
- Drawbacks: Dangling pointers, attempts to access deallocated/deleted
objects (null pointer references), heap fragmentation
- Where I would use: Only in Real-Time environments where reliable response
time is the driving factor.

Reference Counting:
- Example: Early SmallTalk and VB 6
- Benefits: Programmer doesn't have to worry about memory allocation.
Impossible to dangle pointers or attempt to dereference pointers to
deallocated objects. Easy to implement in the runtime.
- Drawbacks: Cyclic objects never being freed from memory. Early SmallTalk
implementations spent up to 50% of their time doing reference counting.
Doesn't handle resources other than memory.
- Where I would use: General purpose computing

Simple Mark/Sweep/Compact
- Example: Some older Lisp implementations
- Benefits: Programmer doesn't have to worry about memory allocation.
Self-referencing objects will be collected.
- Drawbacks: Long pauses while the GC cycle runs. I've seen Lisp machines
pause for several minutes while the GC cycle runs. Implementation can be
tricky. Object lifetime cannot be predicted.
- Where I would use: General purpose computing.

Multi-Generational Mark/Sweep/Compact with explicit disposal interfaces
- Example: Java and dotNet
- Benefits: Programmer controls the allocation of unmanaged resources
through constructors and deallocation through destructors, limiting the
amount of code that actually handles this issue. Usually faster than simple
mark/sweep/compact since most GC passes don't need to process all
generations of memory, especially since most objects have very short
accessible lifetimes and won't be accessible the next time the GC needs to
run.
- Drawbacks: Hard to implement (but not too much more difficult than simple
mark/sweep/compact). Object lifetime cannot be predicted. Must provide at
least one more generation than the maximum number of generations the
destruction of an object can take. Otherwise this becomes a very bloated
version of the simple mark/sweep/compaction algorithm.
- Where I would use: Everywhere, including most real-time systems.

As you move up this chain of memory management, your thinking of how to
manage objects in your program needs to change, especially when moving from
explicit allocation/deallocation to any of the other three models.
Moving from reference counting to mark/sweep removes the restriction of not
creating cyclic references, making the end developer's job easier. Moving
from simple mark/sweep/compaction to generational provides, in most cases,
a major performance boost for the same source code.

Mike Ober.
Jan 18 '07 #9

Born wrote:
There is delay between the wanted destruction and the actual destruction.
The "destruction" (i.e. disposal) happens under the caller's control.
It's up to the user how much of a delay they want, not GC.

Don't forget that GC manages memory, not resources. If you are trying to
make an argument that GC is bad for managing resources, the simple
answer is that it's not *designed* for managing resources.
1) Efficiency issue
It's bad for CPU/Resource intensive but memory cheap objects.
CPU intensive objects refer to objects who own internal threads.
Resource intensive objects refer to objects who own unmanaged resources like
file handle, network connections, etc.
Here you are, criticising the memory-managing GC for not managing
resources well, the very job it's not designed for.
2) Logic issue

The need for weak reference makes the destruction delay logically incorrect.
Destruction, i.e. disposal of resources other than memory, is the
caller's responsibility, not the GC. GC applies to the collection of
memory, and it's logically correct in the way it manages memory - i.e.
the object lifetime.
Poor scalability
GC is global, which means it scales as the application's memory use scales.
Theoretically, this indicates a bad scalability.
Given enough memory, a copying GC (like the CLR GC) is provably more
efficient than manual memory allocation that treats each allocation
individually. Let me repeat that: it's actually provably *more* scalable
than manual memory allocation.
Note GC in java and C# is not really an addictive as someone would
argue since there is no way to do real memory management like delete obj in
C++.
'delete obj' is a memory safety hole. If there are any other references
to the object being deleted, you've just created a security violation.
That's what GC is there to prevent.
Better memory management in my mind is reference counting + smart pointer,
which makes things automatic and correct.
Reference counting *is* GC - poor GC, because it doesn't deal with
cycles.
You have deterministic
destructions while no need to manually call Dispose.
Here you are confusing GC with resource management again.

-- Barry

--
http://barrkel.blogspot.com/
Jan 18 '07 #10

Michael D. Ober wrote:
You never have to call the IDispose interface. The GC will do this for you.
The GC never calls Dispose(). It makes its best effort to call
Finalize() as soon as possible if it's been overridden, but very few
disposable (implementing IDisposable) objects should have finalizers
(which should be restricted to handle-like objects).

Implementing Finalize() ("~ClassNameHere()" in C#) according to
recommended practices involves writing a protected virtual Dispose(bool)
method, but that's a separate (yet related) issue to implementing
IDisposable.

Do not confuse IDisposable with Finalize(). Any class overriding
Finalize() should implement IDisposable, but the reverse is *not* true.
Objects should implement IDisposable when they override Finalize, or if
they are the logical owner of any other resource which implements
IDisposable.
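
A minimal sketch of that shape (the class name is made up; no finalizer,
because the class owns only managed resources):

using System;

class OwnerOfManagedResource : IDisposable
{
    private IDisposable inner;   // a managed resource this class logically owns
    private bool disposed;

    public OwnerOfManagedResource(IDisposable inner)
    {
        this.inner = inner;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // harmless here; matters for finalizable descendants
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing && inner != null)
        {
            inner.Dispose();   // only touch other managed objects on this path
        }
        disposed = true;
    }
}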

And ignore the horrific example of System.ComponentModel.Component. That
class is like a car accident :). You should implement Finalize() only on
handle-like classes, or preferably descend from SafeHandle (or one of
its descendants) and override the appropriate methods.
The runtime will eventually get around to releasing
those resources via the object's IDispose interface - you don't have to do
it yourself.
You've got the .NET GC wrong, I'm afraid. It knows nothing about
IDispose, or the Dispose() method. And it certainly tries hard to call
finalizers on finalizable objects (objects overriding the Finalize()
method) as soon as possible, but leaving resource deallocation up to the
GC is irresponsible and a recipe for bugs. It will leave sockets open,
files open and locked, database connections open etc. longer than
necessary, and will ultimately result in unexpected resource acquisition
failures.

For an easy life, deterministically dispose your resources, please!
In C#, the runtime will execute
the destruction code for you when it either has idle time or needs the
memory to satisfy a allocation request.
.NET doesn't have an exact analogue to C++ destruction. C++ destruction
combines two things: (1) memory disposal and (2) resource disposal. In
.NET, the GC does (1) and has facilities to catch (2) as a *last* line
of defense, while IDisposable is for (2).
You never need to call Dispose. By telling the compiler that an object
class implements IDisposable, the GC will call Dispose for you before it
actually deallocates memory.
Please, educate yourself before you spread more misinformation!

-- Barry

--
http://barrkel.blogspot.com/
Jan 19 '07 #11


Barry Kelly wrote:
Michael D. Ober wrote:
You never have to call the IDispose interface. The GC will do this for you.
You never need to call Dispose. By telling the compiler that an object
class implements IDisposable, the GC will call Dispose for you before it
actually deallocates memory.

Please, educate yourself before you spread more misinformation!
Sorry, Barry, but could you provide some references? I believe your
description of how finalizers / IDisposable interact, but I too have
read (somewhere) on MSDN that the _ideal_ is to write your classes so
that if the caller forgets to Dispose your objects, it will be taken
care of in garbage collection, later.

I realize that it's possible to write classes that don't act that way;
I'm talking about "best practices" here.

I'm fully open to being corrected. This is a vague memory of "something
I read," nothing more.

Jan 19 '07 #12

I admit that sometimes I'm missing reference counting, when I'm dealing
with objects stored in multiple lists. How shall I know when to call
dispose ?

E.g. if a file object is stored in 2 or more lists and has to be removed
from one of the lists. How do I know if I have to call Dispose ? Only
performant solution for me would be to use reference counting.
Though you can't have smart pointers, which are automatically destroyed
and will decrease the reference count of an object automatically. You have
to do it manually in C#. :-( - perhaps there's a better solution in C#
that I don't know yet ? (any comments and solutions would be highly
appreciated)

Andre


Very good. You gave me another example.

Jan 19 '07 #13

Bruce Wood wrote:
Sorry, Barry, but could you provide some references?
For what exactly? What do you think I'm saying that you disagree with?
I believe your
description of how finalizers / IDisposable interact,
I take this to be referring to how IDisposable objects don't necessarily
need to implement a finalizer, but anything implementing a finalizer
should implement IDisposable, but that's just a guess. I'm verbose below
to try to be clear.
but I too have
read (somewhere) on MSDN that the _ideal_ is to write your classes so
that if the caller forgets to Dispose your objects, it will be taken
care of in garbage collection, later.
Of course - I never contradicted that the GC has finalizer support as a
last line of defense, and that this should be used (or preferably
SafeHandle or one of its descendants) to implement that last line of
defense. I mention SafeHandle etc. in the grandparent, and that it
should be used for handle-like objects that wrap unmanaged resources.

Objects that wrap managed resources shouldn't implement finalizers
because you can't (reliably) access other managed objects from a
finalizer (they may or may not have been collected already), because
order of finalization isn't guaranteed. Instead, they should implement
IDisposable, and call Dispose() (or the relevant Close()) on the managed
resources, preferably via a protected virtual Dispose(bool) method so it
works nicely (polymorphically) with descendants.

See the documentation on Object.Finalize():

"Finalize operations have the following limitations:

* The exact time when the finalizer executes during garbage collection
is undefined. Resources are not guaranteed to be released at any
specific time, unless calling a Close method or a Dispose method.

* The finalizers of two objects are not guaranteed to run in any
specific order, even if one object refers to the other. That is, if
Object A has a reference to Object B and both have finalizers, Object B
might have already finalized when the finalizer of Object A starts.

The thread on which the finalizer is run is unspecified.

The Finalize method might not run to completion or might not run at all
in the following exceptional circumstances:

* Another finalizer blocks indefinitely (goes into an infinite loop,
tries to obtain a lock it can never obtain and so on). Because the
runtime attempts to run finalizers to completion, other finalizers might
not be called if a finalizer blocks indefinitely.

* The process terminates without giving the runtime a chance to clean
up. In this case, the runtime's first notification of process
termination is a DLL_PROCESS_DETACH notification."

It's pretty clear in there that you can't absolutely rely on
(1) finalizers always executing, e.g. to close files properly etc., or
(2) the order of finalization.
I realize that it's possible to write classes that don't act that way;
I'm talking about "best practices" here.

I'm fully open to being corrected. This is a vague memory of "something
I read," nothing more.
I don't know what you're objecting to, exactly. Microsoft haven't helped
here - there's a lot of confusion even now, years after the fact - with
mixing/confusing these two things:

* implementing IDisposable & protected virtual Dispose(bool)
- this is common, because most code wants to affect the world,
which means dealing with resources, which need to be
deterministically disposed of for reliable operation.

* implementing a finalizer with C# "~YourClassName()"
- this is rare, because it implies one or both of:
. working with unmanaged code directly via handles etc.
. something pretty advanced, where the exact behaviour of
the GC is integral to the design

The mistakes were partially fixed in, e.g. C++/CLI, which (unlike its
predecessor MC++) doesn't try to make Finalizers look like destructors,
and instead creates a new declaration construct for them -
"!YourClassName()".

-- Barry

--
http://barrkel.blogspot.com/
Jan 19 '07 #14


">
Oh. My. God. You can't seriously tell me that you would use the memory
allocation state of an object to determine whether it is "alive" or not
in a game?
Open your mind. The memory allocation state is a general solution to
inform a weak reference that its target has gone. So why should I use
domain context to complicate things?

Again: It holds a candle to the sun.

>And this buys us... what? The ability to use C# / Java in a few domains
where it isn't working well right now? Why not just use a better-suited
language?

Be creative. The world can be better. Imagine C++ with enhanced features
like Reflection in C#. This is what I'm trying to prove here. Actually I'm
experimenting with C++/CLI, but I'm not sure how good it can be.
Jan 19 '07 #15

Bruce Wood wrote:
Sorry, Barry, but could you provide some references?
The main reference I have for this is the .NET Design Guidelines:

http://www.amazon.com/Framework-Desi.../dp/0321246756

... but most of the content of the relevant section (including
annotations) is on Joe Duffy's site:

http://www.bluebytesoftware.com/blog...3-20c06ae539ae

-- Barry

--
http://barrkel.blogspot.com/
Jan 19 '07 #16

Barry,

I stand corrected. I got the Finalizer and Dispose interfaces backwards.
The Dispose interface drives finalization overrides, not the other way
around. The concept of using Dispose to clean up is still valid, however.

Mike.

"Bruce Wood" <br*******@canada.comwrote in message
news:11**********************@11g2000cwr.googlegro ups.com...
>
Barry Kelly wrote:
>Michael D. Ober wrote:
You never have to call the IDispose interface. The GC will do this for
you.
You never need to call Dispose. By telling the compiler that an object
class implements IDisposable, the GC will call Dispose for you before it
actually deallocates memory.

Please, educate yourself before you spread more misinformation!

Sorry, Barry, but could you provide some references? I believe your
description of how finalizers / IDisposable interact, but I too have
read (somewhere) on MSDN that the _ideal_ is to write your classes so
that if the caller forgets to Dispose your objects, it will be taken
care of in garbage collection, later.

I realize that it's possible to write classes that don't act that way;
I'm talking about "best practices" here.

I'm fully open to being corrected. This is a vague memory of "something
I read," nothing more.



Jan 19 '07 #17

I have to say my charges have not been answered. No practical solutions,
only criticisms. I was told C# is not fit for my needs or that I gave bad
examples.

As I said, what I want is a better world. C++ can be better if it
integrates some new features. C#/Java can be better if they make GC really
an additive. I know C# and Java have gained vast success, but that should
not become a reason for us not to improve them.
Jan 19 '07 #18

Barry Kelly wrote:
The main reference I have for this is the .NET Design Guidelines:

http://www.amazon.com/Framework-Desi.../dp/0321246756
Barry,

I don't know about you, but that's one of the best books I own. Every
.NET developer would own a copy if it were up to me :)

Brian

Jan 19 '07 #19

Andre Kaufmann wrote:
I admit that sometimes I'm missing reference counting, when I'm dealing
with objects stored in multiple lists. How shall I know when to call
dispose ?

E.g. if a file object is stored in 2 or more lists and has to be removed
from one of the lists. How do I know if I have to call Dispose ?
Only
performant solution for me would be to use reference counting.
Though you can't have smart pointers, which are automatically destroyed
and will decrease the reference count of an object automatically. You
have to do it manually in C#.
A solution (but you're not going to like it) would be to do it manually
via IDisposable. Create a proxy with the reference count and a reference
to the resource, and create handles with a reference to both the
resource and the proxy and that decrement the reference count when
disposed. Have the proxy dispose the resource when the refcount reaches
0. Only hand out handles to the resource, and (if necessary) wrap access
to the resource so that it's not visible to the outside. The handles
could have a Clone() method that creates a new handle and increments the
refcount.

That technique turns N-ownership into N instances of 1-ownership. So
theoretically, if you're able to deal with a single parent, you're able
to deal with N single parents (and thus N parents).
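
A rough sketch of that idea, with made-up names (RefCountedResource is
the proxy, Handle is what callers own and dispose):

using System;
using System.Threading;

sealed class RefCountedResource
{
    private readonly IDisposable resource;
    private int count;

    public RefCountedResource(IDisposable resource)
    {
        this.resource = resource;
    }

    internal IDisposable Resource { get { return resource; } }

    public Handle Acquire()
    {
        Interlocked.Increment(ref count);
        return new Handle(this);
    }

    internal void Release()
    {
        // Last handle gone: dispose the wrapped resource deterministically.
        if (Interlocked.Decrement(ref count) == 0)
            resource.Dispose();
    }
}

sealed class Handle : IDisposable
{
    private RefCountedResource owner;

    internal Handle(RefCountedResource owner)
    {
        this.owner = owner;
    }

    // Throws after Dispose, by design: a disposed handle grants no access.
    public IDisposable Resource { get { return owner.Resource; } }

    public Handle Clone()
    {
        return owner.Acquire();   // a new owner: refcount goes up by one
    }

    public void Dispose()
    {
        // Exchange makes double-Dispose harmless.
        RefCountedResource o = Interlocked.Exchange(ref owner, null);
        if (o != null) o.Release();
    }
}

For Andre's two-lists case this would mean: wrap the file in one
RefCountedResource, call Acquire() once per list, store the handles in the
lists, and Dispose a handle when removing it from a list; the file itself
is disposed only when the last handle goes.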

-- Barry

--
http://barrkel.blogspot.com/
Jan 19 '07 #20


"A solution (but you're not going to like it) would be to do it manually
via IDisposable. Create a proxy with the reference count and a reference
to the resource, and create handles with a reference to both the
resource and the proxy and that decrement the reference count when
disposed. Have the proxy dispose the resource when the refcount reaches
0. Only hand out handles to the resource, and (if necessary) wrap access
to the resource so that it's not visible to the outside. The handles
could have a Clone() method that creates a new handle and increments the
refcount.

That technique turns N-ownership into N instances of 1-ownership. So
theoretically, if you're able to deal with a single parent, you're able
to deal with N single parents (and thus N parents).

-- Barry

I thought about similar things. That seems to be the only way at present
if I want to use C# to build a general-purpose framework. I admit I don't
like it, because an inexperienced user or an exception can easily ruin it.
Jan 19 '07 #21

Born wrote:
I have to say my charges are not answered.
What, are you prosecutor, judge *and* jury? Ha ha!
No practical solution but
criticisms. I was told C# is not fit for my need or I gave bad cases.
You criticise GC for not managing resources. It's like criticising cars
for being bad boats.

Object lifetime and resource lifetime are similar, so similar that other
languages (such as C++) put code managing them in the same place - the
destructor. However, GC is not a solution for both: it's a solution for
object lifetime *only*.

You've not responded to this point - it's the fundamental assumption on
which your argument falls flat.
As I said, what I want is a better world. The C++ can be better if it
integrates some new features.
You can use C++/CLI if you like, of course.

-- Barry

--
http://barrkel.blogspot.com/
Jan 19 '07 #22

Object lifetime and resource lifetime are similar, so similar that other
languages (such as C++) put code managing them in the same place - the
destructor. However, GC is not a solution for both: it's a solution for
object lifetime *only*.

You've not responded to this point - it's the fundamental assumption on
which your argument falls flat.
Well, I don't care if they are two different things or not. I want to use
an object's lifetime to tell its weak references that it's gone. It's a
general-purpose solution. Why not?

Jan 19 '07 #23

Born wrote:
Object lifetime and resource lifetime are similar, so similar that other
languages (such as C++) put code managing them in the same place - the
destructor. However, GC is not a solution for both: it's a solution for
object lifetime *only*.

You've not responded to this point - it's the fundamental assumption on
which your argument falls flat.

Well, I don't care if they are two different things or not.
I don't think you can use .NET effectively without respecting this
difference. It's too easy to leave files inadvertently locked, and have
that kind of failure - failure of resource acquisition of mutually
exclusive resources due to overlap.
I want to use
object lifetime to tell its weak references that it's gone. It's a general
purpose solution. Why not?
WeakReference can work for e.g. caches, but it's not really a callback
mechanism. For one thing, GC only occurs during allocation: depending on
the architecture, allocation may be a rare event. Especially if you've
architected the solution such that gen2 collections are rare, it may be
a long time before objects which have migrated to gen2 (and then died)
get collected.

WeakReference is for holding on to something that is somewhat expensive
to recreate, yet you want to give up under memory pressure; or
alternatively, for situations (such as weak delegates) when you want an
object to subscribe to callbacks or be similarly "contactable", but you
don't want those subscriptions themselves to keep the object alive.
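
A sketch of the cache case (all names invented; a tiny factory delegate
is declared so the example stays self-contained):

using System;
using System.Collections.Generic;

delegate TValue Factory<TKey, TValue>(TKey key);

// Values may be reclaimed under memory pressure and are recreated on demand.
class WeakCache<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, WeakReference> map =
        new Dictionary<TKey, WeakReference>();

    public TValue Get(TKey key, Factory<TKey, TValue> create)
    {
        WeakReference weak;
        TValue value = null;
        if (map.TryGetValue(key, out weak))
            value = weak.Target as TValue;   // null if already collected

        if (value == null)
        {
            value = create(key);             // somewhat expensive to recreate
            map[key] = new WeakReference(value);
        }
        return value;
    }
}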

-- Barry

--
http://barrkel.blogspot.com/
Jan 19 '07 #24


Born wrote:
">
Oh. My. God. You can't seriously tell me that you would use the memory
allocation state of an object to determine whether it is "alive" or not
in a game?

Open your mind. The memory allocation state is a general solution to inform
a weak reference that its target has gone. So why should I use domain
context to complicate things?
Domain context solves domain problems. Memory allocation and
reclamation are housekeeping. Mixing concerns, otherwise known as a
"hack," leads to woe, in my experience.

Your general point about needing to add higher-level context in order
to do resource management in C# was interesting. I was merely pointing
out that your example was terrible, that's all. I was not claiming that
a terrible example invalidates the original argument.
And this buys us... what? The ability to use C# / Java in a few domains
where it isn't working well right now? Why not just use a better-suited
language?

Be creative. The world can be better. Imagine C++ with enhanced features
like Reflection in C#. This is what I'm trying to prove here. Actually I'm
checking with C++/CLI, but I'm not sure how good it can be.
Sorry, it still sounds to me as though you're railing that it's far too
difficult to pound a nail into the wall with a screwdriver, and
claiming that the problem is that the screwdriver needs a heavier
handle with a flatter surface on it with which to drive nails. I claim
that you should, instead, use a hammer, or perhaps employ a screw
instead of the nail. I don't see anything particularly closed-minded or
uncreative in that advice.

Jan 19 '07 #25


Born wrote:
Object lifetime and resource lifetime are similar, so similar that other
languages (such as C++) put code managing them in the same place - the
destructor. However, GC is not a solution for both: it's a solution for
object lifetime *only*.

You've not responded to this point - it's the fundamental assumption on
which your argument falls flat.

Well, I don't care if they are two different things or not. I want to use
object lifetime to tell its weak references that it's gone.
I don't know nearly as much as Barry on this subject, but I immediately
note a semantic problem here: what do you mean by "gone"?

In C++ you have total control over that concept, and C++ software is
usually designed so that all definitions of "gone" occur simultaneously
in the object's destructor. However, this is an artifact of the
language, not a truism.

Let's take your game, for example. An object representing an adversary
can be killed, "gone" from the point of view of the player, but still
be on a queue somewhere waiting for some post-death processing, so not
yet "gone" from the point of view of the game software. In C#, the
object can have any resources it's holding released, but still have
references to it, in other words, Disposed, or "gone" from the point of
view of being any longer usable. Hopefully, a few nanoseconds after
that the last reference to the object goes out of scope, so now the
object is "gone" from the point of view of the program, but may still
be taking up memory. Then, finally, the GC reclaims it, so it is now
"gone" from all points of view.

So... what do you mean by "gone"?

Jan 19 '07 #26

First, "gone" means the object's members (methods and variables) are not
available any longer. If you want post-processing, you either employ some
callbacks or simply use an ID of the object, depends on how you post-process
it. The advantage of using the object lifetime to notify its weak references
it's gone is, it's a general purpose solution and you need no extra logics.
Those logics may eventually lead to IsKilled, IsDetroyed,IsVanished... in
practical projects lasting months with many people involved.

Second, I believe object lifetime and the lifetime of the resources the
object owns are the same thing. Why, this is what encapsulation means. An
object encapsulates its resources and their lifetimes are bound together.
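To illustrate the delay I mean, here is a minimal C# sketch (the names are
only illustrative):

// A weak reference keeps reporting its target as alive until the GC
// actually collects it - not when the last strong reference is dropped.
using System;

class WeakReferenceDelay
{
    static void Main()
    {
        object target = new object();
        WeakReference weak = new WeakReference(target);

        target = null; // the last strong reference is gone...
        Console.WriteLine(weak.IsAlive); // ...but this may still print True

        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.WriteLine(weak.IsAlive); // False only after a collection
    }
}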
"Bruce Wood" <br*******@canada.comwrote in message
news:11*********************@11g2000cwr.googlegrou ps.com...
>
Born wrote:
Object lifetime and resource lifetime are similar, so similar that
other
languages (such as C++) put code managing them in the same place - the
destructor. However, GC is not a solution for both: it's a solution for
object lifetime *only*.

You've not responded to this point - it's the fundamental assumption on
which your argument falls flat.

Well, I don't care if they are two different things or not. I want to use
object lifetime to tell its weak references that it's gone.

I don't know nearly as much as Barry on this subject, but I immediately
note a semantic problem here: what do you mean by "gone"?

In C++ you have total control over that concept, and C++ software is
usually designed so that all definitions of "gone" occur simultaneously
in the object's destructor. However, this is an artifact of the
language, not a truism.

Let's take your game, for example. An object representing an adversary
can be killed, "gone" from the point of view of the player, but still
be on a queue somewhere waiting for some post-death processing, so not
yet "gone" from the point of view of the game software. In C#, the
object can have any resources it's holding released, but still have
references to it, in other words, Disposed, or "gone" from the point of
view of being any longer usable. Hopefully, a few nanoseconds after
that the last reference to the object goes out of scope, so now the
object is "gone" from the point of view of the program, but may still
be taking up memory. Then, finally, the GC reclaims it, so it is now
"gone" from all points of view.

So... what do you mean by "gone"?

Jan 19 '07 #27

P: n/a
I've finished my research on C++/CLI.

With C++/CLI (VS2005), things turn out to be as I expected. Now I have stack
objects, delete, Reflection and so on that I believe can make the world
better.

Although reference objects with stack semantics are not really on the
stack, they solve the problem I addressed. The only pity is that only C++
clients can use this feature and not other .NET languages. However, I will
not be surprised if they extend it to C# and BASIC in the future.

Thank you guys, for without you, I could not have defined and solved the
problem within such a short time.
Jan 19 '07 #28

P: n/a
Born wrote:
Second, I believe object lifetime and the lifetime of the resources the
object owns are the same thing. Why, this is what encapsulation means. An
object encapsulates its resources and their lifetimes are bound together.
I understand why you say this - you're coming from a C++ perspective -
but you must understand that C++ is a different language from C#. What's
true for C++ is *not* necessarily true in the managed worlds of C# and
Java.

But don't take my word for it. Here are two annotations from the page I
linked earlier
(http://www.bluebytesoftware.com/blog...20c06ae539ae):

"Annotation (Joe Duffy): Earlier in the .NET Framework’s lifetime,
finalizers were consistently referred to as destructors by C#
programmers. As we become smarter over time, we are trying to come to
terms with the fact that the Dispose method is really more equivalent to
a C++ destructor (deterministic), while the finalizer is something
entirely separate (nondeterministic). The fact that C# borrowed the C++
destructor syntax (i.e. ~T()) surely had at least a little to do with
the development of this misnomer. Confusing the two has been unhealthy
in general for the platform, and as we move forward the clear
distinction between resource and object lifetime needs to take firm root
in each and every managed software engineer’s head."

*** Note this especially, I'm repeating it to call it out:
"the clear distinction between resource and object lifetime needs to
take firm root in each and every managed software engineer’s head."

"Annotation (Jeffrey Richter): It is very unfortunate that the C# team
chose to use the tilde syntax to define what is now called a finalizer.
Programmers coming from an unmanaged C++ background naturally think that
you get deterministic cleanup when using this syntax. I wish the team
had chosen a symbol other than tilde; this would have helped developers
substantially in learning how the .NET platform is different than the
unmanaged architecture."
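To make the distinction concrete, here is the standard dispose pattern in
C#, in outline (ResourceHolder is just an illustrative name):

using System;

// Dispose() gives deterministic cleanup (the C++-destructor analogue);
// the finalizer is only a nondeterministic safety net run by the GC.
class ResourceHolder : IDisposable
{
    IntPtr handle; // stands in for some unmanaged resource

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // the safety net is no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (handle != IntPtr.Zero)
        {
            // release the unmanaged resource here
            handle = IntPtr.Zero;
        }
    }

    ~ResourceHolder() // finalizer - spelled like a C++ destructor, but isn't one
    {
        Dispose(false);
    }
}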

-- Barry

--
http://barrkel.blogspot.com/
Jan 19 '07 #29

P: n/a
"Born" <wa*******@dlctek.comwrote in message news:eo**********@news.yaako.com...
I've finished my research on C++/CLI.

With C++/CLI (VS2005), things turn out to be as I expected. Now I have stack objects,
delete, Reflection and so on that I believe can make the world better.
Although reference objects with stack semantics are not really on stack, it solves the
problem I addressed. The only pity is that only C++ clients can use this feature and not
other .NET languages. However, I will not be surprised if they extend it to C#,BASIC in
the future.
IMO it's not gonna happen, because it solves nothing that isn't solved with the "using"
idiom. What they did in C++/CLI is provide "scope destructors", not deterministic behavior
per se. The syntax is also far less expressive than the "using" directive in C#: you have
to put the variable in a separate scope, and you can't declare it inline.
A user can still make mistakes: you can't force him to declare the variable holding a
reference with stack semantics, just like you can't force a user to call Dispose or apply
the "using" idiom. A reference that is not declared with stack semantics still has to be
deleted to have deterministic destruction.

Willy.
Jan 19 '07 #30

P: n/a
Willy Denoyette [MVP] wrote:
"Born" <wa*******@dlctek.comwrote in message
>Although reference objects with stack semantics are not really on
stack, it solves the problem I addressed. The only pity is that only
C++ clients can use this feature and not other .NET languages.
However, I will not be surprised if they extend it to C#,BASIC in
the future.

IMO it's not gonna happen because it solves nothing that isn't solved
with the "using" idiom. What they did in C++/CLI is provide
"scope destructors", not deterministic behavior per se. The syntax is
also far less expressive than the "using" directive in C#, and you
have to put the variable in a separate scope, you can't declare it
inline.
While it's true that scope-based deterministic destruction solves the same
problem as the using statement in C#, it's a bit of a stretch to say that
it's "far less expressive". Perhaps you prefer to see

using (T1 t1 = new T1())
{
using (T2 t2 = new T2())
{
using (T3 t3 = new T3())
{
// some code
}
}
}

or even the short hand:

using (T1 t1 = new T1())
using (T2 t2 = new T2())
using (T3 t3 = new T3())
{
// some code
}

but it's hard for me to see how that's more expressive than

{
T1 t1;
T2 t2;
T3 t3;

// some code
}

Note that using variables also must be declared in their own scope since the
using statement itself defines a scope, so that argument is a non-starter.
A user can still make mistakes, you can't force him to declare the
variable holding a reference with stack semantics, just like you
can't force a user to call Dispose or apply the "using" idiom. A
reference that is not declared with stack semantics, still has to be
deleted to have deterministic destruction.
Unfortunately, that's true. In true C++ tradition, C++/CLI lets you hang
yourself with any number of different lengths of rope. At least it does
give a paradigm that's more comfortable to C++ programmers than the using
idiom.

-cd
Jan 19 '07 #31

P: n/a
"Carl Daniel [VC++ MVP]" <cp*****************************@mvps.org.nospamwr ote in message
news:Or***************@TK2MSFTNGP04.phx.gbl...
Willy Denoyette [MVP] wrote:
>"Born" <wa*******@dlctek.comwrote in message
>>Although reference objects with stack semantics are not really on
stack, it solves the problem I addressed. The only pity is that only
C++ clients can use this feature and not other .NET languages.
However, I will not be surprised if they extend it to C#,BASIC in
the future.

IMO it's not gonna happen because it solves nothing that isn't solved
with the "using" idiom. What they did in C++/CLI is provide
"scope destructors", not deterministic behavior per se. The syntax is
also far less expressive than the "using" directive in C#, and you
have to put the variable in a separate scope, you can't declare it
inline.

While it's true that scope-based deterministic destruction solves the same problem as the
using statement in C#, it's a bit of a stretch to say that it's "far less expressive".
Perhaps you prefer to see
using (T1 t1 = new T1())
{
using (T2 t2 = new T2())
{
using (T3 t3 = new T3())
{
// some code
}
}
}

or even the short hand:

using (T1 t1 = new T1())
using (T2 t2 = new T2())
using (T3 t3 = new T3())
{
// some code
}
Honestly, I do. IMO it's clear that T1, T2 and T3 are disposable types. C# is a new
language, and IMO the using idiom clearly states the intention of the author; I see no
reason (and neither did Anders) why it should have adopted the C++ idiom.
but it's hard for me to see how that's more expressive than
{
T1 t1;
T2 t2;
T3 t3;

// some code
}
Here T1, T2 and T3 can be anything; consider this...

// assembly xxx
namespace xyz
{
public struct T1 {
int i;
long v;
};
public struct T2 {
public:
T2() { xxx = new ...;}
...
~T2() { delete xxx;}
!T2(){}
};
...
...
}
// assembly yy
using namespace xyz;
{
T1 t1;
T2 t2;
// some code
}

Here for the "reader" (code maintenace) it's not that obvious that T2 refers to a
disposable type.
Note that using variables also must be declared in their own scope since the using
statement itself defines a scope, so that argument is a non-starter..
Not really, the variable can be declared at class scope or function scope

// class member or local
ExpensiveObject o;
....

using(o = new...)
{...}

>A user can still make mistakes, you can't force him to declare the
variable holding a reference with stack semantics, just like you
can't force a user to call Dispose or apply the "using" idiom. A
reference that is not declared with stack semantics, still has to be
deleted to have deterministic destruction.

Unfortunately, that's true. In true C++ tradition, C++/CLI lets you hang yourself with
any number of different lengths of rope. At least it does give a paradigm that's more
comfortable to C++ programmers than the using idiom.
Agreed, that's why it has been adopted by C++/CLI, but that doesn't mean all other managed
languages should have done the same (well they couldn't, C# pre-dates C++/CLI).
Willy.
Jan 19 '07 #32

P: n/a
Barry Kelly wrote:
Andre Kaufmann wrote:
>Only
performant solution for me would be to use reference counting.
Though you can't have smart pointers, which are automatically destroyed
and will decrease the reference count of an object automatically. You
have to do it manually in C#.
A solution (but you're not going to like it) would be to do it manually
via IDisposable. Create a proxy with the reference count and a reference
to the resource, and create handles with a reference to both the
[...]
Thank you for posting this solution. I would love this solution - which
is similar to a smart pointer in C++ - if I could have the same
automatism as in C++: RAII.

E.g. if I copy one smart pointer from one list to another, the
reference counter is automatically incremented. With your solution at
least I could wrap the list with functions that automatically do this
when adding a proxy. So all in all I "like" this solution ;-) and think I
can live with it.
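Roughly what I have in mind (a quick sketch; RefCounted<T> and
RefCountingList<T> are names I made up for illustration):

using System;
using System.Collections.Generic;

// A hand-rolled proxy holding the reference count and the resource.
sealed class RefCounted<T> where T : class, IDisposable
{
    readonly T target;
    int count = 1; // the creator owns the first reference

    public RefCounted(T target) { this.target = target; }
    public T Target { get { return target; } }
    public void AddRef() { count++; }
    public void Release() { if (--count == 0) target.Dispose(); }
}

// A list wrapper that adjusts the count on Add/Remove, so the last
// Remove automatically disposes the underlying resource.
sealed class RefCountingList<T> where T : class, IDisposable
{
    readonly List<RefCounted<T>> items = new List<RefCounted<T>>();

    public void Add(RefCounted<T> item)
    {
        item.AddRef();
        items.Add(item);
    }

    public bool Remove(RefCounted<T> item)
    {
        bool removed = items.Remove(item);
        if (removed) item.Release();
        return removed;
    }
}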

Perhaps I've been programming C++ too long and can't get used to another
programming style that easily ;-).
A C# developer coming from Delphi won't have such "problems".

Though C++ is quite complex and also has many downsides compared to C#
(no override, no sealed, slow compilation ...), I still prefer a more
complex language which enables me to implement wrapper classes doing
just the right thing with my objects, so that I don't have to
permanently think about when I have to Dispose an object.

Reference counting doesn't scale that well and has downsides too. Though
something like reference-counter-based auto disposing would be very
good to have.
[...]
Andre
Jan 19 '07 #33

P: n/a
Willy Denoyette [MVP] wrote:
"Born" <wa*******@dlctek.comwrote in message
news:eo**********@news.yaako.com...
>I've finished my research on C++/CLI.
[...]
IMO it's not gonna happen because it solves nothing that isn't solved
with the "using" idiom. What they did in C++/CLI is provide "scope
destructors", not deterministic behavior per se.
Using is a perfect solution if you don't leave the function scope. But
if you have to leave the function scope, what shall be used then ?

Small example:

List<DisposableObject> list1 = new List<DisposableObject>(); // Global
List<DisposableObject> list2 = new List<DisposableObject>(); // lists
DisposableObject MyObject = new DisposableObject();

void f()
{
list1.Add(MyObject);
list2.Add(MyObject);
}

void r()
{
list1.Remove(MyObject);
}
Function r():
-------------
Now here I have the problem, how shall I know if
I have to call Dispose, without checking the other lists
if they still contain a reference to MyObject ?

In C++ I would use smart pointers; in C#, the solution Barry has
suggested - a proxy - could be used (reference counters in C++ are not
easy to understand, but they are much easier to use).
>[...]
Willy.
Andre
Jan 19 '07 #34

P: n/a
Andre Kaufmann wrote:
Thank you for posting this solution. I would love this solution - which
is similar to a smart pointer in C++ - if I could have the same
automatism like in C++ - RAII.

Perhaps I've been programming C++ too long and can't get used to another
programming style that easily ;-).
A C# developer coming from Delphi won't have such "problems".
Delphi Win32 users can also implement RAII, using interfaces. Interfaces
in Delphi are ref-counted COM-style interfaces, implemented behind the
scenes a lot like CComPtr<T> and friends.

-- Barry

--
http://barrkel.blogspot.com/
Jan 19 '07 #35

P: n/a
Barry Kelly wrote:
Andre Kaufmann wrote:
[...]
>Perhaps I've been programming C++ too long and can't get used to another
programming style that easily ;-).
A C# developer coming from Delphi won't have such "problems".

Delphi Win32 users can also implement RAII, using interfaces.
Semantically this is comparable to C++ RAII. Though there are downsides
using these interfaces, intentionally implemented to support COM, as a
RAII replacement.

At function scope it's IMHO overkill, since the interfaced objects will
be created on the native heap instead of being constructed on the stack
as in C++. This would be a rather slow approach; for releasing resources
at function scope, try/finally is therefore commonly used.

In .NET there isn't the same object-construction overhead as in Win32.
A stack-based approach would IMHO still be faster, but C# has adopted
the Delphi style and added the "using" keyword.
No big deal, though sometimes I miss RAII and assignment operator
overloading, or some form of automatic resource handling.

If you ignore the runtime overhead of object construction in
Delphi/Win32, then I agree that you can use interfaces as a RAII replacement.
Interfaces
in Delphi are ref-counted COM-style interfaces, implemented behind the
scenes a lot like CComPtr<T> and friends.
Yes, but IMHO the internal implementation of interfaces is hidden too
much. They are perfect if you don't mix them with other objects, e.g. by
holding them in the same object list. Otherwise, I promise, a developer
doing this will have much fun ;-).
-- Barry
Andre

Jan 20 '07 #36

P: n/a
On Fri, 19 Jan 2007 23:29:29 +0100, Andre Kaufmann
<an*********************@t-online.de> wrote:

<snip>
>Now here I have the problem, how shall I know if
>I have to call Dispose, without checking the other lists
>if they still contain a reference to MyObject ?
Agree. The prompt disposal of objects that live for longer than a
function is a problem. It would be really nice to have a new
attribute:

[ReferenceCount(true)]
class MyClass { ... }

--
Philip Daniels
Jan 20 '07 #37

P: n/a
Philip Daniels wrote:
[...]
Agree. The prompt disposal of objects that live for longer than a
function is a problem. It would be really nice to have a new
attribute:

[ReferenceCount(true)]
class MyClass { ... }
Yes would be nice. It's nice that .NET manages memory for me, though
regarding resources I'm missing some kind of automatism.
--
Philip Daniels

Andre
Jan 20 '07 #38

P: n/a
Philip Daniels <Philip Daniels> wrote:

<snip>
Agree. The prompt disposal of objects that live for longer than a
function is a problem. It would be really nice to have a new
attribute:

[ReferenceCount(true)]
class MyClass { ... }
The CLR team investigated all manner of solutions fairly exhaustively.
Because you could do something like:

object o = instanceOfMyClass;
object x = y;

etc, *every* assignment that could *possibly* be a reference-counted
type would need to check whether or not reference counting was
required. That's a painful performance hit.
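In other words, every plain reference assignment would in effect have to
expand into something like the following sketch (IRefCounted is a
hypothetical interface here, not a real CLR type):

// What "dest = src;" would have to become if *any* object could be
// reference counted: a runtime type check on every single assignment.
interface IRefCounted
{
    void AddRef();
    void Release();
}

static class RefCountedAssign
{
    public static void Assign(ref object dest, object src)
    {
        IRefCounted incoming = src as IRefCounted;
        if (incoming != null) incoming.AddRef();

        IRefCounted outgoing = dest as IRefCounted;
        if (outgoing != null) outgoing.Release();

        dest = src;
    }
}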

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jan 20 '07 #39

P: n/a
Jon Skeet [C# MVP] wrote:
Philip Daniels <Philip Daniels> wrote:

<snip>
>Agree. The prompt disposal of objects that live for longer than a
function is a problem. It would be really nice to have a new
attribute:

[ReferenceCount(true)]
class MyClass { ... }

The CLR team investigated all manner of solutions fairly exhaustively.
Because you could do something like:

object o = instanceOfMyClass;
object x = y;
But why should this assignment increase the reference counter ?
That's up to a smart pointer.

E.g.:

RefCountedObject obj1;
SmartPointer mySmartPointer1 = obj1; // Increase
SmartPointer mySmartPointer2 = mySmartPointer1; // Increase

RefCountedObject obj2 = obj1; // !!! Assignment

etc, *every* assignment that could *possibly* be a reference-counted
type would need to check whether or not reference counting was
required. That's a painful performance hit.
If you don't use another abstraction, like smart pointers, yes.

Smart pointers could IMHO be checked at compile time; otherwise they
would be painfully slow in C++ too. The only downside is that Interlocked
functions must (should) be used to ensure thread safety, which doesn't
scale that well in multi-threaded applications running on dual-core CPUs.
But smart pointers shouldn't be used as a general replacement, only where
they are needed. And then in most cases you will have simpler and perhaps
even faster code.

Andre
Jan 20 '07 #40

P: n/a
On Sat, 20 Jan 2007 15:46:20 -0000, Jon Skeet [C# MVP]
<sk***@pobox.com> wrote:

<snip>
>etc, *every* assignment that could *possibly* be a reference-counted
>type would need to check whether or not reference counting was
>required. That's a painful performance hit.
I can see that would be a problem.
--
Philip Daniels
Jan 20 '07 #41

P: n/a
On Sat, 20 Jan 2007 18:05:44 +0100, Andre Kaufmann
<an*********************@t-online.de> wrote:

<snip>
>Smart pointers could IMHO be checked at compile time, otherwise they
>would be painfully slow in C++ too. But smart pointers shouldn't be
>used as a general replacement, only where they are needed.
I think that what Jon is saying is that ultimately in the CLR every
reference type "is" a System.Object, hence any solution using a
smartptr ultimately begs the question. The CLR has to constantly check
whether an object is a smartptr before it can do "the right thing".

It's not just our code that would be a problem, I am sure there are
many places deep in the .Net runtime where objects are treated in a
generic fashion.

--
Philip Daniels
Jan 20 '07 #42

P: n/a
Philip Daniels wrote:

Sorry, this time my post is a bit longer - I had to post code too, for
illustration.
On Sat, 20 Jan 2007 18:05:44 +0100, Andre Kaufmann
<an*********************@t-online.dewrote:
[...]
>Andre

I think that what Jon is saying is that ultimately in the CLR every
reference type "is" a System.Object, hence any solution using a
smartptr ultimately begs the question.
Yes, I know that.
The CLR has to constantly check
whether an object is a smartptr before it can do "the right thing".
Don't think so.

Arguments:

1) How does the CLR check the object's type when I
add it to a generic list? They are all objects?
How can it be typesafe?

2) I can implement smart pointers in C++/CLI. How is that
possible if the C++/CLI code is compiled to IL code?

3) ! The killer argument ;-)

Delphi implements reference counted objects: InterfacedObjects.

AFAIK this Delphi code compiles directly, without
modification, to .NET 1.1? How can this be?
It's not just our code that would be a problem, I am sure there are
many places deep in the .Net runtime where objects are treated in a
generic fashion.
Yes, but why care about it ?

Sample code which illustrates the idea:
(Many optimizations are missing that are essential for a good
smart pointer object !!! - and I haven't tested the code !!!!)
-----------------------------------------------------
##### 1 Embedded counter implementation
########################################

class ReferenceCounter { public long value = 0; }

class SmartPointer<T> : IDisposable where T : class, IDisposable
{
public SmartPointer(T initialPtr)
{
stored = initialPtr;
refcounter.value = 1;
}

public SmartPointer()
{
stored = null;
refcounter.value = 1;
}

public void Assign(SmartPointer<T> r)
{
Discard();
refcounter = r.refcounter;
stored = r.stored;
refcounter.value++;
}

public void Assign(T r)
{
if (stored == r) return;
Discard();
refcounter = new ReferenceCounter(); // start a fresh count for the new target
refcounter.value = 1;
stored = r;
}

public void Discard()
{
if (--refcounter.value == 0)
if (stored != null) stored.Dispose();
}

public void Dispose() { Discard(); }
ReferenceCounter refcounter = new ReferenceCounter();
T stored;
};

class SimpleObject : IDisposable
{
public void Dispose() {}
}

static void Main(string[] args)
{
SmartPointer<SimpleObject> p1 =
new SmartPointer<SimpleObject>(new SimpleObject());
SmartPointer<SimpleObject> p2 =
new SmartPointer<SimpleObject>();
p2 = p1;
}

###### 2 Interface based implementation:
########################################

// Base interface for reference counted objects
interface IRefCountedObject
{
ulong AddRef();
ulong Release();
}

// Dumb implementation, only for illustration
// ! not thread safe ..... !!
class BaseRefCounted : IRefCountedObject, IDisposable
{
public ulong AddRef () { return ++l; }
public ulong Release() { if (l == 1) Dispose(); return --l; }
public void Dispose () { }
ulong l = 0;
};
// 2 Objects - first reference counted, second not
class ObjectHasRef : BaseRefCounted { }
class ObjectNoRef { }
// Dumb implementation of a SmartPointer,
// only for illustration, not tested

class SmartPointer<T> : IDisposable where T : class, IRefCountedObject
{
// Assign a new object, this should be also possible with an
// assignment operator !!! for convenience
public void Assign(T r)
{
if (r == stored) return;
if (stored != null) stored.Release();
r.AddRef(); stored = r;
}

// Releases the object and eventually calls Dispose
// if the reference is zero
public void Discard()
{
if (stored != null) stored.Release();
stored = null;
}

public void Dispose() { Discard(); }
T stored; // Points to the reference counted object
};
static void Main(string[] args)
{
ObjectNoRef no = new ObjectNoRef ();
ObjectHasRef yes = new ObjectHasRef();
SmartPointer<IRefCountedObject> p1 = new
SmartPointer<IRefCountedObject>();
SmartPointer<ObjectHasRef> p2 = new
SmartPointer<ObjectHasRef>();

p1.Assign(yes);
p2.Assign(yes);
p1.Assign(no); // Boooom compiler error
}
Please don't sue me for any errors, it was just a quick hack.
So what's missing is some automatism implemented in the CLR which does
the handling, so that I don't have to call an Assign method but can use
an assignment operator. It would be perfect if the compiler could also
generate code which automatically disposes the reference counted object;
so optimally the SmartPointer should be a stack allocated / handled
object. Like value objects - but smart ;-)
--
Philip Daniels


Andre
Jan 20 '07 #43

P: n/a
Andre Kaufmann <an*********************@t-online.de> wrote:
The CLR team investigated all manner of solutions fairly exhaustively.
Because you could do something like:

object o = instanceOfMyClass;
object x = y;

But why should this assignment increase the reference counter ?
That's up to a smart pointer.
If the assignment *didn't* increase the reference counter, then the
object could end up disposing of itself too early:

MyClass instanceOfMyClass = new MyClass(); // Ref count=1
object o = instanceOfMyClass; // If we don't do anything now...
instanceOfMyClass = null; // Ref count = 0; dispose

instanceOfMyClass = (MyClass) o;
instanceOfMyClass.DoSomething(); // Bang!
E.g.:

RefCountedObject obj1;
SmartPointer mySmartPointer1 = obj1; // Increase
SmartPointer mySmartPointer2 = mySmartPointer1; // Increase

RefCountedObject obj2 = obj1; // !!! Assignment
So SmartPointer would be a completely different type entirely? I
thought the point was that you wouldn't have to explicitly do anything
different when writing code - otherwise you might just as well have a
using statement.
etc, *every* assignment that could *possibly* be a reference-counted
type would need to check whether or not reference counting was
required. That's a painful performance hit.

If you don't use another abstraction, like smart pointers, yes.
If you have to do anything different in the client code, there's no
benefit that I can see.
Smart pointers could IMHO be checked at compile time, otherwise they
would be painfully slow in C++ too. The only downside will be, that
Interlocked function must (should) be used to ensure thread safety,
which doesn't scale that well in multi threaded applications running on
dual core CPUs. But SmartPointers shouldn't be used as a general
replacement, only where they are needed. And then in most cases you will
have simpler and perhaps even faster code.
Except that as far as I can see, you haven't actually gained anything -
the client code still needs to know that it's an object which needs to
be disposed at the right time, and so you use SmartPointer
appropriately. If you've got to know that anyway, why not just use the
using statement?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jan 20 '07 #44

P: n/a
Andre Kaufmann <an*********************@t-online.de> wrote:
The CLR has to constantly check
whether an object is a smartptr before it can do "the right thing".

Don't think so.

Arguments:

1) How does the CLR check the objects type when I
add it to a generic list ? They are all objects ?
How can it be typesafe ?
Each object knows what its type is.
2) I can implement smart pointers in C++/CLI. How is that possible
the C++/CLI code is compiled to IL code ?
Is it possible in "pure" CLI mode, with no unmanaged code?
3) ! The killer argument ;-9

Delphi implements reference counted objects. InterfacedObjects

AFAIK this Delphi code
compiles directly without a modification
to .NET 1.1 ? How can this be ?
My guess is that the behaviour is different, or Delphi adds an extra
step in every assignment which could involve a reference counted
object.

My guess is the first, especially given
http://www.midnightbeach.com/dotNetA...ture.2002.html
<quote>
One thing that reference counting does do better than garbage collection is
resource protection. That file will get closed, that visual cue will
get restored, at the moment when your interface variable goes out of
scope and the object is freed. With garbage collection, you can have a
finalization routine that gets called when the block is scavenged, but
you have no control over when it happens. This means that a whole class
of "failsafe" Delphi techniques that rely on interface finalization are
invalid under .Net.
</quote>

Note the last sentence.
It's not just our code that would be a problem, I am sure there are
many places deep in the .Net runtime where objects are treated in a
generic fashion.

Yes, but why care about it ?
Because you don't want things to be disposed earlier than they should
be.
Sample code which illustrates the idea:
(There are many optimizations missing and which are essential for a good
smart pointer object !!! - and I haven't tested the code !!!!)
-----------------------------------------------------
<snip>
static void Main(string[] args)
{
SmartPointer<SimpleObject> p1 =
new SmartPointer<SimpleObject>(new SimpleObject());
SmartPointer<SimpleObject> p2 =
new SmartPointer<SimpleObject>();
p2 = p1;
}
So that adds an implicit call to Assign during assignment, right?
So I could make it fail very easily using:

object o = p1; // No reference count increase?

If a smart-pointer implementation is to be useful, it mustn't fail in
situations like the above, IMO.
Please don't sue me for any errors, it was just a quick hack.
So what's missing - some automatism implemented in the CLR, which does
the handling, so that I don't have to call an assign method, but can use
an assignment operator and would be perfect if the compiler could also
generate code which automatically disposes the reference counted object,
so optimally the SmartPointer should be a stack allocated / handled
object. Like value objects - but smart ;-)
But you can't make it perform *and* cope with people using
interfaces/object references instead of SmartPointer references, IMO.

If it were simple to achieve, don't you think the CLR team would have
thought of it? See http://blogs.msdn.com/brada/pages/371015.aspx for
some evidence that they've thought quite hard about this topic.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jan 20 '07 #45

P: n/a
"Andre Kaufmann" <an*********************@t-online.dewrote in message
news:Ob**************@TK2MSFTNGP05.phx.gbl...
Philip Daniels wrote:

Sorry this time my post is a bit longer - had to post code too for illustration.
>On Sat, 20 Jan 2007 18:05:44 +0100, Andre Kaufmann
<an*********************@t-online.de> wrote:
[...]
>>Andre

I think that what Jon is saying is that ultimately in the CLR every
reference type "is" a System.Object, hence any solution using a
smartptr ultimately begs the question.

Yes know that.
>The CLR has to constantly check
whether an object is a smartptr before it can do "the right thing".

Don't think so.

Arguments:

1) How does the CLR check the objects type when I
add it to a generic list ? They are all objects ?
How can it be typesafe ?
The CLR (CLI) has a rich type system and each object knows its exact type.
2) I can implement smart pointers in C++/CLI. How is that possible
the C++/CLI code is compiled to IL code ?

I don't know what your implementation looks like, but I'm pretty sure it is NOT compiled
using /clr:safe. That means that the compiler is free to generate native code for what's not
"translatable" to IL. More, if you are using templates or native objects, then these are not
managed types and as such do not end up on the GC heap.

Willy.
Jan 20 '07 #46

P: n/a
Willy Denoyette [MVP] wrote:
"Andre Kaufmann" <an*********************@t-online.dewrote in message
news:Ob**************@TK2MSFTNGP05.phx.gbl...
>Philip Daniels wrote:
[...]
> I don't know what your implementation looks like, but I'm pretty sure it
> is NOT compiled using /clr:safe.

For example boost::smart_ptr compiles in /clr:pure mode, but surely it can
only handle native classes. And I don't see any reason why it shouldn't be
possible to convert the code to handle managed ones - instead of calling
delete on the object, call Dispose?

The compiler will throw a warning if native code is generated; by default
it does not generate mixed code without warning.
> That means that the compiler is free to generate native code for what's
> not "translatable" to IL.

Besides, that's not the case - I think Reflector would tell me if there's
native code inside ;-)

> More, if you are using templates or native objects, then these are not
> managed types and as such do not end up on the GC heap.

Hm, what's the difference between an object constructed on the native
heap and one constructed on the managed heap, as far as smart pointers
are concerned? All I want is to release the resources, not the memory.

For managed ones I call Dispose, for native ones delete. I don't see
any difference in handling as far as smart pointers are concerned.
Willy.
Andre

Jan 20 '07 #47

P: n/a
Jon Skeet [C# MVP] wrote:
Andre Kaufmann <an*********************@t-online.de> wrote:
>Arguments:

1) How does the CLR check the objects type when I
add it to a generic list ? They are all objects ?
How can it be typesafe ?

Each object knows what its type is.
Yes. But the point is that the smart pointer either doesn't need to know
(if it allocates the reference counter value externally), or it knows
that the object is a reference counted one (if you inherit from an
IReferenceCounted interface or use attributes to add such code).

You surely can cast to System.Object and shoot yourself in the foot,
but you can do the same with all other objects or with Dispose, or by
using destructors (finalizers) every time, or.... there are many examples
where C# too allows one to write bad, error-prone code.
>
>2) I can implement smart pointers in C++/CLI. How is that possible
the C++/CLI code is compiled to IL code ?

Is it possible in "pure" CLI mode, with no unmanaged code?
Yes. boost::smart_ptr compiles, but surely it can only handle native
objects. But as I wrote in my other post, if you replace * with ^ and
delete with Dispose, it should also compile with managed ones.
I have a basic implementation doing this.
[...]
> My guess is that the behaviour is different, or Delphi adds an extra
> step in every assignment which could involve a reference counted
> object.

In Delphi every object is derived from TObject - same as in C#.
So it should be comparable - at least I think so.

I guess the latter. In fact it's doing that in native code, and I don't
see any reason why it shouldn't do the same in managed code.

E.g.: when I assign

InterfacedObject1 := InterfacedObject2

In Delphi the compiler calls (not 100% exact, meta code):

InterfacedObject1._release_old_ptr;
InterfacedObject1._call_delete_if_zero;
InterfacedObject1._inc_ref_counter_to_new_pointer;
InterfacedObject1._assign_new_pointer;

The managed version (don't know if that the case)
should be something like:

InterfacedObject1._release_old_ptr;
InterfacedObject1._call_Dispose_if_zero;
InterfacedObject1._inc_ref_counter_to_new_pointer;
InterfacedObject1._assign_new_pointer;

[...] This means that a whole class
of "failsafe" Delphi techniques that rely on interface finalization are
invalid under .Net.
</quote>

Note the last sentence.
I don't know exactly what the author means by interface finalization.
I suppose the finalizer method. But as Delphi (AFAIK) maps the
destructor to the Dispose method, there is no need to rely on
finalization. The smart pointer simply calls Dispose - indirectly.
>>It's not just our code that would be a problem, I am sure there are
many places deep in the .Net runtime where objects are treated in a
generic fashion.
Yes, but why care about it ?

Because you don't want things to be disposed earlier than they should
be.
As I wrote above, you can always shoot yourself in the foot. But
such errors are normally easier to find than errors caused by objects
which aren't Disposed at all, relying on the GC to eventually call it.
>[...]
p2 = p1;
}

So that adds an implicit call to Assign during assignment, right?
Oops, yes. It should be: p2.Assign(p1);

Sorry, I used the syntax I'm used to in C++. I would like to use this
syntax in C# too. But I suppose the CLR team would argue that there
would be too much going on under the hood and it wouldn't be clear to
the programmer.

Anyways I could live with p2.Assign(p1);
So I could make it fail very easily using:

object o = p1; // No reference count increase?
As I wrote above, it was the wrong syntax, and I would only prefer such a
syntax in C# too. In this case it could perhaps simply be treated as a
weak reference pointer: no increment necessary, just an additional
reference. I don't claim that I have thoroughly thought it all over, or
that I know it better than all the CLR developers. Perhaps the solution
isn't perfect at all and has many pitfalls, but the same applies to the
Dispose pattern.
If a smart-pointer implementation is to be useful, it mustn't fail in
situations like the above, IMO.
Agreed. But it hasn't failed yet ;-). It only fails if both smart
pointer objects are Disposed (automatically or not).
But you have the same problem with normal objects: you shouldn't Dispose
all references to the same object ;-).

So what would the code do:

object o1 = p1; // No reference count increase
object o2 = o1; // - " - same here

SmartPointer<MyObject> ptr =
(SmartPointer<MyObject>)o2;
// here it would increase the reference counter

Now the big question arises: who calls the smart pointer's Dispose
function, and when? At function level when the function is left; at
object level when the object holding the smart pointers is Disposed.
[...]

But you can't make it perform *and* cope with people using
interfaces/object references instead of SmartPointer references, IMO.
The same applies to all other languages with smart pointer classes. You
can always make mistakes, and smart pointers aren't the holy grail. In
fact I would reduce their usage to a minimum. But sometimes IMHO it would
be better to use reference counting than to rely on a library where I
don't know if the objects are properly Disposed, whether I have to
Dispose them, etc.

Think of multiple threads dealing with the same file. Which thread will
dispose the file object? The last one terminating. For this you need some
kind of reference counting, or you implement some kind of thread counter,
which effectively is the same. With a smart pointer, every thread would
call Dispose on its smart pointer and the last one would automatically
Dispose the file object.
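Roughly like this quick sketch (RefCountedFile is a name I made up; the
Interlocked calls keep the count thread safe):

using System;
using System.IO;
using System.Threading;

// Each thread holds its own "reference": Dispose decrements the shared
// count, and the thread that brings it to zero closes the file.
sealed class RefCountedFile : IDisposable
{
    readonly FileStream stream;
    int count = 1; // the creating thread owns the first reference

    public RefCountedFile(string path)
    {
        stream = new FileStream(path, FileMode.Open, FileAccess.Read);
    }

    public FileStream Stream { get { return stream; } }

    public RefCountedFile AddRef()
    {
        Interlocked.Increment(ref count);
        return this;
    }

    public void Dispose()
    {
        if (Interlocked.Decrement(ref count) == 0)
            stream.Dispose();
    }
}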
If it were simple to achieve, don't you think the CLR team would have
thought of it? See http://blogs.msdn.com/brada/pages/371015.aspx for
some evidence that they've thought quite hard about this topic.
I don't deny that they have thought about it, but I think the discussion
they had was about the basic GC implementation and about what's better:
a Dispose pattern or reference counting in general.

Not about adding some syntax elements that would allow me to additionally
use reference counting if I need to.

In fact I do it already, but it would be nice if the CLR supported it
natively via an attribute.

GC helps a lot where in native languages smart pointers must be used.
In C#, or managed code generally, there are only a few cases where they
are needed - for handling native resources.
So currently there's no big pressure for me to have this feature; it
would only be nice to have (not only in C++ but in C# too) ;-)

Reference counting is dangerous and can lead to memory leaks too. But
IMHO the Dispose pattern introduces similar problems.

Andre
Jan 20 '07 #48

P: n/a
Jon Skeet [C# MVP] wrote:
Andre Kaufmann <an*********************@t-online.de> wrote:
[...]
See my other post.
appropriately. If you've got to know that anyway, why not just use the
using statement?
3 examples where using doesn't help:
------------------------------------
1. Multiple threads are dealing with the same object
holding a native resource. The last thread terminating
should call Dispose. How do I know which one is the
last one? I count them. Effectively I do reference counting ;-)

2. The other one, for which I really like reference counting.

I have multiple lists. Each of them holds the same object,
and when the object is removed from the last list it shall be
disposed.

How can I check that the object isn't held anymore? Effectively
I would either use some kind of counting or check all the lists
to see if the object has already been removed. But IMHO that's
not a good solution.

3. I have a function returning an object. The object holding the
returned object is Disposed - which object is responsible for
Disposing? Which of the other objects is still holding my
returned one?
Andre
Jan 20 '07 #49

P: n/a
Andre Kaufmann wrote:
3) ! The killer argument ;-9

Delphi implements reference counted objects. InterfacedObjects

AFAIK this Delphi code
compiles directly without a modification
to .NET 1.1 ? How can this be ?
Delphi .NET interfaces aren't ref-counted. In fact, this can be a
compatibility problem.

Even if they were ref-counted in Delphi .NET, Delphi can't control what
other languages do with the references. The same problem occurs in any
language that targets the CLR: you can only add these kinds of features,
features which expect all other users to play along, either to the core
CLR, or make that portion of the language incompatible with other CLR
languages.

-- Barry

--
http://barrkel.blogspot.com/
Jan 21 '07 #50
