Bytes | Software Development & Data Engineering Community

Destructor: not guaranteed to be called?

I'm programming in VS C++ .NET 2005 using /clr:pure syntax. In my code I have
a class derived from Form that creates an instance of one of my custom
classes via gcnew and stores the pointer in a member. However, I set a
breakpoint at the destructor of this instance's class and it was never
called! I can see how it might not get called at a deterministic time. But
NEVER?

So, I guess I need to know the rules about destructors. I would have thought
any language derived from C++ would always guarantee that the destructor of an
instance of a class be called at some time, especially if created via
[gc]new and stored as a pointer.

Yes, I think I can deterministically destruct it via 'delete' and setting the
pointer to nullptr. But it still kind of freaks me out that the destructor is no
longer guaranteed to EVER be called. I feel like I should be worried, since
it is sometimes important to do things besides freeing up memory in a
destructor. In my case I discovered it because I'm communicating through a
serial port whose baud rate I change from the current speed and then
change back in the destructor - only to find out the destructor was
NEVER called! Hence, the port died, and MY program wouldn't work on
subsequent runs, since it assumed the port had been returned to the same baud
rate (and hence couldn't communicate with it anymore).

So, again, why is the destructor no longer guaranteed to be called, and what
are these new rules? Or am I being ignorant, and C++ never made such
assurances? Inquiring minds want to know! : )

[==P==]
Jan 31 '06 #1
Peter Oliphant wrote:
I'm programming in VS C++ .NET 2005 using /clr:pure syntax. In my code
I have a class derived from Form that creates an instance of one of
my custom classes via gcnew and stores the pointer in a member.
However, I set a breakpoint at the destructor of this instance's
class and it was never called! I can see how it might not get
called at a deterministic time. But NEVER?

So, I guess I need to know the rules about destructors. I would have
thought any language derived from C++ would always guarantee the
destructor of an instance of a class be called at some time,
especially if created via [gc]new and stored as a pointer.
Why would you think that when C++ makes no similar guarantee for pure native
C++? The destructor for an object on the heap is called when and if you
call delete on a pointer to that object. The situation is no different for
C++/CLI with respect to the destructor (which is IDisposable::Dispose for
C++/CLI).

Yes, I think I can deterministically destruct it via 'delete' and
setting the pointer to nullptr. But it still kind of freaks me out that the
destructor is no longer guaranteed to EVER be called. I feel like I
should be worried, since it is sometimes important to do things
besides freeing up memory in a destructor. In my case I discovered it
because I'm communicating through a serial port whose baud rate I change
from the current speed and then change back in the
destructor - only to find out the destructor was NEVER called! Hence,
the port died, and MY program wouldn't work on subsequent runs, since
it assumed the port had been returned to the same baud rate (and hence
couldn't communicate with it anymore).
So, again, why is the destructor no longer guaranteed to be called,
and what are these new rules? Or am I being ignorant, and C++ never
made such assurances? Inquiring minds want to know! : )


They're not new rules - it's the nature of objects on the heap. For managed
objects on the GC heap, the Finalizer MAY be called if you don't delete the
object, but the CLR doesn't guarantee that finalizers are ever called
either.

-cd
Jan 31 '06 #2
Hi Carl!
Why would you think that when C++ makes no similar guarantee for pure native
C++? The destructor for an object on the heap is called when and if you
call delete on a pointer to that object. The situation is no different for
C++/CLI with respect to the destructor (which is IDisposable::Dispose for
C++/CLI).


Just as an addition:

See: Destructors and Finalizers in Visual C++
http://msdn2.microsoft.com/en-us/library/ms177197.aspx

Also be aware that the destructor might be called even if the
constructor has thrown an exception!
See also: http://blog.kalmbachnet.de/?postid=60
--
Greetings
Jochen

My blog about Win32 and .NET
http://blog.kalmbachnet.de/
Jan 31 '06 #3
/rant on

I'm sorry, but this is VERY new info to me, and I've been doing OOP for
about 15 years! Personally, I think it goes against the whole concept of a
destructor. Why bother to ever create one if there is no guarantee it will
be called? To me (IMHO), OOP should have this pact with the programmer: the
constructor is to set up the creation of an instance, and the destructor is for
clean-up. Thus, the destructor should be guaranteed to be called SOMETIME,
at the very latest at application exit. Otherwise I feel the C++ language is
at fault for anything my destructor was meant to make sure wasn't left in a
bad state, since THAT's what I wrote the destructor for, and thought it was
responsible for making sure eventually happened.

Let me make this clear. I have always realized that when GC wasn't in play,
if I created something (then via 'new') I had to destruct it manually
to avoid memory leaks. That is, it was never guaranteed the destructor would
be called unless I invoked it via a delete call. But, with the introduction
of GC, anything created as a gc object shouldn't need to be destructed
manually, as the application is supposed to keep track of whether something
is still being used by anyone before GC destroys it. But I always assumed
it would destroy it the same way one would manually destroy it: by calling
its destructor. Could someone explain to me why NOT calling the destructor
when GC destroys the object would EVER be a GOOD thing?

What I see emerging is this. GC was created to help with the fact that
destruction of an object is tough to do when who 'owns' it is unclear, or
when it is unclear whether everyone's done with it. This caused memory leaks
in the case that 'nobody' took final responsibility (or couldn't, based on
the info available). But the solution to this is now generating another
issue: lack of reliable destruction! Destruction is now not guaranteed at
any time you don't specifically delete the object. BUT WAIT! The whole point of GC
was to AVOID having to know when to delete. So, if we are forced to
delete to ensure the destructor gets run, then what did we gain from
introducing GC? That is, if we now still have to delete at the right time,
this implies we know the instance is free to be destroyed. Thus, we lose the
advantage we got. Or more precisely, we have added complication that
introduces more possible pitfalls, and we are STILL required to tell the
application when to destroy something if we want our destructors to have any
reliable meaning!

Further, this is now causing additional problems. I reported a very nasty
bug via the feedback center. The bug is this: try creating two
classes, both ref. Now create 142 stack-semantic variables of one class in
the other. Oh yeah, be sure the classes have destructors. Put ZERO code in
these classes. Guess what? It won't compile, and will return a 'program too
complex' error! It further explains it can't build the destructor. Now,
comment out the destructor in the class the 142 instances are based on. NOW
it compiles! So, they have introduced complexity to such a point with the
way destructors are handled that it can't cope past 142 members! I
don't see that as progress...

And, again possibly showing my ignorance, when did finalizers come into
play? Is this part of the C++ standard?

Basically, I think things have gotten so complicated in this destructor area
that we have just traded one set of problems for another. If I can't rely on
the code I write specifically for the purpose of tidying things up ever
getting called, it ain't my fault if stuff isn't returned back to normal once
my code is done running. Heaven forbid anyone put the 'return the system
back' code in the destructor of an application based on a single class...
; )

/rant off

Ok I feel much better now... lol

[==P==]

PS - here is link to bug I reported:

http://lab.msdn.microsoft.com/produc...0-44bd02f398c6

"Carl Daniel [VC++ MVP]" <cp*****************************@mvps.org.nospam >
wrote in message news:ef**************@TK2MSFTNGP11.phx.gbl...
Peter Oliphant wrote:
I'm programming in VS C++ .NET 2005 using /clr:pure syntax. In my code
I have a class derived from Form that creates an instance of one of
my custom classes via gcnew and stores the pointer in a member.
However, I set a breakpoint at the destructor of this instance's
class and it was never called! I can see how it might not get
called at a deterministic time. But NEVER?

So, I guess I need to know the rules about destructors. I would have
thought any language derived from C++ would always guarantee the
destructor of an instance of a class be called at some time,
especially if created via [gc]new and stored as a pointer.


Why would you think that when C++ makes no similar guarantee for pure
native C++? The destructor for an object on the heap is called when and
if you call delete on a pointer to that object. The situation is no
different for C++/CLI with respect to the destructor (which is
IDisposable::Dispose for C++/CLI).

Yes, I think I can deterministically destruct it via 'delete' and
setting the pointer to nullptr. But it still kind of freaks me out that the
destructor is no longer guaranteed to EVER be called. I feel like I
should be worried, since it is sometimes important to do things
besides freeing up memory in a destructor. In my case I discovered it
because I'm communicating through a serial port whose baud rate I change
from the current speed and then change back in the
destructor - only to find out the destructor was NEVER called! Hence,
the port died, and MY program wouldn't work on subsequent runs, since
it assumed the port had been returned to the same baud rate (and hence
couldn't communicate with it anymore).

So, again, why is the destructor no longer guaranteed to be called,
and what are these new rules? Or am I being ignorant, and C++ never
made such assurances? Inquiring minds want to know! : )


They're not new rules - it's the nature of objects on the heap. For
managed objects on the GC heap, the Finalizer MAY be called if you don't
delete the object, but the CLR doesn't guarantee that finalizers are ever
called either.

-cd

Jan 31 '06 #4
Peter Oliphant wrote:
Personally, I think it is against the whole
concept of a destructor.

I agree: the point is that there are NO destructors in .NET!!! There are
finalizers, which are a different beast. CLI "destructors" have been mapped
to finalizers as best as MS could (generating code that implements
IDisposable, etc...), but this is in no way a native C++ destructor.

Let me make this clear. I have always realized that when GC wasn't in
play that if I created something (then via 'new') I had to destruct
it manually to avoid memory leaks. That is, it was never guaranteed
the destructor would be called unless I invoked it via a delete call.
But, with the introduction of GC, anything created as a gc object
shouldn't need to be destructed manually, as the application is
supposed to keep track of whether something is being used anymore by
anyone before GC destroys it.

The GC is asynchronous, and you're never sure it will execute a finalizer for
a given object (not the destructor, mind you, since it doesn't exist - the
finalizer!).
The other point is that, since you don't know in which order finalizers are
run, you can't reference any external object from within a finalizer, so
you're really very limited in what you can do within them.

The whole point of the IDisposable interface is to circumvent this
limitation of the GC, although it is still an inferior solution compared to
the native, synchronous C++ destructor, IMHO.
What I see emerging is this. GC was created to help with the concept
that destruction of an object is tough to do when who 'owns' it is
unclear, or when it is unclear whether everyone's done with it. This
caused memory leaks in the case that 'nobody' took final
responsibility (or couldn't based on the info available). But the
solution to this is now generating another issue. Lack of reliable
destruction!

Agreed. There is NO destruction in .NET (nor in Java).

Destruction is now not guaranteed at any time you don't
specifically delete it. BUT WAIT! The whole point of GC was to AVOID
having to know when to do delete. So, if we are forced to do delete
to ensure the destructor gets run, then what did we gain from
introducing GC?

No more memory leaks... The main reason for GC is to avoid raw memory leaks,
not to get a better model for logical destruction of objects.
That is, if we now still have to delete at the right
time, this implies we know the instance is free to be destroyed.

More precisely, we have to *Dispose* the object at the right time...

Thus, we lose the advantage we got. Or more precisely, we have added
complication that introduces more possible pitfalls, and we are STILL
required to tell the application when to destroy something if we want
our destructors to have any reliable meaning!

Yep. I don't believe anyway that the computer will ever be able to *guess*
what the programmer wants, so there will always have to be a manual indication
of when an action must be done (including destruction/finalization/release of
a resource).
And, again possibly showing my ignorance, when did finalizers come
into play? Is this part of the C++ standard?

No, they are part of the .NET standard. They are a very central feature of
.NET, and you should read up to get a firm grasp on the subtle
differences between destructors and finalizers.

To make the story short, a finalizer is an optional member function that is
possibly called (if it exists!) by the GC some time before the GC reclaims
the object's memory and after the last reference to the object has been
released. You've got no guarantee at all on the order in which finalizers
for different objects execute.
Basically, I think things have gotten so complicated in this
destructor area that we have just traded one set of problems for
another.

Possible. Another explanation is perhaps that you didn't master the differences
between finalizers and destructors, and you expected something of the system
without taking care to check in the documentation whether your
expectations were justified.

Arnaud
MVP - VC

PS: IMHO, the Java, C# and Managed C++ choice of using the C++ destructor
syntax (~ClassName) to express the finalizer is a bad mistake that led many
developers into misconceptions of that kind.
Jan 31 '06 #5
It seems like the discussion has come to recognize the difference between
finalizers and destructors. The former is non-deterministic and loosely
coupled, whereas the latter is deterministic.

I do think there is a misunderstanding of the differences between
destructors in managed code and destructors in native code. While there are
differences, the discussion here hasn't highlighted any of them.

Arnaud Debaene wrote:
I agree: the point is that there are NO destructors in .NET!!! There are
finalizers, which are a different beast. CLI "destructors" have been
mapped to finalizers as best as MS could (generating code that implements
IDisposable, etc...), but this is in no way a native C++ destructor.
It's unfortunate that C# decided to use the tilde syntax for finalizers, and
even more unfortunate that the old Managed C++ syntax did the same thing.
However, the CLR makes no mention of destructors... so there's no real
mapping to do. Destructors are a language-level implementation, not a
runtime issue.
The whole point of the IDisposable interface is to circumvent this
limitation of the GC, although it is still an inferior solution
compared to the native, synchronous C++ destructor, IMHO.
I'm curious how IDisposable presents an inferior solution. From my
perspective as a language designer, I see IDisposable as the implementation
detail for destructors in C++. Really, you don't have to know anything about
IDisposable to use destructors in C++/CLI. To me, the biggest limitation
imposed on destructors as a result of IDisposable is that all destructors
are public and virtual. I actually think that's a good thing, and it's a
mistake that unmanaged C++ allows destructors to be anything else.
Agreed. There is NO destruction in .NET (nor in Java).
The premise of this statement is flawed. Destruction is a language-level
service, because only the language can determine when it is appropriate to
deterministically clean up objects. Why? Because the programmer needs to be
involved - otherwise you deal with the infamous halting problem. The CLR is
a collection of services that can be supplied to a running program. As long
as we're dealing with Turing machines, the CLR will never be able to provide
deterministic cleanup as a service.

So, that means deterministic cleanup must be moved to the language level.
The best way to accomplish that and maintain a sense of cross-language
functionality was to create a common API. That was IDisposable. From there,
it's a matter of how the languages treat destruction semantics. C++/CLI does
everything that unmanaged C++ does, including automatic creation of
destructors when embedded types have destructors.
No more memory leaks... The main reason for GC is to avoid raw memory
leaks, not to get a better model for logical destruction of objects.


While GC is primarily about memory leaks, I would argue it serves to do much
more. C++ is inherently not type safe because it allows for things like use
of an object after delete. GC in the context of a language like C++ is the
only way to achieve type safety.

Also, if you are truly using Object Oriented Programming, objects will
represent resources like files, network connections, UI, etc. This means
that memory has a direct correlation to other resources, so GC has the
potential to cleanup a lot more than just memory.

Lastly, deterministic cleanup is really bad at cleaning up in certain
situations. A frequent example is shared resources that form a dependency
cycle. The impact of reference counting is well understood, and all of the
practices applied to unmanaged C++ frequently result in fragile programs. In
situations like these, garbage collection is the best solution. The problem
that usually results is programmers don't adapt to a different environment,
and instead try to contort deterministic practices to a non-deterministic
environment.

The short story... writing robust code still requires smart people thinking
solutions all the way through.

--
Brandon Bray, Visual C++ Compiler http://blogs.msdn.com/branbray/
Bugs? Suggestions? Feedback? http://msdn.microsoft.com/productfeedback/
Jan 31 '06 #6
Brandon Bray [MSFT] wrote:
It's unfortunate that C# decided to use the tilde syntax for finalizers, and
even more unfortunate that the old Managed C++ syntax did the same thing.

Well, that's one point on which we agree ;-)
The whole point of the IDisposable interface is to circumvent this
limitation of the GC, although it is still an inferior solution
compared to the native, synchronous C++ destructor, IMHO.


I'm curious how IDisposable presents an inferior solution. From my
perspective as a language designer, I see IDisposable as the
implementation detail for destructors in C++. Really, you don't have
to know anything about IDisposable to use destructors in C++/CLI. To
me, the biggest limitation imposed on destructors as a result of
IDisposable is that all destructors are public and virtual. I
actually that's a good thing, and it's a mistake that unmanaged C++
allows destructors to be anything else.


I was thinking more about C#'s "raw" implementation of IDisposable (where the
compiler generates neither the Dispose method nor the call to Dispose in
client code), because in this model it becomes the responsibility of the
client of an object to free the internal resources held by the object, by
explicitly calling Dispose or through the "using" keyword.
On this point, C++/CLI stack semantics are a huge step in the right direction
WRT Managed C++ / C#.
Agreed. There is NO destruction in .NET (nor in Java).
The premise of this statement is flawed. Destruction is a language
level service, because only the language can determine when it is
appropriate to deterministically cleanup objects. Why? Because the
programmer needs to be involved - otherwise you deal with the
infamous halting problem. The CLR is a collection of services that
can be supplied to a running program. As long as we're dealing with
Turing Machines, the CLR will never be able to provide deterministic
cleanup as a service.


I agree, but I think we must go a step further: what is generally called
"destruction" is in fact a 2-part process:

1) Logical destruction, which corresponds to the user code in the
destructor/finalizer function. To be most useful, this operation should be
synchronous with the release of the last reference to the object (i.e., a
stack object goes out of scope, or a heap object is not referenced anymore or
is deleted in native C++), because it allows one to implement the RAII idiom
and therefore makes it much easier to write exception-safe code.
<troll - well perhaps not THAT troll>I would argue that it's almost
impossible to write non-trivial exception-safe code without the RAII idiom
</troll>.

2) Automatic freeing of resources (mainly memory), which can be done
automagically and asynchronously by a GC.

Both native C++ and .NET collapse those 2 distinct operations into one
concept (the destructor or the run-by-the-GC finalizer), whereas IMHO they
should be more clearly separated. Again, CLI stack semantics with automatic
implementation of IDisposable and an automatic call to Dispose are the right
answer IMHO.
So, that means deterministic cleanup must be moved to the language
level. The best way to accomplish that and maintain a sense of
cross-language functionality was to create a common API. That was
IDisposable. From there, it's a matter of how the languages treat
destruction semantics. C++/CLI does everything that unmanaged C++
does, including automatic creation of destructors when embedded types
have destructors.

Yes, that is the intent. I am not sure, however, that stack semantics can be
used in all cases to implement the RAII idiom. Well, I suspect one could
declare small "ref struct"s. The real problem of course is that this is
unusable from C# or VB.NET.

Nonetheless, I see your point about the fact that a common API (IDisposable)
was the best bet to tackle the problem in a language-neutral manner. Too bad
that other languages (C#, VB.NET) took the easy and wrong road of leaving
the responsibility for the Dispose call in the client's hands.
Anyway, as a C++ bare-to-the-metal-performance fan (sarcasm...), I still
regret that IDisposable must go through a virtual call overhead.
No more memory leaks... The main reason for GC is to avoid raw memory
leaks, not to get a better model for logical destruction of objects.


While GC is primarily about memory leaks, I would argue it serves to
do much more. C++ is inherently not type safe because it allows for
things like use of an object after delete. GC in the context of a
language like C++ is the only way to achieve type safety.

Well, I would not call the danger of dereferencing a dangling pointer "type
safety", but I take your point (for me, "type safety" is about the danger of
an incorrect cast that may run unnoticed).
Also, if you are truly using Object Oriented Programming, objects will
represent resources like files, network connections, UI, etc. This
means that memory has a direct correlation to other resources, so GC
has the potential to cleanup a lot more than just memory.

If you use the IDisposable pattern, yes. The finalizer is much more limited
in what you can do, since you can't reference another object from within a
finalizer. The problem is that most developers know about finalizers (which
they think of as destructors), but don't know about IDisposable, or are
unaware of the stack semantics.
Lastly, deterministic cleanup is really bad at cleaning up in certain
situations. A frequent example is shared resources that form a
dependency cycle. The impact of reference counting is well
understood, and all of the practices applied to unmanaged C++
frequently result in fragile programs.

Agreed. Let's say the ideal solution to this problem still remains to be
invented ;-)
The problem that usually
results is programmers don't adapt to a different environment, and
instead try to contort deterministic practices to a non-deterministic
environment.

Yes, but implementors don't make our life easier when they use the same
syntax for finalizers and destructors!
The short story... writing robust code still requires smart people
thinking solutions all the way through.

Amen...

Arnaud
MVP - VC
Jan 31 '06 #7

"Peter Oliphant" <po*******@RoundTripInc.com> skrev i meddelandet
news:u9**************@TK2MSFTNGP10.phx.gbl...
/rant on
Let me make this clear. I have always realized that when GC wasn't
in play that if I created something (then via 'new') I had to
destruct it manually to avoid memory leaks. That is, it was never
guaranteed the destructor would be called unless I invoked it via a
delete call. But, with the introduction of GC, anything created as a
gc object shouldn't need to be destructed manually, as the
application is supposed to keep track of whether something is being
used anymore by anyone before GC destroys it. But I always assumed
it would destroy it the same way one would manually destroy it, by
calling its destructor. Could someone explain to me why NOT calling
the destructor upon GC destroying the object would EVER be a GOOD
thing?


Yes! :-)

GC is *not* destroying the object, it is reclaiming the memory space.

The object really lives forever, but its memory space can be reclaimed
when the object cannot be reached anymore.
Bo Persson
Jan 31 '06 #8
>Possible. Another explanation is perhaps that you didn't master the
differences between finalizers and destructors, and you expected something
of the system without taking care to check in the documentation whether
your expectations were justified.

I agree, but there's a problem. You see, how do you know when a change has
been made, or what the new features are, or whether something exists that
solves your problem in VS C++ .NET? Please don't tell me this info is easily
obtained.

MSDN2? You mean tens of thousands of pages of doco with an inferior search
engine and everything in alphabetical order? With MSDN2 you have to
basically know the answer to look it up (like a dictionary's weakness: you
have to know how a word is spelled to look up how it is spelled). It therefore
becomes a guessing game. What do I suppose MS named this feature? And new
features get lost in tens of thousands of pages of doco (and the what's-new
area is VERY skimpy).

Another problem is that there is no convention as to what is made into a
'method' and what is made into a 'property'. Often, changing a property is
like a method (i.e., to change visibility you change the Visible property;
there are no SetVisible() and SetInvisible() functions, which would of course
be another logical way to do this), and many methods are the equivalent of
properties (they return a state but have no effect). Add the fact that the
stuff is not organized by functionality, and you end up with the situation
that if you want to be sure you are doing the right thing, you must read
EVERYTHING. Also, MS often leaves old pages up with old info, so one can
even try to look things up and end up with disinformation, especially since
the MSDN2 search engine will, without warning, vector you over to the old
MSDN side. And, IMHO, the MSDN2 doco is written by people so well versed in
the subject that they seem to forget they actually need to explain it (they
explain it tautologically, a la 'an integer variable stores an integer'). Or
they explain it in a misleading way. For example, there is a page in MSDN2
that says the following:

http://msdn.microsoft.com/library/de...m/impdf_37.asp

" A variable declared as enum is an int."

Now, if I said the variable X is an int, you would expect to be able to
store an int in it, yes? But an ENUM variable will return an error if you
try to store an int in it (e.g., enum_var = int(1) is an error). This needs
more explanation, but that is the ENTIRE explanation (look at the link).
Also, look at this page describing the new SerialPort class:

http://msdn2.microsoft.com/en-us/lib...erialport.aspx

Note the detailed description of the sample code for this class. It talks
about how it is a null modem application, and even says you need two systems
to see it in full swing. Only one problem: MS forgot to put the sample code
on the page! Now, I've reported this here, and reported this in the Feedback
area. Two months later, still no sample code. How long would this take, what,
5 minutes, to correct?

And THIS is what I'M supposed to get my knowledge of VS C++ .NET from?
<shiver>

Let's take the point at hand. How was I supposed to find out about these new
rules regarding destructors and finalizers? Why shouldn't I assume that an
UPGRADE would maintain ALL previous functionality and possibly add onto it?
Changes, IMHO, violate the concept of UPGRADE. They should call VS C++ .NET
something like C+++ (3 +'s) to make sure we are clear we need to learn all
its details, since if you assume it will behave like standard C++ you might
find yourself chasing bugs that are actually features!

There is just TOO much info regarding VS C++.NET. This is why when you say:
No, they are part of the .NET standard. They are a very central feature of
.NET, and you should read up to get a firm grasp on the subtle
differences between destructors and finalizers.
The reason is not lack of desire or ability; it is lack of knowing such info
exists or that changes were made in the first place. One only 'discovers'
these things when code stops working: when you do what used to work and now
doesn't. Then the only recourse is to ask in forums like this what to do,
sounding like a complete buffoon and coming across like a total amateur
(even though I have over 35 years of programming experience).

The real annoyance, though, comes when you point out a bug in the language
and the response is that it can't be changed because it would 'violate the
C++ standard'. How is that even close to a justification when VS C++ .NET
violates the standard whenever it sees fit in most areas? For example,
did you know that if you apply the ToString() method to a Char[] it returns
the EXACT SAME STRING every time, something like "Char[]"? I
reported this, and they said this couldn't be changed since it would violate
the standard and might break someone's existing code. HUH? Who in HELL
depends on this behavior?

Oh well, so it goes...

[==P==]

"Arnaud Debaene" <ad******@club-internet.fr> wrote in message
news:eq**************@TK2MSFTNGP09.phx.gbl...

Peter Oliphant wrote:
Personally, I think it is against the whole
concept of a destructor.

I agree: the point is that there are NO destructors in .NET!!! There are
finalizers, which are a different beast. CLI "destructors" have been mapped
to finalizers as best as MS could (generating code that implements
IDisposable, etc...), but this is in no way a native C++ destructor.

Let me make this clear. I have always realized that when GC wasn't in
play that if I created something (then via 'new') I had to destruct
it manually to avoid memory leaks. That is, it was never guaranteed
the destructor would be called unless I invoked it via a delete call.
But, with the introduction of GC, anything created as a gc object
shouldn't need to be destructed manually, as the application is
supposed to keep track of whether something is being used anymore by
anyone before GC destroys it.

The GC is asynchronous, and you're never sure it will execute a finalizer
for a given object (not the destructor, mind you, since it doesn't exist;
the finalizer!).
The other point is that, since you don't know in which order finalizers
are run, you can't reference any external object from within a finalizer,
so you're really very limited in what you can do within them.

The whole point of the IDisposable interface is to circumvent this
limitation of the GC, although it is still an inferior solution compared
to the native, synchronous C++ destructor, IMHO.
What I see emerging is this. GC was created to help with the concept
that destruction of an object is tough to do when who 'owns' it is
unclear, or when it is unclear whether everyone's done with it. This
caused memory leaks in the case that 'nobody' took final
responsibility (or couldn't based on the info available). But the
solution to this is now generating another issue. Lack of reliable
destruction!

Agreed. There is NO destruction in .NET (nor in Java).
Destruction is now not guaranteed at any time you don't
specifically delete it. BUT WAIT! The whole point of GC was to AVOID
having to know when to do delete. So, if we are forced to do delete
to ensure the destructor gets run, then what did we gain from
introducing GC?

No more memory leaks... The main reason for GC is to avoid raw memory
leaks, not to get a better model for logical destruction of objects.
That is, if we now still have to delete at the right
time, this implies we know the instance is free to be destroyed.

More precisely, we have to *Dispose* the object at the right time...
Thus, we lose the advantage we got. Or more precisely, we have added a
complication that introduces more possible pitfalls, and we are STILL
required to tell the application when to destroy something if we want
our destructors to have any reliable meaning!


Yep. I do not believe anyway the computer will ever be able to *guess*
what the programmer wants, so there will always have to be a manual
indication of when an action must be done (including
destruction/finalization/release of a resource).
And, again possibly showing my ignorance, when did finalizers come
into play? Is this part of the C++ standard?

No, they are part of the .NET standard. They are a very central feature of
.NET, and you should document yourself to get a firm grasp on the subtle
differences between destructors and finalizers.

To make the story short, a finalizer is an optional member function that
is possibly called (if it exists!) by the GC some time before the GC
reclaims the object memory and after the last reference on the object has
been released. You've got no guarantee at all on the order on which
finalizers for different objects execute.
Basically, I think things have gotten so complicated in this
destructor area that we have just traded one set of problems for
another.

Possible. Another explanation is perhaps you didn't master the differences
between finalizers and destructors, and you expected something of the
system without taking care to check in the documentation whether your
expectations were justified.

Arnaud
MVP - VC

PS : IMHO, the Java, C# and Managed C++ choice of using the C++ destructor
syntax (~ClassName) to express the finalizer is a bad mistake that led
many developers into misconceptions of that kind.

Jan 31 '06 #9
Peter Oliphant wrote:
Possible. Another explanation is perhaps you didn't master the
differences between finalizers and destructors, and you expected
something of the system without taking care of checking in the
documentation wether your expectations were justified.
I agree, but there's a problem. You see, how do you know when a
change has been made, or what the new features are, or if something
exists that solves your problem in VS C++.NET? Please don't tell me
this info is easily obtained.


I don't say so. I say that, just as you have learned native C++, you should
learn C++/CLI, most probably from a book or a course if you find MSDN to be
too "dictionary-ish" (I agree with that).
Another problem is that there is no convention as to what is made
into a 'method' and what is made into a 'property'. Often, changing a
property acts like a method (i.e., to change visibility you change the
Visible property; there are no SetVisible() and SetInvisible()
functions, which would of course be another logical way to do this),
and many methods are the equivalent of properties (they return a
state but have no effect).

Well, for me, the difference between function and property is just syntactic
sugar; they are really the same and I don't see this as a problem.
And, IMHO, the MSDN2 doco is
written by people so well versed in the subject they seem to forget
they actually need to explain it
MSDN is a reference, just as a dictionary or an encyclopedia. It is not
meant to be a teaching tool (though, IMHO, it does quite a good job as a
teaching tool too, thanks to the many articles beside the mere
classes/functions/properties reference pages, but I agree it can be quite
difficult to find what you are looking for when you are not used to it).
Anyway, have you ever tried Linux man-pages or Oracle 800-page PDF
reference books before complaining about MSDN ;-)

<snip samples> And THIS is what I'M supposed to get my knowledge of VS
C++.NET from? <shiver>
Have you looked at MS Press books?

Anyway, MSDN2 was supposed to be a Beta version of the documentation for the
Beta version of Visual 2005 (that's the way I understood it at least).
Now that Visual 2005 is on the shelves, it doesn't seem that Visual 2005
specific stuff has been merged into the "main" MSDN site. I am not sure why,
and whether MSDN2 is there to stay as the definitive doc (it would be a bad
idea IMHO to have 2 different "release" MSDN sites), or if we are in a
transitory state.
Let's take the point at hand. How was I supposed to find out about
these new rules regarding destructors and finalizers?

They are not new: it was the same story in Managed C++, though the syntax
was different. The really new thing is the stack semantic.
Why shouldn't I
assume that an UPGRADE would maintain ALL previous functionality and
possibly add onto it?
Changes, IMHO, violate the concept of UPGRADE.
They should call VS C++.NET something like C+++ (3 +'s) to make sure
we are clear we need to learn all its details, since if you assume it
will behave like standard C++ you might find yourself chasing bugs
that are actually features!

Well, you know, they do call it C++/CLI for a purpose! Anyway, I agree that
MS nomenclature is (as most often) very confusing.

The reason is not lack of desire or ability, it is lack of knowing
such info exists or that changes were made in the first place. One
only 'discovers' these when code that used to work stops working.

Your only error was to expect that C++/CLI would react exactly as native C++.
There I must say that the MS commercial voodoo about "It Just Works", "simply
compile your old code and see it work like it used to" and the like is
misleading.
The real annoyance though comes when you point out a bug in the
language and the response is that it can't be changed because it
would 'violate the C++ standard'.

Huuu??? What are you speaking about here?
How is that even close to a
justification when VS C++.NET violates the standard whenever it sees
fit in most areas? For example, did you know that if you apply the
ToString() method to a Char[] it returns the EXACT SAME STRING
every time, something like "Char[]"? I reported this, and
they said this couldn't be changed since it would violate the standard
and might break someone's existing code. HUH? Who in HELL depends on
this behavior?


I don't understand you here... ToString is not part of the C++ standard! It
is the .NET standard (ECMA 335) that defines the ToString method, and it
seems consistent to me, as this specification says:

<quote ECMA 335> "default : System.Object.ToString is equivalent to calling
System.Object.GetType to obtain the System.Type object for the current
instance and then returning the result of calling the System.Object.ToString
implementation for that type.
Note : The value returned includes the full name of the type.
</quote>

and, on the other hand, ToString is not redefined for System.Array.
Therefore, according to usual override rules, the return value is as
expected.

Note : My quotes from the ECMA 335 standard are from the XML file defining
the BCL and available at
http://www.ecma-international.org/pu...ma-335-xml.zip.
See partition 4 of
http://www.ecma-international.org/pu...T/Ecma-335.pdf
for a description of this XML file.

Arnaud
MVP - VC
Feb 1 '06 #10
Arnaud Debaene wrote:
Brandon Bray [MSFT] wrote:
While GC is primarily about memory leaks, I would argue it serves to
do much more. C++ is inherently not type safe because it allows for
things like use of an object after delete. GC in the context of a
language like C++ is the only way to achieve type safety.


Well, I would not call "type safety" the danger of dereferencing a
dangling pointer, but I take your point (for me, "type safety" is
about the danger of an incorrect cast that may run unnoticed).


Surely though they're exactly the same thing. If a C++ pointer refers to an
object that's been deleted, that object no longer has a valid type - or
worse, it may now have a different type! Accessing that deleted object
through a dangling pointer is at its core no different than using
reinterpret_cast to convert from double to CString (and worse, results in
bugs that are far harder to find).

-cd

Feb 1 '06 #11
Brandon Bray [MSFT] wrote:
To me, the biggest limitation imposed on destructors as a result of
IDisposable is that all destructors are public and virtual. I actually
think that's a good thing, and it's a mistake that unmanaged C++ allows
destructors to be anything else.
If you make yourself familiar with the evolution of C++, you will find
that, like most language features, it is a carefully weighed design
decision and not a mistake at all. A destructor needs to be public if
and only if objects of the class in question are to be destroyed from
outside member functions of the class itself, its descendants, and
friends. Otherwise, it would be a mistake to make the destructor public.
A destructor needs to be virtual if and only if the class design
requires polymorphic deletion. Otherwise, it would be a mistake to make
the destructor virtual. Public virtual destructors may be appropriate in
many, but by no means in all scenarios. Enforcing it eliminates
perfectly legitimate design options. While Java and, in its wake, .NET
have a restrictive tradition where the omniscient platform/language
designer knows better than the lowly programmer, this is very much
against the spirit of Standard C++ (not "unmanaged C++", please; I, and
many others, take offence at this term).
While GC is primarily about memory leaks, I would argue it serves to
do much more. C++ is inherently not type safe because it allows for
things like use of an object after delete. GC in the context of a
language like C++ is the only way to achieve type safety.
What has lifetime management got to do with type safety? It seems to me
that you are mixing up two different issues here. C++/CLI (or any CLI
language, for that matter) allows for using an object after disposing
it. The only difference is that you will screw up at a higher level
(class logic, rather than memory management). If by type safety you mean
that it is impossible for programmers to screw up, then there is no such
thing as type safety.
The premise of this statement is flawed. Destruction is a language
level service, because only the language can determine when it is
appropriate to deterministically cleanup objects. Why? Because the
programmer needs to be involved - otherwise you deal with the infamous
halting problem. The CLR is a collection of services that can be
supplied to a running program. As long as we're dealing with Turing
Machines, the CLR will never be able to provide deterministic
cleanup as a service.
Agreed. However, this is a glaring contradiction to:
Also, if you are truly using Object Oriented Programming, objects will
represent resources like files, network connections, UI, etc. This
means that memory has a direct correlation to other resources, so GC
has the potential to cleanup a lot more than just memory.
There is a growing consensus that GC is basically unsuitable to clean up
scarce resources precisely because it is non-deterministic. The idea of
GC is based on memory as an ample resource. You only can afford to clean
up at indeterminate intervals because there is so much of it, and
because it is more or less uniform. Programs do not normally need to
allocate memory at specific addresses. However, you don't want a
particular socket or a particular mutex to be blocked because the
collector has not run. With finalizers that are not guaranteed to be
called at all, managing scarce resources via GC becomes impossible. GC
is good at managing memory, but it is not the panacea you make it sound
like.
Lastly, deterministic cleanup is really bad at cleaning up in certain
situations. A frequent example is shared resources that form a
dependency cycle.
Again, you are mixing up two entirely different things. It is ordinary
*reference counting*, not deterministic cleanup, that is bad at cleaning
up cycles. The two are not synonymous, the former is a particular
implementation of the latter. Besides, how common are cycles really?
Have you ever encountered an example of a network or file resource
cycle? For the special cases where cycles do occur, there are
well-tested techniques that can deal with them, such as
shared_ptr/weak_ptr. In most resource scenarios, exclusive ownership
suffices, and you don't even need reference counting.
The impact of reference counting is well understood, and all of
the practices applied to unmanaged C++ frequently result in fragile
programs.
In all respect, this is pure FUD. Fragile programs result from sloppy
design. Standard C++ has excellent support for reliable resource
management. As you write yourself:
The short story... writing robust code still requires smart people
thinking solutions all the way through.


To which I can only wholeheartedly agree. A collector can relieve you
from the tedious job of caring about memory, but it does not handle all
your resource problems, and it cannot do the thinking for you.

--
Gerhard Menzl

#dogma int main ()

Humans may reply by replacing the thermal post part of my e-mail address
with "kapsch" and the top level domain part with "net".
Feb 1 '06 #12
>> How is that even close to a
justification when VS C++.NET violates the standard whenever it sees
fit to in most areas. For example, did you know that if you apply the
ToString() method to a Char[] it returns with the EXACT SAME STRING
every time, and it's something like "Char[]". I reported this, and
they said this couldn't bechanged since it would violate the standard
and might break someone's existing code. HUH? Who in HELL depends on
this behavior?
I don't understand you here... ToString is not part of the C++ standard!
It is the .NET standard (ECMA 335) that define the ToString method, and it
seems consistent to me, as this specification says that :


Correct. Since ToString is not part of the C++ standard, it doesn't make much
sense to justify its broken behavior by claiming the standard requires it!
Now, check out this link:

http://lab.msdn.microsoft.com/produc...2-bb5065fd7c2b

Recapping, the ToString() function applied to a Char[] results in EXACTLY
the following string regardless of the contents of the Char[] :
"System.Char[]". MS claims that they will not 'fix' ToString to solve this,
but instead might create a new function to accomplish the more natural and
desired result. They say that changing the current behavior would be
'breaking'. That could only be true if enough people out there wrote code
that RELIES on this behavior. Who would rely on this behavior? That is what
I was asking above. It sounds like just an excuse NOT to change it, since
they don't think of it as important enough.

I reported this one, which they claim is also correct:

http://lab.msdn.microsoft.com/produc...b-d6a026770ad3

That is, the ToString function applied to a single 'char' does not return a
string of just the character, but a string of the decimal ASCII value of the
char. That is, '0'.ToString() = "48" instead of "0", since the ASCII value
for the character '0' is 48 (or 0x30 hex). Personally, I think of
ToString as a means to convert a variable to a string equivalent of the
*natural symbolic representation* of the variable it's given. I don't think
of the natural symbolic representation of a char as its ASCII value.
Especially since choosing DECIMAL is arbitrary. I could easily make a case
that if you did typically think of a char as its ASCII value, it should be
represented as a HEX value, not decimal. So this looks to me more like
a - WHOOPS! - ToString is taking the char as a byte value and using it that
way. Oh No! Oh well. Let's just call this correct and explain it as being
standard...

Let me put it this way. If it was discovered that the addition operation '+'
produced 3 when trying to add 1 and 1, does it make sense that the proper
way to fix this is to leave 1+1 = 3 but to invent a new sum operator for
which 1+1 = 2? Or does it make more sense to fix the CURRENT '+' operator so
it returns the proper sum value? Would it be a valid excuse to say that such
a change would be 'breaking', that is, that it is reasonable to assume some
people wrote code out there counting on the fact that adding 1 and 1 gives
3? Analogously, I think they should change ToString to behave in a natural
way, not give excuses as to why it won't be fixed...

Now don't get me wrong. I realize it is tough to make such changes, since you
don't dare release code with fixes before you check whether or not the 'fix'
has broken something else. But these responses from MS are more of the
nature of denial that there is even a problem, to the point of actually
justifying incorrect behavior as being standard and correct, or claiming that
fixing it would do more harm than good (breaking). That's what is the most
frustrating to me....

[==P==]

"Arnaud Debaene" <ad******@club-internet.fr> wrote in message
news:ue**************@TK2MSFTNGP15.phx.gbl... Peter Oliphant wrote:
Possible. Another explanation is perhaps you didn't master the
differences between finalizers and destructors, and you expected
something of the system without taking care of checking in the
documentation wether your expectations were justified.


I agree, but there's a problem. You see, how do you know when a
change has been made, or what the new features are, or if something
exists that solves your problem in VS C++.NET? Please don't tell me
this info is easily obtained.


I don't say so. I say that, just as you have learned native C++, you
should learn C++/CLI, most probably in a book or a course if you find MSDN
to be too "dictionnarish" (I agree with that).
Another problem is that there is no convention as to what is made
into a 'method' and what is made into a 'property'. Often, changing a
property is like a method (i.e., to change visibility change the
Visible property, there is no SetVisible() and SetInvisible()
functions, which would of course be another logical way to do this),
and many methods are the equivalent of properties (they return a
state but have no affect).

Well, for me, the difference between function and property is just
syntaxic sugar, they are really the same and I don't see this as a
problem.
And, IMHO, the MSDN2 doco is
written by people so well versed in the subject they seem to forget
they actually need to explain it


MSDN is a reference, just as a dictionnary or an encyclopedia. It is not
meant to be a teaching tool (through, IMHO, it do quite a good job as a
teaching tool too, thanks to the many articles beside the mere
classes/functions/properties reference pages, but i agree it can be quite
difficult to find what you are looking for when you are not used to it).
Anyway, have you ever tried Linux man-pages or Oracle 800 pages PDF
reference books before complaining about MSDN ;-)

<snip samples>
And THIS is what I'M suppose to get my knowledge of the VS C++.NET
from? <shiver>


Have you looked at MS Press books?

Anyway, MSDN2 was supposed to be a Beta version of the documentation for
the Beta version of Visual 2005 (that's the way I understood it at least).
Now that Visual 2005 is on the shelves, it doesn't seems that Visual 2005
specific stuff has been merged in the "main" MSDN site. I am not sure why
and if the MSDN2 is there to stay as the definitive doc (that would be a
bad idea IMHO to have 2 different "release" MSDN sites), or if we are in a
transitory state.
Let's take the point at hand. How was I suppose to find out about
these new rules regarding destructors and finalizers?

They are no new : It was the same story in Managed C++, through the syntax
was different. The really new thing is the stack semantic.
Why shouldn't I
assume that an UPGRADE would maintain ALL previous functionality and
possibly add onto it.
Changes, IMHO, violate the concept of UPGRADE.
They should call VS C++.NET someting like C+++ (3 +'s) to make sure
we are clear we need to learn all its details, since if you assume it
will behave like standard C++ you might find yourself chasing bugs
that are actually features!

Well, you know, they do call it C++/CLI for a purpose! Anyway, I agree
that MS nomenclature is (as most often) very confusing.

The reason is not lack of desire or ability, is is lack of knowing
such info exists or that changes were made in the first place. One
only 'discovers' these when code stops working when you do what use
to work and now doesn't.

Your only error was to expect that C++/CLI will react exactly as native
C++. There I must say that the MS commercial woodoo about "It Just Work",
"simply compile your old code and see it work like it used to" and the
like is misleading.
The real annoyance though comes when you point out a bug in the
language and the response it that it can't be changed because it
would 'violate the C++ standard'.

Huuu??? What are you speaking about here?
How is that even close to a
justification when VS C++.NET violates the standard whenever it sees
fit to in most areas. For example, did you know that if you apply the
ToString() method to a Char[] it returns with the EXACT SAME STRING
every time, and it's something like "Char[]". I reported this, and
they said this couldn't bechanged since it would violate the standard
and might break someone's existing code. HUH? Who in HELL depends on
this behavior?


I don't understand you here... ToString is not part of the C++ standard!
It is the .NET standard (ECMA 335) that define the ToString method, and it
seems consistent to me, as this specification says that :

<quote ECMA 335> "default : System.Object.ToString is equivalent to
calling System.Object.GetType to obtain the System.Type object for the
current instance and then returning the result of calling the
System.Object.ToString implementation for that type.
Note : The value returned includes the full name of the type.
</quote>

and, on the other hand, ToString is not redefined for System.Array.
Therefore, according to usual override rules, the return value is as
expected.

Note : My quotes from the ECMA 335 standard are from the XML file
definining the BCL and available at
http://www.ecma-international.org/pu...ma-335-xml.zip.
See partition 4 of
http://www.ecma-international.org/pu...T/Ecma-335.pdf
for a description of this XML file

Arnaud
MVP - VC

Feb 1 '06 #13
>To me, the biggest limitation
imposed on destructors as a result of IDisposable is that all destructors
are public and virtual. I actually think that's a good thing, and it's a
mistake that unmanaged C++ allows destructors to be anything else.

Wow! So you think that concrete types like string or complex should
have virtual destructors? IMHO, that's against the spirit of C++. We
should pay only for what we use.

Feb 1 '06 #14
Gerhard Menzl wrote:
Have you ever encountered an example of a network or file resource
cycle? For the special cases where cycles do occur, there are
well-tested techniques that can deal with them, such as
shared_ptr/weak_ptr. In most resource scenarios, exclusive ownership
suffices, and you don't even need reference counting.


I completely agree with that. The .NET framework aims to solve the
memory leak problem, and it probably does a good job at that. However,
it doesn't provide the tools needed to solve the deterministic resource
destruction problem, with the exception of C++/CLI. Even that is lacking
good library features, such as shared_ptr/weak_ptr, but that gap can be
filled by 3rd party vendors.

In C++ we routinely use constructs like vector<Resource>, and when the
container goes out of scope, it guarantees that all owned resources are
properly destructed. In a .NET List<Resource^> that is not what happens. The
stack syntax doesn't extend to containers, and there seems to be no
.NET library that implements some kind of a reference counted smart
pointer. And there's no .NET collection that "owns" the resources it
holds either. Those who routinely wrap unmanaged C++ code in C++/CLI
feel unsafe, especially when the wrapped assemblies are going to be used
from C# or VB.

Why do I care about this when the .NET framework has finalizers? Because
my unmanaged code uses far more memory than the managed part. There is
not enough garbage in the managed memory for the GC to kick in, at least
not before I run out of native memory. If I have to depend on the
finalizer to destroy my unmanaged objects, sooner rather than later the
native heap will be full.

Some will disagree with me, but this is how I view this issue, and
it concerns me. The .NET framework supports and uses exceptions
extensively, and C++ programmers know how dangerous it is to use
exceptions without proper support for deterministic destructors. It's
like walking on a minefield. I realize that C# has the "using" keyword,
which is the first step toward safe resource destruction, but it only
works for local objects, not for resources stored in a container. When
it comes to objects stored in collections, the .NET framework doesn't
provide a tool better than a vector<Resource*> in native C++. ISO C++
can be error prone, but at least it has the tools to be safe.

The worst thing is that many C# and VB programmers are not used to
dealing with destructors, because they live in a false sense of security
that the GC handles everything automatically, and when this attitude is
mixed with exceptions, it could cause a disaster. Mutexes will not be
unlocked. Files will not be closed. Native memory will not be reclaimed,
because of the lack of destructors that are guaranteed to be called.

Tom
Feb 1 '06 #15
Peter Oliphant wrote:
So, I guess I need to know the rules about destructors. I would have thought
any language derived from C++ would always guarantee the destructor of an
instance of a class be called at some time, especially if created via
[gc]new and stored as a pointer.


Both native C++ and C++/CLI guarantee that the destructor is called
with, and only with, the stack syntax.

{
    NativeClass obj;
} // ~NativeClass() is guaranteed to be called automatically

{
    NativeClass * obj = new NativeClass;
} // dynamically allocated objects are not destroyed automatically

To protect dynamically created objects, you use a smart pointer:

{
    auto_ptr<NativeClass> obj(new NativeClass);
} // Yes, the object is deleted and its destructor runs automatically
Managed classes have the same behavior in C++/CLI. There is no auto_ptr
for ref classes yet, but it is expected to be available in STL.NET, to
be released soon.

Tom
Feb 1 '06 #16
Peter Oliphant wrote:
How is that even close to a
justification when VS C++.NET violates the standard whenever it sees
fit to in most areas. For example, did you know that if you apply
the ToString() method to a Char[] it returns with the EXACT SAME
STRING every time, and it's something like "Char[]". I reported
this, and they said this couldn't bechanged since it would violate
the standard and might break someone's existing code. HUH? Who in
HELL depends on this behavior?
I don't understand you here... ToString is not part of the C++
standard! It is the .NET standard (ECMA 335) that define the
ToString method, and it seems consistent to me, as this
specification says that :


Correct. Since ToString is not part of the C++ standard it doesn't
make much sense to justify broken behavior on its part as being
required because it IS standard behavior! Now, check out this link:


Well, ToString is not defined by the C++ standard, but it is defined by
another: the .NET standard (ECMA 335).
http://lab.msdn.microsoft.com/produc...2-bb5065fd7c2b

Recapping, the ToString() function applied to a Char[] results in
EXACTLY the following string regardless of the contents of the Char[]
: "System.Char[]". MS claims that they will not 'fix' ToString to
solve this, but instead might create a new function to accomplished
more natural and desired results. They say that changing the current
behavior would be 'breaking'. That could only be true if enough
people out there wrote code that RELIES on this behavior. Who would
rely on this behavior is what I was saying in the above?
I totally agree with the MS response to this query :

- the behaviour you observe is as required by the .NET standard (see my
previous post).

- it *may* break existing code, though unlikely.

- The fact that you require or expect a Char[] to act as a string is a sign
that you are still in a "C" way of thought. In OOP, a string is an object in
itself, and a char array is just an array; it has nothing to do with
strings. The fact that in C a string is a char[] is a kludge that has no
reason to exist in OOP.
It sounds
like just an excuse NOT to change it since they don't think of it as
important enough.

I actually think it's a good thing *not* to make this change, since it forces
people to adapt to the new, better, OOP paradigm, where a string is
represented by System::String and nothing else. You should forget your "C
guru" reflexes in .NET ;-)
Personally, I think of ToString as a means to convert a variable
to a string equivalent of the *natural symbolic representation* of
the variable it's given.

Yes, but in OOP, the natural symbolic representation of an array of char is
*not* a string, since an array of chars is *not* a string!
I don't think of the natural symbolic
representation of a char to be its ascii value.

There I agree with you : the choice for simple Char is strange.

Arnaud
MVP - VC
Feb 1 '06 #17
- The fact that you require or expect a Char[] to act as a string is a sign
that you are still in a "C" way of thought. In OOP, a string is an object in
itself, and a char array is just an array; it has nothing to do with
strings. The fact that in C a string is a char[] is a kludge that has no
reason to exist in OOP.

Maybe. I don't think I'm expecting Char[] to act like a string, but I believe
the natural and expected result of applying ToString to it would be to
return a concatenation of the characters in the array, in order, as one
string. Likewise, I don't think it's evidence that I think of an 'int' as a
string just because I feel applying ToString() to an 'int' should return a
string reflecting the decimal value of the integer stored.

But, to the point. Can you think of a good justification as to why, no
matter what the contents of a Char[] are, applying ToString to it
returns precisely this string and only this string every time:
"System.Char[]"? That is,

array<Char>^ char_array_1 = gcnew array<Char> { '0', '1', '2' } ;
array<Char>^ char_array_2 = gcnew array<Char> { 'A', 'B', 'C', 'D', 'E', 'F' } ;

assert( char_array_1->ToString() == "System.Char[]" ) ; // true
assert( char_array_2->ToString() == "System.Char[]" ) ; // true, in fact...
assert( char_array_1->ToString() == char_array_2->ToString() ) ; // true

Therefore, the only reason to apply ToString with respect to Char[] is if
you want to generate the CONSTANT string "System.Char[]". Wouldn't just
establishing a constant string with this value be easier? What good is a
function that no matter what you give it it always returns the same exact
value? Why even HAVE an input parameter? It's like a static function that
isn't static...

I feel this is broken and incorrect behavior. Your mileage may vary... ; )

[==P==]

"Arnaud Debaene" <ad******@club-internet.fr> wrote in message
news:uU*************@TK2MSFTNGP15.phx.gbl...
Peter Oliphant wrote:
How is that even close to a
justification when VS C++.NET violates the standard whenever it sees
fit to in most areas. For example, did you know that if you apply
the ToString() method to a Char[] it returns with the EXACT SAME
STRING every time, and it's something like "Char[]". I reported
this, and they said this couldn't be changed since it would violate
the standard and might break someone's existing code. HUH? Who in
HELL depends on this behavior?

I don't understand you here... ToString is not part of the C++
standard! It is the .NET standard (ECMA 335) that defines the
ToString method, and it seems consistent to me, as this
specification says that :


Correct. Since ToString is not part of the C++ standard it doesn't
make much sense to justify broken behavior on its part as being
required because it IS standard behavior! Now, check out this link:


Well, ToString is not defined by the C++ standard, but it is defined by
another: the .NET standard (ECMA 335).
http://lab.msdn.microsoft.com/produc...2-bb5065fd7c2b

Recapping, the ToString() function applied to a Char[] results in
EXACTLY the following string regardless of the contents of the Char[]
: "System.Char[]". MS claims that they will not 'fix' ToString to
solve this, but instead might create a new function to accomplish
more natural and desired results. They say that changing the current
behavior would be 'breaking'. That could only be true if enough
people out there wrote code that RELIES on this behavior. Who would
rely on this behavior? That is what I was asking above.


I totally agree with the MS response to this query :

- the behaviour you observe is as required by the .NET standard (see my
previous post).

- it *may* break existing code, though unlikely.

- The fact that you require or expect a Char[] to act as a string is a
sign that you are still in a "C" way of thought. In OOP, a string is an
object in itself, and a char array is just an array; it has nothing to do
with strings. The fact that in C a string is a char[] is a kludge that
has no reason to exist in OOP.
It sounds
like just an excuse NOT to change it since they don't think of it as
important enough.

I actually think it's a good thing *not* to do this change, since it forces
people to adapt to the new, better, OOP paradigm, where a string is
represented by System::String and nothing else. You should forget your "C
guru" reflexes in .NET ;-)
Personally, I think of the ToString as a means to convert a variable
to a string equivalent of the *natural symbolic representation* of
the variable it's given.

Yes, but in OOP, the natural symbolic representation of an array of char
is *not* a string, since an array of chars is *not* a string!
I don't think of the natural symbolic
representation of a char to be its ASCII value.

There I agree with you : the choice for simple Char is strange.

Arnaud
MVP - VC

Feb 1 '06 #18
Gerhard Menzl wrote:
If you make yourself familiar with the evolution of C++, you will find
that, like most language features, it is a carefully weighed design
decision and not a mistake at all.
This, unfortunately, will have to be a position on which we agree to disagree.
Many of the design decisions within C++ are based on historical precedent
and strict maintenance to a notion of backwards source compatibility. While
I understand many of the design decisions, I do consider many of them
mistakes in hindsight.

And while the flexibility afforded by allowing destructors to be non-public
can be convenient, it demonstrates a classic misuse in my mind... it would
be far more effective to introduce a real language feature that allowed the
desired behaviors. Because there is so much flexibility (and in my view,
misuse) of some features, it hampers the ability to do rigid analysis of the
program from both a human and automated perspective.
What has lifetime management got to do with type safety?
Everything. I speak of type safety from the "I can prove the program
mathematically obeys the separation of objects" perspective. Because an
object deleted frees memory, it allows the programmer to allocate another
object in the memory that the pointer still points to. That is what type
safety is meant to eradicate. Clearly, we all know using a pointer after the
object to which it points is deleted is a programming error... but so is a
buffer overrun. The language is not type safe unless it can rigidly prevent
that.
There is a growing consensus that GC is basically unsuitable to clean up
scarce resources precisely because it is non-deterministic.
While, I'm mostly in agreement... fifty years ago, there was growing
consensus that GC was too expensive for memory. The state of the art GC
works very well for memory today. The best that is available for other
resources only does a 1-to-1 mapping to memory... which probably isn't the
most efficient way to manage scarce resources. There's still ample
research to be done in this area.
Again, you are mixing up two entirely different things. It is ordinary
*reference counting*, not deterministic cleanup, that is bad at cleaning
up cycles.
I'm going to clearly state that I am not mixing up these things... I spend a
lot of time in the design issues here, so you can at least note that I do
know something. I stand behind what I said. Shared resources and
deterministic cleanup have very few options, and reference counting is by
far the most commonly used. If there are others, they aren't well formalized
and certainly not tied to a language level service.
In all respect, this is pure FUD. Fragile programs result from sloppy
design. Standard C++ has excellent support for reliable resource
management.


Unfortunately, I have to disagree again. Maintenance of applications that
grow to millions of lines of code is dramatically more expensive because
"techniques" are not good enough for automated proofs of program
correctness. And while Standard C++ has good support for certain ways to
reliably manage resources, it fails miserably in other areas. (Note, .NET
has similar issues -- it does incredibly well in some cases, and fails in
others.)

I recognize that this discussion really has few options other than to
diverge into overheated argument. I don't really have much more to say, but I
do appreciate your candor and your passion for C++.

Cheers!

--
Brandon Bray, Visual C++ Compiler http://blogs.msdn.com/branbray/
Bugs? Suggestions? Feedback? http://msdn.microsoft.com/productfeedback/
Feb 2 '06 #19
Nemanja Trifunovic wrote:
Wow! So you think that concrete types like string or complex should
have virtual destructors? IMHO, that's against the spirit of C++. We
should pay only for what we use.


I don't think virtual destructors and pay-for-what-you-use are incompatible.
The context in which C++ is built today makes them antagonistic to each
other, but that's just a design decision.

For instance, value types in C++/CLI have virtual functions, but don't have
any overhead for calling them because the class is sealed. Introducing whole
program analysis to C++ would also disentangle the two desires.

--
Brandon Bray, Visual C++ Compiler http://blogs.msdn.com/branbray/
Bugs? Suggestions? Feedback? http://msdn.microsoft.com/productfeedback/
Feb 2 '06 #20
"Peter Oliphant" <po*******@RoundTripInc.com> wrote in message
news:e3**************@TK2MSFTNGP09.phx.gbl...
Correct. Since ToString is not part of the C++ standard it doesn't make
much sense to justify broken behavior on its part as being required
because it IS standard behavior! Now, check out this link:

http://lab.msdn.microsoft.com/produc...2-bb5065fd7c2b

Recapping, the ToString() function applied to a Char[] results in EXACTLY
the following string regardless of the contents of the Char[] :
"System.Char[]". MS claims that they will not 'fix' ToString to solve
this, but instead might create a new function to accomplish more natural
and desired results. They say that changing the current behavior would be
'breaking'. That could only be true if enough people out there wrote code
that RELIES on this behavior. Who would rely on this behavior is what I
was saying in the above? It sounds like just an excuse NOT to change it
since they don't think of it as important enough.


There are a lot of programmers out there, and I wouldn't doubt someone
somewhere is relying on it somehow. To now implement Array.ToString() at the
risk of breaking code solely for functionality that can just as easily be
obtained through the String constructor isn't worth it in my opinion.
Feb 2 '06 #21
Peter Oliphant wrote:
- The fact that you require or expect a Char[] to act as a string is
a sign that you are still in a "C" way of thought. In OOP, a string is
an object in itself, and a char array is just an array; it has
nothing to do with strings. The fact that in C a string is a char[]
is a kludge that has no reason to exist in OOP.

Maybe. I don't think I'm expecting Char[] to act like a string, but
believe the natural and expected result of applying ToString to it
would return a concatenation of the characters in order in the array
as one string.
Ok, then go to the end of your reasoning: if Char[].ToString should return the
concatenation of the chars, then Int[].ToString should return the
concatenation of the ints, Float[].ToString should return the concatenation
of the floats (totally meaningless), and SomeObject[].ToString() should
return the concatenation of SomeObjects (probably totally meaningless
too)....
But, to the point. Can you think of a good justification as to why, no
matter what the contents of a Char[] are, that applying ToString to it
returns precisely this string and only this string every time:
"System.Char[]"?

Yes : coherence with all other arrays, see above...

Arnaud
MVP - VC
Feb 2 '06 #22
> But, to the point. Can you think of a good justification as to why, no
matter what the contents of a Char[] are, that applying ToString to it
returns precisely this string and only this string every time:
"System.Char[]"?
Yes : coherence with all other arrays, see above...

Then you believe that ToString applied to an Int[] should return this string
and only this string every time:

"System.Int[]" (which maybe it does, I don't know)

Similarly, a Float[] should always return "System.Float[]", etc. Then why
doesn't applying ToString to just an Int return this string every time:
"System.Int"? Seems a bit inconsistent to me. Why not just disallow
ToString from being applied to an array, instead of a constant response?
ToString has no valuable use for arrays anyway if that's all it does...

I guess we just have to agree to disagree on this. I personally don't feel
the purpose of a function called ToString would be to return a string
representing the Type of any array without any concern for its contents. I
feel it is incorrectly named when applied to arrays, and probably should
generate a compiler error when attempted (as has been mentioned, ToString()
has nothing to do with any C++ standard, so MS can choose how to deal with
ToString() as they please).

ToString() does work as I would expect on most entities: ints, floats,
etc. This in a sense was part of the problem. It worked as I assumed it
would work in most cases, so it took a while to realize it wasn't doing what
I expected in the case of giving it a 'char'. As previously discussed,
ToString doesn't return a string with the single character passed to it, but
rather a string representing the decimal ASCII value of the char; for
example, '0'.ToString() = "48" since the ASCII value for the character '0'
is 48. MS claims this is proper. I don't feel the same way. I expected that
'0'.ToString() = "0". Silly me...

But I guess that's because I'm still thinking in 'C' terms. I still think of
a 'char' as a variable used to represent a symbolic member of the extended
alphabet ('a'-'z', 'A'-'Z', '0'-'9', etc.). Apparently this is old thinking.
The 'new hotness' is that a 'char' is just a byte with an ASCII value in it.
Of course this means there is no natural way to convert a 'char' to a
String, or to insert one. Now that's progress... : )

[==P==]
"Arnaud Debaene" <ad******@club-internet.fr> wrote in message
news:uX**************@TK2MSFTNGP12.phx.gbl... Peter Oliphant wrote:
- The fact that you require or expect a Char[] to act as a string is
a sign that you are still in a "C" way of thought. In OOP, a string is
an object in itself, and a char array is just an array; it has
nothing to do with strings. The fact that in C a string is a char[]
is a kludge that has no reason to exist in OOP.

Maybe. I don't think I'm expecting Char[] to act like a string, but
believe the natural and expected result of applying ToString to it
would return a concatenation of the characters in order in the array
as one string.


Ok, then go to the end of your reasoning: if Char[].ToString should return
the concatenation of the chars, then Int[].ToString should return the
concatenation of the ints, Float[].ToString should return the
concatenation of the floats (totally meaningless), and
SomeObject[].ToString() should return the concatenation of SomeObjects
(probably totally meaningless too)....
But, to the point. Can you think of a good justification as to why, no
matter what the contents of a Char[] are, that applying ToString to it
returns precisely this string and only this string every time:
"System.Char[]"?

Yes : coherence with all other arrays, see above...

Arnaud
MVP - VC

Feb 2 '06 #23
Why would you think that when C++ makes no similar guarantee for pure native
C++? The destructor for an object on the heap is called when and if you
call delete on a pointer to that object. The situation is no different for
C++/CLI with respect to the destructor (which is IDisposable::Dispose for
C++/CLI).

I would think this for the following reasons. Let me make a real-world
analogy. I do the laundry. Hence, at the end of the process, I shut down by
folding all the clothes, putting them away, and turning off the lights. But
then a manager is appointed to the laundry room. He tells me I no longer need
to turn off the lights at the end of my laundry session since he will shut
down for me. This is a good thing since other people might want to use the
laundromat, and it's wasteful to keep turning the lights on and off.

So, what you're telling me is this. Why would I expect the manager to turn off
the lights when he closes the laundromat? Because he told me HE would be
responsible for shutting down! Further, I told him that when I did it
manually I turned off the lights. It's in my 'rules for shutting down'.

So, the reason I would expect the system to guarantee calling the
destructor when it GCs them is that when it took the responsibility for
shutting down it implied it would do it PROPERLY. Obviously the whole
purpose of a destructor is to 'shut down properly'.

Don't know if you have any kids, but consider this. If you usually vacuumed
and you gave this responsibility to your kid, would you or would you not
assume he would put the vacuum cleaner away when he is done, or would you
expect to have to do this yourself?

[==P==]

"Carl Daniel [VC++ MVP]" <cp*****************************@mvps.org.nospam >
wrote in message news:ef**************@TK2MSFTNGP11.phx.gbl...
Peter Oliphant wrote:
I'm programming in VS C++.NET 2005 using cli:/pure syntax. In my code
I have a class derived from Form that creates an instance of one of
my custom classes via gcnew and stores the pointer in a member.
However, I set a breakpoint at the destructor of this instance's
class and it was never called!!! I can see how it might not get
called at a deterministic time. But NEVER?

So, I guess I need to know the rules about destructors. I would have
thought any language derived from C++ would always guarantee the
destructor of an instance of a class be called at some time,
especially if created via [gc]new and stored as a pointer.


Why would you think that when C++ makes no similar guarantee for pure
native C++? The destructor for an object on the heap is called when and
if you call delete on a pointer to that object. The situation is no
different for C++/CLI with respect to the destructor (which is
IDisposable::Dispose for C++/CLI).

Yes, I think I can deterministically destruct it via 'delete' and
setting to nullptr. But the point still kinda freaks me that the
destructor is no longer guaranteed to EVER be called. I feel like I
should be worried since it is sometimes important to do other things
besides freeing up memory in a destructor. In my case I discovered it
because I'm communicating through a serial port whose baud rate I change
from the current speed, but then changed it back in the
destructor - only to find out the destructor was NEVER called! Hence,
the port died, and MY program wouldn't work on subsequent runs since
it assumed the port had been returned to the same baud (and hence
couldn't communicate with it anymore).
So, again, why is the destructor no longer guaranteed to be called,
and what are these new rules? Or am I being ignorant, and C++ never
made such assurances. Inquiring minds want to know! : )


They're not new rules - it's the nature of objects on the heap. For
managed objects on the GC heap, the Finalizer MAY be called if you don't
delete the object, but the CLR doesn't guarantee that finalizers are ever
called either.

-cd

Feb 2 '06 #24
>>There are a lot of programmers out there, and I wouldn't doubt someone
somewhere is relying on it somehow. To now implement Array.ToString() at
the risk of breaking code solely for functionality that can just as easily
be obtained through the String constructor isn't worth it in my opinion.

That may very well be true, that someone has counted on this behavior. But
this sets a VERY BAD precedent, then, if it is justification for
letting questionable behavior remain.

It sounds like this. If a bug remains in existence long enough, it will be
assumed someone is counting on this bug to keep behaving as it is, so we are
no longer 'allowed' to 'fix' the bug. We instead will call it a feature.

Note that it's not like someone has come forward and said "Wait! PLEASE don't
change the behavior of ToString because we are using it as it has been
implemented". No. It is being ASSUMED someone MIGHT be using this. And
based on this ASSUMPTION we have to use workarounds?

I don't know about you, but to me, that's just STUPID....

[==P==]

"James Park" <so*****@hotmail.com> wrote in message
news:uf**************@TK2MSFTNGP15.phx.gbl... "Peter Oliphant" <po*******@RoundTripInc.com> wrote in message
news:e3**************@TK2MSFTNGP09.phx.gbl...
Correct. Since ToString is not part of the C++ standard it doesn't make
much sense to justify broken behavior on its part as being required
because it IS standard behavior! Now, check out this link:

http://lab.msdn.microsoft.com/produc...2-bb5065fd7c2b

Recapping, the ToString() function applied to a Char[] results in EXACTLY
the following string regardless of the contents of the Char[] :
"System.Char[]". MS claims that they will not 'fix' ToString to solve
this, but instead might create a new function to accomplish more
natural and desired results. They say that changing the current behavior
would be 'breaking'. That could only be true if enough people out there
wrote code that RELIES on this behavior. Who would rely on this behavior
is what I was saying in the above? It sounds like just an excuse NOT to
change it since they don't think of it as important enough.


There are a lot of programmers out there, and I wouldn't doubt someone
somewhere is relying on it somehow. To now implement Array.ToString() at
the risk of breaking code solely for functionality that can just as easily
be obtained through the String constructor isn't worth it in my opinion.

Feb 2 '06 #25
> Don't know if you have any kids, but consider this. If you usually
vacuumed and you gave this responsibility to your kid, would you or would
you not assume he would put the vacuum cleaner away when he is done, or
would you expect to have to do this yourself?
We now eavesdrop on the Hendersons:

Dad: Well, My Son (MS), today I make you a man! I'm giving you the great
responsibility of doing the vacuuming. This is a tradition handed down from
generation-to generation, and today I give it to you! Excited?

MS: Yeah, sure, whatever (goes back to watching tv).

Dad: Ok, let me explain this to you. Whenever I do the vacuuming and I'm
done, I always put the vacuum away. Well, son, I've given you an important
responsibility. Think you can handle it?

MS: Sure, whatever (goes back to watching tv).

Dad: Great! I'll be back in a few hours. See you then...

[2 hours go by, Dad gets home.]

Dad: Hi son! I see things are very clean! Good job vacuuming! But wait! How
come the vacuum cleaner is still sitting in the middle of the room?

MS: Well, you said when you did the vacuuming yourself you used to put away
the vacuum cleaner. Why would you assume if I did the vacuum cleaning that I
would put it away?

Dad: Because when I give you a responsibility I expect you to perform it
properly! It must be obvious to you that when I said I put the vacuum
cleaner away when I'm done that that is the proper thing to do. I even gave
you full written instructions on how to do this. Why didn't you follow those
instructions?

MS: Hey Dad! Just because I told you I would do the vacuuming for you
doesn't mean I have to do everything! After all, when you did the vacuuming
it was YOU who put the vacuum cleaner away. Why wouldn't you STILL be
responsible for putting the vacuum cleaner away, even if I told you I would
do the vacuuming?

Dad: I give up! Go to your room...!

[==P==]
"Peter Oliphant" <po*******@RoundTripInc.com> wrote in message
news:uq*************@TK2MSFTNGP09.phx.gbl... Why would you think that when C++ makes no similar guarantee for pure
native
C++? The destructor for an object on the heap is called when and if you
call delete on a pointer to that object. The situation is no different
for
C++/CLI with respect the to destructor (which is IDisposable::Dispose for
C++/CLI).

I would think this for the following reasons. Let me make a real-world
analogy. I do the laundry. Hence, at the end of the process, I shut down
by folding all the clothes, putting them away, and turning off the lights.
But then a manager is appointed to the laundry room. He tells me I no
longer need to turn off the lights at the end of my laundry session since
he will shut down for me. This is a good thing since other people might
want to use the laundromat, and it's wasteful to keep turning the lights
on and off.

So, what you're telling me is this. Why would I expect the manager to turn
off the lights when he closes the laundromat? Because he told me HE would
be responsible for shutting down! Further, I told him that when I did it
manually I turned off the lights. It's in my 'rules for shutting down'.

So, the reason I would expect the system to guarantee calling the
destructor when it GCs them is that when it took the responsibility for
shutting down it implied it would do it PROPERLY. Obviously the whole
purpose of a destructor is to 'shut down properly'.

Don't know if you have any kids, but consider this. If you usually
vacuumed and you gave this responsibility to your kid, would you or would
you not assume he would put the vacuum cleaner away when he is done, or
would you expect to have to do this yourself?

[==P==]

"Carl Daniel [VC++ MVP]" <cp*****************************@mvps.org.nospam >
wrote in message news:ef**************@TK2MSFTNGP11.phx.gbl...
Peter Oliphant wrote:
I'm programming in VS C++.NET 2005 using cli:/pure syntax. In my code
I have a class derived from Form that creates an instance of one of
my custom classes via gcnew and stores the pointer in a member.
However, I set a breakpoint at the destructor of this instance's
class and it was never called!!! I can see how it might not get
called at a deterministic time. But NEVER?

So, I guess I need to know the rules about destructors. I would have
thought any language derived from C++ would always guarantee the
destructor of an instance of a class be called at some time,
especially if created via [gc]new and stored as a pointer.


Why would you think that when C++ makes no similar guarantee for pure
native C++? The destructor for an object on the heap is called when and
if you call delete on a pointer to that object. The situation is no
different for C++/CLI with respect to the destructor (which is
IDisposable::Dispose for C++/CLI).

Yes, I think I can deterministically destruct it via 'delete' and
setting to nullptr. But the point still kinda freaks me that the
destructor is no longer guaranteed to EVER be called. I feel like I
should be worried since it is sometimes important to do other things
besides freeing up memory in a destructor. In my case I discovered it
because I'm communicating through a serial port whose baud rate I change
from the current speed, but then changed it back in the
destructor - only to find out the destructor was NEVER called! Hence,
the port died, and MY program wouldn't work on subsequent runs since
it assumed the port had been returned to the same baud (and hence
couldn't communicate with it anymore).
So, again, why is the destructor no longer guaranteed to be called,
and what are these new rules? Or am I being ignorant, and C++ never
made such assurances. Inquiring minds want to know! : )


They're not new rules - it's the nature of objects on the heap. For
managed objects on the GC heap, the Finalizer MAY be called if you don't
delete the object, but the CLR doesn't guarantee that finalizers are ever
called either.

-cd


Feb 2 '06 #26
"Tamas Demjen" <td*****@yahoo.com> wrote in message
news:eF**************@TK2MSFTNGP11.phx.gbl...
Those who routinely wrap unmanaged C++ code in C++/CLI feel unsafe,
especially when the wrapped assemblies are going to be used in C# or VB.


Let me help, Tom;

"Those who routinely wrap C++ code in CLI feel unsafe, especially when the
wrapped assemblies are going to be used in C# or VB.

See how clear the sentence becomes? How clear the problem becomes?
Feb 2 '06 #27
Peter Oliphant wrote:
So, the reason I would expect the system to guarantee calling the
destructor when it GCs them is that when it took the responsibility for
shutting down it implied it would do it PROPERLY. Obviously the whole
purpose of a destructor is to 'shut down properly'.


You have a misconception here. If there is garbage collection, the GC
does clean up. It just calls the finalizer, not the destructor. There's
a big difference between the two. The destructor is responsible for
shutting down all the owned managed resources (deleting all owned
pointers, if you will). The finalizer, however, shouldn't call delete on
the owned managed pointers, because those pointers are GCed too. The GC
deletes them and calls the finalizer for them, so you're not supposed to
do anything with them, otherwise they would be double deleted. So the
finalizer should only release resources (closing files, mutexes), and
deallocate unmanaged memory. It should never destruct owned managed members.

You either destroy deterministically with the destructor, or if you fail
to do so, the GC will call the finalizer, but the cleanup task you need
to do in the finalizer is potentially different than in the destructor.

So the garbage collector does clean up -- when it runs at all, that is.
It's not guaranteed to run, because it only runs if the system thinks
it's low on memory. The reason behind this is that the OS cleans up for
you when your application exits anyway. It closes all the files you left
open, deletes all the memory you kept undeleted.

If your resources are critical and require prompt cleanup, you simply
can't leave them up to the garbage collector. Then you need
deterministic destruction. You have to think of the GC as a tool that
solves a lot of problems, but doesn't work in all cases. In some cases
you must ensure that you execute a certain function on destruction, and
not later than that. That's why reference counted smart pointers (or
whatever other techniques) still have their place. That's why C++/CLI
supports the stack syntax, which guarantees that the destructor is called
when the object goes out of scope.

Tom
Feb 2 '06 #28
Hi Tom,

So was this response (from earlier on in this thread) inaccurate:

"For managed object on the GC heap, the Finalizer MAY be called if you don't
delete the
object, but the CLR doesn't guarantee that finalizers are ever called
either."

This implies you can't count on either the destructor OR the finalizer
ever being called. What you said seems to imply one of these WILL be called.
Which is true?

Would the following be accurate? If I create a ref class, I should only
create a destructor for it if I plan to delete some instances of it manually
on occasion (if I always did this, there would be very little reason to
make it a ref class, since GC would never come into play) or if I plan to
create some instances in stack semantic form (and some that are not, or else
again, there would be very little reason to make it a ref class, since GC
would never come into play).

That is, if all class instances will be deleted via the GC and are never
created in stack semantic form, then the destructor is just wasted code
taking up space. On the other side, if I can guarantee the instances will
always be stack semantic in form or they will always be manually deleted,
then the finalizer is wasted code taking up space. If neither of these is
true (the majority of cases), then the destructor is for manual deletion and
the finalizer for GC deletion.

Wow! So in the effort to make things simpler and more automated we have
added a new layer of complication: the finalizer! We now have TWO ways an
instance can be destroyed, and depending on who you believe, there may or
may not be any guarantee that either will ever be called unless you manually
do it. So much for automation!

So, if I create any class for which a destructor must be eventually called
for every instance, then creating such a class as a ref class is purely
cosmetic when it comes to functionality...

[==P==]

"Tamas Demjen" <td*****@yahoo.com> wrote in message
news:uo**************@TK2MSFTNGP12.phx.gbl...
Peter Oliphant wrote:
So, the reason I would expect the system to guarantee calling the
destructor when it GCs them is that when it took the responsibility for
shutting down it implied it would do it PROPERLY. Obviously the whole
purpose of a destructor is to 'shut down properly'.


You have a misconception here. If there is garbage collection, the GC does
clean up. It just calls the finalizer, not the destructor. There's a big
difference between the two. The destructor is responsible for shutting
down all the owned managed resources (deleting all owned pointers, if you
will). The finalizer, however, shouldn't call delete on the owned managed
pointers, because those pointers are GCed too. The GC deletes them and
calls the finalizer for them, so you're not supposed to do anything with
them, otherwise they would be double deleted. So the finalizer should only
release resources (closing files, mutexes), and deallocate unmanaged
memory. It should never destruct owned managed members.

You either destroy deterministically with the destructor, or if you fail
to do so, the GC will call the finalizer, but the cleanup task you need to
do in the finalizer is potentially different than in the destructor.

So the garbage collector does clean up -- when it runs at all, that is.
It's not guaranteed to run, because it only runs if the system thinks it's
low on memory. The reason behind this is that the OS cleans up for you
when your application exits anyway. It closes all the files you left open,
deletes all the memory you kept undeleted.

If your resources are critical and require prompt cleanup, you simply
can't leave them up to the garbage collector. Then you need deterministic
destruction. You have to think of the GC as a tool that solves a lot of
problems, but doesn't work in all cases. In some cases you must ensure
that you execute a certain function on destruction, and not later than
that. That's why reference counted smart pointers (or whatever other
techniques) still have their place. That's why C++/CLI supports the stack
syntax, which guarantees that the destructor is called when the object goes
out of scope.

Tom

Feb 2 '06 #29
Brandon Bray [MSFT] wrote:
To me, the biggest limitation
imposed on destructors as a result of IDisposable is that all destructors
are public and virtual. I actually think that's a good thing, and it's a
mistake that unmanaged C++ allows destructors to be anything else.


My question below is related to pure Standard C++ and it's a little bit
basic.

If as you say dtors were always public/virtual, how would you think of
re-implementing idioms, like classes that allow objects to be created
only on the heap, that depend on dtor being private/protected and
instead provide a public destroy() that simply performs a 'delete this'?

Feb 2 '06 #30
Peter Oliphant wrote:
This implies you can't count on either the destructor OR the finalizer from
ever being called. What you said seems to imply one of these WILL be called.
Which is true?
If GC is performed, it guarantees to call the finalizer. However,
garbage collection is not guaranteed to be done. Garbage is collected
when the system thinks it is necessary (for example, it runs out of
memory). However, when the application exits, it is not guaranteed to be
called, because the OS cleans up after each application anyway. You can
try to call GC::Collect manually.

The destructor is always guaranteed to be called when the object is
instantiated with the stack semantics.
Would the following be accurate? If I create a ref class, I should only
create a destructor for it if I plan to delete some instances of it manually
on occasion [...] or if I plan to create some instances in stack semantic form
To the best of my knowledge, if you allocate only managed memory, the GC
will take care of that, so it's not required that you call the
destructor. You must absolutely create a destructor and ensure
deterministic cleanup if you have unmanaged objects or resources inside
your ref class. Generally speaking, if an object inherits from
IDisposable (defines a destructor, in C++/CLI terms), at the minimum
you're encouraged to call the destructor. There are cases when you must
call the destructor in a deterministic manner, because you can't rely on
GC. That decision has to be made on a case-by-case basis.
That is, if all class instances will be deleted via the GC and are never
created in stack semantic form, then the destructor is just wasted code
taking up space.
No. You're encouraged to use the destructor if the class has one. The
finalizer is just a safety net, the last resort. If you could guarantee
deterministic destruction, there would be no need for a finalizer.
On the other side, if I can guarantee the instances will
always be stack semantic in form or they will always be manually deleted,
then the finalizer is wasted code taking up space.
I don't think there's too much overhead involved there. To prevent
double typing, the destructor and the finalizer can share code. You can
write a function and call that from both the d'tor and the finalizer.
You can bet that an object won't be destroyed both ways, but you have no
idea how your objects will be used. Ideally they're Disposed, sometimes
they're finalized, you really need to be prepared for both cases.
Wow! So in the effort to make things simpler and more automated we have
added a new layer of complication: the finalizer!


Look, there are cases where an automatic garbage collection works well,
and there are cases when it doesn't. There are cases when all you want
is automatic deterministic cleanup. There is such a thing, and one
possible implementation is the reference counted smart pointer
(boost::shared_ptr). It works as long as the language has stack
semantics. Unfortunately .NET doesn't always have it.

C++/CLI and C# have something that is a half-baked stack syntax. You can
create a local variable with deterministic destruction. In C# it's done
with the "using" keyword. However, stack semantics don't work well with
the current .NET collections. In a perfect world, value types would have
constructors and destructors too, so you could use stack semantics
within containers.

I'm (still) looking for a solution where I can guarantee deterministic
destruction with collections, something like List<shared_ptr<RefClass>>,
which would be equivalent to the native vector<shared_ptr<Class> >. I
don't see how it's going to happen, because I can't declare a destructor
for a value type. I'm not sure if STL.NET will bring anything to the rescue.

Those who program in 100% managed environment probably won't care too
much about it, but many of my managed classes contain lots of unmanaged
objects, so I don't have a choice but to follow deterministic
destruction practices. Right now I'm in a situation where I was before
boost::shared_ptr was released... a lot of pain and insecurity.

Again, if you don't wrap unmanaged code, but work in a fully managed
environment, this issue is not nearly as pressing.

Tom
Feb 2 '06 #31
Hi Tom,

Good post, good explanation. But I do need to study this stuff a lot more, I
obviously have some gross misconceptions based on legacy thinking. And, btw,
nowadays I write a routine called "m_Deconstuct()" and call it in both the
destructor and finalizer (the shared code idea you mentioned). This works in
all cases I've had, and will work in any case where one doesn't have to make
a distinction between GC and manual destruction.

To be honest, I've discovered I'm writing more classes WITHOUT the need of
any destructor (or finalizer). Most times I write my code so the destructor
exists, but does nothing. That is, in most cases I don't really need it. The
only reason this became important to me (and why I started this topic) is
that I created a class that uses a serial port, and during it I set the baud
rate to a non-default amount. I then naturally set the baud rate back in the
destructor, only to discover it was never being called. This resulted in
future runs being affected, since they assumed the baud rate had been
returned to default (which is possibly bad programming in the first place,
but go with me on this), and it couldn't even change it back since it could
no longer talk to the chip! So, in this case, I was using the destructor for
more than just freeing up memory, but indeed had some functionality beyond.
It is in these cases one must know the rules of the game... : )

[==P==]

"Tamas Demjen" <td*****@yahoo.com> wrote in message
news:OD**************@TK2MSFTNGP11.phx.gbl...
<snip>

Feb 2 '06 #32
Tamas Demjen wrote:
I'm (still) looking for a solution where I can guarantee deterministic
destruction with collections.


Something like this actually helps a bit:

using namespace System::Collections::Generic;

template <class T>
ref class ManagedList
{
public:
~ManagedList()
{
for each(T item in items)
delete item;
}
void Add(T item)
{
items.Add(item);
}
void RemoveAt(int index)
{
delete items[index];
items.RemoveAt(index);
}
private:
List<T> items;
};

ref class Guarded
{
public:
~Guarded() { Console::WriteLine(L"~Guarded"); }
};

int main(array<System::String ^> ^args)
{
ManagedList<Guarded^> items;
items.Add(gcnew Guarded);
return 0;
}
Feb 2 '06 #33
Peter Oliphant wrote:
<snip>
The only reason this became important to me
(and why I started this topic) is that I created a class that uses a
serial port, and during it I set the baud rate to a non-default
amount. I then naturally set the baud rate back in the destructor,
only to discover it was never being called. This resulted in future
runs being affected, since they assumed the baud rate had been
returned to default (which is possibly bad programming in the first
place, but go with me on this), and it couldn't even change it back
since it could no longer talk to the chip! So, in this case, I was
using the destructor for more than just freeing up memory, but indeed
had some functionality beyond. It is in these cases one must know the
rules of the game... : )


This (setting back an external device - the UART - to default state) is
typically the kind of "resource management" that needs to be done
synchronously and that can't be handled by the GC. Other examples are mutex
release, database connection close, buffer flushing to disk before exiting,
etc.... For all those things, you should use the .NET synchronous resource
management mechanism: IDisposable (which, in C++/CLI, is expressed through
the destructor + stack semantic). On the other hand, the GC is perfectly OK
to manage all memory-related stuff.

As others have said, C++/CLI stack semantic is probably a step toward the
right solution, but in the current incarnation, it doesn't go far enough
because it doesn't allow (for example) to have a container being responsible
for the lifetime of the contained objects.

Arnaud
MVP - VC
Feb 2 '06 #34
Peter Oliphant wrote:
Good post, good explanation. But I do need to study this stuff a lot more


This link may help:
http://msdn2.microsoft.com/en-us/library/ms177197.aspx

Here's a typical pattern:

ref class T
{
public:
T()
: m(gcnew Managed),
n(new Native)
{
}
~T()
{
delete m;
this->!T(); // destructor calls finalizer
}
!T()
{
delete n;
}
private:
Managed^ m;
Native* n;
};

I assume that Managed is a class that requires destruction (is
IDisposable). As you see, the managed member is deleted from the
destructor only, while the unmanaged member is deleted from the
finalizer, which is explicitly called from the destructor.

Of course, to simplify things, you may as well use stack semantics for
the Managed member (it's impossible for Native):

ref class T
{
public:
T() : n(new Native) { }
~T() { this->!T(); }
!T() { delete n; }
private:
Managed m;
Native* n;
};

If you follow this pattern, the destructor can simply call the finalizer.

I've created a CliScopedPtr class, which is a smart pointer designed to
protect native members inside managed classes. Here's the source code:

http://tweakbits.com/CliScopedPtr.h

Using that you can completely avoid writing your own destructors and
finalizers in most cases (the compiler generates them implicitly, so
they still exist):

#include "CliScopedPtr.h"

ref class T
{
public:
T() : n(new Native) { }
private:
Managed m;
CliScopedPtr<Native> n;
};

Although you don't see the destructor, it's still there, and when you
use the T class from C# or VB, you must call Dispose() on it, because it
encapsulates a native resource.

CliScopedPtr probably introduces a very slight overhead, but it's
negligible. It defines the -> operator, so you can use the "n" member as
a conventional pointer:

n->Func();

I didn't define the * operator, because it caused a compiler warning,
and you can always use the .get() member to gain access to the
underlying native pointer:

Native* ptr = n.get();

Tom
Feb 2 '06 #35
Brandon Bray [MSFT] wrote:
This, unfortunately, will have to be a position that we agree to
disagree. Many of the design decisions within C++ are based on
historical precedent and strict maintenance to a notion of backwards
source compatibility. While I understand many of the design decisions,
I do consider many of them mistakes in hindsight.
I agree that many, if not most, shortcomings of C++ are attributable to
the strict C compatibility requirement. The awkward declaration syntax
is the first that comes to mind (which, ironically, has been taken over
by the designers of Java and C# in order to make the syntax look more
familiar to C++ programmers). Yet I don't think that Bjarne Stroustrup,
were he to design C++ anew with today's hindsight, would drop the goal
of C compatibility. Without it, the language would never have gained
enough foothold to become a major player.

As for public virtual destructors (or their opposite), the explanation
does not apply, as neither access control, nor virtual functions, nor
destructors have any precedent in C.
And while the flexibility afforded by allowing destructors to be
non-public can be convenient, it demonstrates a classic misuse in my
mind... it would be far more effective to introduce a real language
feature that allowed the desired behaviors. Because there is so much
flexibility (and in my view, misuse) of some features, it hampers the
ability to do rigid analysis of the program from both a human and
automated perspective.
I don't think this is an issue of convenience. The keyword virtual
conveys a message: this class is meant to be derived from, and the
behaviour of this function is meant to be overridden. A virtual
destructor where polymorphic deletion is not intended is a message that
conflicts with the design. Likewise, a public destructor clearly states
that any code is meant to destroy objects of this class. For the sake of
encapsulation, I always choose the most restrictive access level that is
possible. Why should I be forced to make destructors public if clients
of a class are not meant to destroy objects? Public virtual destructors
for singletons? Come on!
Everything. I speak of type safety from the "I can prove the program
mathematically obeys the separation of objects" perspective. Because
an object deleted frees memory, it allows the programmer to allocate
another object in the memory that the pointer still points to. That is
what type safety is meant to eradicate. Clearly, we all know using a
pointer after the object to which it points is deleted is a
programming error... but so is a buffer overrun. The language is not
type safe unless it can rigidly prevent that.
You seem to have evaded answering my objection. We all know that
accessing an object after it has been disposed is a programming error.
Both C# and Java allow it. Are they not type safe?
While, I'm mostly in agreement... fifty years ago, there was growing
consensus that GC was too expensive for memory. The state of the art
GC works very well for memory today. The best that is available for
other resources only does a 1-to-1 mapping to memory... which probably
isn't the most efficient way to manage scarce resources. There's still
ample amount of research to be done in this area.
Once we have got computers to figure out fully automatically when to
release which resource, we don't need programming languages anymore, I
guess. A language that prevents you from screwing up is going to be
virtually unusable. Progress means that the opportunity to make a mess
is moved to a higher level. :-)
I'm going to clearly state that I am not mixing up these things... I
spend a lot of time in the design issues here, so you can at least
note that I do know something. I stand behind what I said. Shared
resources and deterministic cleanup have very few options, and
reference counting is by far the most commonly used. If there are
others, they aren't well formalized and certainly not tied to a
language level service.
I did not mean to question your competence; my apologies if I made it
sound like I did. But I stand by my opinion that in your statement, the
two concepts were certainly mixed up. Deterministic cleanup is a fairly
abstract term that includes specialized mechanisms for dealing with
cycles. It is well known that reference counting is not such a
mechanism, but that does not mean that they don't exist. GC can deal
with the *memory* hogged by cyclically dependent objects, but it
certainly cannot resolve mutual *logical* dependencies. For that you
need specialized code anyway.
Unfortunately, I have to disagree again. Maintenance of applications
that grow to millions of lines of code is dramatically more expensive
because "techniques" are not good enough for automated proofs of
program correctness. And while Standard C++ has good support for
certain ways to reliably manage resources, it fails miserably in other
areas. (Note, .NET has similar issues -- it does incredibly well in
some cases, and fails in others.)
.NET/CLR has, well, managed to largely ignore one of the most reliable
resource management features, i.e. destructors. Of course C++ is not
perfect. To my knowledge, automated proof of correctness has never been
one of its design goals. I agree that a language which emphasizes such a
feature would probably look different.

Yet somehow I don't see what this has got to do with mandatory public
virtual destructors. Removing a possibility to minimize coupling and
expressing design intentions in code is hardly going to make proofs of
correctness easier.
I recognize that this discussion really has very few options than to
diverge towards over-heated arguments. I don't really have much more
to say, but I do appreciate your candor and your passion for C++.


Your statement about Standard C++ destructors that I initially responded
to was flying in the face of C++ best practice, hence my staunch
objection. This is not a religious or emotional issue for me, and I have
certainly no inclination to get heated up about this.

I will be offline for four weeks, so in case I fail to respond to a
reply of yours, it's not out of ignorance.
--
Gerhard Menzl

#dogma int main ()

Humans may reply by replacing the thermal post part of my e-mail address
with "kapsch" and the top level domain part with "net".
Feb 3 '06 #36

This thread has been closed and replies have been disabled. Please start a new discussion.
