Bytes IT Community

Garbage collection in C++

Hi all,

Is garbage collection possible in C++? It doesn't come as part of the
language support. Is there a specific reason for that, due to the way
the language is designed? Or is it discouraged for some specific
reason? If someone can give inputs on this, it will be of great help.

Regards,
Pushpa
Nov 15 '08
158 Replies


James Kanze wrote:
On Nov 20, 3:17 pm, Matthias Buelow <m...@incubus.de> wrote:
>SG wrote:
>>It seems that the library solution is as convenient and
flexible as a dedicated language feature would be. So, a
"clean up" core language feature is hardly justified.
>It's beginning to resemble Perl, though.

You've noticed that too. The syntax, despite having some real
problems of its own, is still not nearly as bad as Perl, and we
haven't abandoned static type checking completely, but it's
definitely becoming a case of many different ways of doing
something (all equally unreadable).
Whoa whoa whoa... The fact that there's a lot of bad Perl code doesn't
mean the language is at fault. Perl has gotten me through some tough
projects, and countless small programs and one-liners. There is
probably no other single tool or language that has ever given me the
kind of productivity boost that Perl did. I realize it's très chic to
bash Perl nowadays, but as far as I'm concerned, Larry Wall is a
populist hero.
Nov 21 '08 #151

Thomas J. Gritzan wrote:
By the way, how is your region allocator different from the pool
allocator in boost?
According to my own tests boost::fast_pool_allocator is not all that
fast. Just slightly faster than the default allocator (at least on my
Linux system).
Nov 21 '08 #152

Matthias Buelow wrote:
Chris M. Thomasson wrote:
[snipped debate re. garbage collection]
The "garbage" is by definition unused memory and will
be reclaimed if more memory is needed
That's the point. A mark-and-sweep garbage collector waits until more
memory is "needed," or until a hint is provided to indicate that the
time is ripe for cleanup. GC is guaranteed not to free anything that's
still in use by the program, but is not guaranteed to free resources as
soon as they cease to be in use. In contrast, the latter guarantee is
supported (though not imposed) by resource management techniques based
on deterministic destruction. Non-GC C++ programs will therefore tend
to have better resource retention times than programs that use GC but
lack deterministic destruction. Moreover, even if a GC language is
given temporal hints, it will impose overhead not present in the C++
version, because the GC doesn't know implicitly which specific objects
are ready to be freed. It has to start marking and sweeping and
possibly touching a lot of address space that ought to be resting
comfortably in swap space.

You may get a sort of compromise by using GC with multiple, independent
pools of memory, but honestly, I don't see any reason to manage memory
in a totally different way from every other resource in a program.
Tying the management of non-memory resources to the automatic,
scope-based memory management that was already present in C was, IMO, a
stroke of genius on Bjarne Stroustrup's part.
Nov 21 '08 #153

Jeff Schwab wrote:
Matthias Buelow wrote:
>The "garbage" is by definition unused memory and will
be reclaimed if more memory is needed

That's the point. A mark-and-sweep garbage collector waits until more
memory is "needed," or until a hint is provided to indicate that the
time is ripe for cleanup. GC is guaranteed not to free anything that's
still in use by the program, but is not guaranteed to free resources as
soon as they cease to be in use. In contrast, the latter guarantee is
supported (though not imposed) by resource management techniques based
on deterministic destruction.
That is true _if_ your objects go out of scope in a timely fashion (that
is, in the case of indeterminate scope, if you destruct them properly as
soon as you don't need them anymore). Otherwise, it's basically the same
problem as with the lazy deallocation of a GC (only you don't really
have much control over it in that case unless you know how the GC works
and can use that knowledge in your program -- which basically defeats the
"don't have to think about it" advantage; however, one can probably
defer that kind of bother until a (memory) profiling phase, if
necessary.)
Non-GC C++ programs will therefore tend
to have better resource retention times than programs that use GC but
lack deterministic destruction.
Yes but does it matter? In what situation, for example? Please note that
for simplification, I'm looking at this mostly from the egocentric
view of the program in question, which doesn't really care for anyone
else (for example, other processes in a multitasking environment. C++
doesn't know anything about processes, anyway. :)) And the library's
allocator (the one behind new/delete or malloc/free) will most likely be
ignorant of any language details, too.
Tying the management of non-memory resources to the automatic,
scope-based memory management that was already present in C was, IMO,
a stroke of genius on Bjarne Stroustrup's part.
Hmm.. not sure about that part.
Nov 21 '08 #154

On 2008-11-21 07:15:20 -0500, Sam <sa*@email-scan.com> said:
>
And I don't know when std::wstring stopped "supporting" Unicode. It supports
it very well, as far as I can tell.
What, specifically, does "'supporting' Unicode" mean, and how does
std::wstring do that? Last time I looked, there weren't enough
constraints on wchar_t to allow anything portable beyond what you could
do with char, which is why C++0x is introducing char16_t and char32_t,
along with the corresponding specializations of std::basic_string,
named std::u16string and std::u32string.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference
(www.petebecker.com/tr1book)

Nov 21 '08 #155

"Thomas J. Gritzan" <ph*************@gmx.de> wrote in message
news:gg**********@newsreader2.netcologne.de...
Chris M. Thomasson wrote:
>I created a little test application for Java and C++:
http://pastebin.com/mc3e3f4e (Java version)
http://pastebin.com/m4bf7db0a (my C++ version)
As you can see, my C++ version uses a custom region allocation scheme I
created and posted to this group here:

http://groups.google.com/group/comp....8dc967c7ddba7c

It's a bit unfair to compare one algorithm in one language and another
algo in another language. You can find a test program that shows that
whatever you want is better than whatever you want.

To make it a little bit more fair, you could run the Java GC the same
time you flush the region allocator in C++:
http://pastebin.com/m40d757b7
Okay. Sadly, it still hits a heap exception on my old platform; it just
takes longer to do so. When I change the line `create_tree(22);' to
`create_tree(15);' it works, but it's a lot slower than the version without
explicit triggering of GC. The version without gets:
Java:
________________________________________
Time: 450 ms
with:
Java:
________________________________________
Time: 1653 ms


Compared to


C++ (MSVC 2005):
________________________________________
Time: 64 ms
C++ (MINGW):
________________________________________
Time: 216 ms

By the way, how is your region allocator different from the pool
allocator in boost?
I guess it would be more similar to:
http://www.boost.org/doc/libs/1_37_0...ject_pool.html
However, it seems like Boost pool is using a Reaps algorithm described here:

http://www.cs.umass.edu/~emery/pubs/...oopsla2002.pdf
which combines region and heap allocation. Anyway, the similarity is that
my region allocator can destroy all objects in a single call, and so can
Boost object_pool. The difference is that Boost object_pool seems to
allow one to destroy individual objects; my region allocator as-is simply
cannot do this. I am in the process of tinkering around with the design to
see if I can indeed pull this off.

Nov 21 '08 #156

On Nov 21, 2:38 pm, Jeff Schwab <j...@schwabcenter.com> wrote:
Matthias Buelow wrote:
Chris M. Thomasson wrote:
[snipped debate re. garbage collection]
The "garbage" is by definition unused memory and will be
reclaimed if more memory is needed
That's the point. A mark-and-sweep garbage collector waits
until more memory is "needed," or until a hint is provided to
indicate that the time is ripe for cleanup. GC is guaranteed
not to free anything that's still in use by the program, but
is not guaranteed to free resources as soon as they cease to
be in use. In contrast, the latter guarantee is supported
(though not imposed) by resource management techniques based
on deterministic destruction.
I'm trying to figure out how this is relevant to anything. If
you're managing resources other than memory, you need to manage
them. That would seem clear. Destructors are fine when the
resource obeys the rules of scope, but in that case, whether you
use garbage collection or not doesn't change anything.
Otherwise, you need some sort of explicit management to call the
destructor, or whatever other function you want to use to free
the resources.
Non-GC C++ programs will therefore tend to have better
resource retention times than programs that use GC but lack
deterministic destruction.
That is, of course, total bullshit, since you manage resources
in exactly the same way with or without garbage collection.

[...]
You may get a sort of compromise by using GC with multiple,
independent pools of memory, but honestly, I don't see any
reason to manage memory in a totally different way from every
other resource in a program.
Maybe because you have to. In practice, no two resources are
managed in the same way anyway; I certainly manage locks
differently than I do file descriptors.
Tying the management of non-memory resources to the automatic,
scope-based memory management that was already present in C
was, IMO, a stroke of genius on Bjarne Stroustrup's part.
You're not making much sense. Automatic, scope-based memory
management is fine for objects which have automatic scope-based
lifetimes. Nothing is going to change there. We're talking
here about dynamically allocated objects.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Nov 21 '08 #157

Sam
Pete Becker writes:
On 2008-11-21 07:15:20 -0500, Sam <sa*@email-scan.com> said:
>>
And I don't know when std::wstring stopped "supporting" Unicode. It supports
it very well, as far as I can tell.

What, specifically, does "'supporting' Unicode" mean, and how does
std::wstring do that? Last time I looked, there weren't enough
constraints on wchar_t to allow anything portable beyond what you could
do with char,
Well, every time I convert chars into wchars, using a given character set, I
always seem to wind up with Unicode characters, which make their way into my
std::wstrings. Really. Believe it, or not. I know it's hard to believe, but
I take a UTF-8 string, and use the localization library (or iconv) to
sprinkle some pixie dust on them, and I get a bunch of wchars. Then, just
for yucks, I print out the decimal values of all the resulting wchars, then
look them up. Wouldn't you know, all the multibyte UTF-8 sequences are
replaced by valid Unicodes!

Also, every time I dump a std::wstring holding a Unicode string into
std::wcout, or a std::wostream, I always seem to end up receiving the
equivalent multi-byte sequences, for my current locale. I really have no
idea how that happens. Must be magic.
which is why C++0x is introducing char16_t and char32_t,
along with the corresponding specializations of std::basic_string,
named std::u16string and std::u32string.
I have not looked at that, but it looks like an alternative to wchar_t that
uses explicit integer types. The only difference is that wchar_t is some
implementation-defined type, while these ones are more explicit. On my
platform, wchar_t is 32 bits, so, for me, std::u32string and std::wstring
would probably be type-equivalent.


Nov 21 '08 #158

On 2008-11-21 18:21:27 -0500, Sam <sa*@email-scan.com> said:
>
I have not looked at that, but it looks like an alternative to wchar_t that
uses explicit integer types. The only difference is that wchar_t is some
implementation-defined type, while these ones are more explicit.
Yup. That's the only difference. With the new types you know what
you're getting and with wchar_t you don't.
On my
platform, wchar_t is 32 bits, so, for me, std::u32string and std::wstring
would probably be type-equivalent.
Okay, if you're not interested in portability and you know that wchar_t
does what you need, fine. But when you say that wchar_t "supports
[Unicode] very well" people naturally read that as a much broader
statement.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference
(www.petebecker.com/tr1book)

Nov 22 '08 #159


This discussion thread is closed
