Bytes | Developer Community
Do you use a garbage collector?

I followed a link to James Kanze's web site in another thread and was
surprised to read this comment by a link to a GC:

"I can't imagine writing C++ without it"

How many of you c.l.c++'ers use one, and in what percentage of your
projects is one used? I have never used one in personal or professional
C++ programming. Am I a holdover to days gone by?
Apr 10 '08
350 replies · 9443 views
On Fri, 11 Apr 2008 14:50:56 +0300, Juha Nieminen
<no****@thanks.invalid> wrote:
>Also, according to the site
above, in C++ memory is never returned to the operating system (at
least on older OSes)

Which "older OS"? Some 30yo?
How about mobile and embedded devices that don't have sophisticated
memory management? If a C++ application leaks memory there, the memory
might never be returned even after the application terminates. That is
more dangerous than a memory leak in a Java application, where all
memory is returned by the VM once the application terminates.
Apr 11 '08 #51
Razii wrote:
On Thu, 10 Apr 2008 20:37:59 -0500, Razii
<DO*************@hotmail.com> wrote:
>int main(int argc, char *argv[]) {

clock_t start=clock();
for (int i=0; i<=10000000; i++) {
Test *test = new Test(i);
if (i % 5000000 == 0)
cout << test;
}

If I add "delete test;" to this loop it gets faster. Huh? What's the
explanation for this?

2156 ms

and after I add delete test; to the loop

1781 ms

why is that?
Due to caching at various levels of the memory hierarchy, accesses to
recently referenced virtual addresses are often a lot faster than
accesses to new ones. The original C++ code requested 10,000,000
distinct Test-sized memory allocations with no reuse. With "delete
test;" the memory allocator can reissue the same piece of memory for
each "new" operation.

In addition, the sheer amount of memory being allocated in the original
C++ program may have required some system calls to get additional
allocatable memory.

The JVM is free to reuse the virtual memory previously occupied by an
unreachable Test object, so the version with "delete test;" is a bit
more comparable to the Java program.

This illustrates the basic problem with snippet benchmarks. In modern
computers the performance of small operations depends on their context.
Taking them out of context is not realistic.

Patricia
Apr 11 '08 #52
On Fri, 11 Apr 2008 14:41:58 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>Does the Java allocator/GC combination recycle the objects in the loop?
Oh, I figured it out... the flag is -verbose:gc

so, trying again with:

java -verbose:gc -server Test > log.txt

and log.txt includes......

Test@19f953d
[GC 896K->108K(5056K), 0.0018061 secs]
[GC 1004K->108K(5056K), 0.0004453 secs]
[GC 1004K->108K(5056K), 0.0001847 secs]
[GC 1004K->108K(5056K), 0.0000500 secs]
[... 83 further GC cycles of ~0.00005 secs each elided ...]
Test@e86da0
[GC 1004K->108K(5056K), 0.0000528 secs]
[... 86 further GC cycles of ~0.00005 secs each elided ...]
Test@1975b59
Time: 172 ms
Jun 27 '08 #53
On Fri, 11 Apr 2008 17:26:01 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>That's not the point. The example was merely an illustration of how
object allocation can be optimised if required.
Where are these objects deleted? Does the time include that? As I
showed with the loop, in the Java case the GC collects most of these
objects during the loop.
Jun 27 '08 #54
On Thu, 10 Apr 2008 20:37:59 -0500, Razii
<DO*************@hotmail.com> wrote:
>int main(int argc, char *argv[]) {

clock_t start=clock();
for (int i=0; i<=10000000; i++) {
Test *test = new Test(i);
if (i % 5000000 == 0)
cout << test;
}
If I add "delete test;" to this loop it gets faster. Huh? What's the
explanation for this?

2156 ms

and after I add delete test; to the loop

1781 ms

why is that?
Jun 27 '08 #55
On Apr 11, 10:20 am, Razii <DONTwhatever...@hotmail.com> wrote:
On Thu, 10 Apr 2008 20:37:59 -0500, Razii

<DONTwhatever...@hotmail.com> wrote:
int main(int argc, char *argv[]) {
clock_t start=clock();
for (int i=0; i<=10000000; i++) {
Test *test = new Test(i);
if (i % 5000000 == 0)
cout << test;
}

If I add "delete test;" to this loop it gets faster. Huh? What's the
explanation for this?

2156 ms

and after I add delete test; to the loop

1781 ms

why is that?
Probably because of some compiler optimisation.

When do you free memory in Java? Before or after you stop the clock?
Jun 27 '08 #56
Razii wrote:
On Thu, 10 Apr 2008 20:37:59 -0500, Razii
<DO*************@hotmail.com> wrote:
>int main(int argc, char *argv[]) {

clock_t start=clock();
for (int i=0; i<=10000000; i++) {
Test *test = new Test(i);
if (i % 5000000 == 0)
cout << test;
}

If I add "delete test;" to this loop it gets faster. Huh? What's the
explanation for this?
You have just disproved your original hypothesis. Memory is returned to
the allocator, so it doesn't have to keep fetching more from the system.

--
Ian Collins.
Jun 27 '08 #57
Razii wrote:
On Fri, 11 Apr 2008 17:26:01 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>That's not the point. The example was merely an illustration of how
object allocation can be optimised if required.

Where are these objects deleted? Does the time include that? As I
showed with the loop, in the Java case the GC collects most of these
objects during the loop.
As I pointed out earlier, you are leaking memory. Didn't you realise
that? Do you read people's responses?

--
Ian Collins.
Jun 27 '08 #58
On Fri, 11 Apr 2008 00:31:30 -0700 (PDT), asterisc <Ra*******@ni.com>
wrote:
>When do you free memory in Java? Before or after you stop the clock?

You don't. When an object is no longer referenced by the program, the
GC will recycle it. The space is made available for subsequent new
objects.

Jun 27 '08 #59
On Fri, 11 Apr 2008 19:38:55 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>As I pointed out earlier, you are leaking memory. Didn't you realise
that?
In the original C++ version I was leaking memory, and I knew that. I
deliberately made it that way since I wasn't sure whether the GC would
run at all in the Java version of such a short-running application.
However, as it turns out, adding delete makes the C++ version faster.
Had I known that, I would have added it.
Jun 27 '08 #60
On Fri, 11 Apr 2008 19:35:52 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>You have just disproved your original hypothesis.
My number one hypothesis was that "new" is faster in Java than in C++,
and that it doesn't behave the same way as in C++. I was speculating
that the reason is a call to the operating system in C++.
Jun 27 '08 #61
Razii wrote:
On Fri, 11 Apr 2008 19:35:52 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>You have just disproved your original hypothesis.

My number one hypothesis was that "new" is faster in Java than in C++,
and that it doesn't behave the same way as in C++. I was speculating
that the reason is a call to the operating system in C++.
You claimed "In C++, each "new" allocation request will be sent to the
operating system, which is slow."

You have just disproved this by comparing the performance with and
without returning memory to the allocator. If each new had requested
memory from the system, the performance would not have changed so
significantly.

--
Ian Collins.
Jun 27 '08 #62
On Fri, 11 Apr 2008 00:58:30 -0700 (PDT), asterisc <Ra*******@ni.com>
wrote:
>When exactly is the memory deallocated? Before or after you stop the
clock?
I already posted the output in the other post. The clock stops when
this is printed.

long end = System.currentTimeMillis();
System.out.println("Time: " + (end - start) + " ms");

The GC (in a different thread) runs several times while the main
thread is in the loop.

Once the main() thread ends, all the memory is returned to the
operating system by the JVM anyway.
Test@19f953d <----printed when i == 0
[GC 896K->108K(5056K), 0.0018061 secs]
[... 86 further GC cycles elided; the log is identical to the one in post #53 ...]
Test@e86da0 <----printed when i == 5000000
[... 87 further GC cycles elided ...]
Test@1975b59 <----printed when i == 10000000
Time: 172 ms
Jun 27 '08 #63
On Fri, 11 Apr 2008 20:06:08 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>You claimed "In C++, each "new" allocation request will be sent to the
operating system, which is slow."
Yes, I did say that, based on what I read on a web site. That was the
author's explanation of why allocating memory with new is slower in C++.

Jun 27 '08 #64
Juha Nieminen wrote:
I have my doubts that this "liberating" style of programming somehow
automatically leads to clean, modular and abstract designs. All the
contrary, I would even claim that at least in some cases it leads to the
opposite direction ("reckless programming").
And your point is? The simple fact is that bad programmers write bad
code, and good programmers write good code, no matter what the language
is or whether memory is managed automatically or manually.
Jun 27 '08 #65
Razii wrote:
Yes, I did say that based on what I read on a web site. That was his
explanation regarding why allocating memory with new is slower in c++.
Maybe you shouldn't believe everything you read on a web site.

Jun 27 '08 #66
Sam
Razii writes:
On Thu, 10 Apr 2008 21:29:44 -0500, Sam <sa*@email-scan.com> wrote:
>>See how well /that/ benchmarks against Java :-)

That is not the topic. The topic is how the keyword "new" behaves.

0x12ff5c
0x12ff5c
0x12ff5c
Time: 62 ms
See -- C++ is faster than Java. And the topic isn't how the keyword "new"
behaves, but, as it says, “java vs c++ difference in "new" ”. Which, of
course, includes the fact that their respective usage cases are completely
different, and a straight, linear comparison of the kind you made is
meaningless. In C++, objects are allocated on the heap, via new, much less
frequently than in Java, so a one-to-one benchmarking tells you very little.
I pointed out that in many instances C++ objects are instantiated on the
stack, which incurs much less overhead than "new". Furthermore, you don't
even have to use "new" to allocate objects on the heap anyway, in C++.

Take /that/ one for a spin, and see what happens.
All the references are the same -- not the same output as in the last
version (or java version).
In your Java version, you added a newline at the end of every print
statement. In your original C++ version, you printed the pointer value
without a trailing newline. I fixed it for you. That's why the output is
different.

Jun 27 '08 #67
On Apr 10, 4:33 pm, Juha Nieminen <nos...@thanks.invalid> wrote:
Lloyd Bonafide wrote:
How many of you c.l.c++'ers use one, and in what percentage
of your projects is one used? I have never used one in
personal or professional C++ programming. Am I a holdover
to days gone by?
I have never used a GC for C++, yet in none of my C++ projects
(professional or hobby) in the last 5+ years have I had a
memory leak. I often use tools such as valgrind and AQTime to
check for memory leaks, and they have yet to report any leak.
And I wrote C and assembler for some 20 years before starting in
C++, and I never had a memory leak in them, either. From
experience, I'd say that you can either write correct programs
in any language, or you can't do it in any language. The
question is one of cost: how much effort does it take?

Garbage collection is a tool which reduces the amount of coding
necessary in some specific cases. As such, it would be foolish
not to avail oneself of it when appropriate. And it would be
just as foolish to expect it to suddenly solve all your
problems, just like that.
There just exists a *style* of programming in C++ which very
naturally leads to encapsulated, clean and safe code. (This
style is drastically different from how, for example, Java
programming is usually done.)
There are many styles of programming, both in C++ and in any
other language. Many of them don't lead to encapsulated, clean
and safe code. A few do. I use very similar styles of
programming in both Java and C++, but I've seen some pretty bad
code in both as well.

Of course, I don't quite see the relevance to garbage collection
in this. Quite obviously, you can write bad code with garbage
collection. And just as obviously, you can write bad code
without garbage collection.
One situation where GC *might* help a bit is in efficiency if
you are constantly allocating and deallocating huge amounts of
small objects in tight loops. 'new' and 'delete' in C++ are
rather slow operations, and a well-designed GC might speed
things up.
However, part of my C++ programming style just naturally also
avoids doing tons of news and deletes in tight loops (which
is, again, very different from eg. Java programming where you
basically have no choice). Thus this has never been a problem
in practice.
There are applications, of course, where you don't have the
choice; where you're developing dynamic structures (graphs,
etc.). But even there, the real gain is not in runtime (which
will rarely differ more than 10%-15% one way or the other), but
in development time. If the design determines that object
lifetime can be non-determinate, then you don't have to worry
about it when coding.

Obviously, in languages in which every object is allocated
dynamically, such objects are an overwhelming majority---almost
by definition, a value type doesn't have a determinate lifetime.
In C++, typically, objects without a determinate lifetime tend
more to be special cases: value types with variable size, or
which are too expensive to copy, even when you'd like to, or
"agents": small polymorphic objects without much, if any, state,
which are passed around, typically so that an object A can do
something dependent on the actual type of another object B.
While such objects aren't the majority, at least not in most C++
programs, they do occur often enough to make it worth having
garbage collection in your toolkit. Every little bit helps, as
they say.
Even if I some day stumble across a situation where constant
allocations and deallocations are impacting negatively the
speed of a program, and there just isn't a way around it, I
can use a more efficient allocator than the default one used
by the compiler. (I have measured speedups to up to over 8
times when using an efficient allocator compared to the
default one.)
You seem to be misunderstanding the argument. There are
specific times when garbage collection might be chosen for
performance reasons, but they are fairly rare, and as you say,
you can also optimize the performance of manual schemes. The
main argument for garbage collection is greater programmer
efficiency.

(There is a second argument, involving security. The fact that
the underlying memory of an object cannot be used for any other
object as long as there remains a pointer to the original
object. In an ideal system, of course, there wouldn't be such
pointers. But in practice, it's nice that you can make behavior
defined even if they happen to occur.)

--
James Kanze (GABI Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #68
Razii wrote:
Also, according to the site
above, in C++ memory is never returned to the operating system (at
least on older OSes)
Which "older OS"? Some 30yo?
Jun 27 '08 #69
Matthias Buelow wrote:
Juha Nieminen wrote:
> I have my doubts that this "liberating" style of programming somehow
automatically leads to clean, modular and abstract designs. All the
contrary, I would even claim that at least in some cases it leads to the
opposite direction ("reckless programming").

And your point is? The simple fact is that bad programmers write bad
code, and good programmers write good code, no matter what the language
is or whether memory is managed automatically or manually.
My point is that the style of C++ programming which produces safe code
tends to also produce good code in other respects as well. Thus there's
a positive side-effect to not having GC in this case.

I often find that even quick&dirty small programs to perform minuscule
tasks look good because of this style of programming, and it doesn't
even necessarily require any extra work (mainly because usually the STL
can be used for data management).
Jun 27 '08 #70
Razii wrote:
On Fri, 11 Apr 2008 03:35:27 +0300, Juha Nieminen
<no****@thanks.invalid> wrote:
>Razii wrote:
>>In C++, each "new" allocation request
will be sent to the operating system, which is slow.
That's blatantly false.

Well, my friend, I have proven you wrong. Razi has been victorious
once again :)
Uh? I said that the claim that each "new" performs an OS call is
blatantly false. How does your program prove that wrong?

Your C++ program does *not* perform a system call for each executed
new, and I can prove that.
Jun 27 '08 #71
On Apr 10, 11:56 pm, Juha Nieminen <nos...@thanks.invalid> wrote:
Pascal J. Bourguignon wrote:
Now, on another level, I'd say that the problem is that this safe
style you must adopt in C++ to avoid memory leaks is a burden that
forces you to spend your energy in a sterile direction.
I strongly disagree. This style of programming almost
automatically leads to a clean, abstract, encapsulated design,
which is only a good thing.
We must have had to maintain C++ code written by different
people:-). I don't think you can really claim that C++ forces
you to write clean, abstract, encapsulated code. (Nor does
Java, of course. Or any other language.)
Programmers using languages with GC (such as Java) might not need to
worry about where their dynamically-allocated memory is going, but
neither do I, in most cases. I even dare to claim that at least in some
cases this style of modular programming produces cleaner, simpler, more
understandable and in some cases even more efficient code than a
"mindlessly use 'new' everywhere" style of programming.
Woah. Garbage collection has absolutely nothing to do with how
often you use new. In fact, in many cases, it will result in a
lot less use of new than otherwise. It's much easier to share
representations if you don't have to worry about who's going to
delete it.
I honestly don't feel that I need to put any extra effort to
produce this kind of safe and clean code. Given that I usually
like the end result as a design, even if I have to put that
bit of extra effort it's totally worth it.
More dynamic languages, with a garbage collector, are
liberating your mind, so your free neurons can now think
about more interesting software problems (like for example,
AI, or providing a better user experience, etc).
I have my doubts that this "liberating" style of programming
somehow automatically leads to clean, modular and abstract
designs. All the contrary, I would even claim that at least in
some cases it leads to the opposite direction ("reckless
programming").
The whole argument is irrelevant. On both sides. Garbage
collection doesn't make a language "dynamic", at least not in
the usual sense. Weak typing does, and I would agree that most
of the time, this leads to less robust code. The fact that
everything derives from Object (or rather, that so many things
only know about Object) is a disaster when it comes to writing
robust code. But you can do this in C++ as well---some early
container libraries did. (I believe that the problems revealed
by such libraries were part of the motivation for adding
templates to the language. Although even before templates, many
of us found that using <generic.h>, as painful as that was, was
preferable to "everything is an Object".)

In practice (and I speak here from experience), using garbage
collection in C++ doesn't impact style that much. (And with the
exception of a few people like you and Jerry Collins, many of
those worried about its impact on style really do need to change
their style.) Even with garbage collection, the "default" mode
in C++ is values, not references or pointers (call them whatever
you want), with true local variables, on the stack, and not
dynamically allocated memory. As far as I know, there's never
been the slightest suggestion that this should be changed.

The result is that, unlike the case in Java, garbage collection
isn't necessary in C++. But it does save some effort in certain
cases. And it can be used to significantly enhance security in
others. (Dangling pointers can be a serious security problem,
see
http://searchsecurity.techtarget.com...265116,00.html.
And while garbage collection can't completely eliminate dangling
pointers in C++---you can still return a pointer to a local
variable---it can certainly help.)

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #72
Razii wrote:
On Fri, 11 Apr 2008 03:35:53 GMT, "Mike Schilling"
<ms*************@hotmail.com> wrote:
>And you're wrong, as has been demonstrated repeatedly. There's no
point trying to explain this any further.

There was nothing wrong with my basic premise that creating objects
with on heap is much faster in java than in c++. Google search
confirms..
Your quote confirms that your claim that each 'new' causes an OS call
is false.
Jun 27 '08 #73
Razii wrote:
That is not the topic. The topic is how the keyword "new" behaves.
You just don't get it, do you. Your original claim was that "new" in
C++ is slow "because each new calls the OS". That's just false. Each
"new" does *not* call the OS.
Jun 27 '08 #74
On Apr 11, 1:53 pm, Juha Nieminen <nos...@thanks.invalid> wrote:
Matthias Buelow wrote:
Juha Nieminen wrote:
I have my doubts that this "liberating" style of programming somehow
automatically leads to clean, modular and abstract designs. All the
contrary, I would even claim that at least in some cases it leads to the
opposite direction ("reckless programming").
And your point is? The simple fact is, that bad programmers
write bad code, and good programmers write good code, no
matter what the language is or whether memory is managed
automatically or manually.
My point is that the style of C++ programming which produces
safe code tends to also produce good code in other respects as
well. Thus there's a positive side-effect to not having GC in
this case.
That's a non sequitur. The style of Java programming which
produces safe code also tends to produce good code in other
respects as well. Globally, it's probably more difficult to
produce safe code in Java than in C++, but this is despite
garbage collection, not because of it. (In really safe code,
for example, public functions are almost never virtual, unless
the language has some sort of built-in support for PbC, a la
Eiffel. And of course, safe code is best served by strict type
checking---none of this "everything is an Object" business.)

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #75
James Kanze wrote:
You seem to be misunderstanding the argument. There are
specific times when garbage collection might be chosen for
performance reasons, but they are fairly rare, and as you say,
you can also optimize the performance of manual schemes. The
main argument for garbage collection is greater programmer
efficiency.
I understood the argument, and I already said that I don't feel I'm
programming inefficiently. The style of safe modular C++ programming
tends to produce clean designs. In many cases this is *not* at the cost
of increased development time.
In fact, if you can reuse your previously created modular code, the
development time may even decrease considerably. In my experience Java
tends to lead to "reckless programming" ("why should I encapsulate when
there's no need? the GC is handling everything"), which often doesn't
produce reusable code, so in the long run it may even be
counter-productive with respect to development times.
Jun 27 '08 #76
James Kanze <ja*********@gmail.com> writes:
On Apr 11, 1:53 pm, Juha Nieminen <nos...@thanks.invalid> wrote:
>Matthias Buelow wrote:
Juha Nieminen wrote:
> I have my doubts that this "liberating" style of programming somehow
automatically leads to clean, modular and abstract designs. All the
contrary, I would even claim that at least in some cases it leads to the
opposite direction ("reckless programming").
And your point is? The simple fact is, that bad programmers
write bad code, and good programmers write good code, no
matter what the language is or whether memory is managed
automatically or manually.
>My point is that the style of C++ programming which produces
safe code tends to also produce good code in other respects as
well. Thus there's a positive side-effect to not having GC in
this case.

That's a non sequitur. The style of Java programming which
produces safe code also tends to produce good code in other
respects as well. Globally, it's probably more difficult to
produce safe code in Java than in C++, but this is despite
garbage collection, not because of it. (In really safe code,
for example, public functions are almost never virtual, unless
the language has some sort of built-in support for PbC, a la
Eiffel. And of course, safe code is best served by strict type
checking---none of this "everything is an Object" business.)
Do you mean that in Java, once you've defined a method zorglub on a
class A, since everything is an object, you can send the message
zorglub on an instance of the class B that is not a subclass of A?
--
__Pascal Bourguignon__
Jun 27 '08 #77
On Apr 11, 2:06 pm, r...@zedat.fu-berlin.de (Stefan Ram) wrote:
Juha Nieminen <nos...@thanks.invalid> writes:
I don't see how this is so much different from what Java does.
»[A]llocation in modern JVMs is far faster than the best
performing malloc implementations. The common code path
for new Object() in HotSpot 1.4.2 and later is
approximately 10 machine instructions (data provided by
Sun; see Resources), whereas the best performing malloc
implementations in C require on average between 60 and 100
instructions per call (Detlefs, et. al.; see Resources).
And allocation performance is not a trivial component of
overall performance -- benchmarks show that many
real-world C and C++ programs, such as Perl and
Ghostscript, spend 20 to 30 percent of their total
execution time in malloc and free -- far more than the
allocation and garbage collection overhead of a healthy
Java application (Zorn; see Resources).«
Just for the record, this is typical marketing-speak.

First, of course, it's well known that an allocation in a
compacting garbage collector can be significantly faster than in
any non-compacting scheme (for variable length allocations).
It's also well known that the actual garbage collection sweep in
a compacting collector is more expensive than in a
non-compacting one. You don't get something for nothing.
Whether the trade-off is advantageous depends on the application.
(I suspect that there are a lot of applications where the
tradeoff does favor compacting, but there are certainly others
where it doesn't, and only talking about the number of
machine instructions in allocation, without mentioning other
costs, is disingenuous, at best.)
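The two fast paths being contrasted here can be sketched with a pair of toy allocators (hypothetical and purely illustrative; `BumpArena` and `FreeListArena` are made-up names, and neither resembles any real malloc or JVM). The bump pointer needs only an add and a compare, while the free list must search and bookkeep, which is roughly where the extra instructions go:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy bump-pointer allocator: the fast path a compacting collector enables.
// Allocation is a pointer increment plus a limit check.
struct BumpArena {
    std::vector<std::uint8_t> heap;
    std::size_t top = 0;
    explicit BumpArena(std::size_t bytes) : heap(bytes) {}
    void* allocate(std::size_t n) {
        if (top + n > heap.size()) return nullptr; // a real GC would collect and compact here
        void* p = &heap[top];
        top += n;
        return p;
    }
};

// Toy first-fit free-list allocator: the kind of search a non-compacting
// malloc must do, which is why its fast path needs more instructions.
struct FreeListArena {
    struct Block { std::size_t offset, size; };
    std::vector<std::uint8_t> heap;
    std::vector<Block> freeList;
    explicit FreeListArena(std::size_t bytes)
        : heap(bytes), freeList{{0, bytes}} {}
    void* allocate(std::size_t n) {
        for (auto& b : freeList) {
            if (b.size >= n) {            // first fit: linear search over holes
                void* p = &heap[b.offset];
                b.offset += n;
                b.size -= n;
                return p;
            }
        }
        return nullptr;
    }
    void release(void* p, std::size_t n) { // freeing creates a reusable hole
        freeList.push_back({static_cast<std::size_t>(
            static_cast<std::uint8_t*>(p) - heap.data()), n});
    }
};
```

The sketch deliberately omits the other side of the trade-off James describes: the compacting collector pays for its cheap `allocate` later, when it has to move live objects.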

Secondly, I've written C++ programs which spend 0% of their
total execution time in malloc and free, and I've had to fix one
which spent well over 99.9% of its time there. With such
variance, "average" loses all meaning. And the choice of Perl
and Ghostscript as "typical" C++ programs is somewhat
disingenuous as well; both implement interpreters for languages
which use garbage collection, so almost by definition, both
would benefit from garbage collection. (In fact, both probably
implement it somehow internally, so the percentage of time spent
in malloc/free is that of a garbage collected program.)

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #78
On Apr 11, 3:07 am, "Chris Thomasson" <cris...@comcast.net> wrote:
"Razii" <DONTwhatever...@hotmail.com> wrote in message
news:v5********************************@4ax.com...
On Thu, 10 Apr 2008 17:33:21 +0300, Juha Nieminen
<nos...@thanks.invalid> wrote:
However, part of my C++ programming style just naturally also avoids
doing tons of news and deletes in tight loops (which is, again, very
different from eg. Java programming where you basically have no choice)
However, Java allocates new memory blocks on its internal
heap (which is allocated in huge chunks from the OS). In
this way, in most of the cases it bypasses memory allocation
mechanisms of the underlying OS and is very fast. In C++,
each "new" allocation request will be sent to the operating
system, which is slow.
You are incorrect. Each call into "new" will "most-likely" be
"fast-pathed" into a "local" cache.
Are you sure about the "most-likely" part? From what little
I've seen, most C runtime libraries simply threw in a few locks
to make their 30 or 40 year old malloc thread safe, and didn't
bother with more. Since most of the marketing benchmarks don't
use malloc, there's no point in optimizing it. I suspect that
you're describing best practice, and not most likely. (But I'd
be pleased to learn differently.)

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #79
Juha Nieminen wrote:
I understood the argument, and I already said that I don't feel I'm
programming inefficiently. The style of safe modular C++ programming
tends to produce clean designs.
"The style of safe modular XYZ programming tends to produce clean
designs". Valid for any language. The thing is, even if you(r
programmers) have the time and expertise to produce such a shiny clean
design, it won't stay that way. You have to plan for the worst case,
which is, your program will degenerate into a nasty mess, which is
usually the case, over time.
In many cases this is *not* at the cost of increased development time.
If you use a relatively low-level language such as C++, this certainly
_does_ affect your productivity. I can't understand where you get the
idea that it doesn't. A more expressive language, preferably one
designed for or adaptable to the problem domain, will in many cases
produce a dramatic improvement in productivity over the kind of manual
stone-breaking one is doing in C++. High-level languages and
domain-specific languages are usually implemented with automatic memory
management because of the more abstract programming models being used.
You can design and write the cleanest, shiniest modular C++ code, some
mediocre programmer using a language that is more expressive for the
task at hand will beat you hands down. Maybe the reason why, as you
allege, some kind of "strict discipline" yields better results in C++ is
because this is the case for tasks where the language isn't that well
suited for, and one simply needs this discipline, otherwise one cannot
get useful results.
Jun 27 '08 #80
On Apr 11, 4:21 am, Matthias Buelow <m...@incubus.de> wrote:
Juha Nieminen wrote:
I have my doubts that this "liberating" style of programming somehow
automatically leads to clean, modular and abstract designs. All the
contrary, I would even claim that at least in some cases it leads to the
opposite direction ("reckless programming").

And your point is? The simple fact is, that bad programmers write bad
code, and good programmers write good code, no matter what the language
is or whether memory is managed automatically or manually.
But when Sun's marketing tells bad programmers they don't have to
worry about memory anymore, it's a disaster. They do have to worry,
just in a different way.
Jun 27 '08 #81
On Apr 11, 2:16 pm, Juha Nieminen <nos...@thanks.invalid> wrote:
James Kanze wrote:
You seem to be misunderstanding the argument. There are
specific times when garbage collection might be chosen for
performance reasons, but they are fairly rare, and as you say,
you can also optimize the performance of manual schemes. The
main argument for garbage collection is greater programmer
efficiency.
I understood the argument, and I already said that I don't
feel I'm programming inefficiently. The style of safe modular
C++ programming tends to produce clean designs. In many cases
this is *not* at the cost of increased development time.
But how is this relevant to garbage collection. Safe, modular
programming will reduce development times in all cases.
In fact, if you can reuse your previously created modular
code, the development time may even decrease considerably.
Certainly. The large standard library is certainly a plus for
Java.
In my experience Java tends to lead to "reckless programming"
("why should I encapsulate when there's no need? the GC is
handling everything"), which often doesn't produce reusable
code, so in the long run it may even be counter-productive
with respect to development times.
What does encapsulation have to do with garbage collection?
That's what I don't understand. Encapsulation is encapsulation.
It's a must, period. If I were to judge uniquely by the code
I've actually had to work on, I'd have to say that encapsulation
was better in Java. In fact, I know that you can do even better
in C++, despite the way some people program in C++. And I know
that the one Java project I worked on was an exception (in
general, not just as a Java project) in that it was well
managed, and that encapsulation was an important issue up front,
and that some Java programmers tend to ignore it.

If there's a difference, I'd say that it is more related to the
size of the projects in each language. In small projects,
there's less organization, and more temptation to just ignore
the rules. And for reasons which have nothing to do with
garbage collection, Java doesn't scale well to large projects,
and tends to be used for small, short lived projects. But that
has nothing to do with garbage collection.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Jun 27 '08 #82
lb*******@yahoo.com wrote:
But when Sun's marketing tells bad programmers they don't have to
worry abour memory anymore, it's a disaster. They do have to worry,
just in a different way.
Well... "marketing"... "bad programmers"...

Perhaps some of the more simple-minded specimens of "decision makers"
believe that Sun has solved the "memory problem". Maybe some of the less
experienced programmers believe that, too. However, that's not a problem
with automatic memory management or indeed any other technical issue.
You can't even blame Sun's marketing, it's their job, more or less.
Jun 27 '08 #83
On Apr 11, 2:37 pm, Matthias Buelow <m...@incubus.de> wrote:
Juha Nieminen wrote:
I understood the argument, and I already said that I don't feel I'm
programming inefficiently. The style of safe modular C++ programming
tends to produce clean designs.
"The style of safe modular XYZ programming tends to produce clean
designs". Valid for any language. The thing is, even if you(r
programmers) have the time and expertise to produce such a shiny clean
design, it won't stay that way. You have to plan for the worst case,
which is, your program will degenerate into a nasty mess, which is
usually the case, over time.
That's a self-fulfilling prophecy. The moment you plan to
produce a nasty mess, you will. What you should do is organize
the projects so that they don't degenerate into a nasty mess.
I've worked on one or two C++ projects where the quality of the
code actually improved each iteration.
In many cases this is *not* at the cost of increased
development time.
If you use a relatively low-level language such as C++, this
certainly _does_ affect your productivity.
If you insist on using the low-level features where they aren't
appropriate, this will definitely have a negative effect on your
productivity. But so will programming in Java without using the
standard library components (except those in java.lang). Modern
languages tend to be more or less modular, with more and more
abstraction pushed off into the library. C++ is an extreme case
in this regard.
I can't understand where you get the idea that it doesn't. A
more expressive language, preferably one designed for or
adaptable to the problem domain, will in many cases produce a
dramatic improvement in productivity over the kind of manual
stone-breaking one is doing in C++.
Unless you're writing low level libraries, you're not doing
manual stone-breaking in C++. The actual coding I did in Java
was much lower level than what I currently do in C++ (but I've
also done very low level coding in C++).
High-level languages and domain-specific languages are usually
implemented with automatic memory management because of the
more abstract programming models being used.
Modern languages generally provide automatic memory management,
because it is something that can be automated efficiently. It
helps, but it's not a silver bullet. And of course, some of us
regularly use automatic memory management in C++. Because C++
has value semantics, it's not as essential as in Java, but it
helps.
You can design and write the cleanest, shiniest modular C++
code, some mediocre programmer using a language that is more
expressive for the task at hand will beat you hands down.
Care to bet. Say, write some nice, simple to use components
which use complex arithmetic. (In this case, of course, the
difference isn't garbage collection, but operator overloading.
A necessity in any modern language, but missing in Java, or at
least, it was missing when I used Java.)
Maybe the reason why, as you allege, some kind of "strict
discipline" yields better results in C++ is because this is
the case for tasks where the language isn't that well suited
for, and one simply needs this discipline, otherwise one
cannot get useful results.
In general, a strict discipline is necessary in all programming.
I suspect that Juha's argument is that without this strict
discipline, a C++ program will typically crash (which generally
doesn't go unnoticed), where as a Java program will just give
wrong results. I disagree with his conclusions for several
reasons: most significantly, because with garbage collection in
C++, you can better guarantee the crashes, but also because I
don't see why good programmers should be deprived of a useful
tool simply because there are so many bad programmers around.
But the argument that strict discipline is good can't be
disputed.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Jun 27 '08 #84
Lew
Juha Nieminen wrote:
Razii wrote:
>That is not the topic. The topic is how the keyword "new" behaves.

You just don't get it, do you.
Don't feed the trolls. Especially don't feed them by being trollish in return.

--
Lew
Jun 27 '08 #85
Lew
Chris Thomasson wrote:
>Before I answer you, please try and answer a simple question:

If thread A allocates 256 bytes, and thread B races in and
concurrently attempts to allocate 128 bytes... Which thread is going
to win?
Both threads.

The semantics of the Java language guarantee that both allocations will
succeed. The JVM will not experience a race condition.
Oh yeah... Thread C tries to allocate just before thread B...
Which thread is going to win?
All three.
Think of how a single global pointer to a shared virtual memory range
can be distributed and efficiently managed...
*What* "global pointer" are you talking about? There is no "global pointer"
involved in Java's 'new' operator, at least not one that we as developers will
ever see. That is a detail of how the JVM implements 'new', and is of no
concern whatsoever at the language level.

FWIW, AIUI, each thread gets a local chunk of memory from which it allocates
its objects. Whether that's the technique any particular JVM uses is highly
irrelevant. What's important is that the semantics of the language makes
promises about the thread safety of construction.

<http://java.sun.com/docs/books/jls/third_edition/html/j3TOC.html>

--
Lew
Jun 27 '08 #86
On Fri, 11 Apr 2008 09:25:43 -0400, Lew <le*@lewscanon.com> wrote:
>Don't feed the trolls. Especially don't feed them by being trollish in return.
*PLONK*

Jun 27 '08 #87
On Fri, 11 Apr 2008 04:11:51 -0700 (PDT), asterisc <Ra*******@ni.com>
wrote:
>What's the point allocating 1.000.000 new objects and never use them?
Huh? The point is to benchmark the speed of creating objects with the
operator new and compare it with C++ version. it's about 10 times
faster than in c++.
>Why doesn't Java optimize this and allocate nothing, since they are
not used?
They are used as you are printing them in the loop with i % 500000.
Why doesn't the C++ compiler optimize them?
>Can you overwrite the new operator in Java?
Or are you comparing std::new with Java's new?
Why would I want to do something as stupid as overwrite new operator?
>You forget that Java's GC is implemented in a lower-level programming
language, as C or C++. It cannot be faster than an implementation in C/
C++.
It's irrelevant what language is used to write compilers/VM or GC. The
speed of execution depends on the code produced and the design of the
language. If you retrofit C++ with GC using a library, you can't use
optimizations that require the compiler to know the details of the
memory allocator and collector, i.e. the GC. This is not possible if
the GC has been retrofitted onto the language as a library. The
compiler's optimizer does not have the necessary information to make
the optimizations.

"Because it is hard to move objects for C programs", i.e. retrofitting
limits choices which limits performance.

"Many Java/ML/Scheme implementations have faster garbage collectors
that may move objects..." - Hans Boehm
http://www.hpl.hp.com/personal/Hans_...l/slide_4.html
Jun 27 '08 #88
On Fri, 11 Apr 2008 14:50:56 +0300, Juha Nieminen
<no****@thanks.invalid> wrote:
>Also, according the site
above, in C++ memory is never returned to the operating system (at
least the older OS)

Which "older OS"? Some 30yo?
How about mobile and embedded devices that don't have sophisticated
memory management? If a C++ application is leaking memory, the memory
might never be returned even after the application is terminated.
This is more dangerous than a memory leak in a Java application, where
all memory is returned by the VM after the application is terminated.
Jun 27 '08 #89
Razii wrote:
On Fri, 11 Apr 2008 14:50:56 +0300, Juha Nieminen
<no****@thanks.invalid> wrote:
>>Also, according the site
above, in C++ memory is never returned to the operating system (at
least the older OS)
Which "older OS"? Some 30yo?

How about mobile and embedded devices that don't have sophisticated
memory management?
It's not uncommon for such devices to use a static design, so there's no
chance of a leak.
If a C++ application is leaking memory, the memory
might never be returned even after the application is terminated.
You'd be surprised how much effort goes into memory management on
embedded devices. Memory is a more precious commodity in the embedded
world.

--
Ian Collins.
Jun 27 '08 #90
Razii wrote:
On Thu, 10 Apr 2008 20:37:59 -0500, Razii
<DO*************@hotmail.com> wrote:
>int main(int argc, char *argv[]) {

clock_t start=clock();
for (int i=0; i<=10000000; i++) {
Test *test = new Test(i);
if (i % 5000000 == 0)
cout << test;
}

If I add delete test; to this loop it gets faster. huh? what the
exaplanation for this?

2156 ms

and after I add delete test; to the loop

1781 ms

why is that?
Due to caching at various levels of the memory hierarchy, accesses to
recently referenced virtual addresses are often a lot faster than
accesses to new ones. The original C++ code requested 10,000,000
distinct Test-sized memory allocations with no reuse. With "delete
test;" the memory allocator can reissue the same piece of memory for
each "new" operation.

In addition, the sheer amount of memory being allocated in the original
C++ program may have required some system calls to get additional
allocatable memory.

The JVM is free to reuse the virtual memory previously occupied by an
unreachable Test object, so the version with "delete test;" is a bit
more comparable to the Java program.

This illustrates the basic problem with snippet benchmarks. In modern
computers the performance of small operations depends on their context.
Taking them out of context is not realistic.
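The reuse effect described above can be illustrated with a small (deliberately leaky) timing sketch. The `run` harness and the `Test` struct here are made up for illustration; absolute numbers will vary wildly with compiler and allocator, so treat this as a demonstration of the idea rather than a benchmark:

```cpp
#include <cassert>
#include <ctime>

// Minimal stand-in for the Test class from the benchmark snippet.
struct Test {
    int i;
    explicit Test(int v) : i(v) {}
};

// Times n heap allocations. When freeEach is true the object is deleted
// immediately, letting the allocator hand back the same hot, recently
// cached block on every pass; when false, every allocation touches fresh
// memory (and the sketch intentionally leaks, as the original loop did).
long run(int n, bool freeEach) {
    std::clock_t start = std::clock();
    for (int i = 0; i < n; ++i) {
        Test* t = new Test(i);
        if (freeEach) {
            delete t;  // enables reuse of the same block
        }
    }
    return static_cast<long>(std::clock() - start);
}
```

On many allocators the `freeEach` variant comes out faster for large n, which is the counter-intuitive result Razii observed.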

Patricia
Jun 27 '08 #91
Stefan Ram wrote:
»Your essay made me remember an interesting phenomenon I
saw in one system I worked on. There were two versions of
it, one in Lisp and one in C++. The display subsystem of
the Lisp version was faster. There were various reasons,
but an important one was GC: the C++ code copied a lot of
buffers because they got passed around in fairly complex
ways, so it could be quite difficult to know when one
could be deallocated. To avoid that problem, the C++
programmers just copied.
I suppose you can compare incompetent C++ programmers with lisp
programmers (in the sense that lisp may lead, by its very nature, to
efficient code without the need to create complicated designs). However,
generalizing from this that lisp produces faster code than C++ is a bit
unfair.
A lot of us thought in the 1990s that the big battle would
be between procedural and object oriented programming, and
we thought that object oriented programming would provide
a big boost in programmer productivity. I thought that,
too. Some people still think that. It turns out we were
wrong. Object oriented programming is handy dandy, but
it's not really the productivity booster that was
promised. The real significant productivity advance we've
had in programming has been from languages which manage
memory for you automatically.
Claiming that OOP has not improved productivity significantly is quite
far-fetched.

(Personally I feel there has been a counter-movement against OOP: In
the late 80's and early 90's OOP was the fad and the extreme hype. While
it did indeed improve productivity a lot, it was not, however, the final
silver bullet. In other words, in some ways it was a bit of a
disappointment after all that hype. This produced an odd
counter-reaction in some circles who, for some reason, can only see what
OOP did *not* deliver and close their eyes to all that it did. It's a
kind of anti-hype as a post-reaction to the hype. IMO this kind of
counter-movement is stupid and misguided.)
Jun 27 '08 #92
On Thu, 10 Apr 2008 22:57:44 -0500, Razii
<DO*************@hotmail.com> wrote, quoted or indirectly quoted
someone who said :
>
I am not sure what you mean by that ... can you post the code?
You can download the source of the JVM if you sign some sort of
agreement. That may be waived now.

All Java "new" has to do is something like this in assembler:

addressOfNewObject = nextFreeSlot;

nextFreeSlot += sizeOfObject;

if ( nextFreeSlot > limit ) { garbageCollect(); }

fill( addressOfNewObject, sizeOfObject, 0 );
--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #93
On Thu, 10 Apr 2008 22:36:54 -0700, "Chris Thomasson"
<cr*****@comcast.net> wrote, quoted or indirectly quoted someone who
said :
>If thread A allocates 256 bytes, and thread B races in and concurrently
attempts to allocate 128 bytes... Which thread is going to win?
If all threads share a common heap, new code would have to be
synchronised. If you have that synchronisation, you could use a
single global counter to generate a hashCode. Keep in mind we
are talking assembler here. This is the very guts of the JVM. You can
take advantage of an assembler atomic memory increment instruction for
very low overhead synchronisation.

You could invent a JVM where each thread has its own mini-heap for
freshly created objects. Then it would not need to synchronise to
allocate an object. Long lived objects could be moved to a common
synchronised heap.
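The atomic-increment scheme described above can be sketched in C++. This `AtomicBumpHeap` is a hypothetical, purely illustrative class (not how any particular JVM is implemented; real VMs layer per-thread buffers and garbage collection on top of something like it). Threads claim space from a shared arena with a single atomic add, so no mutex is needed on the allocation fast path:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Lock-free bump allocator over a fixed arena. fetch_add atomically
// reserves a unique range for the calling thread, so two racing threads
// can never be handed overlapping memory.
class AtomicBumpHeap {
public:
    explicit AtomicBumpHeap(std::size_t bytes) : heap_(bytes), top_(0) {}

    void* allocate(std::size_t n) {
        // One atomic increment is the entire synchronisation cost.
        std::size_t offset = top_.fetch_add(n, std::memory_order_relaxed);
        if (offset + n > heap_.size()) {
            return nullptr;  // arena exhausted; a JVM would collect here
        }
        return &heap_[offset];
    }

private:
    std::vector<std::uint8_t> heap_;
    std::atomic<std::size_t> top_;
};
```

This also answers the "which thread wins?" question in the thread: both do, each receiving a disjoint range.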

JVMs have extreme latitude to do things any way they please so long
as the virtual machine behaves in a consistent way.
--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #94
On Thu, 10 Apr 2008 22:42:00 -0700, "Chris Thomasson"
<cr*****@comcast.net> wrote, quoted or indirectly quoted someone who
said :
>Oh yeah...
Get stuffed. If you want people to take time to explain things to
you, get that chip off your shoulder.
--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #95
asterisc wrote:
Why doesn't Java optimize this and allocate nothing, since they are
not used?
An interesting idea. Responding to your earlier request for an output
line for each deletion, I added a finalize method. Then I cut down the
number of loops to 1000, and ran it. I got this:

init:
deps-jar:
Compiling 1 source file to
C:\Users\Brenden\Dev\misc\FinalizeTest\build\classes
compile:
run:
finalizetest.Main@1eed786
Time: 2 ms
BUILD SUCCESSFUL (total time: 0 seconds)
Hmmm..... although increasing the loops does produce more output,
increasing the loops to one million only produces ~7k lines of total output.
Code below:
package finalizetest;

public class Main
{

private static final int LOOPS = 1000;
private static final int MODULUS = 5000000;

Main( int c )
{
count = c;
}

int count;

public static void main( String[] arg )
{

long start = System.currentTimeMillis();

for ( int i = 0; i < LOOPS; i++ )
{
Main test = new Main( i );
if ( i % MODULUS == 0 )
{
System.out.println( test );
}
}
long end = System.currentTimeMillis();
System.out.println( "Time: " + ( end - start ) + " ms" );

}

@Override
protected void finalize() throws Throwable
{
System.out.println( "Finalized: " + this );
super.finalize();
}
}
Jun 27 '08 #96
On Sat, 12 Apr 2008 00:09:32 GMT, Mark Space
<ma*******@sbc.global.net> wrote:
>Hmmm..... although increasing the loops does produce more output,
increasing the loops to one million only produces ~7k lines of total output.

There is no optimization. Increasing the loop to 10,000,000 produced
200 meg of output.

For 10,000,000 objects...

protected void finalize() throws Throwable
{
//System.out.println( "Finalized: " + this );
super.finalize();
}
Time: 24953 ms

huh? Removing super.finalize(); it went back to Time: 172 ms
Jun 27 '08 #97
On Apr 11, 11:44 pm, Razii <DONTwhatever...@hotmail.com> wrote:
Which "older OS"? Some 30yo?

How about mobile and embedded devices that don't have sophisticated
memory management? If a C++ application is leaking memory, the memory
might never be returned even after the application is terminated.
This is more dangerous than memory leak in Java application, where,
after the application is terminated, all memory is returned by VM.
If the VM is able to return memory to the OS, the C++ runtime should be able to as well.

Mirek
Jun 27 '08 #98
On Apr 10, 1:57 pm, Lloyd Bonafide <nos...@nicetry.org> wrote:
I followed a link to James Kanze's web site in another thread and was
surprised to read this comment by a link to a GC:

"I can't imagine writing C++ without it"

How many of you c.l.c++'ers use one, and in what percentage of your
projects is one used? I have never used one in personal or professional
C++ programming. Am I a holdover to days gone by?


I guess you can stay calm. I was always surprised by James Kanze's
attitude toward GC too, especially since he is writing mission-critical
apps and the only GC he can, AFAIK, use is a conservative GC (what will
happen if somebody attacks his app with white-noise data? :)

Mirek
Jun 27 '08 #99
On Apr 11, 3:37 am, Razii <DONTwhatever...@hotmail.com> wrote:
On Fri, 11 Apr 2008 03:35:27 +0300, Juha Nieminen

<nos...@thanks.invalid> wrote:
Razii wrote:
In C++, each "new" allocation request
will be sent to the operating system, which is slow.
That's blatantly false.

Well, my friend, I have proven you wrong. Razi has been victorious
once again :)

Time: 2125 ms (C++)
Time: 328 ms (java)
What do you think you have proved? I see that the MSVC allocator is
slower than Java's, all right. But you have not proved that it is
calling the system directly.

Besides, the code is not equivalent.
>
--- c++--

#include <ctime>
#include <cstdlib>
#include <iostream>

using namespace std;

class Test {
public:
Test (int c) {count = c;}
virtual ~Test() { }
int count;

};

int main(int argc, char *argv[]) {

clock_t start=clock();
for (int i=0; i<=10000000; i++) {
Test *test = new Test(i);
if (i % 5000000 == 0)
cout << test;
// you need to add delete here, because Java does not hold a
// reference to the created object, so it gets GCed soon
delete test;
}
clock_t endt=clock();
std::cout <<"Time: " <<
double(endt-start)/CLOCKS_PER_SEC * 1000 << " ms\n";

}

-- java ---

import java.util.*;

class Test {
Test (int c) {count = c;}
int count;

public static void main(String[] arg) {

long start = System.currentTimeMillis();

for (int i=0; i<=10000000; i++) {
Test test = new Test(i);
if (i % 5000000 == 0)
System.out.println (test);
}
long end = System.currentTimeMillis();
System.out.println("Time: " + (end - start) + " ms");

}

}
If you have time, fix and try with U++.... (it has overloaded
new/delete).

(Note: I do not know what the result will be, I guess it will still be
slower than Java, I am just interested :)

Mirek
Jun 27 '08 #100
