Do you use a garbage collector?

I followed a link to James Kanze's web site in another thread and was
surprised to read this comment by a link to a GC:

"I can't imagine writing C++ without it"

How many of you c.l.c++'ers use one, and in what percentage of your
projects is one used? I have never used one in personal or professional
C++ programming. Am I a holdover to days gone by?
Apr 10 '08 #1
Lloyd Bonafide wrote:
I followed a link to James Kanze's web site in another thread and was
surprised to read this comment by a link to a GC:

"I can't imagine writing C++ without it"

How many of you c.l.c++'ers use one, and in what percentage of your
projects is one used? I have never used one in personal or professional
C++ programming. Am I a holdover to days gone by?
I can't speak for professional C++ programming or the like, but
I've also never used a GC for any of my projects (and those include
ones with several months of development time and non-trivial structure).

I've never seen the need for one, and in fact I'm rather happy if I can do
the memory management explicitly rather than leave it to a GC, which feels
cleaner to me (BTW, I have only once needed to do reference counting, so
for those cases the "refcounting requires more time than GC" argument
doesn't apply).

Daniel

--
Done: Bar-Sam-Val-Wiz, Dwa-Elf-Hum-Orc, Cha-Law, Fem-Mal
Underway: Ran-Gno-Neu-Fem
To go: Arc-Cav-Hea-Kni-Mon-Pri-Rog-Tou
Apr 10 '08 #2
Lloyd Bonafide wrote:
I followed a link to James Kanze's web site in another thread and was
surprised to read this comment by a link to a GC:

"I can't imagine writing C++ without it"

How many of you c.l.c++'ers use one, and in what percentage of your
projects is one used? I have never used one in personal or professional
C++ programming. Am I a holdover to days gone by?
I use RAII. I've had memory leaks, but they're usually due to some
third-party library I didn't understand adequately. I was never able to
figure out why AQTime was reporting large chunks of lost memory in my
use of the sqlite3 library, for example. We abandoned its use.
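
(For readers who haven't met the idiom: a minimal sketch of what RAII looks
like in practice, added here for illustration. The names and file path are
purely illustrative, not taken from any post in this thread.)

#include <fstream>
#include <memory>
#include <vector>

// Each resource is owned by an object whose destructor releases it,
// so cleanup happens when the owner goes out of scope -- even on exceptions.
void writeReport(const char* path)
{
    std::ofstream out(path);                                  // closed by ~ofstream
    auto buffer = std::make_unique<std::vector<int>>(1024);   // freed by ~unique_ptr
    out << "entries: " << buffer->size() << '\n';
}   // no close(), no delete needed here

int main()
{
    writeReport("report.txt");
}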
Apr 10 '08 #3
Pascal J. Bourguignon wrote:
More dynamic languages, with a garbage collector, are liberating your
mind, so your free neurons can now think about more interesting
software problems (like for example, AI, or providing a better user
experience, etc).
Hey now, what's this nonsense against the pure joy when your program looks
totally cryptic, proves that you know the fine details of an 800-page
standard, and runs an infinite loop in under 3 seconds...
Apr 10 '08 #4
On Apr 10, 10:03 am, p...@informatimago.com (Pascal J. Bourguignon)
wrote:
More dynamic languages, with a garbage collector, are liberating your
mind, so your free neurons can now think about more interesting
software problems (like for example, AI, or providing a better user
experience, etc).
You still have to worry about leaking memory (and dereferencing null
pointers) in languages like Java.

http://www.ibm.com/developerworks/li...aks/index.html
Apr 10 '08 #5
Pascal J. Bourguignon wrote:
>
Now, on another level, I'd say that the problem is that this safe
style you must adopt in C++ to avoid memory leaks is a burden that
forces you to spend your energy in a sterile direction.

More dynamic languages, with a garbage collector, are liberating your
mind, so your free neurons can now think about more interesting
software problems (like for example, AI, or providing a better user
experience, etc).
More dynamic languages, with a garbage collector, are liberating your
mind from worrying about lost resources until one day you realise that
you've run out of GDI objects, or left a file open, or something, and it
won't go away until the garbage collector decides to clean up the
referencing object. Which it won't do, because you haven't run out of
memory.

GC isn't a magic bullet. You have to know its limitations.
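
(The usual C++-side counterpoint here is that destructors release non-memory
resources at a known point. A minimal sketch, added for illustration, of
wrapping a raw C handle so it is freed deterministically -- generic, not
GDI-specific, and not code from the original post:)

#include <cstdio>

// Owns a FILE* and closes it as soon as the wrapper leaves scope,
// instead of waiting for a collector to finalize the owner.
class FileHandle {
public:
    FileHandle(const char* name, const char* mode) : f_(std::fopen(name, mode)) {}
    ~FileHandle() { if (f_) std::fclose(f_); }
    FileHandle(const FileHandle&) = delete;             // sole owner of the handle
    FileHandle& operator=(const FileHandle&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

int main()
{
    FileHandle log("scratch.txt", "w");
    if (log.get())
        std::fputs("released the moment this scope ends\n", log.get());
}   // fclose runs here, deterministically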

Andy
Apr 10 '08 #6
Sam
Lloyd Bonafide writes:
I followed a link to James Kanze's web site in another thread and was
surprised to read this comment by a link to a GC:

"I can't imagine writing C++ without it"

How many of you c.l.c++'ers use one, and in what percentage of your
projects is one used? I have never used one in personal or professional
C++ programming.
Neither have I.
Am I a holdover to days gone by?
Probably, if by that you mean "days when young and budding programmers were
actually taught how to program in C++ correctly, instead of some other
language that resembled C++".


Apr 10 '08 #7
Razii wrote:
In C++, each "new" allocation request
will be sent to the operating system, which is slow.
That's blatantly false.

At least with the memory allocation logic in Linux's libc (and I guess
in most other unix systems as well) the only system call that is made is
brk(), which increments the size of the heap. AFAIK this is done in
relatively large chunks to avoid every minuscule 'new' causing a brk()
call. Thus the OS is called relatively rarely. The actual memory
allocation of the blocks allocated with 'new' (or 'malloc') is done by
the routines in libc, not by the OS.

I don't know how Windows compilers do it, but I bet it's something
very similar to this.

I don't see how this is so much different from what Java does.
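
(One way to check this empirically -- a suggestion added here, not part of
the original post -- is to run a small allocation loop under strace and
count the brk/mmap calls yourself:)

// alloc_trace.cpp
// Build and trace:  g++ alloc_trace.cpp -o alloc_trace
//                   strace -e trace=brk,mmap ./alloc_trace
// If every 'new' really went to the OS you would expect on the order of a
// million brk/mmap lines; in practice only a handful show up, because the
// allocator requests memory from the kernel in large chunks.
int main()
{
    volatile long sum = 0;
    for (int i = 0; i < 1000000; ++i) {
        int* p = new int(i);   // served from the allocator's pool
        sum += *p;             // keep the allocation observable
        delete p;              // goes back to the pool, not to the kernel
    }
    return sum != 0 ? 0 : 1;
}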
Apr 11 '08 #8

"Razii" <DO*************@hotmail.comwrote in message
news:99********************************@4ax.com...
On Thu, 10 Apr 2008 21:43:45 -0400, Arne Vajhřj <ar**@vajhoej.dk>
wrote:
>>I can not imagine any C++ runtime that makes an operating system
call for each new.

The runtime allocates huge chunks from the OS and then manage
it internally.

Testing the keyword "new"

Time: 2125 ms (C++)
Time: 328 ms (java)

Explain that.
The C++ allocator is less efficient. There's a big difference between that
and "makes an OS call for each allocation". In fact, given how highly
optimized Java allocators are these days, the fact that C++ is taking less
than 7 times as long proves that it's *not* making an OS call.
Apr 11 '08 #9
In article <Xn*****************************@194.177.96.26>,
no****@nicetry.org says...
I followed a link to James Kanze's web site in another thread and was
surprised to read this comment by a link to a GC:

"I can't imagine writing C++ without it"

How many of you c.l.c++'ers use one, and in what percentage of your
projects is one used? I have never used one in personal or professional
C++ programming. Am I a holdover to days gone by?
I have used one, but not in quite a while. IIRC, it was in the late '90s
or so when I tried it. I was quite enthused for a little bit, but my
enthusiasm slowly faded, and I think it's been a couple of years or so
since the last time I used it at all. I've never really made any kind of
final decision that I'll never use it again, or anything like that, but
haven't done anything for which I'm convinced it would be at all likely
to provide any real advantage either.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Apr 11 '08 #10
Razii wrote:
On Thu, 10 Apr 2008 21:43:45 -0400, Arne Vajhøj <ar**@vajhoej.dk>
wrote:
>I can not imagine any C++ runtime that makes an operating system
call for each new.

The runtime allocates huge chunks from the OS and then manage
it internally.

Testing the keyword "new"

Time: 2125 ms (C++)
Time: 328 ms (java)

Explain that. What I am doing different in java than in c++? Code
below..
If you can prove that the only way to write inefficient code is to make
OS calls, then you have made your point.

But as everyone knows it is not, so your argument is completely bogus.

Arne
Apr 11 '08 #11
Sam
Razii writes:
On Thu, 10 Apr 2008 21:43:45 -0400, Arne Vajhøj <ar**@vajhoej.dk>
wrote:
>>I can not imagine any C++ runtime that makes an operating system
call for each new.

The runtime allocates huge chunks from the OS and then manage
it internally.
Testing the keyword "new"

Time: 2125 ms (C++)
Time: 328 ms (java)

Explain that. What I am doing different in java than in c++? Code
below..
What you're doing differently in Java is that you're using the part of the
language that it's optimized for. Java is, generally, optimized for fast
instantiation of discrete objects on the heap, because creating a huge
number of objects is unavoidable in Java, and they all have to be allocated
on the heap, due to the nature of the language itself.

On the other hand, in most use cases C++ does not require instantiating as
many discrete objects on the heap as Java does for an equivalent task. In
C++, most -- if not all -- of these objects can easily be allocated on the
stack.

So, if you were to do a fair comparison, you should benchmark instantiation
of Java objects, on the heap, against instantiation of C++ objects on the
stack.

And then benchmark the comparable memory usage, to boot :-)

In your C++ example, there was no reason whatsoever to instantiate your C++
test object on the heap. What does that accomplish, besides a memory leak?
That's just the Java way of doing things, but, in C++ you have the option of
instantiating objects on the stack, so try benchmarking this instead:

#include <iostream>
#include <ctime>
using namespace std;

// Test is the class from Razii's earlier benchmark post
// (a stand-in definition is sketched just below).

int main(int argc, char *argv[]) {
    clock_t start = clock();
    for (int i = 0; i <= 10000000; i++)
    {
        Test testObj(i);

        if (i % 5000000 == 0)
            cout << &testObj << endl;
    }

    clock_t endt = clock();
    cout << "Time: "
         << double(endt - start) / CLOCKS_PER_SEC * 1000 << " ms\n";
}

See how well /that/ benchmarks against Java :-)
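
(Razii's Test class itself isn't reproduced in this thread, so to compile the
snippet above you need something in its place. A purely assumed stand-in,
which may well differ from the real class, would be:)

class Test {
public:
    explicit Test(int n) : n_(n) {}    // assumed: just stores the loop counter
private:
    int n_;
};

(With that in place, g++ bench.cpp -o bench && ./bench runs the benchmark;
timings will of course vary with compiler and optimization settings.)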


Apr 11 '08 #12
On Fri, 11 Apr 2008 02:04:18 GMT, "Mike Schilling"
<ms*************@hotmail.com> wrote:
In fact, given how highly
optimized Java allocators are these days, the fact that C++ is taking less
than 7 times as long proves that it's *not* making an OS call.
with java -server, the C++ version is 12 times slower...

Time: 2125 ms (g++)
Time: 172 ms (java - server)

Apr 11 '08 #13
On Thu, 10 Apr 2008 19:24:34 -0700, Arne Vajhøj <ar**@vajhoej.dk> wrote:
[...]
But as everyone knows it is not, so your argument is completely bogus.
So what else is new?

What I can't figure out is, why do people continue to feed this troll,
especially when he keeps cross-posting these dopey threads to both the
C++ and Java newsgroups?

I've got as many thread filters to junk stuff he started as I do for
spam. He has really suckered a bunch of you in (including people I'm
surprised could be suckered).
Apr 11 '08 #14
On Fri, 11 Apr 2008 14:41:58 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>Does the Java allocator/GC combination recycle the objects in the loop?
I doubt it (not sure though). It ends too fast for that. If it were a
longer-running program, then the GC would probably kick in.
Apr 11 '08 #15
On Thu, 10 Apr 2008 21:29:44 -0500, Sam <sa*@email-scan.com> wrote:
>See how well /that/ benchmarks against Java :-)

That is not the topic. The topic is how the keyword "new" behaves.

0x12ff5c
0x12ff5c
0x12ff5c
Time: 62 ms

All the references are the same -- not the same output as in the last
version (or java version).

Apr 11 '08 #16
On Thu, 10 Apr 2008 21:29:44 -0500, Sam <sa*@email-scan.com> wrote:
>Java is, generally, optimized for fast
instantiation of discrete objects on the heap, because creating a huge
number of objects is unavoidable in Java, and they all have to be allocated
on the heap, due to the nature of the language itself.
As I said, Java allocates new memory blocks on its internal heap,
which is allocated in huge chunks from the OS. That's why "new" in
Java is 12 times faster than the C++ version. If there is any other
explanation, post it (I haven't seen one).

Apr 11 '08 #17
On Thu, 10 Apr 2008 20:31:49 -0500, Razii
<DO*************@hotmail.com> wrote, quoted or indirectly quoted
someone who said :
>Creating 10000000 new objects with the keyword 'new' in tight loop.
All Java has to do is add N (the size of the object) to a counter and
zero out the object. In C++ it also has to look for a hole the right
size and record it in some sort of collection. C++ typically does not
move objects once allocated. Java does.

In my C++ days, we used NuMega to find leaks, objects that were
allocated but never deleted even after there were no references to
them. We never got anywhere near nailing them all. With Java this is
automatic. You can't make that sort of screw up, though you can
packrat. See http://mindprod.com/jgloss/packratting.html
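
(Roedy's "add N to a counter" description is essentially bump-pointer
allocation. A toy C++ sketch of the idea, added purely for illustration --
a real JVM allocator also deals with zeroing, thread-local buffers and
falling back to a full collection:)

#include <cstddef>
#include <cstdint>
#include <new>
#include <vector>

// Allocation is just "advance an offset by N bytes"; nothing is freed
// individually -- the whole arena is recycled at once, which is roughly
// the job the copying/compacting collector does in a JVM.
class Arena {
public:
    explicit Arena(std::size_t bytes) : buf_(bytes), next_(0) {}
    void* allocate(std::size_t n) {
        n = (n + 7) & ~std::size_t(7);                  // keep 8-byte alignment
        if (next_ + n > buf_.size()) return nullptr;    // arena exhausted
        void* p = buf_.data() + next_;
        next_ += n;
        return p;
    }
    void reset() { next_ = 0; }                         // "collect" everything in O(1)
private:
    std::vector<std::uint8_t> buf_;
    std::size_t next_;
};

int main()
{
    Arena arena(1 << 20);                               // 1 MiB grabbed up front
    for (int i = 0; i < 1000; ++i) {
        void* raw = arena.allocate(sizeof(int));
        if (raw) new (raw) int(i);                      // no searching for a free hole
    }
    arena.reset();
}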
--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Apr 11 '08 #18

"Razii" <DO*************@hotmail.comwrote in message
news:sq********************************@4ax.com...
On Fri, 11 Apr 2008 02:04:18 GMT, "Mike Schilling"
<ms*************@hotmail.comwrote:
>In fact, given how highly
optimized Java allocators are these days, the fact that C++ is
taking less
than 7 times as long proves that it's *not* making an OS call.

with java -server, the C++ version is 12 times slower...

Time: 2125 ms (g++)
Time: 172 ms (java - server)
Which doesn't change the conclusion.
Apr 11 '08 #19
On Fri, 11 Apr 2008 15:39:36 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>runs in 90mS on my box (compared to ~940mS with default new).
Doesn't compile..
new.cpp:4:21: error: bad_alloc: No such file or directory
new.cpp:10: error: expected identifier before numeric constant
new.cpp:10: error: expected ',' or '...' before numeric constant
new.cpp: In static member function 'static void* Test::operator
new(size_t)':
new.cpp:19: error: invalid operands of types 'unsigned int' and 'const
unsigned
int ()(int)' to binary 'operator*'
new.cpp:22: error: ISO C++ forbids comparison between pointer and
integer
Apr 11 '08 #20
On Fri, 11 Apr 2008 03:24:16 GMT, Roedy Green
<se*********@mindprod.com.invalid> wrote:
>All Java has to do is add N (the size of the object) to a counter and
zero out the object.
I am not sure what you mean by that ... can you post the code?
Apr 11 '08 #21
On Thu, 10 Apr 2008 19:43:16 -0700, "Peter Duniho"
<Np*********@nnowslpianmk.com> wrote:
when he keeps cross-posting these dopey threads to both the
C++ and Java newsgroups?
Because the topic relates to both newsgroups. The topic was not started
by me. Juha Nieminen and many others were already discussing Java and
C++. I responded to their post, and since it's on topic in both
newsgroups, it's appropriate to cross-post it to both. There
is a reason why cross-posting is allowed on USENET, and you should use
it whenever appropriate.
Apr 11 '08 #22
On Thu, 10 Apr 2008 08:50:06 -0700 (PDT), lb*******@yahoo.com wrote:
>You still have to worry about leaking memory (and dereferencing null
pointers) in languages like Java.

http://www.ibm.com/developerworks/li...aks/index.html

True memory leaks are impossible, other than by holding
references to objects that are no longer needed. Also, according to the site
above, in C++ memory is never returned to the operating system (at
least on older OSes), even after the application is closed. This can
never happen in Java.
Apr 11 '08 #23
Razii wrote:
On Fri, 11 Apr 2008 15:39:36 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>runs in 90mS on my box (compared to ~940mS with default new).

Doesn't compile..
#include <new>
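
(Ian's actual example isn't reproduced in this thread -- only the compile
errors and the missing header are -- so the following is just a guess at the
general shape of such an optimisation: a class-specific operator new that
serves fixed-size slots from a preallocated pool and recycles freed slots
through a free list. All names here are illustrative.)

#include <cstddef>
#include <new>

class Test {
public:
    explicit Test(int n) : n_(n) {}
    static void* operator new(std::size_t);
    static void operator delete(void*) noexcept;
private:
    int n_;
};

namespace {
    const std::size_t kPoolSize = 1024;     // slots preallocated up front (size arbitrary here)
    union Slot { Slot* next; alignas(Test) unsigned char storage[sizeof(Test)]; };
    Slot pool[kPoolSize];
    Slot* freeList = nullptr;
    std::size_t used = 0;
}

void* Test::operator new(std::size_t) {
    if (freeList) {                         // reuse a slot freed earlier
        Slot* s = freeList;
        freeList = s->next;
        return s;
    }
    if (used == kPoolSize) throw std::bad_alloc();   // sketch: no fallback to ::new
    return &pool[used++];                   // bump within the preallocated block
}

void Test::operator delete(void* p) noexcept {
    Slot* s = static_cast<Slot*>(p);
    s->next = freeList;                     // push the slot onto the free list
    freeList = s;
}

int main()
{
    Test* t = new Test(42);                 // no call into the general-purpose allocator
    delete t;
}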

--
Ian Collins.
Apr 11 '08 #24
Lew
Peter Duniho wrote:
On Thu, 10 Apr 2008 19:24:34 -0700, Arne Vajhøj <ar**@vajhoej.dk> wrote:
>[...]
But as everyone knows it is not, so your argument is completely bogus.

So what else is new?

What I can't figure out is, why do people continue to feed this troll,
especially when he keeps cross-posting these dopey threads to both the
C++ and Java newsgroups?

I've got as many thread filters to junk stuff he started as I do for
spam. He has really suckered a bunch of you in (including people I'm
surprised could be suckered).
Amen to that, brother.

--
Lew
Apr 11 '08 #25
On Fri, 11 Apr 2008 03:35:53 GMT, "Mike Schilling"
<ms*************@hotmail.com> wrote:
>And you're wrong, as has been demonstrated repeatedly. There's no
point trying to explain this any further.
There was nothing wrong with my basic premise that creating objects
on the heap is much faster in Java than in C++. A Google search
confirms it...
---- quote---------
Creating heap objects in C++ is typically much slower because it's
based on the C concept of a heap as a big pool of memory that (and
this is essential) must be recycled. When you call delete in C++ the
released memory leaves a hole in the heap, so when you call new, the
storage allocation mechanism must go seeking to try to fit the storage
for your object into any existing holes in the heap or else you'll
rapidly run out of heap storage. Searching for available pieces of
memory is the reason that allocating heap storage has such a
performance impact in C++, so it's far faster to create stack-based
objects.

Again, because so much of C++ is based on doing everything at
compile-time, this makes sense. But in Java there are certain places
where things happen more dynamically and it changes the model. When it
comes to creating objects, it turns out that the garbage collector can
have a significant impact on increasing the speed of object creation.
This might sound a bit odd at first - that storage release affects
storage allocation - but it's the way some JVMs work and it means that
allocating storage for heap objects in Java can be nearly as fast as
creating storage on the stack in C++.

[...] In some JVMs, the Java heap is quite different; it's more like a
conveyor belt that moves forward every time you allocate a new object.
This means that object storage allocation is remarkably rapid. The
"heap pointer" is simply moved forward into virgin territory, so it's
effectively the same as C++'s stack allocation. (Of course, there's a
little extra overhead for bookkeeping but it's nothing like searching
for storage.)

Now you might observe that the heap isn't in fact a conveyor belt, and
if you treat it that way you'll eventually start paging memory a lot
(which is a big performance hit) and later run out. The trick is that
the garbage collector steps in and while it collects the garbage it
compacts all the objects in the heap so that you've effectively moved
the "heap pointer" closer to the beginning of the conveyor belt and
further away from a page fault. The garbage collector rearranges
things and makes it possible for the high-speed, infinite-free-heap
model to be used while allocating storage.

Apr 11 '08 #26

"Razii" <DO*************@hotmail.comwrote in message
news:ap********************************@4ax.com...
On Fri, 11 Apr 2008 03:35:53 GMT, "Mike Schilling"
<ms*************@hotmail.comwrote:
>>And you're wrong, as has been demonstrated repeatedly. There's no
point trying to explain this any further.

There was nothing wrong with my basic premise that creating objects
on the heap is much faster in Java than in C++.
Your premise was that each C++ "new" does a system call. That's
nonsense, as what you quote below demonstrates.
[the full quoted passage snipped -- it is identical to the text quoted in the previous post]

Apr 11 '08 #27
On Fri, 11 Apr 2008 01:26:33 -0400, Lew <le*@lewscanon.com> wrote:
>Don't feed the trolls.
The guy turned into a whiner (like stan) whose only contribution to a
legitimate thread is posting one-liners that are far more annoying than
legitimate discussion, like my posts. If he continues with that I will
PLONK him too. That will be the second one, after stan.

Apr 11 '08 #28
On Fri, 11 Apr 2008 14:41:58 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>Does the Java allocator/GC combination recycle the objects in the loop?
Oh, I figured it out... the flag is -verbose:gc

so trying again, this time with:

java -verbose:gc -server Test > log.txt

and log.txt includes:

Test@19f953d
[GC 896K->108K(5056K), 0.0018061 secs]
[GC 1004K->108K(5056K), 0.0004453 secs]
[GC 1004K->108K(5056K), 0.0001847 secs]
[... roughly 84 further GC lines of the same form, each reclaiming ~900K in about 0.00005 secs ...]
Test@e86da0
[GC 1004K->108K(5056K), 0.0000528 secs]
[... roughly 86 further GC lines of the same form ...]
Test@1975b59
Time: 172 ms
Apr 11 '08 #29
On Fri, 11 Apr 2008 17:26:01 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>That's not the point. The example was merely an illustration of how
object allocation can be optimised if required.
Where are these objects deleted? Does the time include that? As I
showed in the loop output, in the Java case the GC collects most of these
objects during the loop.
Apr 11 '08 #30
On Thu, 10 Apr 2008 20:37:59 -0500, Razii
<DO*************@hotmail.com> wrote:
>int main(int argc, char *argv[]) {

clock_t start=clock();
for (int i=0; i<=10000000; i++) {
Test *test = new Test(i);
if (i % 5000000 == 0)
cout << test;
}
If I add delete test; to this loop, it gets faster. Huh? What's the
explanation for this?

2156 ms

and after I add delete test; to the loop

1781 ms

why is that?
Apr 11 '08 #31
On Apr 11, 10:20 am, Razii <DONTwhatever...@hotmail.com> wrote:
On Thu, 10 Apr 2008 20:37:59 -0500, Razii

<DONTwhatever...@hotmail.com> wrote:
int main(int argc, char *argv[]) {
clock_t start=clock();
for (int i=0; i<=10000000; i++) {
Test *test = new Test(i);
if (i % 5000000 == 0)
cout << test;
}

If I add delete test; to this loop it gets faster. huh? what the
exaplanation for this?

2156 ms

and after I add delete test; to the loop

1781 ms

why is that?
Probably because of some compiler's optimizations.

When do you free memory in Java? Before or after you stop the clock?
Apr 11 '08 #32
Razii wrote:
On Thu, 10 Apr 2008 20:37:59 -0500, Razii
<DO*************@hotmail.com> wrote:
>int main(int argc, char *argv[]) {

clock_t start=clock();
for (int i=0; i<=10000000; i++) {
Test *test = new Test(i);
if (i % 5000000 == 0)
cout << test;
}

If I add delete test; to this loop it gets faster. huh? what the
exaplanation for this?
You have just disproved your original hypothesis. Memory is returned to
the allocator, so it doesn't have to keep fetching more from the system.
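
(A quick way to see what Ian means, added here for illustration: free and
re-allocate in a loop and print the addresses. On most implementations the
same block comes back each time, because the freed memory stays in the
process-level allocator instead of going back to the OS.)

#include <iostream>

int main()
{
    for (int i = 0; i < 5; ++i) {
        int* p = new int(i);
        std::cout << static_cast<void*>(p) << '\n';   // usually the same address every time
        delete p;                                     // block returns to the allocator's free list
    }
}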

--
Ian Collins.
Apr 11 '08 #33
Razii wrote:
On Fri, 11 Apr 2008 17:26:01 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>That's not the point. The example was merely an illustration of how
object allocation can be optimised if required.

Where are these objects deleted .. does the time includes that? As I
showed in the loop, in the case of java GC collects most of these
objects in the loop.
As I pointed out earlier, you are leaking memory. Didn't you realise
that? Do you read people's responses?

--
Ian Collins.
Apr 11 '08 #34
On Fri, 11 Apr 2008 00:31:30 -0700 (PDT), asterisc <Ra*******@ni.com>
wrote:
>When do you free memory in Java? Before or after you stop the clock?

You don't. When an object is no longer referenced by the program, the
GC will recycle it. The space is made available for subsequent new
objects.

Apr 11 '08 #35
On Fri, 11 Apr 2008 19:38:55 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>As I pointed out earlier, you are leaking memory. Didn't you realise
that?
In the original C++ version I was leaking memory and I knew that. I
deliberately made it that way since I wasn't sure whether the GC would run
at all in the Java version of such a short-running application. However,
as it turns out, adding delete makes the C++ version faster. If I had
known that, I would have added it.
Apr 11 '08 #36
On Fri, 11 Apr 2008 19:35:52 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>You have just disproved your original hypothesis.
My number one hypothesis was that "new" is faster in Java than in C++.
It doesn't behave the same way as in C++. I was speculating that the
reason for that is a call to the operating system in C++.
Apr 11 '08 #37
Razii wrote:
On Fri, 11 Apr 2008 19:35:52 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>You have just disproved your original hypothesis.

My number one hypothesis was that "new" is faster in java than in c++.
It doesn't behave the same way as in c++. I was speculating that the
reason for that is call to operating system in c++.
You claimed "In C++, each "new" allocation request will be sent to the
operating system, which is slow."

You have just disproved this by comparing the performance with and
without returning memory to the allocator. If each new requested memory
from the system, the performance would not have changed so
significantly.

--
Ian Collins.
Apr 11 '08 #38
On Fri, 11 Apr 2008 00:58:30 -0700 (PDT), asterisc <Ra*******@ni.com>
wrote:
>When exactly is the memory deallocated? Before or after you stop the
clock?
I already posted the output in the other post. The clock stops when
this is printed.

long end = System.currentTimeMillis();
System.out.println("Time: " + (end - start) + " ms");

The GC (in a different thread) runs several times while the main
thread is in the loop.

Once the main() thread ends, all the memory is returned to the
operating system by the JVM anyway.
Test@19f953d <----printed when i == 0
[GC 896K->108K(5056K), 0.0018061 secs]
[... roughly 86 further GC lines snipped -- identical to the log in post #29 above ...]
Test@e86da0 <----printed when i == 5000000
[GC 1004K->108K(5056K), 0.0000528 secs]
[... roughly 86 further GC lines snipped -- identical to the log in post #29 above ...]
Test@1975b59 <----printed when i == 10000000
Time: 172 ms
Apr 11 '08 #39
On Fri, 11 Apr 2008 20:06:08 +1200, Ian Collins <ia******@hotmail.com>
wrote:
>You claimed "In C++, each "new" allocation request will be sent to the
operating system, which is slow."
Yes, I did say that based on what I read on a web site. That was his
explanation regarding why allocating memory with new is slower in c++.

Apr 11 '08 #40
Razii wrote:
Yes, I did say that based on what I read on a web site. That was his
explanation regarding why allocating memory with new is slower in c++.
Maybe you shouldn't believe everything you read on a web site.

Apr 11 '08 #41
Sam
Razii writes:
On Thu, 10 Apr 2008 21:29:44 -0500, Sam <sa*@email-scan.com> wrote:
>>See how well /that/ benchmarks against Java :-)

That is not the topic. The topic is how the keyword "new" behaves.

0x12ff5c
0x12ff5c
0x12ff5c
Time: 62 ms
See -- C++ is faster than Java. And the topic isn't how the keyword "new"
behaves, but, as it says, “java vs c++ difference in "new" ”. Which, of
course, includes the fact that their respective usage cases are completely
different, and a straight, linear comparison of the kind you made is
meaningless. In C++, objects are allocated on the heap, via new, much less
frequently than in Java, so a one-to-one benchmarking tells you very little.
I pointed out that in many instances C++ objects are instantiated on the
stack, which incurs much less overhead than "new". Furthermore, you don't
even have to use "new" to allocate objects on the heap anyway, in C++.

Take /that/ one for a spin, and see what happens.
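
(One illustration of Sam's last point -- that C++ can put objects on the heap
without a "new" at every call site -- added here with a stand-in Test class,
since the real one isn't shown in this excerpt: a container can hold ten
million heap objects with essentially one allocation.)

#include <iostream>
#include <vector>

// Stand-in for the thread's Test class (assumed definition).
class Test {
public:
    explicit Test(int n) : n_(n) {}
private:
    int n_;
};

int main()
{
    std::vector<Test> tests;
    tests.reserve(10000001);              // one up-front heap allocation
    for (int i = 0; i <= 10000000; ++i)
        tests.emplace_back(i);            // constructed in place, no per-object new

    std::cout << tests.size() << " objects on the heap, no explicit new\n";
}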
All the references are the same -- not the same output as in the last
version (or java version).
In your Java version, you added a newline at the end of every print
statement. In your original C++ version, you printed the pointer value
without a trailing newline. I fixed it for you. That's why the output is
different.

Apr 11 '08 #42
Razii wrote:
Also, according the site
above, in C++ memory is never returned to the operating system (at
least the older OS)
Which "older OS"? Some 30yo?
Apr 11 '08 #43
Razii wrote:
On Fri, 11 Apr 2008 03:35:27 +0300, Juha Nieminen
<no****@thanks.invalid> wrote:
>Razii wrote:
>>In C++, each "new" allocation request
will be sent to the operating system, which is slow.
That's blatantly false.

Well, my friend, I have proven you wrong. Razi has been victorious
once again :)
Uh? I said that the claim that each "new" performs an OS call is
blatantly false. How does your program prove that wrong?

Your C++ program does *not* perform a system call for each executed
new, and I can prove that.
Apr 11 '08 #44
Razii wrote:
On Fri, 11 Apr 2008 03:35:53 GMT, "Mike Schilling"
<ms*************@hotmail.com> wrote:
>And you're wrong, as has been demonstrated repeatedly. There's no
point trying to explain this any further.

There was nothing wrong with my basic premise that creating objects
with on heap is much faster in java than in c++. Google search
confirms..
Your quote confirms that your claim that each 'new' causes an OS call
is false.
Apr 11 '08 #45
Razii wrote:
That is not the topic. The topic is how the keyword "new" behaves.
You just don't get it, do you. Your original claim was that "new" in
C++ is slow "because each new calls the OS". That's just false. Each
"new" does *not* call the OS.
Apr 11 '08 #46
On Apr 11, 2:06 pm, r...@zedat.fu-berlin.de (Stefan Ram) wrote:
Juha Nieminen <nos...@thanks.invalid> writes:
I don't see how this is so much different from what Java does.
»[A]llocation in modern JVMs is far faster than the best
performing malloc implementations. The common code path
for new Object() in HotSpot 1.4.2 and later is
approximately 10 machine instructions (data provided by
Sun; see Resources), whereas the best performing malloc
implementations in C require on average between 60 and 100
instructions per call (Detlefs, et. al.; see Resources).
And allocation performance is not a trivial component of
overall performance -- benchmarks show that many
real-world C and C++ programs, such as Perl and
Ghostscript, spend 20 to 30 percent of their total
execution time in malloc and free -- far more than the
allocation and garbage collection overhead of a healthy
Java application (Zorn; see Resources).«
Just for the record, this is typical marketing speak.

First, of course, it's well known that an allocation in a
compacting garbage collector can be significantly faster than in
any non-compacting scheme (for variable length allocations).
It's also well known that the actual garbage collection sweep in
a compacting collector is more expensive than in a
non-compacting one. You don't get something for nothing.
Whether the trade-off is advantageous depends on the application.
(I suspect that there are a lot of applications where the
tradeoff does favor compacting, but there are certainly others
where it doesn't, and only talking about the number of
machine instructions in allocation, without mentioning other
costs, is disingenuous, at best.)

Secondly, I've written C++ programs which spend 0% of their
total execution time in malloc and free, and I've had to fix one
which spent well over 99.9% of its time there. With such
variance, "average" loses all meaning. And the choice of Perl
and Ghostscript as "typical" C++ programs is somewhat
disingenuous as well; both implement interpreters for languages
which use garbage collection, so almost by definition, both
would benefit from garbage collection. (In fact, both probably
implement it somehow internally, so the percentage of time spent
in malloc/free is that of a garbage collected program.)

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Apr 11 '08 #47
lb*******@yahoo.com wrote:
But when Sun's marketing tells bad programmers they don't have to
worry about memory anymore, it's a disaster. They do have to worry,
just in a different way.
Well... "marketing"... "bad programmers"...

Perhaps some of the more simple-minded specimens of "decision makers"
believe that Sun has solved the "memory problem". Maybe some of the less
experienced programmers believe that, too. However, that's not a problem
with automatic memory management or indeed any other technical issue.
You can't even blame Sun's marketing, it's their job, more or less.
Apr 11 '08 #48
Lew
Juha Nieminen wrote:
Razii wrote:
>That is not the topic. The topic is how the keyword "new" behaves.

You just don't get it, do you.
Don't feed the trolls. Especially don't feed them by being trollish in return.

--
Lew
Apr 11 '08 #49
On Fri, 11 Apr 2008 09:25:43 -0400, Lew <le*@lewscanon.com> wrote:
>Don't feed the trolls. Especially don't feed them by being trollish in return.
*PLONK*

Apr 11 '08 #50
