Do you use a garbage collector?

I followed a link to James Kanze's web site in another thread and was
surprised to read this comment by a link to a GC:

"I can't imagine writing C++ without it"

How many of you c.l.c++'ers use one, and in what percentage of your
projects is one used? I have never used one in personal or professional
C++ programming. Am I a holdover to days gone by?
Apr 10 '08
"Chris Thomasson" <cr*****@comcast.netwrote in message
news:F7******************************@comcast.com. ..
"Lew" <le*@lewscanon.comwrote in message
news:Jp******************************@comcast.com. ..
>Chris Thomasson wrote:
>>AFAICT, in a sense, you don't know what you're talking about. Explain how you
synchronize with this counter? Are you going to stripe it? If so, at
what level of granularity? Java can use advanced memory allocation
techniques.
How many memory allocators have you written?

These questions are meaningless and irrelevant.

No one has to synchronize with the memory "counter" (I would use the term
"pointer" myself) in Java. That is handled by the JVM.

Perhaps I am misunderstanding what Reedy meant by counter.
Roedy! Not Reedy... CRAP!

See, I thought that he meant count from a single base of memory. In other
words, increment a pointer. This can be analogous to a counter. Think in
terms of using FAA to increment a pointer location off of common base
memory. This can definitely be used for an allocator that does not really
like to free anything... How would you do it? I bet you would not use this
method.
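For concreteness, a minimal sketch of that kind of allocator -- one shared
offset bumped with fetch-and-add over a fixed arena. It is purely
illustrative (the arena size, alignment, and names are invented), not any
particular JVM's or allocator's code:

#include <atomic>
#include <cstddef>
#include <cstdlib>

// A bump allocator over one fixed arena: allocation is a single atomic
// fetch-and-add on a shared offset.  Nothing is ever freed individually;
// the whole arena is released at once.
class BumpArena {
public:
    explicit BumpArena(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), size_(bytes), offset_(0) {}
    ~BumpArena() { std::free(base_); }

    void* allocate(std::size_t n) {
        n = (n + 15) & ~std::size_t(15);        // keep 16-byte alignment
        std::size_t old = offset_.fetch_add(n, std::memory_order_relaxed);
        if (old + n > size_) return nullptr;    // arena exhausted
        return base_ + old;
    }

private:
    char* base_;
    std::size_t size_;
    std::atomic<std::size_t> offset_;           // the shared "counter"
};

An allocation is then one atomic instruction, but every thread is hammering
the same cache line holding offset_, which is exactly the scaling worry
raised above.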
Jun 27 '08 #151
Lew
"Lew" wrote
>However, Roedy did not mention a "global pointer". You introduced
that into the conversation, and have yet to explain what you mean by
that.
Chris Thomasson wrote:
I mean atomically incrementing a global pointer off a common base of
memory. This is basic memory allocator implementation 101. You know
this. Perhaps Roedy was talking about a distributed model. What say you?
I'll repeat what I said, but in different words in hope that I am clearer.

What Roedy meant is that Java allocators simply increment the pointer to
available memory by the size of the allocation. Details of *which* memory
pointer, and any synchronization needed if any, are left to the JVM. This
process is in-built to the semantics of the 'new' operator. One does not
explicitly manage any of that. I could not tell you if the operation involves
a global pointer or not without checking the specifics of a particular JVM
implementation, of which there are several from different vendors.
>Java can use both, and all methods in between. You don't necessarily
>want to send atomic mutations to a common location. Perhaps Roedy meant
>N counts off the bases of multiple memory pools. I don't know. I was
>speculating. I hope I was wrong. I thought he meant count off a single
>location. That's not going to scale very well...
It scales just fine. The American IRS accepts millions of
electronically-filed tax returns in just a few days through such a system.
>You can break a big buffer into little ones and distribute them over
>threads... Then the "count" would be off a thread-local pool instead of a
>global set of whatever... I have created a lot of allocators, and know a
>lot about some of the caveats.
Then I suggest you google for how a particular JVM that interests you does it.
I assure you that in production systems the world over the volume scales
quite well, so the JVM writers must be addressing those issues to which you
refer. This is true for all the big JVM players of which I'm aware.

However, in the context of this conversation the point is moot. As Java
programmers we have *no* direct control over that aspect of the JVM. As Java
programmers, all we can do is invoke 'new'. We *never* deal programmatically
with how the JVM handles the issues you mention.

On the flip side, JVMs provide rich sets of options to select and tune the GC
algorithms, but this has nothing to do with the issues you raise.

Is that finally clear? That Java programmers do not write custom allocators?
That Java programmers know by the very language definition itself that 'new'
is thread-safe? That Java programmers know from JVM documentation and
practical experience that whatever technique they use, these JVMs manage to
scale fabulously? That all the points you raised are *completely* subsumed by
the definition and action of the 'new' operator?

If not, I have no idea how one could possibly make these points any clearer.

--
Lew
Jun 27 '08 #152
Lew
Chris Thomasson wrote:
"Chris Thomasson" <cr*****@comcast.netwrote in message
news:F7******************************@comcast.com. ..
>"Lew" <le*@lewscanon.comwrote in message
news:Jp******************************@comcast.com ...
>>Chris Thomasson wrote:
AFAICT, in a sense, you don't know what you're talking about. Explain how
you synchronize with this counter? Are you going to stripe it? If
so, at what level of granularity? Java can use advanced memory
allocation techniques.
How many memory allocators have you written?

These questions are meaningless and irrelevant.

No one has to synchronize with the memory "counter" (I would use the
term "pointer" myself) in Java. That is handled by the JVM.

Perhaps I am misunderstanding what Reedy meant by counter.

Roedy! Not Reedy... CRAP!

>See, I thought that he meant count from a single base of memory. In
>other words, increment a pointer.
At any given moment, that is how the JVM allocates memory.
>This can be analogous to a counter. Think in terms of using FAA to
>increment a pointer location off of common base memory. This can
>definitely be used for an allocator that does not really like to free
>anything... How would you do it? I bet you would not use this method.
I let the JVM do it, since it takes only about 10 machine instructions to
allocate an object, and guarantees the result to be thread safe.

--
Lew
Jun 27 '08 #153
On Sat, 12 Apr 2008 06:08:16 -0700, "Chris Thomasson"
<cr*****@comcast.netwrote, quoted or indirectly quoted someone who
said :
>Java can use a multitude of allocation techniques.
Yes, in theory, but since Java uses garbage collection which nearly
always would imply it can later move objects, it can get away with a
very simple, very rapid allocation algorithm, namely adding the length
of the object to the free space pointer. In Java there are all kinds
of short-lived objects created. It makes sense to allocate them as
rapidly as possible.

In C++, objects typically don't move, so the allocation algorithm has
to take more care with where it places them.

--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #154
On Sat, 12 Apr 2008 10:16:24 -0700, "Chris Thomasson"
<cr*****@comcast.netwrote, quoted or indirectly quoted someone who
said :
>
How does research hurt me, or anybody else?
It is not what you are saying, but HOW you are saying it. You come
across like a prosecutor crossed with a bratty five year old.
--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #155
ke****@gmail.com wrote:
It's so unfair!
But it doesn't matter much.

>
Razii wrote:
>On Fri, 11 Apr 2008 03:35:27 +0300, Juha Nieminen
<no****@thanks.invalid> wrote:
>>Razii wrote:
In C++, each "new" allocation request
will be sent to the operating system, which is slow.

That's blatantly false.

Well, my friend, I have proven you wrong. Razi has been victorious
once again :)
Time: 2125 ms (C++)
Time: 328 ms (java)
--- c++--

#include <ctime>
#include <cstdlib>
#include <iostream>

using namespace std;

class Test {
public:
Test (int c) {count = c;}
This is assignment after initialization.
It should be like this:
Test(int c) : count(c) {}
A decent optimizer will recognize this and generate identical code for
int types.
> virtual ~Test() { }
int count;
};

int main(int argc, char *argv[]) {

clock_t start=clock();
for (int i=0; i<=10000000; i++) {
Test *test = new Test(i);
if (i % 5000000 == 0)
cout << test;
The memory you allocated is never released, so every new is a real
allocation request handled by libc. When its heap runs out, a new page
of memory is requested from the OS.
The really unfair thing is that you would never ever do anything like
this in C++. Why would any application allocate ten million int sized
objects separately?

std::vector<Test> test(10000001);
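For illustration, a by-value version of the benchmarked class; the virtual
destructor is dropped because the by-value container never deletes through a
base pointer, and the loop does one reservation instead of ten million
separate calls to new:

#include <vector>

class Test {
public:
    explicit Test(int c) : count(c) {}
    int count;
};

int main() {
    std::vector<Test> test;
    test.reserve(10000001);              // one big allocation up front
    for (int i = 0; i <= 10000000; ++i)
        test.push_back(Test(i));         // no per-object call to operator new
    return static_cast<int>(test.back().count % 2);
}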
Bo Persson
Jun 27 '08 #156
On Sat, 12 Apr 2008 06:13:51 -0700, "Chris Thomasson"
<cr*****@comcast.netwrote, quoted or indirectly quoted someone who
said :
>
>AFAICT, in a sense, you don't know what you're talking about. Explain how you
synchronize with this counter? Are you going to stripe it? If so, at what
level of granularity? Java can use advanced memory allocation techniques.
All I said was JVMs can for the usual case use an extremely fast
simple memory allocation mechanism. They just peel the next N bytes
off the free space pool. They don't have to search around for the
right sized hole. GC/memory allocation is the subject of an astounding
amount of cleverness and creativity. I get people started on that
exploration with my essay at
http://mindprod.com/jgloss/garbagecollection.html

I had no intention of claiming this was all there was to memory
allocation or garbage collection.

I am not on trial. So get stuffed.
--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #157
In article <A9******************************@comcast.com>,
cr*****@comcast.net says...

[ ... fully connected network for deallocation ]
The design exhibits great performance, except when presented with a user
program that persistently creates and destroys multiple threads. This can be a
fairly significant downside w.r.t. creating general-purpose tools... ;^(
I wonder whether it isn't better to just avoid this problem. Rather than
always creating and destroying threads on demand, create a thread pool.
When the user wants a thread, they're really just allocating the use of
the thread from the pool (i.e. you give that thread the address where it
needs to start executing, and send it on its way). When they ask to
delete a thread, you just return it to the pool. If they ask for a
thread and the pool is empty, then you create a new thread. You could
then add a task for the thread pool to execute occasionally that trims
the thread pool if it stays too large for too long.

Obviously, this goes a bit beyond pure memory management, but not really
by a huge amount -- it's still definitely in the realm of resource
management.
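A rough sketch of the idea, using C++11 threading facilities (which post-date
this discussion) and omitting the trimming of an oversized pool:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A minimal fixed-size thread pool: "creating a thread" becomes handing a
// function to an existing worker, so the OS thread itself is never torn
// down and recreated.
class ThreadPool {
public:
    explicit ThreadPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(m_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};

Callers hand work to submit() instead of constructing and joining a
std::thread per task, so the cost of creating and destroying OS threads is
paid only when the pool itself is built.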

[ ... ]
Oh, I realize it doesn't need to be larger -- this wasn't an attempt at
an optimized implementation at all, but purely an attempt at
demonstrating the general idea in a minimum of code, so it includes no
"tricks" of any kind at all.

You have the 100% right idea overall.
I'm assuming you mean that since nearly all the data is constant on a
per-allocator basis, you just create a single information block per
allocator, then each allocated block just carries a pointer to the
information block in its associated allocator.

If you want to badly enough, you can reduce the per-block overhead even
more than that though -- instead of storing a pointer, store only an
index into a vector of information blocks. With some care, you should
even be able to eliminate that -- for example, have N bits of the
addresses produced by each allocator unique from the range used by any
other allocator. In this case, retrieving the allocator for a specific
block consists of a shift and mask of the block's address. Better still,
this is immune to the code using the block overwriting data by writing
to addresses outside the allocated space.

Obviously this latter method applies better to 64-bit addressing, but
with a bit of care could undoubtedly be applied for quite a few 32-bit
situations as well.
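A sketch of the shift-and-mask lookup, with the region size and the number of
allocators invented purely for illustration:

#include <cstdint>

// Illustrative layout: each allocator hands out addresses from its own
// 1 GiB aligned region, so a few bits above bit 30 identify the allocator.
constexpr unsigned kRegionShift = 30;          // 1 GiB regions (made up)
constexpr std::uintptr_t kIndexMask = 0xF;     // up to 16 allocators (made up)

inline unsigned allocator_index(const void* block) {
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(block);
    return static_cast<unsigned>((addr >> kRegionShift) & kIndexMask);
}

// Usage idea:  Allocator* owner = allocators[allocator_index(p)];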

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 27 '08 #158
On Sat, 12 Apr 2008 10:12:46 -0400, Lew <le*@lewscanon.comwrote,
quoted or indirectly quoted someone who said :
>How many memory allocators have you written?

These questions are meaningless and irrelevant.
Several simple ones, but I have not written a Java JVM. I wrote BBL
Forth which was twice as fast as the competition, so I am not a
complete newbie.

--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #159
On Sat, 12 Apr 2008 10:18:12 -0700, "Chris Thomasson"
<cr*****@comcast.netwrote, quoted or indirectly quoted someone who
said :
>Roedy! Not Reedy... CRAP!
Don't worry. Nearly everyone gets it wrong. Even my mother sometimes
called me "Roger".
--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #160
On Sat, 12 Apr 2008 10:08:31 -0700, "Chris Thomasson"
<cr*****@comcast.netwrote, quoted or indirectly quoted someone who
said :
>to increment a pointer location off of common base memory.
The initial discussion was about how Object.hashCode could be
implemented. That COULD be done with a global counter. You would
need simple synchronisation (e.g. an atomic memory increment)
instruction to share it among threads.

In a similar way you can do a fast object allocation simply by
incrementing the free space pointer, either one per thread with
separate pools or with a synchronised global one.
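A sketch of the per-thread variant -- thread-local bump pointers refilled in
large chunks from one shared region, so the synchronised operation is paid
once per refill rather than once per allocation. All sizes and names are
illustrative, and C++11 thread_local is assumed:

#include <atomic>
#include <cstddef>
#include <cstdlib>

namespace {
    const std::size_t kArenaSize = std::size_t(1) << 28;  // 256 MiB, illustrative
    const std::size_t kChunk     = 64 * 1024;              // per-thread refill size

    char* g_base = static_cast<char*>(std::malloc(kArenaSize));
    std::atomic<std::size_t> g_next(0);

    thread_local char* t_cur = nullptr;
    thread_local char* t_end = nullptr;
}

// Fast path touches only thread-local pointers; the atomic fetch-and-add is
// paid once per 64 KiB refill instead of once per allocation.
void* tl_alloc(std::size_t n) {
    n = (n + 15) & ~std::size_t(15);                 // keep 16-byte alignment
    if (n > kChunk) return nullptr;                  // oversized requests not handled here
    if (t_cur == nullptr || n > static_cast<std::size_t>(t_end - t_cur)) {
        std::size_t off = g_next.fetch_add(kChunk, std::memory_order_relaxed);
        if (off + kChunk > kArenaSize) return nullptr;   // shared region exhausted
        t_cur = g_base + off;
        t_end = t_cur + kChunk;
    }
    void* p = t_cur;
    t_cur += n;
    return p;
}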

The two discussions became muddled and intertwined. I think also there
was confusion between what you might write IN Java vs the assembler code
inside the JVM that handles the low-level details of object allocation.
--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #161
On Sat, 12 Apr 2008 13:31:12 GMT, Mark Thornton
<ma*************@ntlworld.comwrote, quoted or indirectly quoted
someone who said :
>(Ignoring for now that there isn't one Java allocator), typically there
is a per thread area so these counters do not need locks.
Also consider the JVM does not have to use heavyweight Java-type
object locks. It can get away with much more light-weight techniques.
It can "cheat". This object allocation code has to be blindingly fast,
and all sorts of techniques you would never dream of using in
application code become legit.

All you have to do is read one GC paper to see this IS rocket science.
We can only scratch the surface in these discussions.
--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #162
On Sat, 12 Apr 2008 06:16:29 -0700, "Chris Thomasson"
<cr*****@comcast.netwrote, quoted or indirectly quoted someone who
said :
>Heap compaction is nothing all that special. Its certainly not tied to a GC.
Not at all.
You can have heap compaction without GC, e.g. the original Mac with
its explicit handle allocations in Pascal. But have you ever seen a
GC system without heap compaction? Perhaps someone cooked one up as
an add-on to C++?
--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #163
On Sat, 12 Apr 2008 06:11:56 -0700 (PDT), Mirek Fidler
<cx*@ntllib.orgwrote:
>Time: 4562 ms (U++)
Time: 27781 ms (g++)

java -server -Xms1024m -Xmx1024m

Time: 3578 ms (max memory I saw was 300 MB) -- but at least it
finished 7 times faster than g++ ...

with -Xms75m -Xmx100m the time is around: 9969 ms

On Jet, 2344 ms (wow, that was fast but memory peak was 600 MB!)

IMO, manual management still wins...
No it didn't. g++ and vc9 are at least three times slower in this
case, even if I use -Xms75m (which is appropriate for this case). I
have a choice to increase -Xms value and make it 8 times faster.

Jun 27 '08 #164
On Sat, 12 Apr 2008 20:49:18 +0200, "Bo Persson" <bo*@gmb.dkwrote:
>The really unfair thing is that you would never ever do anything like
this in C++. Why would any application allocate ten million int sized
objects separately?
What about in the case of some kind of tree structure?

How would you do the following in c++?

#include <ctime>
#include <iostream>
struct Tree {
Tree *left;
Tree *right;
};

Tree *CreateTree(int n)
{
if(n <= 0)
return NULL;
Tree *t = new Tree;
t->left = CreateTree(n - 1);
t->right = CreateTree(n - 1);
return t;
}

void DeleteTree(Tree *t)
{
if(t) {
DeleteTree(t->left);
DeleteTree(t->right);
delete t;
}
}

int main(int argc, char *argv[])
{
clock_t start=clock();
for(int i = 0; i < 15; i++)
DeleteTree(CreateTree(22));

clock_t endt=clock();
std::cout <<"Time: " <<
double(endt-start)/CLOCKS_PER_SEC * 1000 << " ms\n";
}
Jun 27 '08 #165
Roedy Green wrote:
On Sat, 12 Apr 2008 13:31:12 GMT, Mark Thornton
<ma*************@ntlworld.comwrote, quoted or indirectly quoted
someone who said :
>(Ignoring for now that there isn't one Java allocator), typically there
is a per thread area so these counters do not need locks.

Also consider the JVM does not have to use heavyweight Java-type
object locks. It can get away with much more light-weight techniques.
It can "cheat".
JVM's often use highly optimised locking techniques even for regular
objects (e.g. synchronized methods). In some cases they can even
eliminate a lock altogether via escape analysis. This gets done even for
a newbie's code, whereas a junior programmer trying to achieve similar
locking performance in C++ is unwise at best. Even 10 years ago, JVMs
adapted locking to suit the machine at run time (faster locking on
single processors than on multi-CPU systems).

Mark Thornton
Jun 27 '08 #166
On Sat, 12 Apr 2008 06:11:56 -0700 (PDT), Mirek Fidler
<cx*@ntllib.orgwrote:
>IMO, manual management still wins...
The only disadvantage of GC seems to be that it needs more RAM (though I
was able to fix that; see below). On the positive side, it has big
advantages too. With manual management, you can delete the memory but
still have a pointer pointing at it, i.e. a dangling pointer. Or you can
forget to free it and have a memory leak. Neither is possible with GC.

Anyway, by adding System.gc() I was able to decrease the memory usage
and make it even faster! I changed the loop in the java version to
this:

for(int i = 0; i < 15; i++) {
CreateTree(22);
System.gc(); // gc()
}

java -server -Xms1024m -Xmx1024m Test

Time: 1828 ms (max memory used 77 MB)
UPP: Time: 4562 ms (U++ max memory used 68 MB)

Clear victory :)

I still don't understand why increasing -Xms improves the speed, even
though in this case gc is called after every loop and max memory used
is around 77 MB.
Jun 27 '08 #167
On Sat, 12 Apr 2008 15:42:43 -0500, Razii
<DO*************@hotmail.comwrote:
>Time: 1828 ms (max memory used 77 MB)
UPP: Time: 4562 ms (U++ max memory used 68 MB)

Clear victory :)

I still don't understand why increasing -Xms improves the speed, even
though in this case gc is called after every loop and max memory used
is around 77 MB.
I was able to improve speed without having to set -Xms to 1024m (only
needed -Xms170m) by adding -XX:NewRatio=1

java -server -Xms170m -Xmx170m -XX:NewRatio=1 Test

Time: 1812 ms (max memory used 74 MB)
Time: 4562 ms (U++ max memory used 68 MB)

:))

As for g++,
g++ -O2 -fomit-frame-pointer -finline-functions "test.cpp" -o "test.
exe"

Time: 27609 ms (max memory used 65 MB)

g++ is 15 times slower than Java! Huh?

http://pastebin.com/f6bfa4d78 (C++ version)

http://pastebin.com/f1b13da14 (java version)


Jun 27 '08 #168
On Sat, 12 Apr 2008 17:21:00 -0500, Razii
<DO*************@hotmail.comwrote:
>java -server -Xms170m -Xmx170m -XX:NewRatio=1 Test
With these flags, GC output looks like this

java -server -verbose:gc -Xms170m -Xmx170m -XX:NewRatio=1 Test

[Full GC 66847K->107K(165376K), 0.0431739 secs]
[Full GC 66954K->107K(165376K), 0.0426188 secs]
[Full GC 66954K->107K(165376K), 0.0423939 secs]
[Full GC 66953K->107K(165376K), 0.0443808 secs]
[Full GC 66953K->107K(165376K), 0.0425976 secs]
[Full GC 66953K->107K(165376K), 0.0427130 secs]
[Full GC 66953K->107K(165376K), 0.0421238 secs]
[Full GC 66953K->107K(165376K), 0.0423238 secs]
[Full GC 66953K->107K(165376K), 0.0428331 secs]
[Full GC 66953K->107K(165376K), 0.0427350 secs]
[Full GC 66953K->107K(165376K), 0.0423126 secs]
[Full GC 66953K->107K(165376K), 0.0425389 secs]
[Full GC 66953K->107K(165376K), 0.0445981 secs]
[Full GC 66953K->107K(165376K), 0.0425420 secs]
[Full GC 66953K->107K(165376K), 0.0425479 secs]

66847K == heap in use before GC
107K == heap in use after GC
165376K == max heap available.

0.0428331 secs == time it took.

15 calls to GC (since for loop was 15)

0.0428331 * 15 = 0.6424965 sec == 642 ms.

Jun 27 '08 #169
Razii wrote:
http://pastebin.com/f6bfa4d78 (C++ version)
In a Sun-Fire-V240 computer I was able to bring down the execution
time of that program from 24630 ms to 3990 ms by using my own (portable)
memory allocator. The maximum memory usage was something like 55 MB.

"java Test" in that same computer runs the java program in 4502 ms.
With "java -server -Xms170m -Xmx170m -XX:NewRatio=1 Test" it was
1828 ms.
Jun 27 '08 #170
On Fri, 11 Apr 2008 21:17:47 -0500, Razii
<DO*************@hotmail.comwrote:
>Time: 24953 ms

huh? After removing super.finalize(), it went back to Time: 172 ms
From IBM site:

Objects with finalizers (those that have a non-trivial finalize()
method) have significant overhead compared to objects without
finalizers, and should be used sparingly. Finalizeable objects are
both slower to allocate and slower to collect. At allocation time, the
JVM must register any finalizeable objects with the garbage collector,
and (at least in the HotSpot JVM implementation) finalizeable objects
must follow a slower allocation path than most other objects.
Similarly, finalizeable objects are slower to collect, too. It takes
at least two garbage collection cycles (in the best case) before a
finalizeable object can be reclaimed, and the garbage collector has to
do extra work to invoke the finalizer. The result is more time spent
allocating and collecting objects and more pressure on the garbage
collector, because the memory used by unreachable finalizeable objects
is retained longer. Combine that with the fact that finalizers are not
guaranteed to run in any predictable timeframe, or even at all, and
you can see that there are relatively few situations for which
finalization is the right tool to use.

If you must use finalizers, there are a few guidelines you can follow
that will help contain the damage. Limit the number of finalizeable
objects, which will minimize the number of objects that have to incur
the allocation and collection costs of finalization. Organize your
classes so that finalizeable objects hold no other data, which will
minimize the amount of memory tied up in finalizeable objects after
they become unreachable, as there can be a long delay before they are
actually reclaimed. In particular, beware when extending finalizeable
classes from standard libraries.

Jun 27 '08 #171
On Sun, 13 Apr 2008 02:53:25 +0300, Juha Nieminen
<no****@thanks.invalidwrote:
In a Sun-Fire-V240 computer I was able to bring down the execution
time of that program from 24630 ms to 3990 ms by using my own (portable)
memory allocator. The maximum memory usage was something like 55 MB.

"java Test" in that same computer runs the java program in 4502 ms.
With "java -server -Xms170m -Xmx170m -XX:NewRatio=1 Test" it was
1828 ms.
What was the memory usage with -server -Xms170m -Xmx170m
-XX:NewRatio=1
Jun 27 '08 #172
REH
On Apr 10, 9:58 pm, Razii <DONTwhatever...@hotmail.comwrote:
On Thu, 10 Apr 2008 21:43:45 -0400, Arne Vajhøj <a...@vajhoej.dk>
wrote:
I can not imagine any C++ runtime that makes an operating system
call for each new.
The runtime allocates huge chunks from the OS and then manages
it internally.

Testing the keyword "new"

Time: 2125 ms (C++)
Time: 328 ms (java)

Explain that. What I am doing different in java than in c++? Code
below..
What you are doing differently is not freeing the memory. Since you
are allocating gobs of memory, and not releasing it, the C++ version
will incur the cost of page faults, virtual memory access, disk
thrashing, etc. You are timing a lot more than just memory
allocation. Of course a lot of people use contrived examples like
this to prove their pet language is faster than another. What a waste
of time...

REH
Jun 27 '08 #173
On Sun, 13 Apr 2008 02:53:25 +0300, Juha Nieminen
<no****@thanks.invalidwrote:
>http://pastebin.com/f6bfa4d78 (C++ version)

In a Sun-Fire-V240 computer I was able to bring down the execution
time of that program from 24630 ms to 3990 ms by using my own (portable)
memory allocator. The maximum memory usage was something like 55 MB.
I downloaded Boehm Collector

added...

#include <gc_cpp.h>

what else do I need to do? Can you post the version that would use
Boehm Collector and command line to compile and link it.
Jun 27 '08 #174
On Apr 12, 9:49 pm, Razii <DONTwhatever...@hotmail.comwrote:
On Sat, 12 Apr 2008 06:11:56 -0700 (PDT), Mirek Fidler

<c...@ntllib.orgwrote:
Time: 4562 ms (U++)
Time: 27781 ms (g++)
java -server -Xms1024m -Xmx1024m
Time: 3578 ms (max memory I saw was on 300 M -- but at least it
finished 7 times faster than g++ ...
with -Xms75m -Xmx100m the time is around: 9969 ms
On Jet, 2344 ms (wow, that was fast but memory peak was 600 MB!)
IMO, manual management still wins...

No it didn't. g++ and vc9 are at least three times slower in this
case, even if I use -Xms75m (which is appropriate for this case). I
have a choice to increase -Xms value and make it 8 times faster.
Well, I guess you are doing it again - finetuning conditions to
achieve "victory".

I could do the equivalent (introducing a better algorithm, a custom allocator,
more parameters), but that would hardly prove anything, as I have
little use for such tricks in my regular code. Normal code has to deal
with datasets of any size using as little memory as possible.

The final truth is that generally (that means, before you start to
change conditions to fit your case), optimal manual memory management
is faster and wastes significantly less memory than GC.

Mirek
Jun 27 '08 #175
On Sat, 12 Apr 2008 18:30:17 -0700 (PDT), Mirek Fidler
<cx*@ntllib.orgwrote:
>Well, I guess you are doing it again - finetuning conditions to
achieve "victory".
No, I made no changes to the algorithm except forcing gc to run by adding
System.gc(). The C++ version does the same with its DeleteTree() function.
That fixed the memory problem with the java version. Other than that,
there was no change. Only flags were improved to get the best result.
C++ compilers have many flags too.

If you insist on removing System.gc(), even then giving the JVM the full heap
(1024m) still produces the same time (1818 ms). The C++ version also has
the full 1024m heap available to it (on my comp). Why not allow the JVM full
access to memory too?


Jun 27 '08 #176
Razii wrote:
#include <gc_cpp.h>

what else do I need to do? Can you post the version that would use
Boehm Collector and command line to compile and link it.
Never heard. I have no idea.
Jun 27 '08 #177
In article <3e440ef2-58c0-4aff-8f3e-d52aa579a763
@u3g2000hsc.googlegroups.com>, ja*********@gmail.com says...

[ ... ]
In practice (and I speak here from experience), using garbage
collection in C++ doesn't impact style that much. (And with the
exception of a few people like you and Jerry Collins, many of
those worried about its impact on style really do need to change
their style.)
Is this Jerry Collins anything like a Tom Collins? Personally in mixed
drinks, I prefer a Cape Cod (though I really prefer a good red Bordeaux
over almost any mixed drink).
Even with garbage collection, the "default" mode
in C++ is values, not references or pointers (call them whatever
you want), with true local variables, on the stack, and not
dynamically allocated memory. As far as I know, there's never
been the slightest suggestion that this should be changed.
I sure hope not -- changing that seems like it would change the
fundamental character of the language almost completely. Improving C++
is one thing, but changing its character is quite another. Trying to
make C++ into another Java doesn't strike me as a particularly useful
exercise.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 27 '08 #178
In article <ab0efd6b-51bd-4b20-a0a9-2a95f1ab2f15
@m44g2000hsc.googlegroups.com>, ja*********@gmail.com says...

[ ... ]
Are you sure about the "most-likely" part? From what little
I've seen, most C runtime libraries simply threw in a few locks
to make their 30 or 40 year old malloc thread safe, and didn't
bother with more. Since most of the marketing benchmarks don't
use malloc, there's no point in optimizing it. I suspect that
you're describing best practice, and not most likely. (But I'd
be pleased to learn differently.)
The malloc's I've looked at aren't nearly that old. At least the last
time I looked, libg++ used Doug Lea's allocator from around 2000 or so.

Microsoft rewrote their allocator for VC++ 5. At that point, the
implemented two separate heaps, one for small blocks and another for
large blocks. In VC++ 6, they tweaked it a bit, such as changing the
dividing line between "large" and "small".

OTOH, the multithreading support is pretty much as you've described --
lock the heap during critical sections, but not much more than that.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 27 '08 #179
Jerry Coffin wrote:
In article <3e440ef2-58c0-4aff-8f3e-d52aa579a763
@u3g2000hsc.googlegroups.com>, ja*********@gmail.com says...

[ ... ]
>In practice (and I speak here from experience), using garbage
collection in C++ doesn't impact style that much. (And with the
exception of a few people like you and Jerry Collins, many of
those worried about its impact on style really do need to change
their style.)

Is this Jerry Collins anything like a Tom Collins? Personally in mixed
drinks, I prefer a Cape Cod (though I really prefer a good red Bordeaux
over almost any mixed drink).
He's a well known (at least in these parts) All Black ruby player.

Maybe James was referring to me? If he was, then yes, my style does not
suit GC. I do a lot of real-time embedded work where memory is a
precious resource to be freed as soon as possible. I started programming
in C and later C++ in the days of old before GC, when leaking memory was a
mortal sin. I still have to stop myself deleting objects when I'm
writing JavaScript!

--
Ian Collins.
Jun 27 '08 #180

"Mirek Fidler" <cx*@ntllib.orgwrote in message
news:7a**********************************@r9g2000p rd.googlegroups.com...
On Apr 12, 7:48 am, "Mike Schilling" <mscottschill...@hotmail.com>
wrote:
>"Mirek Fidler" <c...@ntllib.orgwrote in message

news:aa**********************************@c19g200 0prf.googlegroups.com...
On Apr 11, 11:44 pm, Razii <DONTwhatever...@hotmail.comwrote:
Which "older OS"? Some 30yo?
>How about mobile and embedded devices that don't have
sophisticated
memory management? If a C++ application is leaking memory, the
memory
might never be returned even after the application is
terminated.
This is more dangerous than memory leak in Java application,
where,
after the application is terminated, all memory is returned by
VM.
If VM is able to return memory to OS, so it should be C++
runtime.

The JVM can (in principle, at least) compact its heap and return
the
now-free space to the OS. An environment that doesn't allow memory
compaction (which includes most C++ implementations) would find
this
impossible.

Yes, but we are speaking about "application is terminated" situation
here...
Oh. Yeah, arguing that an OS would treat C++ and Java differently
during application cleanup is (what's the polite word again? Oh,
right.) questionable.
Jun 27 '08 #181
On Fri, 11 Apr 2008 05:30:07 -0700 (PDT), James Kanze
<ja*********@gmail.comwrote:
Since most of the marketing benchmarks don't
use malloc, there's no point in optimizing it. I suspect that
you're describing best practice, and not most likely. (But I'd
be pleased to learn differently.)
The "new" operator is as horrible as anything in both g++ and VC9, as
conclusively proven in the other thread.

Here is a C++ version to benchmark the operator "new"

http://pastebin.com/f6bfa4d78 (C++ version)

28422 ms (g++)
26750 ms (vc++)

U++, that overwrites the default new, can execute the same program in
4500 ms, 6.3 times faster than g++.

However, the fastest is java...

http://pastebin.com/f3f559ae2 (java )

(make sure to run it with these flags)
java -server -Xmx86m -Xms86m -Xmn85m Test

1800 ms, 15 times faster than g++.
Jun 27 '08 #182

"Chris Thomasson" <cr*****@comcast.netwrote in message
news:Gq******************************@comcast.com. ..
"Mike Schilling" <ms*************@hotmail.comwrote in message
news:f3****************@newssvr27.news.prodigy.net ...
>>
"Mirek Fidler" <cx*@ntllib.orgwrote in message
news:aa**********************************@c19g200 0prf.googlegroups.com...
>>On Apr 11, 11:44 pm, Razii <DONTwhatever...@hotmail.comwrote:
Which "older OS"? Some 30yo?

How about mobile and embedded devices that don't have
sophisticated
memory management? If a C++ application is leaking memory, the
memory
might never be returned even after the application is terminated.
This is more dangerous than memory leak in Java application,
where,
after the application is terminated, all memory is returned by
VM.

If VM is able to return memory to OS, so it should be C++ runtime.

The JVM can (in principle, at least) compact its heap and return
the now-free space to the OS. An environment that doesn't allow
memory compaction (which includes most C++ implementations) would
find this impossible.

Heap compaction is nothing all that special. Its certainly not tied
to a GC. Not at all.
Good thing I neither said nor implied that it is, then. However,
all Java VMs that I know of support it, for the obvious reason that
GCs which don't compact would fail pretty quickly with heap
fragmentation [1]. Most C++ implementations don't support it, because
their pointers are implemented as machine addresses and they lack a
mechanism to fix up the pointers after a compaction.

1. Plus the fact that if you have enough information to do GC, you
have more than enough to make compaction work.
Jun 27 '08 #183
Razii wrote:
On Sat, 12 Apr 2008 20:49:18 +0200, "Bo Persson" <bo*@gmb.dkwrote:
>The really unfair thing is that you would never ever do anything
like this in C++. Why would any application allocate ten million
int sized objects separately?

What about in the case of some kind of tree structure?

How would you do the following in c++?

#include <ctime>
#include <iostream>
struct Tree {
Tree *left;
Tree *right;
};
This is a tree with empty nodes. Why would I want to build that?
Bo Persson
Jun 27 '08 #184
Razii wrote:
On Sat, 12 Apr 2008 06:11:56 -0700 (PDT), Mirek Fidler
<cx*@ntllib.orgwrote:
>IMO, manual management still wins...

The only disadvantage of GC seems to be it needs more RAM (though I
was able to fix it. See below). On the positive side, it has the big
advantages too. In manual management, you can delete the memory but
still has the pointer pointing at it, i.e dangling pointer. Or you
can forget to free it and have memory leak. Both of these are not
possible with GC.
Or you can put the delete statement in the destructor of the class
owning the pointer. That way you cannot forget to use it.
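A minimal sketch of that owning class, using the Tree node from earlier in
the thread (std::auto_ptr, and later std::unique_ptr, are the library forms
of the same idea):

struct Tree { Tree* left; Tree* right; };

// The delete lives in the destructor of the owning class, so it cannot be
// forgotten and cannot run twice.
class NodeOwner {
public:
    NodeOwner() : node_(new Tree()) {}   // members value-initialized to null
    ~NodeOwner() { delete node_; }
private:
    NodeOwner(const NodeOwner&);             // non-copyable: exactly one owner
    NodeOwner& operator=(const NodeOwner&);
    Tree* node_;
};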
Bo Persson
Jun 27 '08 #185
Razii wrote:
On Fri, 11 Apr 2008 05:30:07 -0700 (PDT), James Kanze
<ja*********@gmail.comwrote:
>Since most of the marketing benchmarks don't
use malloc, there's no point in optimizing it. I suspect that
you're describing best practice, and not most likely. (But I'd
be pleased to learn differently.)

The "new" operator is as horrible as anything in both g++ and VC9,
as conclusively proven in the other thread.

Here is a C++ version to benchmark the operator "new"

http://pastebin.com/f6bfa4d78 (C++ version)

28422 ms (g++)
26750 ms (vc++)

U++, that overwrites the default new, can execute the same program
in 4500 ms, 6.3 times faster than g++.

However, the fastest is java...

http://pastebin.com/f3f559ae2 (java )

(make sure to run it with these flags)
java -server -Xmx86m -Xms86m -Xmn85m Test

1800 ms, 15 times faster than g++.
And operator new isn't used much at all in C++, except when
benchmarking against Java. Wonder why?
Bo Persson
Jun 27 '08 #186
On Sun, 13 Apr 2008 10:28:02 +0200, "Bo Persson" <bo*@gmb.dkwrote:
>And operator new isn't used much at all in C++, except when
benchmarking against Java. Wonder why?
huh? That's a pretty silly statement. The operator new is used to
dynamically allocate memory. It has nothing to do with benchmarking
against Java. You need to dynamically allocate memory in most large C++
programs, such as simulators, where the designer can never safely set
an upper limit on application size.
Jun 27 '08 #187
On 2008-04-11 23:44, Razii wrote:
On Fri, 11 Apr 2008 14:50:56 +0300, Juha Nieminen
<no****@thanks.invalidwrote:
>>Also, according the site
above, in C++ memory is never returned to the operating system (at
least the older OS)
Old OSes could not reclaim unused memory from applications simply
because it lacked the functionality. On OSes which can it is just a
question about whether the allocator used will return unused memory or
not, which means that on some platform it might just as well be the JVM
that does not return memory while the C++ app does.
> Which "older OS"? Some 30yo?

How about mobile and embedded devices that don't have sophisticated
memory management? If a C++ application is leaking memory, the memory
might never be returned even after the application is terminated.
This is more dangerous than memory leak in Java application, where,
after the application is terminated, all memory is returned by VM.
On all OSes which use virtual memory, all memory used by an application
is returned on termination, since the OS unmaps the pages used by the
application. If no virtual memory is used you will not be able to
retrieve leaked memory; on the other hand, most programs running in such
environments are heavily profiled to find memory leaks.

--
Erik Wikström
Jun 27 '08 #188
On 2008-04-11 02:14, Razii wrote:
On Thu, 10 Apr 2008 17:33:21 +0300, Juha Nieminen
<no****@thanks.invalidwrote:
>However, part of my C++ programming style just naturally also avoids
doing tons of news and deletes in tight loops (which is, again, very
different from eg. Java programming where you basically have no choice)

However, Java allocates new memory blocks on its internal heap
(which is allocated in huge chunks from the OS). In this way, in most
of the cases it bypasses memory allocation mechanisms of the
underlying OS and is very fast. In C++, each "new" allocation request
will be sent to the operating system, which is slow.
You would find that fewer people would regard you as a troll if you did
not post things like this. Anyone with at least a little bit of
knowledge of OSes and memory allocation knows that this is wrong. When
an application needs more memory it uses brk(), sbrk(), or mmap() (or
something equivalent) and adds the new memory to its internal heap, just
like in Java. Note that using mmap makes it much easier to return memory
to the OS than using brk/sbrk().

--
Erik Wikström
Jun 27 '08 #189
On Sun, 13 Apr 2008 09:33:02 GMT, Erik Wikström
<Er***********@telia.comwrote:
>You would find that fewer people would regard you as a troll if you did
not post things like this.
No problems. I have a thick skin ..
>Anyone with at least a little bit of knowledge of OSes and memory allocation
knows that this is wrong.
As I said, I read this on a web site. What about VirtualAlloc? When is
that called?

Jun 27 '08 #190
Razii wrote:
huh? That's a pretty silly statement. The operator new is used to
dynamically allocate memory. It has nothing to do with benchmarking
with Java. You need to dynamically allocate memory in most large C++
programs, such as simulators, where the designer can never safely set
an upper limit on application size.
In many applications memory is allocated as arrays instead of
individual elements.

Granted, there are situations where there's no way around allocating
individual elements, but in those cases a custom allocator can be used
for considerable speedups. (And yes, it's hard to find such allocators,
but eg. I have made one, if anyone is interested.)
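As a sketch of the general technique (not the allocator mentioned above), an
arena for the Tree benchmark quoted earlier can look like this -- nodes are
carved out of large blocks and released all at once:

#include <cstddef>
#include <vector>

struct Tree { Tree* left; Tree* right; };

// Carves Tree nodes out of large blocks; tearing the tree down becomes a
// single reset of the arena instead of millions of calls to operator delete.
class TreeArena {
public:
    TreeArena() : used_(kBlockSize) {}
    ~TreeArena() { reset(); }
    Tree* make() {
        if (used_ == kBlockSize) {
            blocks_.push_back(new Tree[kBlockSize]);
            used_ = 0;
        }
        return &blocks_.back()[used_++];
    }
    void reset() {
        for (std::size_t i = 0; i < blocks_.size(); ++i)
            delete[] blocks_[i];
        blocks_.clear();
        used_ = kBlockSize;              // force a fresh block on next make()
    }
private:
    static const std::size_t kBlockSize = 4096;   // illustrative block size
    std::vector<Tree*> blocks_;
    std::size_t used_;
};

Tree* CreateTree(TreeArena& a, int n) {
    if (n <= 0) return 0;
    Tree* t = a.make();
    t->left  = CreateTree(a, n - 1);
    t->right = CreateTree(a, n - 1);
    return t;
}

// Usage, mirroring the benchmark loop:
//   TreeArena a;
//   for (int i = 0; i < 15; ++i) { CreateTree(a, 22); a.reset(); }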
Jun 27 '08 #191
On 13 Apr., 11:01, Razii <DONTwhatever...@hotmail.comwrote:
On Sun, 13 Apr 2008 10:28:02 +0200, "Bo Persson" <b...@gmb.dkwrote:
And operator new isn't used much at all in C++, except when
benchmarking against Java. Wonder why?

huh? That's a pretty silly statement. The operator new is used to
dynamically allocate memory. It has nothing to do with benchmarking
against Java. You need to dynamically allocate memory in most large C++
programs, such as simulators, where the designer can never safely set
an upper limit on application size.
This is of course true, but in most C++ projects new is quite
rare. This is opposed to Java, where you have to use new every time you
create an object.

/Peter
Jun 27 '08 #192
On 13 Apr., 07:41, Ian Collins <ian-n...@hotmail.comwrote:
Jerry Coffin wrote:
In article <3e440ef2-58c0-4aff-8f3e-d52aa579a763
@u3g2000hsc.googlegroups.com>, james.ka...@gmail.com says...
[ ... ]
In practice (and I speak here from experience), using garbage
collection in C++ doesn't impact style that much. *(And with the
exception of a few people like you and Jerry Collins, many of
those worried about its impact on style really do need to change
their style.)
Is this Jerry Collins anything like a Tom Collins? Personally in mixed
drinks, I prefer a Cape Cod (though I really prefer a good red Bordeaux
over almost any mixed drink).

He's a well known (at least in these parts) All Black ruby player.
But ruby is off topic here - as well as java! ;-)

/Peter
Jun 27 '08 #193
Razii wrote:
On Sun, 13 Apr 2008 10:28:02 +0200, "Bo Persson" <bo*@gmb.dkwrote:
>And operator new isn't used much at all in C++, except when
benchmarking against Java. Wonder why?

huh? That's pretty silly statement. The operator new is used to
dynamically allocated memory.
But you don't do that directly in C++ code. Most often you put your
data in a container, and let it handle the allocation.
It has nothing to do with benchmarking
with Java.
It does very much so. Dynamically allocating a huge number of very
small objects is perhaps something you do in Java, but you don't do
that in real C++ code. Especially not in a loop.

That might be one reason why C++ implementors haven't spent much
effort optimizing this.
You need to dynamically allocate memory in most large C++
programs, such as simulators, where the designer can never safely
set an upper limit on application size.
I can have a std::vector<item> or a std::deque<item> and don't have to
use new in my code. The container can grow as needed.

Bo Persson
Jun 27 '08 #194
On 13 avr, 07:08, Jerry Coffin <jcof...@taeus.comwrote:
In article <3e440ef2-58c0-4aff-8f3e-d52aa579a763
@u3g2000hsc.googlegroups.com>, james.ka...@gmail.com says...
[ ... ]
In practice (and I speak here from experience), using garbage
collection in C++ doesn't impact style that much. (And with the
exception of a few people like you and Jerry Collins, many of
those worried about its impact on style really do need to change
their style.)
Is this Jerry Collins anything like a Tom Collins?
Hmmm. I don't know how my fingers found l's instead of
f's---they're not even on the same hand. (On the other hand,
they do look vaguely similar in the font I'm using, if I don't
look too hard.)
Personally in mixed drinks, I prefer a Cape Cod (though I
really prefer a good red Bordeaux over almost any mixed
drink).
I definitely prefer wine to mixed drinks as well, although if
it's just as an aperitif, rather than with a meal, it will
generally be white. (About the only "mixed drink" I drink is a
Schorle---half white wine, half mineral water, and then only
when it's very hot.)
Even with garbage collection, the "default" mode
in C++ is values, not references or pointers (call them whatever
you want), with true local variables, on the stack, and not
dynamically allocated memory. As far as I know, there's never
been the slightest suggestion that this should be changed.
I sure hope not -- changing that seems like it would change the
fundamental character of the language almost completely. Improving C++
is one thing, but changing its character is quite another. Trying to
make C++ into another Java doesn't strike me as a particularly useful
exercise.
Agreed. The goal is simply to add functionality for those for
whom it would be useful, not to change the character of the
language.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #195
On 13 avr, 15:10, "Bo Persson" <b...@gmb.dkwrote:

[...]
It does very much so. Dynamically allocating a huge number of very
small objects is perhaps something you do in Java, but you don't do
that in real C++ code. Especially not in a loop.
This also holds with regards to Peter Koch's comment "new is
quite rare": it depends. You're certainly not forced to use new
for everything (nor should you), but there are specific
applications where you might end up with such allocation
patterns.
That might be one reason why C++ implementors haven't spent
much effort optimizing this.
Some have, actually, and there are implementations of
malloc/free which handle a lot of small allocations gracefully.
You need to dynamically allocate memory in most large C++
programs, such as simulators, where the designer can never
safely set an upper limit on application size.
I can have a std::vector<item> or a std::deque<item> and don't
have to use new in my code. The container can grow as needed.
Up until that point, the same holds true in Java. The
difference is that in Java, when you have a vector of item, what
you really have is a vector of pointers to item, and each item
has to be allocated individually. If my goal were to write a
benchmark proving C++ were faster than Java, I'd certainly start
by defining a small class, like Point (two or three double) or
Complex (two double), then start deep copying vectors of them.
Java memory allocation can be very fast, but not as fast as 0.
(If I really wanted Java to look bad, I'd also arrange things so
that the garbage collector pool space was almost full, almost
all of the time, so that the garbage collector had to run often,
on a lot of allocated memory.)
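A small illustration of the difference (the Java comparison in the comments
is the point being made above, not something measured here):

#include <vector>

struct Complex {
    Complex(double r, double i) : re(r), im(i) {}
    double re, im;
};

int main() {
    // One contiguous block of two million doubles; the copy below does no
    // per-element allocation.
    std::vector<Complex> a(1000000, Complex(1.0, 2.0));
    std::vector<Complex> b = a;

    // The Java counterpart (e.g. ArrayList<Complex>) holds references, so
    // each element is a separately allocated heap object.
    return b.size() == a.size() ? 0 : 1;
}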

Of course, if I wanted to prove Java faster, I'd choose
something else. Never trust a benchmark you having falsified
yourself.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #196
On 2008-04-13 12:25, Razii wrote:
On Sun, 13 Apr 2008 09:33:02 GMT, Erik Wikström
<Er***********@telia.comwrote:
>>You would find that fewer people would regard you as a troll if you did
not post things like this.

No problems. I have a thick skin ..
>>Anyone with at least a little bit of knowledge of OSes and memory allocation
knows that this is wrong.

As I said, I read this on a web site. What about VirtualAlloc? When is
that called?
Not quite sure, seems to be something like mmap, perhaps you can use it
to allocate memory for a heap, but there are other functions available
to manage heaps in Windows.

--
Erik Wikström
Jun 27 '08 #197
In article <66*************@mid.individual.net>, ia******@hotmail.com
says...

[ ... "Jerry Collins" ]
Maybe Jame was referring to me? If he was, then yes, my style does not
suit GC. I do a lot of real time embedded work where memory is a
precious resource to be free as soon as possible. I started programming
in C and later C++ in days of old before GC when leaking memory was a
mortal sin. I still have to stop myself deleting objects when I'm
writing JavaScript!
Perhaps he's come up with one of those conspiracy theories. Maybe we're
really the same person -- after all, I'm pretty sure nobody's ever seen
both of us at the same time... :-)

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 27 '08 #198
In article <nm****************@newssvr13.news.prodigy.net>,
ms*************@hotmail.com says...

[ ... ]
Good thing I neither said nor implied that it is, then. However,
all Java VMs that I know of support it. for the obvious reason that
GCs which don't compact would fail pretty quickly with heap
fragmentation [1]. Most C++ implementations don't support it, because
their pointers are implemented as machine addresses.and they lack a
mechanism to fix up the pointers after a compaction.

1. Plus the fact that if you have enough information to do GC, you
have more than enough to make compaction work.
Not really. A typical gc in C++ is added on after the fact, so it does
conservative collection -- since it doesn't know what is or isn't a
pointer, it treats everything as if it were a pointer, and assumes that
whatever it would point at, if it were a pointer, is live memory. Of
course, some values wouldn't be valid pointers and are eliminated.

It does NOT, however, know with any certainty that a particular value IS
a pointer -- some definitely aren't (valid) pointers, but others might
or might not be. Since it doesn't know for sure which are pointers and
which are just integers (or whatever) that hold values that could be
pointers, it can't modify any of them. It has enough information to do
garbage collection, but NOT enough to support compacting the heap.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 27 '08 #199
"Razii" <DO*************@hotmail.comwrote in message
news:ep********************************@4ax.com...
On Sat, 12 Apr 2008 07:39:11 -0700, "Chris Thomasson"
<cr*****@comcast.netwrote:
>>That is if the JVM happens to use counter(s) off a base to implement their
allocator.

from IBM site...
[...]

I know that the most scalable, most performant memory allocator designs are
able to hit a fast-path into a per-thread heap. No interlocked RMW, no
memory barriers, just plain load/store instructions. Java can use this
design. C/C++ can as well. End of story. The fast-path, and even the first
and second level slow-paths can be lock/wait-free. What I mean by
first/second level "slow-path" can be explained like:
Very High-Level Allocation Request Outline
_____________________________________________

1. Try per-thread heap. - (no atomics and/or membars)
2. Try remote gather. - (can be no atomics and/or membars, otherwise an
atomic SWAP is needed).
3. Try per-cpu heap - (atomics and/or membars)
4. Try global heap. - (atomics and/or membars)
5. Ask OS! - (CRAP!)

Very High-Level Deallocation Request Outline
_____________________________________________

1. Determine if request was allocated by calling thread. If so goto step 2,
otherwise goto step 3.

2. free to per-thread heap. Done. - (no atomics and/or membars)

3. free to remote-thread heap. - (can be no atomics and/or membars,
otherwise an atomic CAS is needed).

4. On overflow, free to per-cpu heap. - (atomics and/or membars)

5. On per-cpu heap overflow, free to global heap. - (atomics and/or
membars)

6. On global heap overflow, free to OS! (CRAP!)
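
A sketch of what steps 1-3 of the deallocation outline can look like, with
all structural details invented for illustration (the owner id would normally
be stamped on the block at allocation time):

#include <atomic>
#include <thread>

// Blocks freed by their owning thread go on a plain per-thread list (no
// atomics); blocks freed by another thread are pushed onto the owner's
// "remote" list with a CAS.
struct Block {
    Block* next;
    std::thread::id owner;
};

struct ThreadHeap {
    Block* local_free = nullptr;               // touched only by the owner
    std::atomic<Block*> remote_free{nullptr};  // touched by other threads
};

void deallocate(ThreadHeap& owner_heap, Block* b) {
    if (b->owner == std::this_thread::get_id()) {
        b->next = owner_heap.local_free;       // step 2: no atomics needed
        owner_heap.local_free = b;
    } else {
        Block* head = owner_heap.remote_free.load(std::memory_order_relaxed);
        do {                                   // step 3: lock-free push
            b->next = head;
        } while (!owner_heap.remote_free.compare_exchange_weak(
                     head, b, std::memory_order_release, std::memory_order_relaxed));
    }
}

// "Remote gather" on the allocation path: the owner takes the whole remote
// list with one atomic exchange and splices it into its local free list.
void gather(ThreadHeap& heap) {
    Block* list = heap.remote_free.exchange(nullptr, std::memory_order_acquire);
    while (list) {
        Block* next = list->next;
        list->next = heap.local_free;
        heap.local_free = list;
        list = next;
    }
}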

Anyway Razii, your initial assertion that calls to malloc/new in C/C++ always
hit the OS is misleading, and false. Java, C/C++, .NET, whatever, can all
use very highly optimized memory allocation algorithms. IMVHO, GC is not all
that relevant here...

Jun 27 '08 #200
