Bytes | Software Development & Data Engineering Community

Dynamic memory versus static

Greetings,

I was going through the quake2 source code (as you do) when I noticed
arrays are statically allocated on the stack instead of being allocated
dynamically. I've seen this before and know it's for efficiency.

I'm basically questioning the merit of such a technique nowadays for
performance-oriented applications. Is it better to dynamically
allocate a smaller array to fit the data, or to use one huge statically
allocated array? Or even one larger static array and chop it up at
runtime?
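For concreteness, the approaches being compared might look something like the sketch below. `Entity` and `MAX_ENTITIES` are made-up stand-ins, not names from the Quake 2 source:

```cpp
#include <cstddef>
#include <vector>

struct Entity { float x, y, z; };   // stand-in for some game object

const std::size_t MAX_ENTITIES = 1024;

// Quake-style: one fixed array sized for the worst case, no allocator calls.
static Entity g_entities[MAX_ENTITIES];

// Dynamic: allocate exactly what this level needs (caller must delete[]).
Entity* load_dynamic(std::size_t count) {
    return new Entity[count];
}

// Or with RAII, which is usually preferable in C++:
std::vector<Entity> load_vector(std::size_t count) {
    return std::vector<Entity>(count);
}
```

The fixed array wastes memory when fewer entities exist but costs nothing to "allocate"; the dynamic versions fit the data exactly but pay the allocator on every call.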

Can you point me to any websites (and yes I have searched) that compare
the differences in performance using both approaches?

Thanks.

Jul 19 '05 #1
J Anderson wrote:
Greetings,

I was going through the quake2 source code (as you do) when I noticed
arrays are statically allocated on the stack instead of being allocated
dynamically. I've seen this before and know it's for efficiency.

I'm basically questioning the merit of such a technique nowadays for
performance-oriented applications. Is it better to dynamically
allocate a smaller array to fit the data, or to use one huge statically
allocated array?
I don't understand your question here. The 'better' technique would be
the one that fits your needs, I suppose. If you need smaller arrays, you
would allocate smaller ones. If you need one 'huge' array, you'd
allocate a huge one.

Stack allocation should be very fast (just adjusting a pointer,
usually), while dynamic allocation may be slower (locating a piece of
memory large enough, preparing it for use and eventual deallocation). It
may make sense to avoid several dynamic allocations when a single
allocation (static or dynamic) can do the job.
Or even one larger static array and chop it up at
runtime?
Don't you think that's basically what the run-time system does to supply
you with dynamic memory? What makes you think you can do better?

Can you point me to any websites (and yes I have searched) that compare
the differences in performance using both approaches?


You could compare it yourself. I have. I found that allocating dynamic
memory was very fast, and I could not significantly improve on it.
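A minimal harness for comparing the two yourself might look like this sketch. `N` and `ITERS` are arbitrary; the numbers you get will vary wildly by compiler, optimization level, and platform:

```cpp
#include <chrono>

const int N = 256;        // elements per array (arbitrary)
const int ITERS = 100000; // repetitions (arbitrary)

volatile int sink;        // keeps the optimizer from removing the loops

// Time one run of fn, in microseconds.
long long time_us(void (*fn)()) {
    auto t0 = std::chrono::steady_clock::now();
    fn();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}

// Automatic ("stack") array: no allocator call at all.
void stack_loop() {
    for (int i = 0; i < ITERS; ++i) {
        int a[N];
        a[0] = i;
        sink = a[0];
    }
}

// Heap array: every pass pays for new[] and delete[].
void heap_loop() {
    for (int i = 0; i < ITERS; ++i) {
        int* a = new int[N];
        a[0] = i;
        sink = a[0];
        delete[] a;
    }
}
```

Print `time_us(stack_loop)` and `time_us(heap_loop)` and compare. The stack version typically wins, but by how much is entirely implementation-dependent, which is the point of measuring.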

You should keep in mind that 99% of code is *NOT* time-critical, and
most of the time you should be far more concerned with writing clean,
understandable, maintainable, bug-free code than with writing fast code.

-Kevin
--
My email address is valid, but changes periodically.
To contact me please use the address from a recent posting.

Jul 19 '05 #2


J Anderson wrote:

Greetings,

I was going through the quake2 source code (as you do) when I noticed
arrays are statically allocated on the stack instead of being allocated
dynamically. I've seen this before and know it's for efficiency.

I'm basically questioning the merit of such a technique nowadays for
performance-oriented applications. Is it better to dynamically
allocate a smaller array to fit the data, or to use one huge statically
allocated array? Or even one larger static array and chop it up at
runtime?
There is only one way to figure it out:
try it on your compiler.

But chances are good that the statically allocated array will be faster
on most implementations. How much faster? That depends on your actual implementation.

Can you point me to any websites (and yes I have searched) that compare
the differences in performance using both approaches?


Try some test code of your own. You might learn something.
You might also search for 'object pooling'.
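"Object pooling" here roughly means grabbing one block of same-sized slots up front and recycling them through a free list. A bare-bones sketch of the idea (fixed capacity, not thread-safe, illustration only):

```cpp
#include <cstddef>

// A tiny fixed-capacity pool of same-sized slots linked through a free list.
// allocate() and deallocate() are O(1) pointer swaps: no heap calls at all.
template <typename T, std::size_t Capacity>
class Pool {
    union Slot { Slot* next; unsigned char storage[sizeof(T)]; };
    Slot slots[Capacity];
    Slot* free_head;
public:
    Pool() : free_head(&slots[0]) {
        // Thread every slot onto the free list.
        for (std::size_t i = 0; i + 1 < Capacity; ++i)
            slots[i].next = &slots[i + 1];
        slots[Capacity - 1].next = 0;
    }
    void* allocate() {                 // pop the free list
        if (!free_head) return 0;      // pool exhausted
        Slot* s = free_head;
        free_head = s->next;
        return s->storage;
    }
    void deallocate(void* p) {         // push back onto the free list
        Slot* s = reinterpret_cast<Slot*>(p);
        s->next = free_head;
        free_head = s;
    }
};
```

A real pool would also handle construction/destruction and alignment for over-aligned types; this shows only the free-list mechanics that make pooling fast.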

--
Karl Heinz Buchegger
kb******@gascad.at
Jul 19 '05 #3
mjm
From what I have read and experienced myself, operator new is not
efficient for allocating very small objects. I.e., if you must allocate
many (tens of millions) of small objects, then it might be worth your
while to find out about alternatives.

I have a structure consisting of about 10 million small objects and
the allocation is much slower than traversal and computation.
Jul 19 '05 #4

"mjm" <sp*******@yahoo.com> wrote in message news:f3**************************@posting.google.c om...
From what I have read and experienced myself, operator new is not
efficient for allocating very small objects. I.e., if you must allocate
many (tens of millions) of small objects, then it might be worth your
while to find out about alternatives.


It might be. First try the standard allocators. And if that doesn't
meet your performance requirements, you can optimize an operator new
for your particular type, taking advantage of what you know about it
(be it small, fixed size, etc.).
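One way to do that, sketched below: give the type its own class-level `operator new`/`operator delete` backed by a free list that is refilled one big chunk at a time. `Particle` and the chunk size of 256 are made-up; the sketch assumes single-threaded use and never returns the chunk to the system:

```cpp
#include <cstddef>
#include <new>

struct Particle {
    float x, y, z;

    static void* operator new(std::size_t size) {
        // Fall back to the global allocator for odd sizes (e.g. derived types).
        if (size != sizeof(Particle)) return ::operator new(size);
        if (!free_list) refill();
        FreeNode* node = free_list;   // pop a recycled slot: O(1)
        free_list = node->next;
        return node;
    }
    static void operator delete(void* p, std::size_t size) {
        if (size != sizeof(Particle)) { ::operator delete(p); return; }
        FreeNode* node = static_cast<FreeNode*>(p);
        node->next = free_list;       // push the slot back: O(1)
        free_list = node;
    }

private:
    // pad mirrors the members above, since sizeof(Particle) can't be
    // used inside Particle's own definition (the type is incomplete).
    union FreeNode { FreeNode* next; unsigned char pad[sizeof(float) * 3]; };
    static FreeNode* free_list;

    static void refill() {            // one heap call buys 256 slots
        FreeNode* chunk = new FreeNode[256];
        for (int i = 0; i < 255; ++i) chunk[i].next = &chunk[i + 1];
        chunk[255].next = free_list;
        free_list = chunk;            // note: chunks are never freed
    }
};
Particle::FreeNode* Particle::free_list = 0;
```

After this, plain `new Particle`/`delete p` in client code transparently uses the free list; that is the appeal of overloading at the class level rather than changing call sites.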
Jul 19 '05 #5
mjm wrote:
From what I have read and experienced myself, operator new is not
efficient for allocating very small objects. I.e., if you must allocate
many (tens of millions) of small objects, then it might be worth your
while to find out about alternatives.

I have a structure consisting of about 10 million small objects and
the allocation is much slower than traversal and computation.


Actually, the new operator is efficient. Most courses on Operating
Systems will teach you that allocating many small objects is not
as efficient as allocating one large (i.e., array) object.

On the platforms that I've programmed on, allocating one big chunk
of memory and dividing it up is a lot more efficient than allocating
a whole bunch of small pieces. The trade-off point depends on the
overhead required to allocate memory.
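The "one big chunk, divided up" idea is often implemented as a bump (or arena) allocator; a minimal sketch, assuming power-of-two alignment and no per-object free (you reset the whole arena at once, e.g. per frame or per level):

```cpp
#include <cstddef>

// Bump allocator: one upfront block, handed out by advancing an offset.
// Every alloc() is just an add and a compare; there is no free list.
class Arena {
    unsigned char* base;
    std::size_t capacity;
    std::size_t offset;
public:
    explicit Arena(std::size_t bytes)
        : base(new unsigned char[bytes]), capacity(bytes), offset(0) {}
    ~Arena() { delete[] base; }

    // align must be a power of two.
    void* alloc(std::size_t bytes, std::size_t align = sizeof(void*)) {
        std::size_t p = (offset + align - 1) & ~(align - 1);  // round up
        if (p + bytes > capacity) return 0;                   // out of room
        offset = p + bytes;
        return base + p;
    }
    void reset() { offset = 0; }   // "frees" everything in one step
};
```

The trade-off the post describes shows up directly: allocation is nearly free, but individual objects can't be released early, so the pattern fits workloads where everything dies together.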

--
Thomas Matthews

C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.raos.demon.uk/acllc-c++/faq.html
Other sites:
http://www.josuttis.com -- C++ STL Library book

Jul 19 '05 #6
sp*******@yahoo.com (mjm) writes:
From what I have read and experienced myself, operator new is not
efficient for allocating very small objects. I.e., if you must allocate
many (tens of millions) of small objects, then it might be worth your
while to find out about alternatives.

I have a structure consisting of about 10 million small objects and
the allocation is much slower than traversal and computation.


Boost provides a pool allocator designed for exactly this
situation:

http://boost.org/libs/pool/doc/index.html

Jul 19 '05 #7


