
Dynamic memory versus static

Greetings,

I was going through the quake2 source code (as you do) when I noticed
arrays are statically allocated on the stack instead of being allocated
dynamically. I've seen this before and know it's for efficiency.

I'm basically questioning the merit of such a technique nowadays for
performance-oriented applications. Is it better to dynamically allocate
a smaller array to fit the memory, or to use one huge statically
allocated array? Or even one larger static array and chop it up at
runtime?

Can you point me to any websites (and yes I have searched) that compare
the differences in performance using both approaches?

Thanks.

Jul 19 '05 #1
6 Replies


J Anderson wrote:
Greetings,

I was going through the quake2 source code (as you do) when I noticed
arrays are statically allocated on the stack instead of being allocated
dynamically. I've seen this before and know it's for efficiency.

I'm basically questioning the merit of such a technique nowadays for
performance-oriented applications. Is it better to dynamically allocate
a smaller array to fit the memory, or to use one huge statically
allocated array?
I don't understand your question here. The 'better' technique would be
the one that fits your needs, I suppose. If you need smaller arrays, you
would allocate smaller ones. If you need one 'huge' array, you'd
allocate a huge one.

Static allocation should be very fast (just adjusting a pointer,
usually), while dynamic allocation may be slower (locating a piece of
memory large enough, preparing it for use and eventual deallocation). It
may make sense to avoid several dynamic allocations when a single
allocation (static or dynamic) can do the job.
Or even one larger static array and chop it up at runtime?
Don't you think that's basically what the run-time system does to supply
you with dynamic memory? What makes you think you can do better?

Can you point me to any websites (and yes I have searched) that compare
the differences in performance using both approaches?


You could compare it yourself. I have. I found that allocating dynamic
memory was very fast, and I could not significantly improve on it.
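
For illustration, a rough comparison along these lines is easy to write;
the array size and iteration count below are arbitrary, and this is only
a sketch, not a rigorous benchmark:

#include <cstddef>
#include <ctime>
#include <iostream>

int main()
{
    const std::size_t N = 4096;      // array size -- arbitrary
    const int iterations = 100000;   // repeat count -- arbitrary
    volatile int sink = 0;           // volatile write gives each pass an
                                     // observable effect

    // Automatic ("stack") array: no allocator involved at all.
    std::clock_t t0 = std::clock();
    for (int i = 0; i < iterations; ++i) {
        int buffer[N];
        for (std::size_t j = 0; j < N; ++j) buffer[j] = i;
        sink = buffer[N - 1];
    }
    std::clock_t t1 = std::clock();

    // Dynamically allocated array: the allocator must find a block and
    // give it back on every iteration.
    for (int i = 0; i < iterations; ++i) {
        int* buffer = new int[N];
        for (std::size_t j = 0; j < N; ++j) buffer[j] = i;
        sink = buffer[N - 1];
        delete[] buffer;
    }
    std::clock_t t2 = std::clock();

    std::cout << "stack: " << double(t1 - t0) / CLOCKS_PER_SEC << " s\n"
              << "heap:  " << double(t2 - t1) / CLOCKS_PER_SEC << " s\n";
    return 0;
}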

You should keep in mind that 99% of code is *NOT* time-critical, and
most of the time you should be far more concerned with writing clean,
understandable, maintainable, bug-free code than with writing fast code.

-Kevin
--
My email address is valid, but changes periodically.
To contact me please use the address from a recent posting.

Jul 19 '05 #2



J Anderson wrote:

Greetings,

I was going through the quake2 source code (as you do) when I noticed
arrays are statically allocated on the stack instead of being allocated
dynamically. I've seen this before and know it's for efficiency.

I'm basically questioning the merit of such a technique nowadays for
performance-oriented applications. Is it better to dynamically allocate
a smaller array to fit the memory, or to use one huge statically
allocated array? Or even one larger static array and chop it up at
runtime?
There is only one way to figure it out:
Try it on your compiler.

But chances are good that the statically allocated array will be faster
on most implementations. How much? That depends on your actual implementation.

Can you point me to any websites (and yes I have searched) that compare
the differences in performance using both approaches?


Try writing some test code yourself. You might learn something.
You might also search for 'object pooling'.
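
In its simplest form, object pooling just means allocating every object
up front and reusing slots at run time. The sketch below is only an
illustration: the Thing type, the pool size and the linear free-slot
search are all invented, not taken from any real code base.

#include <cstddef>
#include <vector>

struct Thing {
    bool  active;
    float data[16];
};

class ThingPool {
public:
    // One big allocation at start-up; nothing is allocated after this.
    explicit ThingPool(std::size_t capacity) : things_(capacity) {
        for (std::size_t i = 0; i < things_.size(); ++i)
            things_[i].active = false;
    }

    // "Allocate" by marking a free slot active; returns 0 when full.
    // A linear scan keeps the sketch short; a free list would be faster.
    Thing* acquire() {
        for (std::size_t i = 0; i < things_.size(); ++i) {
            if (!things_[i].active) {
                things_[i].active = true;
                return &things_[i];
            }
        }
        return 0;
    }

    // "Free" by marking the slot reusable; no call into the heap.
    void release(Thing* t) { t->active = false; }

private:
    std::vector<Thing> things_;
};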

--
Karl Heinz Buchegger
kb******@gascad.at
Jul 19 '05 #3

mjm
From what I have read and experienced myself, operator new is not
efficient for allocating very small objects. I.e., if you must allocate
many (tens of millions) of small objects, then it might be worth your
while to find out about alternatives.

I have a structure consisting of about 10 million small objects and
the allocation is much slower than traversal and computation.
Jul 19 '05 #4


"mjm" <sp*******@yahoo.com> wrote in message news:f3**************************@posting.google.com...
From what I have read and experienced myself, operator new is not
efficient for allocating very small objects. I.e., if you must allocate
many (tens of millions) of small objects, then it might be worth your
while to find out about alternatives.


It might be. First try the standard allocators. And if that doesn't
meet your performance requirements, you can optimize an operator new
for your particular type, taking advantage of what you know about the
type (be it small, fixed size, etc.).
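
A sketch of that kind of class-specific operator new follows. The
Particle type, the block size and the free-list layout are all invented
for illustration; a real version would need more care (thread safety,
alignment, releasing the blocks).

#include <cstddef>
#include <new>

struct Particle {
    float pos[3];
    float vel[3];
    Particle* nextFree;                  // used only while on the free list

    // Hand out recycled slots; only touch the global heap to grow the pool.
    static void* operator new(std::size_t size) {
        if (size != sizeof(Particle))    // e.g. a derived class: fall back
            return ::operator new(size);
        if (!freeList) refill();
        Particle* p = freeList;
        freeList = p->nextFree;
        return p;
    }

    // Put the slot back on the free list instead of freeing it.
    static void operator delete(void* mem, std::size_t size) {
        if (size != sizeof(Particle)) { ::operator delete(mem); return; }
        Particle* p = static_cast<Particle*>(mem);
        p->nextFree = freeList;
        freeList = p;
    }

private:
    static Particle* freeList;

    static void refill() {
        const std::size_t chunk = 1024;  // grow a whole block at a time
        Particle* block = static_cast<Particle*>(
            ::operator new(chunk * sizeof(Particle)));
        for (std::size_t i = 0; i < chunk; ++i) {
            block[i].nextFree = freeList;
            freeList = &block[i];
        }
    }
};

Particle* Particle::freeList = 0;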
Jul 19 '05 #5

mjm wrote:
From what I have read and experienced myself, operator new is not
efficient for allocating very small objects. I.e., if you must allocate
many (tens of millions) of small objects, then it might be worth your
while to find out about alternatives.

I have a structure consisting of about 10 million small objects and
the allocation is much slower than traversal and computation.


Actually, the new operator is efficient. Most courses on Operating
Systems will teach you that allocating many small objects is not as
efficient as allocating one large (i.e. array) object.

On the platforms that I've programmed on, allocating one big chunk
of memory and dividing it up is a lot more efficient than allocating
a whole bunch of small pieces. The trade-off point depends on the
overhead required to allocate memory.
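
A sketch of that "one big chunk, divided up at run time" idea, often
called an arena or bump allocator; the interface and numbers here are
invented for illustration:

#include <cstddef>
#include <new>

// Carves small pieces out of one large allocation by bumping an offset.
// Everything is released at once; there is no per-object deallocation,
// no growth when the block runs out, and only pointer-size alignment.
class Arena {
public:
    explicit Arena(std::size_t bytes)
        : base_(static_cast<char*>(::operator new(bytes))),
          size_(bytes), used_(0) {}

    ~Arena() { ::operator delete(base_); }

    void* allocate(std::size_t n) {
        n = (n + sizeof(void*) - 1) & ~(sizeof(void*) - 1); // keep alignment
        if (used_ + n > size_) return 0;                    // out of space
        void* p = base_ + used_;
        used_ += n;
        return p;
    }

    void reset() { used_ = 0; }   // "free" everything in one go

private:
    char*       base_;
    std::size_t size_;
    std::size_t used_;
};

With something like this, ten million small nodes cost one call into the
system allocator instead of ten million.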

--
Thomas Matthews

C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.raos.demon.uk/acllc-c++/faq.html
Other sites:
http://www.josuttis.com -- C++ STL Library book

Jul 19 '05 #6

sp*******@yahoo.com (mjm) writes:
From what I have read and experienced myself, operator new is not
efficient for allocating very small objects. I.e., if you must allocate
many (tens of millions) of small objects, then it might be worth your
while to find out about alternatives.

I have a structure consisting of about 10 million small objects and
the allocation is much slower than traversal and computation.


Boost provides a pool allocator designed for exactly this
situation:

http://boost.org/libs/pool/doc/index.html
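
A small usage example (assuming Boost is installed; the Node type and
the loop are made up for illustration, see the documentation above for
the details):

#include <boost/pool/object_pool.hpp>

struct Node {
    double value;
    Node*  next;
};

int main()
{
    boost::object_pool<Node> pool;    // all Nodes come from this pool

    Node* head = 0;
    for (int i = 0; i < 1000000; ++i) {
        Node* n = pool.construct();   // much cheaper than a plain new Node
        n->value = i;
        n->next = head;
        head = n;
    }

    // Individual nodes could be handed back with pool.destroy(n); here
    // the pool's destructor simply releases everything at once.
    return 0;
}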

Jul 19 '05 #7
