
Preventing memory fragmentation

Given the following information about memory management in C++:

-----
The c-runtime dynamic memory manager (and most other commercial memory
managers) has issues with fragmentation similar to a hard drive file
system. Over time, the more often you call new/delete or alloc/free,
the more gaps and fragments there will be in the heap. This can lead
to inefficient use of available memory, as well as cache-hit
inefficiencies.

The goal of performance-critical applications is to minimize the
number of calls to new/delete. Many use buffer managers that allocate
blocks of contiguous memory, and chain them together with list
pointers when a block is consumed. Another approach is to use a Spare
List, in which each block of memory is not actually freed but
returned to a pool manager.
-----

I have implemented a memory pool that will coalesce blocks of memory
that are returned to the pool when they are freed up. I am interested
in the effectiveness of this implementation over the default memory
manager, and what I might want to change or consider when using the
pool in a specific application. I would appreciate any constructive
feedback this group has to offer.

The code to implement and test the pool is as follows:

#include <stdlib.h>
#include <time.h>
#include <string.h>

#include <new>
#include <list>
#include <iterator> // for std::advance

class MemoryPool
{
public:
explicit MemoryPool(size_t memorySize);
~MemoryPool();

void* Reserve(size_t size);
void Release(void* pMemory);

private:
typedef unsigned char Byte;
class Block
{
public:
static Block* FromMemory(void* pMemory);

explicit Block(size_t size, Block* pPrior = NULL, Block*
pNext = NULL);

size_t GetSize() const;
void* GetMemory();
Block* Prior();
Block* Next();

Block* SplitOff(size_t size);
Block* FuseWith( Block* pBlock);

private:
const Block* End() const;

size_t m_size;
Block* m_pPrior;
Block* m_pNext;
};

Block* m_pFirstBlock;
void* m_pMemoryPool;
};

MemoryPool::MemoryPool
(
size_t memorySize
)
{
m_pMemoryPool = ::operator new(memorySize);
::memset(m_pMemoryPool, 0, memorySize);

size_t blockSize = memorySize - sizeof(Block);
m_pFirstBlock = new(m_pMemoryPool) Block(blockSize);
}

MemoryPool::~MemoryPool()
{
::operator delete(m_pMemoryPool);
}

void* MemoryPool::Reserve
(
size_t size
)
{
// The free list may be empty (m_pFirstBlock == NULL) once the pool
// is fully consumed, so check the pointer before dereferencing it.
Block* pBlock = m_pFirstBlock;
while(NULL != pBlock && pBlock->GetSize() < size){
pBlock = pBlock->Next();
}
if(NULL == pBlock){
throw std::bad_alloc();
}

Block* pFreeBlock = pBlock->SplitOff(size);

// SplitOff can return NULL when the last free block is consumed, so
// test block identity rather than dereferencing pFreeBlock.
if(pBlock == m_pFirstBlock){
m_pFirstBlock = pFreeBlock;
}

return pBlock->GetMemory();
}

void MemoryPool::Release
(
void* pMemory
)
{
Block* pBlock = Block::FromMemory(pMemory);

Block* pNextBlock = m_pFirstBlock;
Block* pPriorBlock = NULL;

// Walk the address-ordered free list to find the insertion point;
// the list may be empty if the whole pool was reserved.
while(NULL != pNextBlock && pBlock > pNextBlock){
pPriorBlock = pNextBlock;
pNextBlock = pNextBlock->Next();
}

if(NULL != pNextBlock){
pBlock = pNextBlock->FuseWith(pBlock);
}

if(NULL != pPriorBlock){
pPriorBlock->FuseWith(pBlock);
}
else{
m_pFirstBlock = pBlock;
}
}

MemoryPool::Block* MemoryPool::Block::FromMemory
(
void* pMemory
)
{
Byte* pBytes = reinterpret_cast<Byte*>(pMemory);
return reinterpret_cast<Block*>(pBytes - sizeof(Block));
}

MemoryPool::Block::Block
(
size_t size,
Block* pPrior,
Block* pNext
) : m_size(size),
m_pPrior(pPrior),
m_pNext(pNext)
{
}

size_t MemoryPool::Block::GetSize() const
{
return m_size;
}

void* MemoryPool::Block::GetMemory()
{
Byte* pMemory = reinterpret_cast<Byte*>(this);
pMemory += sizeof(*this);
return pMemory;
}

MemoryPool::Block* MemoryPool::Block::Prior()
{
return m_pPrior;
}

MemoryPool::Block* MemoryPool::Block::Next()
{
return m_pNext;
}

MemoryPool::Block* MemoryPool::Block::SplitOff
(
size_t size
)
{
Block* pFreeBlock = NULL;
size_t totalSize = sizeof(*this) + size;
if(totalSize >= m_size){
// Too small to split; hand back the next free block, which may
// be NULL when this was the last block in the list.
pFreeBlock = m_pNext;
if(NULL != pFreeBlock){
pFreeBlock->m_pPrior = m_pPrior;
}
}
else{
Byte* pNextBlock = reinterpret_cast<Byte*>(this);
pNextBlock += totalSize;

pFreeBlock = new(pNextBlock) Block(m_size - totalSize,
m_pPrior,
m_pNext);

if(NULL != m_pNext){
m_pNext->m_pPrior = pFreeBlock;
}
}

if(NULL != m_pPrior){
m_pPrior->m_pNext = pFreeBlock;
}

m_size = size;
m_pPrior = NULL;
m_pNext = NULL;

return pFreeBlock;
}

MemoryPool::Block* MemoryPool::Block::FuseWith
(
Block* pBlock
)
{
if(pBlock < this){
if(pBlock->End() == this){
pBlock->m_size += (sizeof(*this) + m_size);
pBlock->m_pNext = m_pNext;

if(NULL != m_pNext){
m_pNext->m_pPrior = pBlock;
}
}
else{
pBlock->m_pNext = this;
m_pPrior = pBlock; // link back so the free list stays doubly linked
}
}

else{
if(this->End() == pBlock){
m_size += (sizeof(*this) + pBlock->GetSize());
m_pNext = pBlock->m_pNext;
return this;
}
else{
pBlock->m_pPrior = this;
m_pNext = pBlock;
}
}

return pBlock;
}

const MemoryPool::Block* MemoryPool::Block::End() const
{
const Byte* pEnd = reinterpret_cast<const Byte*>(this);
pEnd += (sizeof(*this) + m_size);
return reinterpret_cast<const Block*>(pEnd);
}

class Buffer
{
private:
typedef unsigned char Byte;

static const size_t POOL_SIZE = 4096;
static MemoryPool m_memoryPool;
Byte* m_pData;

public:
static void* operator new(size_t size);
static void operator delete(void* pMemory);

explicit Buffer(size_t size);
~Buffer();
};

MemoryPool Buffer::m_memoryPool(Buffer::POOL_SIZE);

void* Buffer::operator new
(
size_t size
)
{
return m_memoryPool.Reserve(size);
}

void Buffer::operator delete
(
void* pMemory
)
{
m_memoryPool.Release(pMemory);
}

Buffer::Buffer
(
size_t size
)
{
m_pData = reinterpret_cast<Byte*>(m_memoryPool.Reserve(size));
::memset(m_pData, 0xFF, size);
}

Buffer::~Buffer()
{
m_memoryPool.Release(m_pData);
}

typedef std::list<Buffer*> BufferList;
typedef BufferList::iterator BufferIterator;

void AddBuffer(BufferList* pList, size_t size = 0);
void RemoveBuffer(BufferList* pList, size_t buffer = 0);
void ClearBufferList(BufferList* pList);

void AddBuffer
(
BufferList* pList,
size_t size
)
{
const size_t MAX_BUFFER_SIZE = 128;
const size_t MIN_BUFFER_SIZE = 16;

if(0 == size){
size = ::rand() % MAX_BUFFER_SIZE + 1;
if(MIN_BUFFER_SIZE > size){
size = MIN_BUFFER_SIZE;
}
}

pList->push_back(new Buffer(size));
}

void RemoveBuffer
(
BufferList* pList,
size_t buffer
)
{
size_t size = pList->size();
if(0 == size){
return; // nothing to remove; also avoids ::rand() % 0 below
}

if(0 == buffer || (buffer >= size)){
buffer = ::rand() % size;
}

BufferIterator itBuffer = pList->begin();
std::advance(itBuffer, buffer);

Buffer* pBuffer = *itBuffer;
pList->erase(itBuffer);
delete pBuffer;
}

void ClearBufferList
(
BufferList* pList
)
{
BufferIterator itBuffer = pList->begin();
const BufferIterator itEND = pList->end();
while(itEND != itBuffer){
delete *itBuffer;
++itBuffer;
}
pList->clear(); // drop the now-dangling pointers
}

int main()
{
::srand(static_cast<unsigned int>(::time(NULL)));
BufferList bufferList;

const int nBUFFER_COUNT = 20;
for(int nBuffer = 0; nBUFFER_COUNT > nBuffer; ++nBuffer){
::AddBuffer(&bufferList);
}

for(int nRemove = 0; 5 > nRemove; ++nRemove){
::RemoveBuffer(&bufferList);
}

for(int nTest = 0; 10 > nTest; ++nTest){
if(0 == ::rand() % 2){
::AddBuffer(&bufferList);
}
else{
::RemoveBuffer(&bufferList);
}
}

::ClearBufferList(&bufferList);
}
Jul 22 '05 #1
18 Replies



"Tron Thomas" <tr*********@verizon.net> wrote in message news:a4**************************@posting.google.com...
The c-runtime dynamic memory manager (and most other commercial memory
managers) has issues with fragmentation. This can lead to inefficient use of
available memory, as well as cache-hit inefficiencies.
True. Some allocators do better than others.
The goal of performance-critical applications is to minimize the
number of calls to new/delete.
The goal of performance-critical applications is to minimize EVERYTHING.
I assume you mean calls to the global operator new/delete.
I have implemented a memory pool that will coalesce blocks of memory
that are returned to the pool when they are freed up.


So does almost every default allocator out there. Your code is also suboptimal,
as it exhibits the well-known bad behavior of wandering all over memory, touching
random pages, when you are looking for a block. Further, I can't see anything
in your approach that does anything to diminish fragmentation.

You might want to google around for some of the prior art on this.

Jul 22 '05 #2

For my application I am only allocating a specific class of objects on
the heap, so I don't believe that I would need to override the global
new and delete operators.

"Ron Natalie" <ro*@sensor.com> wrote in message news:<3f*********************@news.newshosting.com>...
The goal of performance-critical applications is to minimize EVERYTHING.
I aasume you mean to the global operator new/delete.
I have implemented a memory pool that will coalesce blocks of memory
that are returned to the pool when they are freed up.
So does almost every default allocator out there. Your code is also suboptimal,
as it exhibits the well-known bad behavior of wandering all over memory, touching
random pages, when you are looking for a block. Further, I can't see anything
in your approach that does anything to diminish fragmentation.


I expected that most allocators would already handle coalescing, so I
was really wondering what I was getting out of implementing it myself.
That's what motivated me to make this post.

You might want to google around for some of the prior art on this.


I tried using Google to research before doing this implementation. I
did not find anything useful. What would you suggest I search under?
Jul 22 '05 #3


"Tron Thomas" <tr*********@verizon.net> wrote in message news:a4*************************@posting.google.com...

I tried using Google to research before doing this implementation. I
did not find anything useful. What would you suggest I search under?


A ton of stuff shows up with
http://www.google.com/search?hl=en&i...ion+algorithms
Jul 22 '05 #4

On 23 Dec 2003 11:14:09 -0800, tr*********@verizon.net (Tron Thomas)
wrote:
For my application I am only allocating a specific class of objects on
the heap, so I don't believe that I would need to override the global
new and delete operators.


Just some thoughts:

(1) You can override new and delete in the class whose objects are to
be stored in a special way;

(2) You can use "placement new/delete" and allocate the raw memory
using something besides operator new.
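A minimal sketch of suggestion (1), a class-specific operator new/delete drawing on a static byte pool. The class name, POOL_SIZE, and the bump-pointer scheme (which never reclaims memory) are illustrative assumptions, not anything from the posts above:

```cpp
#include <cstddef>
#include <new>

// Class-specific operator new/delete backed by a static byte pool.
// The bump-pointer allocation here is deliberately the simplest thing
// that works; it never reclaims memory.
class Packet
{
public:
    static void* operator new(std::size_t size)
    {
        if(s_used + size > POOL_SIZE){
            throw std::bad_alloc(); // pool exhausted
        }
        void* pMemory = s_pool + s_used;
        s_used += size; // bump the high-water mark
        return pMemory;
    }

    static void operator delete(void*)
    {
        // Intentionally empty: this sketch never reclaims memory.
    }

private:
    static const std::size_t POOL_SIZE = 4096;
    static unsigned char s_pool[POOL_SIZE];
    static std::size_t s_used;
};

unsigned char Packet::s_pool[Packet::POOL_SIZE];
std::size_t Packet::s_used = 0;
```

With this in place, `new Packet` draws from the static pool instead of the global heap, which is the effect suggestion (1) is after.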
--
Bob Hairgrove
No**********@Home.com
Jul 22 '05 #5

On 22 Dec 2003 09:44:42 -0800, tr*********@verizon.net (Tron Thomas) wrote:
Given the following information about memory management in C++:

-----
The c-runtime dynamic memory manager (and most other commercial memory
managers) has issues with fragmentation similar to a hard drive file
system. Over time, the more often you call new/delete or alloc/free,
the more gaps and fragments there will be in the heap. This can lead
to inefficient use of available memory, as well as cache-hit
inefficiencies.

The goal of performance-critical applications is to minimize the
number of calls to new/delete. Many use buffer managers that allocate
blocks of contiguous memory, and chain them together with list
pointers when a block is consumed. Another approach is to use a Spare
List, in which each block of memory is not actually freed but
returned to a pool manager.
-----

I have implemented a memory pool that will coalesce blocks of memory
that are returned to the pool when they are freed up. I am interested
in the effectiveness of this implementation over the default memory
manager, and what I might want to change or consider when using the
pool in a specific application. I would appreciate any constructive
feedback this group has to offer.

The code to implement and test the pool is as follows:

<snip>

Instead of asking for opinions, why not do some benchmarks? One
good benchmark trumps 1,000 opinions. Especially when the performance
of the whole application is concerned.
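The benchmarking idea above can be sketched as a small timing harness: run the same reserve/release workload against the default heap and against the pool, and compare the times. The adapter names, the fixed 64-byte request, and the use of std::clock (which is coarse, so large iteration counts are needed) are all assumptions for illustration:

```cpp
#include <cstddef>
#include <ctime>
#include <new>

// Time a fixed allocate/free workload against a pair of
// reserve/release functions passed in as function pointers.
double TimeWorkload(void* (*reserve)(std::size_t),
                    void (*release)(void*),
                    int iterations)
{
    std::clock_t start = std::clock();
    for(int i = 0; i < iterations; ++i){
        void* pMemory = reserve(64); // fixed-size request, for simplicity
        release(pMemory);
    }
    return double(std::clock() - start) / CLOCKS_PER_SEC;
}

// Adapters for the default heap; adapters wrapping a pool's
// Reserve/Release members would be written the same way.
void* HeapReserve(std::size_t size) { return ::operator new(size); }
void HeapRelease(void* pMemory) { ::operator delete(pMemory); }
```

Calling TimeWorkload once with the heap adapters and once with pool adapters, then printing both times, gives a first-order comparison; a realistic benchmark would also vary request sizes and interleave allocations the way the real application does.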

See <http://www.microquill.com> for some tips on benchmarking
heap allocators. They also sell a very good heap allocator.

Are you really sure that heap fragmentation is a big issue? I'd be
interested in seeing some data, if you have it.

Don't forget that the reason for minimizing new/delete calls
is because they take time and they can cause a great deal of
contention in multi-threaded servers.

Slab allocators sound good, though I've never measured
one. See for example:
<http://srl.cs.jhu.edu/courses/600.418/SlabAllocator.pdf>

A very quick once over (well it was more like a glance) of
the code gave me the impression that its a naive memory allocator;
though it's entirely possible that I missed something.

Walking a linked list of blocks is likely to be cache unfriendly.
I didn't even see the simple stuff like multiple pools for different
sizes of block, let alone, processor or thread specific pools.

And where do you deal with the minimum alignment?

You might also want to check out the Hoard allocator
at <www.hoard.org>. I tested it against the Microquill
allocator a few years ago and the Microquill allocator,
SmartHeap, was much faster in my application.

Bob S

Jul 22 '05 #6

I thought I was doing this in the implementation code I posted.

wouldnt_you_like@to_know.com (Bob Hairgrove) wrote in message news:<3f**************@news.webshuttle.ch>...
On 23 Dec 2003 11:14:09 -0800, tr*********@verizon.net (Tron Thomas)
wrote:
For my application I am only allocating a specific class of objects on
the heap, so I don't believe that I would need to override the global
new and delete operators.


Just some thoughts:

(1) You can override new and delete in the class whose objects are to
be stored in a special way;

(2) You can use "placement new/delete" and allocate the raw memory
using something besides operator new.

Jul 22 '05 #7

"Ron Natalie" <ro*@sensor.com> wrote in message news:<3f***********************@news.newshosting.com>...
"Tron Thomas" <tr*********@verizon.net> wrote in message news:a4*************************@posting.google.co m...

I tried using Google to research before doing this implementation. I
did not find anything useful. What would you suggest I search under?


A ton of stuff shows up with
http://www.google.com/search?hl=en&i...ion+algorithms


Yes, I have conducted similar searches that returned many results as
well. However, I have never found any site that really discusses how
to deal with memory fragmentation.

One suggested technique is free space compaction. This would imply
using handles, as you have to shift memory around to compact it, and
I believe that handles might be difficult to incorporate into what is
already implemented.

Most sites discuss methods of finding a free block, such as first
fit, which means finding the first free block that will satisfy the
request. This is what I implemented for my memory pool. One web site
suggested that first fit is one of the methods that leads to less
memory fragmentation than others. So that might be a plus for what
I've done so far.

I have reviewed the code for the project where I want to handle memory
differently. It turns out there are only 3 classes that allocate
objects on the heap frequently. Other classes only instantiate one
object during the lifetime of the program and wouldn't contribute much
to fragmentation.

Objects created from these three classes are all of fixed sizes. I'm
thinking I could override operator new for each of these classes where
I reserve a pool that would contain a certain number of blocks, each
of which can hold one object.

Since all objects are the same size, one block is as good as another
and the fact that the pool might become fragmented wouldn't matter too
much. Memory wouldn't be wasted as all blocks in the free list would
be able to hold a desired object regardless of its location in the
list.
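The fixed-size-block idea described above can be sketched as a free-list pool: every block holds exactly one object, so "allocation" is popping the head of a singly linked free list in O(1), and fragmentation inside the pool cannot occur. All names here are illustrative, and alignment is glossed over (blockSize should be a multiple of the platform's minimum alignment for the stored objects):

```cpp
#include <cstddef>
#include <new>

// Pool of same-sized blocks chained through an intrusive free list.
class FixedPool
{
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : m_blockSize(blockSize < sizeof(Node) ? sizeof(Node) : blockSize),
          m_pStorage(new unsigned char[m_blockSize * blockCount]),
          m_pFreeList(NULL)
    {
        // Thread every block onto the free list, last block first.
        for(std::size_t i = 0; i < blockCount; ++i){
            Node* pNode = reinterpret_cast<Node*>(m_pStorage + i * m_blockSize);
            pNode->pNext = m_pFreeList;
            m_pFreeList = pNode;
        }
    }

    ~FixedPool() { delete [] m_pStorage; }

    void* Reserve()
    {
        if(NULL == m_pFreeList){
            throw std::bad_alloc(); // pool exhausted; a real pool might grow
        }
        Node* pNode = m_pFreeList;
        m_pFreeList = pNode->pNext;
        return pNode;
    }

    void Release(void* pMemory)
    {
        // The freed block becomes the new head of the free list.
        Node* pNode = static_cast<Node*>(pMemory);
        pNode->pNext = m_pFreeList;
        m_pFreeList = pNode;
    }

private:
    struct Node { Node* pNext; };

    std::size_t m_blockSize;
    unsigned char* m_pStorage;
    Node* m_pFreeList;
};
```

Each of the three frequently allocated classes could own one such pool (via an overridden operator new/delete) sized for its own objects.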
Jul 22 '05 #8

po*********************@yahoo.com (Bob Summers) wrote in message news:<3f***************@news.concentric.net>...
Instead of asking for opinions, why not do some benchmarks? One
good benchmark trumps 1,000 opinions. Especially when the performance
of the whole application is concerned.

See <http://www.microquill.com> for some tips on benchmarking
heap allocators. They also sell a very good heap allocator.

Are you really sure that heap fragmentation is a big issue? I'd be
interested in seeing some data, if you have it.

Don't forget that the reason for minimizing new/delete calls
is because they take time and they can cause a great deal of
contention in multi-threaded servers.

Slab allocators sound good, though I've never measured
one. See for example:
<http://srl.cs.jhu.edu/courses/600.418/SlabAllocator.pdf>

A very quick once over (well it was more like a glance) of
the code gave me the impression that it's a naive memory allocator;
though it's entirely possible that I missed something.

Walking a linked list of blocks is likely to be cache unfriendly.
I didn't even see the simple stuff like multiple pools for different
sizes of block, let alone, processor or thread specific pools.

And where do you deal with the minimum alignment?

You might also want to check out the Hoard allocator
at <www.hoard.org>. I tested it against the Microquill
allocator a few years ago and the Microquill allocator,
SmartHeap, was much faster in my application.

Bob S


The comments I listed in my post concerning fragmentation were based
on criticism given by a company where I applied for a programming
position. I sent them the code from some projects I had done and they
were concerned about the way I handled memory in one of my programs,
even though they had never run the program and done any benchmarks
against it.

The program is not that complicated and seems to run fine as it is. I
thought there would be value in dealing with the criticism they
offered in case someone else should review the code and come to a
similar conclusion. Plus, it would be a good learning experience.

I believe I have a strategy that will improve things. I will also
look at the links you suggested.

What exactly is minimum alignment and how is it usually dealt with?
Jul 22 '05 #9

Tron Thomas wrote:
The comments I listed in my post concerning fragmentation were
based on criticism given by a company where I applied for a
programming position. I sent them the code from some projects I
had done and they were concerned about the way I handled memory
in one of my programs, even though they had never run the program
and done any benchmarks against it.
Did you try asking them for (or simply did you get) specifics
beyond, "it'll fragment memory"?
The program is not that complicated and seems to run fine as it
is. I thought there would be value in dealing with the criticism
they offered in case someone else should review the code and come
to a similar conclusion. Plus, it would be a good learning
experience.
Is it this memory pool class you wrote that does your allocation?
I.e. is this the memory manager that they called into question?
I believe I have a strategy that will improve things. I will
also look at the links you suggested.
While looking at other memory managers is always a good idea,
generally the most bang for your buck comes from minimizing the
need for one in the first place.

Your memory manager is THE naive memory manager. No offense
intended, it's just pretty much the most basic memory manager
that there could be.

When I say, "minimize the need for [a memory manager]," I mean,
do all that you can to strip calls to new and delete out of your
codebase. Or, push all of those calls to program initialization
or something.

I write video games for a living. We target small-memory
machines, so in addition to the performance ramifications of
general purpose memory management, fragmentation is a terrible
problem. Fragmentation hits us hard (or could, anyway, if we
used memory allocation naively) because there are a lot of small
allocations that are relatively temporary and a lot of large
allocations that stick around. So, you might allocate a few
small things, allocate a big thing, toss the small things, then
the next big thing can't fit into the small things space that's
now free.

The "solution" for us has always been to limit memory allocation
to being at startup, and to not ever call delete. So, you
allocate a bunch of stuff that you actually need, and you keep
that stuff around forever. This meant that stuff like the amount
of text on screen had a maximum (the text system allocated
buffers big enough for TEXT_NUM_CHARS or some such), there could
be a fixed maximum number of bullet holes strewn about the world,
and there was a fixed number of enemy characters that could be
visible on-screen at any given time. This pushed "dynamic"
memory management off to the different object types that managed
their own types of pools, but this actually ended up being
factored out into "base functionality" for most cases that needed
this sort of thing.
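The allocate-at-startup pattern described above might look like this in miniature; the Bullet type and MAX_BULLETS are invented for illustration, not taken from any real game code:

```cpp
#include <cstddef>

// A fixed-capacity table of objects that lives for the whole program,
// so no new/delete calls happen after initialization.
const std::size_t MAX_BULLETS = 64;

struct Bullet
{
    bool active;
    float x, y;
};

class BulletTable
{
public:
    BulletTable()
    {
        for(std::size_t i = 0; i < MAX_BULLETS; ++i){
            m_bullets[i].active = false;
        }
    }

    // "Allocating" a bullet just claims an inactive slot; when the
    // fixed maximum is reached the request fails instead of growing.
    Bullet* Spawn()
    {
        for(std::size_t i = 0; i < MAX_BULLETS; ++i){
            if(!m_bullets[i].active){
                m_bullets[i].active = true;
                return &m_bullets[i];
            }
        }
        return NULL;
    }

    void Retire(Bullet* pBullet) { pBullet->active = false; }

private:
    Bullet m_bullets[MAX_BULLETS]; // one block, allocated up front
};
```

Because the storage is one array claimed at startup and never released, the heap cannot fragment no matter how many bullets come and go.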
What exactly is minimum alignment and how is it usually dealt with?


The PlayStation2 RAM, for example, is set up so that it's really
easy to read things on 64k boundaries, slightly less easy to read
on any arbitrary 16-byte boundary, and only possible to read on
32-bit boundaries into the main CPU. In other words, "minimum
alignment" changes depending on what you're trying to do.
Arguably the memory allocator should be aware of these issues.
Anything that we wanted to DMA anywhere needed to live on these
16-byte boundaries, but if we had any blocks of anything that we
ever wanted to send anywhere, we actually wanted to do our best
to stick them on 64k boundaries, or far enough away from those
boundaries such that the data that the DMA hardware needed could
fit right at the boundary...

-tom!
Jul 22 '05 #10


"Bob Summers" <po*********************@yahoo.com> wrote in message news:3f***************@news.concentric.net...
> Walking a linked list of blocks is likely to be cache unfriendly.
I didn't even see the simple stuff like multiple pools for different
sizes of block, let alone, processor or thread specific pools.

It's worse than cache unfriendly. If you are running over an underlying
virtual memory system, walking the blocks pages in a lot of random pages
just to search the list.

Jul 22 '05 #11

On 24 Dec 2003 20:18:04 -0800, tr*********@verizon.net (Tron Thomas) wrote:
po*********************@yahoo.com (Bob Summers) wrote in message news:<3f***************@news.concentric.net>...
Instead of asking for opinions, why not do some benchmarks? One
good benchmark trumps 1,000 opinions. Especially when the performance
of the whole application is concerned.

See <http://www.microquill.com> for some tips on benchmarking
heap allocators. They also sell a very good heap allocator.

Are you really sure that heap fragmentation is a big issue? I'd be
interested in seeing some data, if you have it.

Don't forget that the reason for minimizing new/delete calls
is because they take time and they can cause a great deal of
contention in multi-threaded servers.

Slab allocators sound good, though I've never measured
one. See for example:
<http://srl.cs.jhu.edu/courses/600.418/SlabAllocator.pdf>

A very quick once over (well it was more like a glance) of
the code gave me the impression that it's a naive memory allocator;
though it's entirely possible that I missed something.

Walking a linked list of blocks is likely to be cache unfriendly.
I didn't even see the simple stuff like multiple pools for different
sizes of block, let alone, processor or thread specific pools.

And where do you deal with the minimum alignment?

You might also want to check out the Hoard allocator
at <www.hoard.org>. I tested it against the Microquill
allocator a few years ago and the Microquill allocator,
SmartHeap, was much faster in my application.

Bob S


The comments I listed in my post concerning fragmentation were based
on criticism given by a company where I applied for a programming
position. I sent them the code from some projects I had done and they
were concerned about the way I handled memory in one of my programs,
even though they had never run the program and done any benchmarks
against it.

The program is not that complicated and seems to run fine as it is. I
thought there would be value in dealing with the criticism they
offered in case someone else should review the code and come to a
similar conclusion. Plus, it would be a good learning experience.

I believe I have a strategy that will improve things. I will also
look at the links you suggested.

What exactly is minimum alignment and how is it usually dealt with?


On most (all?) RISC machines, addresses of primitive types have to
be a multiple of the size of the item, i.e. the address of a
4 byte int has to be a multiple of the integer size. Even platforms
that allow unaligned loads and stores will generally run much faster
with aligned loads. Imagine what can happen when you try to load an
int that spans pages. Both sides of the int can generate their own
TLB miss, cache miss and pagefault.

That means that data structures must be aligned so that all of their
component parts are correctly aligned. The compiler will assume that
the heap allocator will use some minimum alignment. I think
most 32 bit machines will want 64 bits to be the minimal alignment.
You have to understand how your CPU does addressing to know what
the right value is. In some applications, you'd profit by spending
some space to get cache line alignment, i.e. 32 or 64 bytes with
64 bytes becoming more common.

I think that most allocators will just know what alignment the OS
gives them when they request a chunk of memory. I think that this
will be page aligned on all platforms. If not, it's pretty simple
to check what the actual alignment is as long as you know what the
internals of a pointer look like. Then the allocator will
allocate in chunks that are multiples of the minimum alignment.
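As a concrete illustration, rounding a request up to a multiple of the minimum alignment is usually the first step inside an allocator. The power-of-two constant 8 is an assumption here; the right value is platform-dependent (C++11 code could consult alignof(max_align_t)):

```cpp
#include <cstddef>

// Assumed minimum alignment; must be a power of two.
const std::size_t ALIGNMENT = 8;

std::size_t AlignUp(std::size_t size)
{
    // Adding ALIGNMENT - 1 and masking off the low bits rounds up to
    // the next multiple; this only works for power-of-two alignments.
    return (size + ALIGNMENT - 1) & ~(ALIGNMENT - 1);
}
```

An allocator that also places its block headers at aligned offsets then guarantees that every pointer it returns satisfies the platform's alignment requirement.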

I've spent quite a bit of time working on integer code performance
in the last 10 years or so and I can truthfully say that I've never
worried about heap fragmentation. Heap allocation and deallocation
I've worried about constantly but never fragmentation. It's possible
that the person that interviewed you didn't know what they were
talking about or the two of you miscommunicated. It's also
possible that they were right. Without more context, I can't come
up with a reasonable explanation for the criticism. Was this for
an embedded system or numeric application? Did your interviewer
mean pointer chasing or other skipping around in memory instead
of fragmentation?

Bob S


Jul 22 '05 #12

On Thu, 25 Dec 2003 10:07:07 -0800, Tom Plunket <to***@fancy.org> wrote:
Tron Thomas wrote:
<snip>
While looking at other memory managers is always a good idea,
generally the most bang for your buck comes from minimizing the
need for one in the first place.

Your memory manager is THE naive memory manager. No offense
intended, it's just pretty much the most basic memory manager
that there could be.

When I say, "minimize the need for [a memory manager]," I mean,
do all that you can to strip calls to new and delete out of your
codebase. Or, push all of those calls to program initialization
or something.

I write video games for a living. We target small-memory
machines, so in addition to the performance ramifications of
general purpose memory management, fragmentation is a terrible
problem. Fragmentation hits us hard (or could, anyway, if we
used memory allocation naively) because there are a lot of small
allocations that are relatively temporary and a lot of large
allocations that stick around. So, you might allocate a few
small things, allocate a big thing, toss the small things, then
the next big thing can't fit into the small things space that's
now free.

The "solution" for us has always been to limit memory allocation
to being at startup, and to not ever call delete. So, you
allocate a bunch of stuff that you actually need, and you keep
that stuff around forever. This meant that stuff like the amount
of text on screen had a maximum (the text system allocated
buffers big enough for TEXT_NUM_CHARS or some such), there could
be a fixed maximum number of bullet holes strewn about the world,
and there was a fixed number of enemy characters that could be
visible on-screen at any given time. This pushed "dynamic"
memory management off to the different object types that managed
their own types of pools, but this actually ended up being
factored out into "base functionality" for most cases that needed
this sort of thing.
<snip>
-tom!


I work on large systems and I agree with your advice regarding
dynamic memory allocation. Don't do it unless it's the best
alternative. Most of the OO code that I've worked on goes crazy
overusing dynamic allocation. I've seen servers that do
20K allocations to print a 200 line list. I carefully
think over every new and malloc call that I code.

Bob S
Jul 22 '05 #13

Tom Plunket <to***@fancy.org> wrote in message news:<fl********************************@4ax.com>...
Did you try asking them for (or simply did you get) specifics
beyond, "it'll fragment memory"?
This is an edited version of the e-mail thread between myself and the
person at the company who provided the feedback on my code:

----------
Hi Tron,

The c-runtime dynamic memory manager (and most other commercial memory
managers) has issues with fragmentation similar to a hard drive file
system. Over time, the more often you call new/delete or alloc/free,
the more gaps and fragments there will be in the heap. This can lead
to inefficient use of available memory, as well as cache-hit
inefficiencies.

The goal of performance-critical applications is to minimize the
number of calls to new/delete. Many use buffer managers that allocate
blocks of contiguous memory, and chain them together with list
pointers when a block is consumed. Another approach is to use a Spare
List, in which each block of memory is not actually freed but
returned to a pool manager.

-----Original Message-----
Where can I find information on implementing and using buffer pools?

-----Original Message-----
Hi Tron,

One of his concerns was frequent calls to new and delete, which can
cause memory fragmentation over time. An example is the allocation
and destruction of a memory buffer for every network packet
transmission, vs. employing a buffer pool.

-----Original Message-----
I'm confused. What kinds of problems did my code have with memory
management?

-----Original Message-----
Hi Tron,

Thanks for the code samples. I had a chance to review them with our
technical director today. He did like the fact that you are willing
to tackle complicated problems. However, he had some concerns about
the efficiency of your code, particularly in its use of the memory
manager, and so I am unable to offer you a position at this time.
----------

If anyone on this group would be interested in looking at the code to
see if they agree with the comments above I would be more than willing
to make the code available to them.
Is it this memory pool class you wrote that does your allocation?
I.e. is this the memory manager that they called into question?
No, I wrote the memory manager in attempt to improve the code after
receiving the feedback.
Your memory manager is THE naive memory manager. No offense
intended, it's just pretty much the most basic memory manager
that there could be.


No offense taken. I agree with you. The memory manager is naive.
After implementing it I was really wondering what I had accomplished
with it and if it was at all effective. It didn't seem to me that it
did anything more than what the default runtime memory manager
would have been able to do. That's why I posted the code to this
newsgroup. I understood that if I requested feedback that there would
be people who would be critical of the code.
Jul 22 '05 #14

po*********************@yahoo.com (Bob Summers) wrote in message news:<3f***************@news.concentric.net>...
I've spent quite a bit of time working on integer code performance
in the last 10 years or so and I can truthfully say that I've never
worried about heap fragmentation. Heap allocation and deallocation
I've worried about constantly but never fragmentation. It's possible
that the person that interviewed you didn't know what they were
talking about or the two of you miscommunicated. It's also
possible that they were right. Without more context, I can't come
up with a reasonable explanation for the criticism. Was this for
an embedded system or numeric application? Did your interviewer
mean pointer chasing or other skipping around in memory instead
of fragmentation?


Here is an edited version of the e-mail thread between me and the
person at the company who provided the feedback:

Hi Tron,

The C runtime's dynamic memory manager (and most other commercial
memory managers) has issues with fragmentation similar to a hard
drive file system. Over time, the more often you call new/delete or
malloc/free, the more gaps and fragments there will be in the heap.
This can lead to inefficient use of available memory, as well as
cache-hit inefficiencies.

The goal in performance-critical applications is to minimize the
number of calls to new/delete. Many use buffer managers that allocate
blocks of contiguous memory and chain them together with list pointers
as each block is consumed. Another approach is to use a Spare List, in
which each block of memory is not actually freed but returned to a
pool manager.

-----Original Message-----
Where can I find information on implementing and using buffer pools?

-----Original Message-----
Hi Tron,

One of his concerns was frequent calls to new and delete, which can
cause memory fragmentation over time. An example is the allocation and
destruction of a memory buffer for every network packet transmission,
vs. employing a buffer pool.

-----Original Message-----
I'm confused. What kinds of problems did my code have with memory
management?

-----Original Message-----
Hi Tron,

Thanks for the code samples. I had a chance to review them with our
technical director today. He did like the fact that you are willing
to tackle complicated problems. However, he had some concerns about
the efficiency of your code, particularly in its use of the memory
manager, and so I am unable to offer you a position at this time.
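The buffer-manager and Spare List techniques described in the quoted
message can be sketched as a fixed-size block pool with an intrusive
free list. This is only a minimal illustration; the class name and
interface are mine, not from the original code, and it assumes the
block size is at least pointer-sized and suitably aligned.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal fixed-size block pool: one contiguous slab is allocated up
// front, and released blocks are threaded onto an intrusive free list
// instead of being returned to the system.
class SpareListPool {
public:
    SpareListPool(std::size_t blockSize, std::size_t blockCount)
        : blockSize_(blockSize < sizeof(void*) ? sizeof(void*) : blockSize),
          storage_(blockSize_ * blockCount),
          freeHead_(NULL)
    {
        // Chain every block onto the free list to start with.
        for (std::size_t i = 0; i < blockCount; ++i)
            Release(&storage_[i * blockSize_]);
    }

    // O(1): pop the head of the free list; NULL when the pool is exhausted.
    void* Reserve()
    {
        if (freeHead_ == NULL)
            return NULL;
        void* pMemory = freeHead_;
        freeHead_ = *static_cast<void**>(freeHead_);
        return pMemory;
    }

    // O(1): push the block back onto the free list (no call to free/delete).
    void Release(void* pMemory)
    {
        *static_cast<void**>(pMemory) = freeHead_;
        freeHead_ = pMemory;
    }

private:
    std::size_t blockSize_;
    std::vector<unsigned char> storage_; // single slab, allocated once
    void* freeHead_;                     // intrusive singly linked free list
};
```

Because every block has the same size, blocks can be recycled in any
order without ever fragmenting the slab, and Reserve/Release touch no
system allocator after construction.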
Jul 22 '05 #15

P: n/a
Tron Thomas wrote:
Your memory manager is THE naive memory manager. No offense
intended, it's just pretty much the most basic memory manager
that there could be.


No offense taken. I agree with you. The memory manager is
naive. After implementing it I was really wondering what I had
accomplished with it and if it was at all effective.


Heh, it probably wasn't, just because it's extremely unlikely
that a "commercial" memory allocator is less efficient.

A "pool" allocator will have a block of memory that it manages,
and it doesn't do new/delete on its own. Instead, it handles
those requests from the application, and passes out memory to the
application from its internal pool.
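One common way to hand those requests to a pool is to overload a
class's own operator new and operator delete, so every allocation of
that type is satisfied from a per-class free list. A hedged sketch
follows; the Packet class and its free-list policy are illustrative,
not from any code in this thread.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <new>

// Illustrative only: a class that recycles its own storage through a
// static free list, falling back to the global heap when the list is empty.
class Packet {
public:
    static void* operator new(std::size_t size)
    {
        if (size == sizeof(Packet) && freeHead_ != NULL) {
            Packet* pBlock = freeHead_;  // fast path: pop the free list
            freeHead_ = pBlock->next_;
            return pBlock;
        }
        return ::operator new(size);     // slow path: the real heap
    }

    static void operator delete(void* pMemory, std::size_t size)
    {
        if (size == sizeof(Packet)) {    // return the block to the pool
            static_cast<Packet*>(pMemory)->next_ = freeHead_;
            freeHead_ = static_cast<Packet*>(pMemory);
            return;
        }
        ::operator delete(pMemory);
    }

    int id;

private:
    Packet* next_;            // reused as the free-list link while unused
    static Packet* freeHead_;
};

Packet* Packet::freeHead_ = NULL;
```

Since the overloads check the request size, a larger derived class
automatically takes the heap fallback path instead of corrupting the
pool.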

Probably could do a Google search on "memory pool" and turn up
info on it.

Good luck.

-tom!
Jul 22 '05 #16

P: n/a
On 28 Dec 2003 13:25:19 -0800, tr*********@verizon.net (Tron Thomas) wrote:
[snip -- quoted text reproduced in full in the posts above]


If, as you said, the code was for a non-performance-critical program,
it seems like they are being silly. The best engineers use the best
approach for a given problem. Heavy-duty memory management
optimization has no place in a typical one-off program. In a
high-performance server, the issues are different and memory
management is something to minimize.

It sounds to me like they are just trying to find an excuse not to
hire you. Many places are like that these days.

The solutions that he proposes have the same issues with
fragmentation. Perhaps worse fragmentation in some environments and
better in others.

However, most stock heap allocators behave badly under performance
pressure, especially if the program is multi-threaded. The solutions
he proposed would help reduce contention for the heap lock. IOW,
splitting the heap into memory pools distributes the lock contention
across more locks. That can be a big performance win.

Bob S
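The lock-splitting idea above can be sketched as a set of
independently locked arenas, with each thread hashed to its own arena
so that threads rarely contend on the same mutex. This uses modern C++
threading for brevity; the ShardedHeap name and hashing policy are
mine, and the call to ::operator new stands in for a real per-shard
pool.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <mutex>
#include <new>
#include <thread>
#include <vector>

// Sketch: N independent arenas, each guarded by its own mutex, so the
// single global heap lock is split into N smaller points of contention.
class ShardedHeap {
public:
    explicit ShardedHeap(std::size_t shardCount) : shards_(shardCount) {}

    void* Allocate(std::size_t size)
    {
        Shard& shard = PickShard();
        std::lock_guard<std::mutex> lock(shard.mutex); // contend only here
        ++shard.allocations;
        return ::operator new(size); // stand-in for a per-shard pool
    }

    // Not synchronized; intended to be called when all threads are quiet.
    std::size_t TotalAllocations() const
    {
        std::size_t total = 0;
        for (std::size_t i = 0; i < shards_.size(); ++i)
            total += shards_[i].allocations;
        return total;
    }

private:
    struct Shard {
        std::mutex mutex;
        std::size_t allocations = 0;
    };

    Shard& PickShard()
    {
        // Hash the calling thread's id so each thread usually lands on
        // its own shard and rarely waits on another thread's mutex.
        std::size_t index = std::hash<std::thread::id>()(
            std::this_thread::get_id()) % shards_.size();
        return shards_[index];
    }

    std::vector<Shard> shards_;
};
```

With one shard this degrades to a single global lock; with more shards
than threads, most allocations take an uncontended lock.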
Jul 22 '05 #17

P: n/a
po*********************@yahoo.com (Bob Summers) wrote in message news:<3f***************@news.concentric.net>...
[snip]
It sounds to me like they are just trying to find an excuse not to
hire you. Many places are like that these days.
[snip]


Just so you know, Bob, the company is making a massively multiplayer
on-line game. So their concerns about memory efficiency might not be
too unreasonable.

It is not encouraging to hear the validation of the suspicion that
companies are looking for ridiculous reasons to weed out potential
candidates. I believe that has been a significant barrier in my job
search.
Jul 22 '05 #18

P: n/a
Tron Thomas wrote:
Just so you know, Bob, the company is making a massively multiplayer
on-line game. So their concerns about memory efficiency might not be
too unreasonable.
One of _those_, huh? Might not be a great idea to get into that
anyway; there's not going to be a lot of "winners" in that arena
over the coming year.
It is not encouraging to hear the validation of the suspicion that
companies are looking for ridiculous reasons to weed out potential
candidates. I believe that has been a significant barrier in my job
search.


Speaking for myself (as a game developer presently looking for
staff), I don't look for "ridiculous reasons" to weed out anyone.
I can't find staff to save my life. The problem is that a
candidate needs to blow me away before I'll consider hiring them.
If there is ANY question in my mind that a person could do the
job, then, as Joel Spolsky says, "No Hire." It's way more
problematic to hire someone who can't cut it than it is to just
wait for amazing coders. I've found one in the past three
months, I'm sure there are others out there.

Good luck all the same. Do what you can to learn everything you
can. Read everything in this newsgroup and in the moderated one.
Write your own software. Make it good. Pitch yourself to as
many companies as you can.
-tom!
Jul 22 '05 #19

This discussion thread is closed
