What's the boost pool allocator good for?

I tested the speed of a simple program like this:

//------------------------------------------------------------
#include <list>
#include <boost/pool/pool_alloc.hpp>

int main()
{
    typedef std::list<int> List_t; // default allocator
    //typedef std::list<int, boost::pool_allocator<int> > List_t;

    List_t l;

    for(int i = 0; i < 100000; ++i)
        l.push_back(i);

    int counter = 0;
    for(List_t::iterator iter = l.begin(); iter != l.end();)
        if(++counter % 3 == 0)
            iter = l.erase(iter);
        else
            ++iter;

    for(int i = 0; i < 100000; ++i)
        l.push_back(i);
}
//------------------------------------------------------------

Compiling this with "-O3 -march=pentium4" and running it on my
computer, the running time was approximately 33 milliseconds.

When I change to the version of the list which uses the boost pool
allocator and do the same thing, the running time is a whopping 59
seconds. That's approximately 1800 times slower.

What the heck is this pool allocator good for? At least not for speed.
Jul 9 '08 #1
On Jul 9, 3:11 pm, Juha Nieminen <nos...@thanks.invalid> wrote:
> I tested the speed of a simple program like this:
> //------------------------------------------------------------
> #include <list>
> #include <boost/pool/pool_alloc.hpp>
> int main()
> {
>     typedef std::list<int> List_t; // default allocator
>     //typedef std::list<int, boost::pool_allocator<int> > List_t;
>     List_t l;
>     for(int i = 0; i < 100000; ++i)
>         l.push_back(i);
>     int counter = 0;
>     for(List_t::iterator iter = l.begin(); iter != l.end();)
>         if(++counter % 3 == 0)
>             iter = l.erase(iter);
>         else
>             ++iter;
>     for(int i = 0; i < 100000; ++i)
>         l.push_back(i);
> }
> //------------------------------------------------------------
> Compiling this with "-O3 -march=pentium4" and running it on my
> computer, the running time was approximately 33 milliseconds.
> When I change to the version of the list which uses the boost
> pool allocator and do the same thing, the running time is a
> whopping 59 seconds. That's approximately 1800 times slower.
Did you expect anything different? The standard has been
carefully worded so that an implementation can optimize use of
the standard allocators, in a very container dependent fashion.
The purpose of the allocator parameter is not for a possible
optimization, but to support different semantics, and normally,
I would expect any custom allocator to be slower than the
standard allocator, at least if the library author has done a
good job.
> What the heck is this pool allocator good for? At least not
> for speed.
Isn't that question backwards? First, decide what you are
trying to achieve, and why the standard allocator isn't
acceptable. Then look for an allocator which achieves what you
need.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jul 10 '08 #2
James Kanze wrote:
>> When I change to the version of the list which uses the boost
>> pool allocator and do the same thing, the running time is a
>> whopping 59 seconds. That's approximately 1800 times slower.
>
> Did you expect anything different?
Certainly. What sense does it make to use an allocator which makes
allocation almost 2000 times slower? It makes no sense whatsoever.

Even if this allocator cut total memory usage in half, implemented a
full garbage collector and performed full boundary checks, it would
*still* be useless because of the humongous slowdown. It couldn't even
be used for debugging purposes in programs which allocate even moderate
amounts of memory.

As the other poster replied, boost::fast_pool_allocator does a much
better job at this (although it's a bit faster than the default
allocator, it's still not *that* fast).

So my question remains: What's boost::pool_allocator good for?
> The standard has been
> carefully worded so that an implementation can optimize use of
> the standard allocators, in a very container dependent fashion.
I believe that. However, I still can't understand what use there is for
an allocator which is almost 2000 times slower than the default allocator.
> The purpose of the allocator parameter is not for a possible
> optimization, but to support different semantics, and normally,
> I would expect any custom allocator to be slower than the
> standard allocator, at least if the library author has done a
> good job.
Why do you expect that? I can think of four reasons to use a custom
allocator:

1) The allocator is way faster than the default allocator.
2) The allocator consumes considerably less memory than the default
allocator.
3) The allocator performs GC or other type of safety checks (which would
be mostly irrelevant with the STL containers, but anyways).
4) The allocator does something exotic, such as use a file or a network
connection as memory, or something along those lines.

None of these (except perhaps number 4, for rather obvious reasons)
imply that the allocator ought to be thousands of times slower than the
default allocator.

1) and 2) are not even hard to do. I have created such an allocator
myself (and, in fact, my purpose in trying the boost allocator was to
compare its speed with mine).
>> What the heck is this pool allocator good for? At least not
>> for speed.
>
> Isn't that question backwards? First, decide what you are
> trying to achieve, and why the standard allocator isn't
> acceptable. Then look for an allocator which achieves what you
> need.
I'm trying to achieve maximum allocation/deallocation speed. The
standard allocator (at least on Linux) is very slow.
Jul 10 '08 #3
Michael DOUBEZ wrote:
>> Certainly. What sense does it make to use an allocator which makes
>> allocation almost 2000 times slower? It makes no sense whatsoever.
> [snip]
>
> It makes sense if you don't want to use the system allocator: for a pet
> thread allocator or if you want to limit memory usage while keeping some
> flexibility in a specific area (I do that in some part of my -embedded-
> software).
I'm sorry, but I *still* don't understand how it makes sense.

Let me repeat myself: boost::pool_allocator seems to be almost 2000
times slower than the default allocator. Not 2 times, not even 20 times,
2000 times.

What advantage could there ever be in boost::pool_allocator compared
to the default allocator? Even if you are making just one single
allocation during the execution of the entire program (in which case the
speed of the allocation doesn't really matter), I still fail to see what
the benefit would be over just using the default allocator.

What is boost::pool_allocator for? Why should I even consider paying
the humongous price for using it?
>> I'm trying to achieve maximum allocation/deallocation speed. The
>> standard allocator (at least in linux) is very slow.
>
> Did you try with *fast_pool_allocator* ?
Yes, it was suggested in an earlier post. That one is indeed a bit
faster than the default allocator, at least on my Linux system, and
could thus be much more useful. (OTOH it's still much slower than my own
allocator, which is what I was comparing it against.)

However, my original question has still not been answered.
> I tried quickly with your code
> and without measuring, the time looked equivalent (at least not 1 min
> like with pool_allocator).
Actually, here fast_pool_allocator is a bit faster than the default
allocator. If you happen to be interested, I put some results here:
http://warp.povusers.org/FSBAllocator/#benchmark
Jul 11 '08 #4
Juha Nieminen wrote:
> Michael DOUBEZ wrote:
>>> Certainly. What sense does it make to use an allocator which makes
>>> allocation almost 2000 times slower? It makes no sense whatsoever.
>> [snip]
>>
>> It makes sense if you don't want to use the system allocator: for a pet
>> thread allocator or if you want to limit memory usage while keeping some
>> flexibility in a specific area (I do that in some part of my -embedded-
>> software).
>
> I'm sorry, but I *still* don't understand how it makes sense.
>
> Let me repeat myself: boost::pool_allocator seems to be almost 2000
> times slower than the default allocator. Not 2 times, not even 20 times,
> 2000 times.
You seem to have tested the architecture unfairly. If the documentation
states that using pool_allocator with a std::list is inappropriate, as has
been said in this thread once already, then you need to come up with a
different test. Your question is completely pointless while it rests on
such poorly founded statistics.
Jul 11 '08 #5
On Jul 10, 10:40 pm, Juha Nieminen <nos...@thanks.invalid> wrote:
James Kanze wrote:
>>> When I change to the version of the list which uses the boost
>>> pool allocator and do the same thing, the running time is a
>>> whopping 59 seconds. That's approximately 1800 times slower.
>> Did you expect anything different?
> Certainly. What sense does it make to use an allocator which
> makes allocation almost 2000 times slower? It makes no sense
> whatsoever.
You use a non-standard allocator because you need different
semantics. If you need those semantics, it doesn't matter what
the speed, since the program won't be correct without them.

If the semantics are correct with the standard allocator, then
that's what you should be using.

[...]
>> The purpose of the allocator parameter is not for a possible
>> optimization, but to support different semantics, and normally,
>> I would expect any custom allocator to be slower than the
>> standard allocator, at least if the library author has done a
>> good job.
> Why do you expect that?
Because if it were faster, then the implementors of the standard
library would have used the same algorithms.
> I can think of four reasons to use a custom allocator:
> 1) The allocator is way faster than the default allocator.
> 2) The allocator consumes considerably less memory than the default
> allocator.
> 3) The allocator performs GC or other type of safety checks (which would
> be mostly irrelevant with the STL containers, but anyways).
> 4) The allocator does something exotic, such as use a file or a network
> connection as memory, or something along those lines.
The only possible reason for using a non-standard allocator with
the STL should be the last. Unless the implementor has not done
his job correctly.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jul 11 '08 #6
James Kanze wrote:
>> Why do you expect that?
>
> Because if it were faster, then the implementors of the standard
> library would have used the same algorithms.
I disagree.

The default allocator is designed for maximum flexibility (meaning
that it should work with allocated memory blocks of any size) and, if
possible, optimal memory usage (meaning that even though the memory
blocks are completely arbitrary in size, the allocator should waste
as little ancillary bookkeeping memory as possible).

In other words, the default memory allocator is a completely *generic*
allocator designed for all possible uses.

The price to be paid for being so generic is that it's slower and
wastes memory in situations where a much simpler allocator would be
enough. For example, if you are instantiating a std::list with
lots of elements, every one of those elements has exactly the same size.
It would thus be enough to have a memory allocator which just allocates
memory blocks of that size and nothing else. The amount of required
bookkeeping is reduced considerably, and the allocator can also be made
a lot faster (mainly because there's less bookkeeping causing update
overhead).

It's perfectly possible to design a memory allocator which is
specialized for allocations that are all of the same size (which is the
case with std::list, std::set and std::map). Because such an allocator is
much simpler than the default generic allocator, it can be enormously
faster and waste considerably less memory.
I know this is possible because I have implemented such a memory
allocator myself. It *is* considerably faster than the default
allocator, and it uses less memory.
Of course this allocator has its limitations. For example, you can't
allocate arrays with it. However, you don't need the support for array
allocation when all you are doing is using the allocator with a
std::list, std::set or std::map.
>> I can think of four reasons to use a custom allocator:
>> 1) The allocator is way faster than the default allocator.
>> 2) The allocator consumes considerably less memory than the default
>> allocator.
>> 3) The allocator performs GC or other type of safety checks (which would
>> be mostly irrelevant with the STL containers, but anyways).
>> 4) The allocator does something exotic, such as use a file or a network
>> connection as memory, or something along those lines.
>
> The only possible reason for using a non-standard allocator with
> the STL should be the last. Unless the implementor has not done
> his job correctly.
It's rather easy to prove you wrong:
http://warp.povusers.org/FSBAllocator/#benchmark

(Ok, you might argue that the implementors of the memory allocator in
the Linux libc have not done their job correctly, but that's not much
consolation when you are in a situation where you require maximal
allocation speed for things like std::list.)
Jul 11 '08 #7
