Bytes | Software Development & Data Engineering Community

What's the boost pool allocator good for?

I tested the speed of a simple program like this:

//------------------------------------------------------------
#include <list>
#include <boost/pool/pool_alloc.hpp>

int main()
{
typedef std::list<int> List_t; // default allocator
//typedef std::list<int, boost::pool_allocator<int> > List_t;

List_t l;

for(int i = 0; i < 100000; ++i)
l.push_back(i);

int counter = 0;
for(List_t::iterator iter = l.begin(); iter != l.end();)
if(++counter % 3 == 0)
iter = l.erase(iter);
else
++iter;

for(int i = 0; i < 100000; ++i)
l.push_back(i);
}
//------------------------------------------------------------

Compiling this with "-O3 -march=pentium4" and running it on my
computer, the running time was approximately 33 milliseconds.

When I change to the version of the list which uses the boost pool
allocator and do the same thing, the running time is a whopping 59
seconds. That's approximately 1800 times slower.

What the heck is this pool allocator good for? At least not for speed.
Jul 9 '08 #1
On Jul 9, 3:11 pm, Juha Nieminen <nos...@thanks.invalid> wrote:
I tested the speed of a simple program like this:
//------------------------------------------------------------
#include <list>
#include <boost/pool/pool_alloc.hpp>
int main()
{
typedef std::list<int> List_t; // default allocator
//typedef std::list<int, boost::pool_allocator<int> > List_t;
List_t l;
for(int i = 0; i < 100000; ++i)
l.push_back(i);
int counter = 0;
for(List_t::iterator iter = l.begin(); iter != l.end();)
if(++counter % 3 == 0)
iter = l.erase(iter);
else
++iter;
for(int i = 0; i < 100000; ++i)
l.push_back(i);
}
//------------------------------------------------------------
Compiling this with "-O3 -march=pentium4" and running it on my
computer, the running time was approximately 33 milliseconds.
When I change to the version of the list which uses the boost
pool allocator and do the same thing, the running time is a
whopping 59 seconds. That's approximately 1800 times slower.
Did you expect anything different? The standard has been
carefully worded so that an implementation can optimize use of
the standard allocators, in a very container dependent fashion.
The purpose of the allocator parameter is not for a possible
optimization, but to support different semantics, and normally,
I would expect any custom allocator to be slower than the
standard allocator, at least if the library author has done a
good job.
What the heck is this pool allocator good for? At least not
for speed.
Isn't that question backwards? First, decide what you are
trying to achieve, and why the standard allocator isn't
acceptable. Then look for an allocator which achieves what you
need.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jul 10 '08 #2
James Kanze wrote:
>When I change to the version of the list which uses the boost
pool allocator and do the same thing, the running time is a
whopping 59 seconds. That's approximately 1800 times slower.

> Did you expect anything different?
Certainly. What sense does it make to use an allocator which makes
allocation almost 2000 times slower? It makes no sense whatsoever.

Even if this allocator cut total memory usage in half, implemented a
full garbage collector and performed full boundary checks, it would
*still* be useless because of the humongous slowdown. It couldn't even
be used for debugging purposes in programs which allocate even moderate
amounts of memory.

As the other poster replied, boost::fast_pool_allocator does a much
better job at this (and while it's a bit faster than the default
allocator, it's still not *that* fast).

So my question remains: What's boost::pool_allocator good for?
> The standard has been
carefully worded so that an implementation can optimize use of
the standard allocators, in a very container dependent fashion.
I believe that. However, I still can't understand the use of an
allocator which is almost 2000 times slower than the default allocator.
> The purpose of the allocator parameter is not for a possible
optimization, but to support different semantics, and normally,
I would expect any custom allocator to be slower than the
standard allocator, at least if the library author has done a
good job.
Why do you expect that? I can think of four reasons to use a custom
allocator:

1) The allocator is way faster than the default allocator.
2) The allocator consumes considerably less memory than the default
allocator.
3) The allocator performs GC or other type of safety checks (which would
be mostly irrelevant with the STL containers, but anyways).
4) The allocator does something exotic, such as use a file or a network
connection as memory, or something along those lines.

None of these (except perhaps number 4, for rather obvious reasons)
imply that the allocator ought to be thousands of times slower than the
default allocator.

1) and 2) are not even hard to do. I have created such an allocator
myself (and, in fact, my purpose in trying the boost allocator was to
compare its speed with mine).
>What the heck is this pool allocator good for? At least not
for speed.

> Isn't that question backwards? First, decide what you are
trying to achieve, and why the standard allocator isn't
acceptable. Then look for an allocator which achieves what you
need.
I'm trying to achieve maximum allocation/deallocation speed. The
standard allocator (at least in linux) is very slow.
Jul 10 '08 #3
Michael DOUBEZ wrote:
> Certainly. What sense does it make to use an allocator which makes
allocation almost 2000 times slower? It makes no sense whatsoever.
[snip]

> It makes sense if you don't want to use the system allocator: for a pet
thread allocator or if you want to limit memory usage while keeping some
flexibility in a specific area (I do that in some part of my -embedded-
software).
I'm sorry, but I *still* don't understand how it makes sense.

Let me repeat myself: boost::pool_allocator seems to be almost 2000
times slower than the default allocator. Not 2 times, not even 20 times,
2000 times.

What advantage could there ever be from boost::pool_allocator compared
to the default allocator? Even if you are making just one single
allocation during the execution of the entire program (in which case the
speed of the allocation doesn't really matter), I still fail to see what
would be the benefit over just using the default allocator.

What is boost::pool_allocator for? Why should I even consider paying
the humongous price for using it?
> I'm trying to achieve maximum allocation/deallocation speed. The
standard allocator (at least in linux) is very slow.

Did you try with *fast_pool_allocator*?
Yes, it was suggested in an earlier post. That one is indeed a bit
faster than the default allocator, at least in my linux system, and
could thus be much more useful. (OTOH it's still much slower than my own
allocator, which is what I was comparing it against.)

However, my original question has still not been answered.
> I tried quickly with your code
and without measuring, the time looked equivalent (at least not 1 min
like with pool_allocator).
Actually here fast_pool_allocator is a bit faster than the default
allocator. If you happened to be interested, I put some results here:
http://warp.povusers.org/FSBAllocator/#benchmark
Jul 11 '08 #4
Juha Nieminen wrote:
Michael DOUBEZ wrote:
>> Certainly. What sense does it make to use an allocator which makes
allocation almost 2000 times slower? It makes no sense whatsoever.
[snip]

>> It makes sense if you don't want to use the system allocator: for a pet
thread allocator or if you want to limit memory usage while keeping some
flexibility in a specific area (I do that in some part of my -embedded-
software).

> I'm sorry, but I *still* don't understand how it makes sense.

> Let me repeat myself: boost::pool_allocator seems to be almost 2000
times slower than the default allocator. Not 2 times, not even 20 times,
2000 times.
You seem to have tested the architecture unfairly. If the documentation
states that using pool_allocator in a std::list is inappropriate, as has
been said in this thread once already, then you need to come up with a
different test. Your question is completely pointless while it is based
on poorly derived statistics.
Jul 11 '08 #5
On Jul 10, 10:40 pm, Juha Nieminen <nos...@thanks.invalid> wrote:
James Kanze wrote:
When I change to the version of the list which uses the boost
pool allocator and do the same thing, the running time is a
whopping 59 seconds. That's approximately 1800 times slower.
Did you expect anything different?
Certainly. What sense does it make to use an allocator which
makes allocation almost 2000 times slower? It makes no sense
whatsoever.
You use a non-standard allocator because you need different
semantics. If you need those semantics, it doesn't matter what
the speed, since the program won't be correct without them.

If the semantics are correct with the standard allocator, then
that's what you should be using.

[...]
The purpose of the allocator parameter is not for a possible
optimization, but to support different semantics, and normally,
I would expect any custom allocator to be slower than the
standard allocator, at least if the library author has done a
good job.
Why do you expect that?
Because if it were faster, then the implementors of the standard
library would have used the same algorithms.
I can think of four reasons to use a custom allocator:
1) The allocator is way faster than the default allocator.
2) The allocator consumes considerably less memory than the default
allocator.
3) The allocator performs GC or other type of safety checks (which would
be mostly irrelevant with the STL containers, but anyways).
4) The allocator does something exotic, such as use a file or a network
connection as memory, or something along those lines.
The only possible reason for using a non-standard allocator with
the STL should be the last. Unless the implementor has not done
his job correctly.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jul 11 '08 #6
James Kanze wrote:
>Why do you expect that?

> Because if it were faster, then the implementors of the standard
library would have used the same algorithms.
I disagree.

The default allocator is designed for maximum flexibility (meaning
that it should work with allocated memory blocks of any size) and, if
possible, optimal memory usage (meaning that even though the memory
blocks are completely arbitrary in size, the allocator should waste as
little ancillary bookkeeping memory as possible).

In other words, the default memory allocator is a completely *generic*
allocator designed for all possible uses.

The price to be paid for it being so generic is that it's slower and
wastes unneeded memory in situations where a much simpler allocator
would be enough. For example, if you are instantiating a std::list with
lots of elements, each one of those elements has exactly the same size.
It would thus be enough to have a memory allocator which just allocates
memory blocks of that size and nothing else. The amount of required
bookkeeping is reduced considerably, and the allocator also can be made
a lot faster (mainly because there's less bookkeeping causing update
overhead).

It's perfectly possible to design a memory allocator which is
specialized for allocations which have all the same size (which is the
case with std::list, std::set and std::map). Because such an allocator is
much simpler than the default generic allocator, it can be enormously
faster and waste considerably less memory.
I know this is possible because I have implemented such a memory
allocator myself. It *is* considerably faster than the default
allocator, and it uses less memory.
Of course this allocator has its limitations. For example, you can't
allocate arrays with it. However, you don't need the support for array
allocation when all you are doing is using the allocator with a
std::list, std::set or std::map.
>I can think of four reasons to use a custom allocator:
>1) The allocator is way faster than the default allocator.
2) The allocator consumes considerably less memory than the default
allocator.
3) The allocator performs GC or other type of safety checks (which would
be mostly irrelevant with the STL containers, but anyways).
4) The allocator does something exotic, such as use a file or a network
connection as memory, or something along those lines.

> The only possible reason for using a non-standard allocator with
the STL should be the last. Unless the implementor has not done
his job correctly.
It's rather easy to prove you wrong:
http://warp.povusers.org/FSBAllocator/#benchmark

(Ok, you might argue that the implementors of the memory allocator in
linux libc have not done their job correctly, but that's not a big
consolation when you are in a situation where you require maximal
allocation speed for things like std::list.)
Jul 11 '08 #7

This thread has been closed and replies have been disabled. Please start a new discussion.
