Bytes | Software Development & Data Engineering Community

What's the boost pool allocator good for?

I tested the speed of a simple program like this:

//------------------------------------------------------------
#include <list>
#include <boost/pool/pool_alloc.hpp>

int main()
{
typedef std::list<int> List_t; // default allocator
//typedef std::list<int, boost::pool_allocator<int> > List_t;

List_t l;

for(int i = 0; i < 100000; ++i)
l.push_back(i);

int counter = 0;
for(List_t::iterator iter = l.begin(); iter != l.end();)
if(++counter % 3 == 0)
iter = l.erase(iter);
else
++iter;

for(int i = 0; i < 100000; ++i)
l.push_back(i);
}
//------------------------------------------------------------

Compiling this with "-O3 -march=pentium4" and running it in my
computer, the running time was approximately 33 milliseconds.

When I change to the version of the list which uses the boost pool
allocator and do the same thing, the running time is a whopping 59
seconds. That's approximately 1800 times slower.

What the heck is this pool allocator good for? At least not for speed.
Jul 9 '08 #1
On Jul 9, 3:11 pm, Juha Nieminen <nos...@thanks.invalid> wrote:
I tested the speed of a simple program like this:
//------------------------------------------------------------
#include <list>
#include <boost/pool/pool_alloc.hpp>
int main()
{
typedef std::list<int> List_t; // default allocator
//typedef std::list<int, boost::pool_allocator<int> > List_t;
List_t l;
for(int i = 0; i < 100000; ++i)
l.push_back(i);
int counter = 0;
for(List_t::iterator iter = l.begin(); iter != l.end();)
if(++counter % 3 == 0)
iter = l.erase(iter);
else
++iter;
for(int i = 0; i < 100000; ++i)
l.push_back(i);
}
//------------------------------------------------------------
Compiling this with "-O3 -march=pentium4" and running it in my
computer, the running time was approximately 33 milliseconds.
When I change to the version of the list which uses the boost
pool allocator and do the same thing, the running time is a
whopping 59 seconds. That's approximately 1800 times slower.
Did you expect anything different? The standard has been
carefully worded so that an implementation can optimize use of
the standard allocators, in a very container dependent fashion.
The purpose of the allocator parameter is not for a possible
optimization, but to support different semantics, and normally,
I would expect any custom allocator to be slower than the
standard allocator, at least if the library author has done a
good job.
What the heck is this pool allocator good for? At least not
for speed.
Isn't that question backwards? First, decide what you are
trying to achieve, and why the standard allocator isn't
acceptable. Then look for an allocator which achieves what you
need.
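[Editor's note: the "different semantics" point above can be made concrete with a minimal C++11 allocator sketch. This is not from the thread; `CountingAllocator` and `g_allocs` are names made up here. It counts node allocations, something the default allocator cannot report, while deliberately being no faster.]

```cpp
// Minimal C++11 allocator sketch: chosen for *semantics* (it counts
// allocations), not for speed. CountingAllocator and g_allocs are
// hypothetical names for this illustration only.
#include <cstddef>
#include <list>
#include <new>

static std::size_t g_allocs = 0; // how many allocate() calls were made

template <typename T>
struct CountingAllocator {
    typedef T value_type;

    CountingAllocator() {}
    template <typename U> CountingAllocator(const CountingAllocator<U>&) {}

    T* allocate(std::size_t n) {
        ++g_allocs; // the extra semantics the default allocator lacks
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template <typename T, typename U>
bool operator==(const CountingAllocator<T>&, const CountingAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const CountingAllocator<T>&, const CountingAllocator<U>&) { return false; }

// Fill a list with n ints and report how many allocations that caused.
std::size_t count_allocations_for(int n) {
    g_allocs = 0;
    std::list<int, CountingAllocator<int> > l;
    for (int i = 0; i < n; ++i) l.push_back(i);
    return g_allocs;
}
```

std::allocator_traits fills in the remaining allocator requirements in C++11 and later, so only `value_type`, `allocate`, `deallocate`, and the comparisons are needed.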

--
James Kanze (GABI Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jul 10 '08 #2
James Kanze wrote:
>When I change to the version of the list which uses the boost
pool allocator and do the same thing, the running time is a
whopping 59 seconds. That's approximately 1800 times slower.

Did you expect anything different?
Certainly. What sense does it make to use an allocator which makes
allocation almost 2000 times slower? It makes no sense whatsoever.

Even if this allocator cut total memory usage in half, implemented a
full garbage collector and performed full boundary checks, it would
*still* be useless because of the humongous slowdown. It couldn't even
be used for debugging purposes in programs which allocate even moderate
amounts of memory.

As the other poster replied, boost::fast_pool_allocator does a much
better job at this (although it's a bit faster than the default
allocator, it's still not *that* much faster).

So my question remains: What's boost::pool_allocator good for?
The standard has been
carefully worded so that an implementation can optimize use of
the standard allocators, in a very container dependent fashion.
I believe that. However, I still can't understand the use for an
allocator which is almost 2000 times slower than the default allocator.
The purpose of the allocator parameter is not for a possible
optimization, but to support different semantics, and normally,
I would expect any custom allocator to be slower than the
standard allocator, at least if the library author has done a
good job.
Why do you expect that? I can think of four reasons to use a custom
allocator:

1) The allocator is way faster than the default allocator.
2) The allocator consumes considerably less memory than the default
allocator.
3) The allocator performs GC or other type of safety checks (which would
be mostly irrelevant with the STL containers, but anyways).
4) The allocator does something exotic, such as use a file or a network
connection as memory, or something along those lines.

None of these (except perhaps number 4, for rather obvious reasons)
imply that the allocator ought to be thousands of times slower than the
default allocator.

1) and 2) are not even hard to do. I have created such an allocator
myself (and, in fact, my purpose in trying the boost allocator was to
compare its speed with mine).
>What the heck is this pool allocator good for? At least not
for speed.

Isn't that question backwards? First, decide what you are
trying to achieve, and why the standard allocator isn't
acceptable. Then look for an allocator which achieves what you
need.
I'm trying to achieve maximum allocation/deallocation speed. The
standard allocator (at least on Linux) is very slow.
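[Editor's note: the workload from the first post can be timed with std::chrono (C++11) rather than eyeballing it; this rough sketch is not from the thread, and `run_workload`/`time_ms` are names invented here. Swapping the `List_t` typedef to a pool allocator would re-measure the same loop.]

```cpp
// Rough timing sketch for the original post's workload, using
// std::chrono::steady_clock. Not a rigorous benchmark.
#include <chrono>
#include <list>

typedef std::list<int> List_t; // change the allocator here to compare

// Build a list of n ints, erase every third element, append n more.
long run_workload(int n) {
    List_t l;
    for (int i = 0; i < n; ++i) l.push_back(i);
    int counter = 0;
    for (List_t::iterator it = l.begin(); it != l.end();)
        if (++counter % 3 == 0) it = l.erase(it);
        else ++it;
    for (int i = 0; i < n; ++i) l.push_back(i);
    return static_cast<long>(l.size());
}

// Milliseconds taken by one run of the workload.
double time_ms(int n) {
    using namespace std::chrono;
    steady_clock::time_point t0 = steady_clock::now();
    (void)run_workload(n);
    steady_clock::time_point t1 = steady_clock::now();
    return duration_cast<duration<double, std::milli> >(t1 - t0).count();
}
```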
Jul 10 '08 #3
Michael DOUBEZ wrote:
> Certainly. What sense does it make to use an allocator which makes
allocation almost 2000 times slower? It makes no sense whatsoever.
[snip]

It makes sense if you don't want to use the system allocator: for a
per-thread allocator, or if you want to limit memory usage while keeping
some flexibility in a specific area (I do that in some parts of my
-embedded- software).
I'm sorry, but I *still* don't understand how it makes sense.

Let me repeat myself: boost::pool_allocator seems to be almost 2000
times slower than the default allocator. Not 2 times, not even 20 times,
2000 times.

What advantage could there ever be from boost::pool_allocator compared
to the default allocator? Even if you are making just one single
allocation during the execution of the entire program (in which case the
speed of the allocation doesn't really matter), I still fail to see what
would be the benefit over just using the default allocator.

What is boost::pool_allocator for? Why should I even consider paying
the humongous price of using it?
> I'm trying to achieve maximum allocation/deallocation speed. The
standard allocator (at least in linux) is very slow.

Did you try with *fast_pool_allocator*?
Yes, it was suggested in an earlier post. That one is indeed a bit
faster than the default allocator, at least on my Linux system, and
could thus be much more useful. (OTOH it's still much slower than my own
allocator, which is what I was comparing it against.)

However, my original question has still not been answered.
I tried quickly with your code
and without measuring, the time looked equivalent (at least not 1 min
like with pool_allocator).
Actually here fast_pool_allocator is a bit faster than the default
allocator. If you happened to be interested, I put some results here:
http://warp.povusers.org/FSBAllocator/#benchmark
Jul 11 '08 #4
Juha Nieminen wrote:
Michael DOUBEZ wrote:
>> Certainly. What sense does it make to use an allocator which makes
allocation almost 2000 times slower? It makes no sense whatsoever.
[snip]

It makes sense if you don't want to use the system allocator: for a
per-thread allocator, or if you want to limit memory usage while keeping
some flexibility in a specific area (I do that in some parts of my
-embedded- software).

I'm sorry, but I *still* don't understand how it makes sense.

Let me repeat myself: boost::pool_allocator seems to be almost 2000
times slower than the default allocator. Not 2 times, not even 20 times,
2000 times.
You seem to have tested the architecture unfairly. If the documentation
states that using pool_allocator with a std::list is inappropriate, as has
been said in this thread once already, then you need to come up with a
different test. Your question is pointless as long as it is based on such
poorly gathered statistics.
Jul 11 '08 #5
On Jul 10, 10:40 pm, Juha Nieminen <nos...@thanks.invalid> wrote:
James Kanze wrote:
When I change to the version of the list which uses the boost
pool allocator and do the same thing, the running time is a
whopping 59 seconds. That's approximately 1800 times slower.
Did you expect anything different?
Certainly. What sense does it make to use an allocator which
makes allocation almost 2000 times slower? It makes no sense
whatsoever.
You use a non-standard allocator because you need different
semantics. If you need those semantics, it doesn't matter what
the speed, since the program won't be correct without them.

If the semantics are correct with the standard allocator, then
that's what you should be using.

[...]
The purpose of the allocator parameter is not for a possible
optimization, but to support different semantics, and normally,
I would expect any custom allocator to be slower than the
standard allocator, at least if the library author has done a
good job.
Why do you expect that?
Because if it were faster, then the implementors of the standard
library would have used the same algorithms.
I can think of four reasons to use a custom allocator:
1) The allocator is way faster than the default allocator.
2) The allocator consumes considerably less memory than the default
allocator.
3) The allocator performs GC or other type of safety checks (which would
be mostly irrelevant with the STL containers, but anyways).
4) The allocator does something exotic, such as use a file or a network
connection as memory, or something along those lines.
The only possible reason for using a non-standard allocator with
the STL should be the last. Unless the implementor has not done
his job correctly.

--
James Kanze (GABI Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jul 11 '08 #6
James Kanze wrote:
>Why do you expect that?

Because if it were faster, then the implementors of the standard
library would have used the same algorithms.
I disagree.

The default allocator is designed for maximum flexibility (meaning
that it should work with allocated memory blocks of any size) and, if
possible, optimal memory usage (meaning that even though the memory
blocks are completely arbitrary in size, the allocator should waste as
little memory as possible on ancillary bookkeeping).

In other words, the default memory allocator is a completely *generic*
allocator designed for all possible uses.

The price to be paid for it being so generic is that it's slower and
wastes unneeded memory in situations where a much simpler allocator
would be enough. For example, if you are instantiating a std::list with
lots of elements, each one of those elements has exactly the same size.
It would thus be enough to have a memory allocator which just allocates
memory blocks of that size and nothing else. The amount of required
bookkeeping is reduced considerably, and the allocator also can be made
a lot faster (mainly because there's less bookkeeping causing update
overhead).

It's perfectly possible to design a memory allocator which is
specialized for allocations that all have the same size (which is the
case with std::list, std::set and std::map). Because such an allocator is
much simpler than the default generic allocator, it can be enormously
faster and waste considerably less memory.
I know this is possible because I have implemented such a memory
allocator myself. It *is* considerably faster than the default
allocator, and it uses less memory.
Of course this allocator has its limitations. For example, you can't
allocate arrays with it. However, you don't need the support for array
allocation when all you are doing is using the allocator with a
std::list, std::set or std::map.
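[Editor's note: the fixed-size-block idea described above can be sketched in a few dozen lines of C++. This illustrative toy is not from the thread and is not FSBAllocator itself; `FixedPool` is a name made up here, and it skips thread safety. Free blocks are chained into a singly linked free list threaded through the blocks themselves, so allocate and deallocate are each a pointer swap.]

```cpp
// Toy fixed-size-block pool: all blocks are the same size, carved from
// larger chunks, with freed blocks kept on an intrusive free list.
// Illustration only; no thread safety, no per-block bookkeeping.
#include <cstddef>
#include <new>
#include <vector>

class FixedPool {
public:
    explicit FixedPool(std::size_t blockSize, std::size_t blocksPerChunk = 256)
        : blockSize_(roundUp(blockSize == 0 ? 1 : blockSize)),
          blocksPerChunk_(blocksPerChunk), freeList_(0) {}

    ~FixedPool() {
        for (std::size_t i = 0; i < chunks_.size(); ++i)
            ::operator delete(chunks_[i]);
    }

    void* allocate() {
        if (!freeList_) grow();
        void* p = freeList_;
        freeList_ = *static_cast<void**>(freeList_); // pop the free list
        return p;
    }

    void deallocate(void* p) {
        *static_cast<void**>(p) = freeList_;         // push onto free list
        freeList_ = p;
    }

private:
    // Round the block size up so each block can hold a free-list pointer
    // and stays pointer-aligned within the chunk.
    static std::size_t roundUp(std::size_t n) {
        const std::size_t a = sizeof(void*);
        return ((n + a - 1) / a) * a;
    }

    void grow() {
        char* chunk = static_cast<char*>(::operator new(blockSize_ * blocksPerChunk_));
        chunks_.push_back(chunk);
        // Thread every block of the new chunk onto the free list.
        for (std::size_t i = 0; i < blocksPerChunk_; ++i)
            deallocate(chunk + i * blockSize_);
    }

    std::size_t blockSize_, blocksPerChunk_;
    void* freeList_;
    std::vector<char*> chunks_;
};
```

Because the free list is LIFO, a block freed a moment ago is handed straight back on the next allocation, which is also good for cache locality in the push_back/erase pattern of the original test.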
>I can think of four reasons to use a custom allocator:
>1) The allocator is way faster than the default allocator.
2) The allocator consumes considerably less memory than the default
allocator.
3) The allocator performs GC or other type of safety checks (which would
be mostly irrelevant with the STL containers, but anyways).
4) The allocator does something exotic, such as use a file or a network
connection as memory, or something along those lines.

The only possible reason for using a non-standard allocator with
the STL should be the last. Unless the implementor has not done
his job correctly.
It's rather easy to prove you wrong:
http://warp.povusers.org/FSBAllocator/#benchmark

(Ok, you might argue that the implementors of the memory allocator in
the Linux libc have not done their job correctly, but that's not much
consolation when you are in a situation where you require maximal
allocation speed for things like std::list.)
Jul 11 '08 #7

This thread has been closed and replies have been disabled. Please start a new discussion.
