Bytes | Software Development & Data Engineering Community

Maps to hold ultra-large data sets, using custom allocators to allocate disk space rather than main memory

One of the projects I'm currently working on requires ultra-large
maps, lists, vectors, etc. (basically STL containers).
Sizes might grow up to 1,000 million entries. Since it is impossible to
hold all of this data in memory, I'm planning to implement these
containers to hold data both in memory and on disk at the same time.
I'm not sure whether this can be achieved using customer allocators, and
I'm wondering if there are any such implementations.
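
For example, something along these lines is what I have in mind. This is only a rough sketch, assuming a C++11-or-later compiler (so allocator_traits fills in the boilerplate) and POSIX mmap; every name in it is a placeholder of mine, not an existing library, and mapping one temporary file per allocation is obviously wasteful:

#include <cstddef>      // std::size_t
#include <cstdlib>      // mkstemp
#include <new>          // std::bad_alloc
#include <sys/mman.h>   // mmap, munmap
#include <unistd.h>     // ftruncate, close, unlink

// Every allocation gets its own unlinked temporary file and is mapped into
// the address space with mmap, so the pages live in the file-system cache /
// on disk rather than on the ordinary heap.  A real allocator would pool one
// big file and sub-allocate from it.
template <typename T>
struct FileBackedAllocator {
    using value_type = T;

    FileBackedAllocator() = default;
    template <typename U>
    FileBackedAllocator(const FileBackedAllocator<U>&) {}

    T* allocate(std::size_t n) {
        const std::size_t bytes = n * sizeof(T);
        char name[] = "/tmp/stl-node-XXXXXX";
        int fd = ::mkstemp(name);
        if (fd < 0) throw std::bad_alloc();
        ::unlink(name);                          // file vanishes once unmapped
        if (::ftruncate(fd, static_cast<off_t>(bytes)) != 0) {
            ::close(fd);
            throw std::bad_alloc();
        }
        void* p = ::mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        ::close(fd);                             // the mapping keeps the file alive
        if (p == MAP_FAILED) throw std::bad_alloc();
        return static_cast<T*>(p);
    }

    void deallocate(T* p, std::size_t n) noexcept {
        ::munmap(p, n * sizeof(T));
    }
};

template <typename T, typename U>
bool operator==(const FileBackedAllocator<T>&, const FileBackedAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const FileBackedAllocator<T>&, const FileBackedAllocator<U>&) { return false; }

// Usage, e.g.: std::map<long, double, std::less<long>,
//                       FileBackedAllocator<std::pair<const long, double>>> m;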

thank you

May 15 '07 #1
* CMOS:
One of the projects I'm currently working on requires ultra-large
maps, lists, vectors, etc. (basically STL containers).
Sizes might grow up to 1,000 million entries. Since it is impossible to
hold all of this data in memory, I'm planning to implement these
containers to hold data both in memory and on disk at the same time.
I'm not sure whether this can be achieved using customer allocators, and
I'm wondering if there are any such implementations.
A few GBytes of data isn't that much, really, if you have the hardware
to match. However, from your comment about "customer (sic) allocators",
and simply from the fact that you're seeking advice here, I'm reasonably
sure that this is not a million-dollar budget project, but rather a
student project, and that the requirement of billions of entries stems
from bad design, and is not an inherent requirement of the problem
you're trying to solve. So do tell about the problem, not how you're
envisioning solving it; perhaps we can suggest better ways.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
May 15 '07 #2
CMOS wrote:
One of the projects I'm currently working on requires ultra-large
maps, lists, vectors, etc. (basically STL containers).
Sizes might grow up to 1,000 million entries. Since it is impossible to
hold all of this data in memory, I'm planning to implement these
containers to hold data both in memory and on disk at the same time.
I'm not sure whether this can be achieved using customer allocators, and
I'm wondering if there are any such implementations.
The short answer is yes, but are you sure you want to?

--
Ian Collins.
May 15 '07 #3
Noted: custom allocator. Sorry for the typo.

The problem is to index 10 billion records of a certain type using a
given field. The field type might be a number, string, date, etc., and
the records must be queryable by that field for fast retrieval.

thanks

May 15 '07 #4
CMOS wrote:
Noted: custom allocator. Sorry for the typo.

The problem is to index 10 billion records of a certain type using a
given field. The field type might be a number, string, date, etc., and
the records must be queryable by that field for fast retrieval.
Why not just use a database, which will have been optimised for this task?

--
Ian Collins.
May 15 '07 #5
A generic DB's performance will not be enough, and I do not need it to
support data modifications, transactions, etc., which would slow down
operation.
The only requirement is to insert data and to query and delete records
using keys; there is no need for an SQL interface either.
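
To make it concrete, roughly this is the whole interface I need (just a sketch; the names are mine, not an existing library):

#include <string>

// Keyed insert, lookup and delete; no SQL, no transactions, no in-place updates.
// The key could just as well be a number or a date serialized to bytes.
class RecordStore {
public:
    virtual ~RecordStore() = default;
    virtual void insert(const std::string& key, const std::string& record) = 0;
    virtual bool find(const std::string& key, std::string& record_out) const = 0;
    virtual void erase(const std::string& key) = 0;
};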

May 15 '07 #6
On May 15, 4:12 pm, CMOS <manu...@millenniumit.com> wrote:
One of the projects I'm currently working on requires ultra-large
maps, lists, vectors, etc. (basically STL containers).
Sizes might grow up to 1,000 million entries. Since it is impossible to
hold all of this data in memory, I'm planning to implement these
containers to hold data both in memory and on disk at the same time.
I'm not sure whether this can be achieved using customer allocators, and
I'm wondering if there are any such implementations.

thank you
http://www.sqlite.org/
http://www.postgresql.org/

Either one should do the job.
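
For what it's worth, keyed insert and lookup through the SQLite C API is only a few calls. A rough sketch (error handling mostly omitted; the table and column names are invented for the example):

#include <cstdio>
#include <sqlite3.h>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("records.db", &db) != SQLITE_OK) return 1;

    // One keyed table; SQLite handles the on-disk B-tree for us.
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS records (key INTEGER PRIMARY KEY, payload TEXT);",
        nullptr, nullptr, nullptr);

    // Insert one record by key.
    sqlite3_stmt* ins = nullptr;
    sqlite3_prepare_v2(db, "INSERT OR REPLACE INTO records VALUES (?, ?);", -1, &ins, nullptr);
    sqlite3_bind_int64(ins, 1, 42);
    sqlite3_bind_text(ins, 2, "some payload", -1, SQLITE_TRANSIENT);
    sqlite3_step(ins);
    sqlite3_finalize(ins);

    // Look a record up by key.
    sqlite3_stmt* sel = nullptr;
    sqlite3_prepare_v2(db, "SELECT payload FROM records WHERE key = ?;", -1, &sel, nullptr);
    sqlite3_bind_int64(sel, 1, 42);
    if (sqlite3_step(sel) == SQLITE_ROW)
        std::printf("%s\n", reinterpret_cast<const char*>(sqlite3_column_text(sel, 0)));
    sqlite3_finalize(sel);

    sqlite3_close(db);
    return 0;
}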

May 15 '07 #7
CMOS wrote:

Please quote enough context for your reply to make sense.
A generic DB's performance will not be enough, and I do not need it to
support data modifications, transactions, etc., which would slow down
operation.
The only requirement is to insert data and to query and delete records
using keys; there is no need for an SQL interface either.
Well, I'd still use one, unless I had a real performance issue. Even
then, my first action would be to upgrade the hardware!

--
Ian Collins.
May 15 '07 #8
On May 15, 1:12 am, CMOS <manu...@millenniumit.com> wrote:
One of the projects I'm currently working on requires ultra-large
maps, lists, vectors, etc. (basically STL containers).
Sizes might grow up to 1,000 million entries. Since it is impossible to
hold all of this data in memory, I'm planning to implement these
containers to hold data both in memory and on disk at the same time.
I'm not sure whether this can be achieved using customer allocators, and
I'm wondering if there are any such implementations.

thank you
I don't think you appreciate how slow it will be to search a billion
records without loading the bulk of them into RAM. You're going to be
swapping the file in and out of RAM anyway, since the OS will be
buffering it in memory.

You might consider storing the records in a file, and then creating a
separate indexing map which just contains the unique identifying
field from each record, mapped to a byte offset leading to the record
in the file.
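
A bare-bones sketch of that idea (the record layout and all names are invented for the example, and fixed-length records are assumed so the offsets stay trivial):

#include <cstdint>
#include <fstream>
#include <map>

struct Record {                 // fixed-length on-disk record (assumed layout)
    std::int64_t key;
    char payload[120];
};

// In-memory index: unique key -> byte offset of the record in the data file.
using Index = std::map<std::int64_t, std::streamoff>;

void append(std::fstream& data, Index& index, const Record& r) {
    data.seekp(0, std::ios::end);
    index[r.key] = data.tellp();                    // remember where it landed
    data.write(reinterpret_cast<const char*>(&r), sizeof r);
}

bool lookup(std::fstream& data, const Index& index, std::int64_t key, Record& out) {
    auto it = index.find(key);
    if (it == index.end()) return false;
    data.seekg(it->second);
    data.read(reinterpret_cast<char*>(&out), sizeof out);
    return data.good();
}

// The data file would be opened like:
//   std::fstream data("records.bin",
//                     std::ios::in | std::ios::out | std::ios::binary);
// (the file must already exist; create it once with std::ofstream if not).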

May 15 '07 #9
On May 15, 9:04 am, CMOS <manu...@millenniumit.com> wrote:
A generic DB's performance will not be enough, and I do not need it to
support data modifications, transactions, etc., which would slow down
operation.
The only requirement is to insert data and to query and delete records
using keys; there is no need for an SQL interface either.
Using std::map with allocators for data on disk will *not*
result in better performance than a commercial data base.
Commercial data bases have invested hundreds of man years in
optimizing their accesses. At least one commercial vendor,
SyBase, has a variant of their data base optimized for exactly
this sort of application: updates only in batch, no
transactions, but very fast read access for very, very large
data sets. And all commercial data bases know how to maintain
indexes for multiple fields, in a fashion optimized for disk (B+
trees or hash tables, rather than the classical binary tree
typically used by std::map). It may be possible to do better
than the commercial data bases for a specialized application,
but to do so will require very special custom code (and not just
std::map with a special allocator), and probably something upwards
of ten man-years of development time.

In answer to your question, however, I have my doubts as to
whether it is even possible. The accessors to std::map return
references, and these are required by the standard to be real
references. This means that user code will have references
into your in-memory data which you cannot track, which in turn
means that you cannot know when you can release the in-memory
data: any data, once accessed, must be maintained in memory for
all time.
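
To illustrate (a sketch only; the disk-backed allocator in the comment is hypothetical):

#include <map>
#include <string>

// std::map<Key, T, Compare, SomeDiskBackedAllocator<...>> m;   // hypothetical
std::map<long, std::string> m;                                  // stands in for it here

void f() {
    std::string& r = m[42];   // a *real* reference into the node's storage
    // ... r may be read or written at any later point in this function, so
    // the container (and whatever allocator sits behind it) must keep that
    // node resident at this address; it can never safely page the node out.
    r += " still reachable";
}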

--
James Kanze (GABI Software) email: ja*******@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 16 '07 #10

