Maps to hold ultra-large data sets, using custom allocators to allocate disk space rather than main memory

One of the projects I'm currently working on requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to hold all this data in memory, I'm planning to implement these containers so that they hold data both in memory and on disk at the same time. I'm not sure whether this can be achieved using customer allocators, and I'm wondering if there are any such implementations.

thank you

May 15 '07 #1
* CMOS:
One of the projects I'm currently working on requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to hold all this data in memory, I'm planning to implement these containers so that they hold data both in memory and on disk at the same time. I'm not sure whether this can be achieved using customer allocators, and I'm wondering if there are any such implementations.
A few GBytes of data isn't that much, really, if you have the hardware
to match. However, from your comment about "customer (sic) allocators",
and simply from the fact that you're seeking advice here, I'm reasonably
sure that this is not a million-dollar budget project, but rather a
student project, and that the requirement of billions of entries stems
from bad design, and is not an inherent requirement of the problem
you're trying to solve. So do tell about the problem, not how you're
envisioning solving it; perhaps we can suggest better ways.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
May 15 '07 #2
CMOS wrote:
One of the projects I'm currently working on requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to hold all this data in memory, I'm planning to implement these containers so that they hold data both in memory and on disk at the same time. I'm not sure whether this can be achieved using customer allocators, and I'm wondering if there are any such implementations.
The short answer is yes, but are you sure you want to?
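
To give a concrete idea of what the "yes" involves: std::map takes an allocator as its fourth template parameter, and a disk-backed container would plug in something with roughly the shape below. This is only a sketch; the "arena" here is plain ::operator new standing in for a real allocator that would carve blocks out of a memory-mapped file.

#include <cstddef>
#include <functional>
#include <map>
#include <new>
#include <string>

// Sketch only: names and layout are illustrative, not a tested design.
template <typename T>
struct disk_allocator {
    typedef T value_type;

    disk_allocator() {}
    template <typename U> disk_allocator(const disk_allocator<U>&) {}

    T* allocate(std::size_t n) {
        // A real implementation would return an address inside the
        // mapped file instead of heap memory.
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) {
        ::operator delete(p);
    }
};

template <typename T, typename U>
bool operator==(const disk_allocator<T>&, const disk_allocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const disk_allocator<T>&, const disk_allocator<U>&) { return false; }

int main() {
    // The allocator is the fourth template parameter of std::map.
    std::map<long, std::string, std::less<long>,
             disk_allocator<std::pair<const long, std::string> > > m;
    m[1] = "first record";
    return 0;
}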

--
Ian Collins.
May 15 '07 #3
Noted: custom allocators. Sorry.

The problem is to index 10 billion records of a certain type on a given field (the field type might be a number, string, date, etc.), and to query the results for fast retrieval.

thanks

May 15 '07 #4
CMOS wrote:
Noted: custom allocators. Sorry.

The problem is to index 10 billion records of a certain type on a given field (the field type might be a number, string, date, etc.), and to query the results for fast retrieval.
Why not just use a database, which will have been optimised for this task?

--
Ian Collins.
May 15 '07 #5
A generic DB's performance will not be enough, and I do not need it to support data modifications, transactions, etc., which would slow the operation down. The only requirement is to insert data and to query and delete records using keys; there is no need for an SQL interface either.

May 15 '07 #6
On May 15, 4:12 pm, CMOS <manu...@millenniumit.com> wrote:
One of the projects I'm currently working on requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to hold all this data in memory, I'm planning to implement these containers so that they hold data both in memory and on disk at the same time. I'm not sure whether this can be achieved using customer allocators, and I'm wondering if there are any such implementations.

thank you
http://www.sqlite.org/
http://www.postgresql.org/

Either one should do the job.
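
To give a feel for the first option: the insert and lookup-by-key usage described above needs only a handful of SQLite C API calls. A minimal sketch (database file, table, and values are made up for illustration; error handling elided; build with -lsqlite3):

#include <cstdio>
#include <sqlite3.h>

int main() {
    sqlite3* db = 0;
    if (sqlite3_open("records.db", &db) != SQLITE_OK) return 1;

    // One table keyed on the field to be indexed.
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS records ("
        "  key INTEGER PRIMARY KEY,"
        "  payload BLOB)",
        0, 0, 0);

    // Insert one record.
    sqlite3_stmt* ins = 0;
    sqlite3_prepare_v2(db,
        "INSERT OR REPLACE INTO records(key, payload) VALUES(?, ?)",
        -1, &ins, 0);
    sqlite3_bind_int64(ins, 1, 42);
    sqlite3_bind_blob(ins, 2, "hello", 5, SQLITE_STATIC);
    sqlite3_step(ins);
    sqlite3_finalize(ins);

    // Look one up by key.
    sqlite3_stmt* sel = 0;
    sqlite3_prepare_v2(db,
        "SELECT payload FROM records WHERE key = ?", -1, &sel, 0);
    sqlite3_bind_int64(sel, 1, 42);
    if (sqlite3_step(sel) == SQLITE_ROW)
        std::printf("found %d bytes\n", sqlite3_column_bytes(sel, 0));
    sqlite3_finalize(sel);

    sqlite3_close(db);
    return 0;
}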

May 15 '07 #7
CMOS wrote:

Please quote enough context for your reply to make sense.
A generic DB's performance will not be enough, and I do not need it to support data modifications, transactions, etc., which would slow the operation down. The only requirement is to insert data and to query and delete records using keys; there is no need for an SQL interface either.
Well, I'd still use one unless I had a real performance issue. Even then, my first action would be to upgrade the hardware!

--
Ian Collins.
May 15 '07 #8
On May 15, 1:12 am, CMOS <manu...@millenniumit.com> wrote:
One of the projects I'm currently working on requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to hold all this data in memory, I'm planning to implement these containers so that they hold data both in memory and on disk at the same time. I'm not sure whether this can be achieved using customer allocators, and I'm wondering if there are any such implementations.

thank you
I don't think you appreciate how slow it will be to search a billion records without loading the bulk of them into RAM. You're going to be swapping the entire file in and out of RAM anyway, since the OS will be buffering the file in memory regardless.

You might consider storing the records in a file and then creating a separate index map that contains just the unique identifying field from each object, mapped to the byte offset of the record in the file.
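
A rough sketch of that layout, assuming fixed-size records keyed by an integer field (the record layout and file name are invented for illustration):

#include <cstdint>
#include <fstream>
#include <map>

// Fixed-size records live in a flat binary file; only the
// key -> byte-offset index is held in memory.
struct Record {
    std::uint64_t key;
    char          payload[248];   // 256 bytes per record, assumed layout
};

int main() {
    std::map<std::uint64_t, std::uint64_t> index;   // key -> offset in file

    std::ifstream data("records.dat", std::ios::binary);
    Record r;
    std::uint64_t offset = 0;
    while (data.read(reinterpret_cast<char*>(&r), sizeof r)) {
        index[r.key] = offset;          // one pass over the file builds the index
        offset += sizeof r;
    }

    // A lookup then costs one seek and one read.
    std::map<std::uint64_t, std::uint64_t>::const_iterator it = index.find(42);
    if (it != index.end()) {
        data.clear();                   // clear the EOF state left by the scan
        data.seekg(static_cast<std::streamoff>(it->second));
        data.read(reinterpret_cast<char*>(&r), sizeof r);
    }
    return 0;
}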

May 15 '07 #9
On May 15, 9:04 am, CMOS <manu...@millenniumit.com> wrote:
A generic DB's performance will not be enough, and I do not need it to support data modifications, transactions, etc., which would slow the operation down. The only requirement is to insert data and to query and delete records using keys; there is no need for an SQL interface either.
Using std::map with allocators for data on disk will *not*
result in better performance than a commercial data base.
Commercial data bases have invested hundreds of man-years in
optimizing their accesses. At least one commercial vendor,
Sybase, has a variant of their data base optimized for exactly
this sort of application: updates only in batch, no
transactions, but very fast read access for very, very large
data sets. And all commercial data bases know how to maintain
indexes for multiple fields in a fashion optimized for disk (B+
trees or hash tables, rather than the classical binary tree
typically used by std::map). It may be possible to do better
than the commercial data bases for a specialized application,
but doing so will require very specialized custom code (and not
just std::map with a special allocator), and probably something
on the order of ten man-years of development time.

In answer to your question, however, I have my doubts as to
whether it is even possible. The accessors of std::map return
references, and these are required by the standard to be real
references. This means that user code will hold references
into your in-memory data which you cannot track, which in turn
means that you cannot know when you can release the in-memory
data: any data, once accessed, must be kept in memory for
all time.
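
A small illustration of that pinning, with a perfectly ordinary std::map:

#include <map>
#include <string>

int main() {
    std::map<long, std::string> index;
    index[42] = "record 42";

    // The standard guarantees this reference stays valid until the
    // element is erased, so a disk-backed map could never quietly
    // evict the node behind it to make room for other entries.
    std::string& pinned = index[42];

    index[1000000] = "another record";   // later insertions must not invalidate it
    return pinned.empty() ? 1 : 0;
}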

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 16 '07 #10
On May 15, 8:42 am, Ian Collins <ian-n...@hotmail.com> wrote:
CMOS wrote:
One of the projects I'm currently working on requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to hold all this data in memory, I'm planning to implement these containers so that they hold data both in memory and on disk at the same time. I'm not sure whether this can be achieved using customer allocators, and I'm wondering if there are any such implementations.
The short answer is yes, but are you sure you want to?
Are you sure? It's probably possible to do something so that
parts of the map are loaded lazily, but functions like
map<>::operator[] and map<>::iterator::operator* return
references that are required to be real references, and are
guaranteed to be valid as long as the corresponding entry has
not been erased. I think that that more or less pins any
accessed entry in memory, at its original address. Which means
that while you can load lazily (maybe), you cannot drop an entry
from memory once it has been accessed.

One solution that probably is possible, however, is to put the
map in shared memory, backed by a file, using mmap (or its
Windows equivalent). In theory, I think it is possible to even
allow loading it at an arbitrary address; in practice, the one
time I played this game, we loaded at a fixed address, and left
the pointer type a T*. We also designed the data structures so
that they only contained PODs: char[] instead of std::string,
for example. Of course, this still isn't optimized for disk; if
your data set is significantly larger than real memory, and you
start accessing randomly, you're going to page fault like crazy,
and probably end up significantly slower than a classical data
base (which optimize for disk accesses, taking into account the
difference in access times between real memory and disk).
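
A rough POSIX sketch of that approach (read-only here; the record layout and file name are assumptions, and a writable version would map with PROT_WRITE as well):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>

// The file is treated as an array of POD records; the kernel pages
// them in and out on demand.
struct Record {
    long   key;
    double value;
    char   tag[48];
};

int main() {
    int fd = open("records.dat", O_RDONLY);
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) return 1;
    std::size_t size = static_cast<std::size_t>(st.st_size);

    void* p = mmap(0, size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;

    const Record* records = static_cast<const Record*>(p);
    std::size_t n = size / sizeof(Record);
    if (n > 0)
        std::printf("first key: %ld\n", records[0].key);

    munmap(p, size);
    close(fd);
    return 0;
}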

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 16 '07 #11
James Kanze wrote:
On May 15, 8:42 am, Ian Collins <ian-n...@hotmail.com> wrote:
>CMOS wrote:
>>One of the projects I'm currently working on requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to hold all this data in memory, I'm planning to implement these containers so that they hold data both in memory and on disk at the same time. I'm not sure whether this can be achieved using customer allocators, and I'm wondering if there are any such implementations.
>The short answer is yes, but are you sure you want to?

Are you sure?
I was thinking of the solution you propose in your second paragraph, and the drawbacks you mention were my reason for suggesting a database.
It's probably possible to do something so that
parts of the map are loaded lazily, but functions like
map<>::operator[] and map<>::iterator::operator* return
references that are required to be real references, and are
guaranteed to be valid as long as the corresponding entry has
not been erased. I think that that more or less pins any
accessed entry in memory, at its original address. Which means
that while you can load lazily (maybe), you cannot drop an entry
from memory once it has been accessed.

One solution that probably is possible, however, is to put the
map in shared memory, backed by a file, using mmap (or its
Windows equivalent). In theory, I think it is possible to even
allow loading it at an arbitrary address; in practice, the one
time I played this game, we loaded at a fixed address, and left
the pointer type a T*. We also designed the data structures so
that they only contained PODs: char[] instead of std::string,
for example. Of course, this still isn't optimized for disk; if
your data set is significantly larger than real memory, and you
start accessing randomly, you're going to page fault like crazy,
and probably end up significantly slower than a classical data
base (which optimize for disk accesses, taking into account the
difference in access times between real memory and disk).
--
Ian Collins.
May 16 '07 #12
Thanks for all the suggestions. I'll be looking at something like Sybase while investigating the possibility of implementing a specialized DB.

One other problem I'm facing in this project is having millions of files in the same directory; this might go up to billions (2000 million) as well. Does anyone have any experience with this kind of thing?

May 17 '07 #13
* CMOS:
Thanks for all the suggestions. I'll be looking at something like Sybase while investigating the possibility of implementing a specialized DB.

One other problem I'm facing in this project is having millions of files in the same directory; this might go up to billions (2000 million) as well. Does anyone have any experience with this kind of thing?
Where do the files come from?

You're leaving us guessing.

I'd guess this is a design for storing collected measurements. Some
sort of automated physical data acquisition. Is that right?

By the way, you should really be asking in e.g. [comp.programming],
since questions of design at that level are off-topic in clc++.

Follow-ups set accordingly.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
May 17 '07 #15
On May 17, 2:01 pm, CMOS <manu...@millenniumit.com> wrote:
Thanks for all the suggestions. I'll be looking at something like Sybase while investigating the possibility of implementing a specialized DB.

One other problem I'm facing in this project is having millions of files in the same directory; this might go up to billions (2000 million) as well. Does anyone have any experience with this kind of thing?
Yes, but it's very system dependent. At least on some earlier
versions of Unix (and maybe still today---I'm not about to try
it), access becomes very, very slow for anything over a couple
of hundred files.

More generally, I don't think any file system is designed with
this kind of thing in mind. Anytime you need more than a couple
of hundred elements in a flat structure, with rapid access, you
should be thinking in terms of a data base.
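
If millions of files per directory really cannot be avoided, one standard workaround is to fan the files out over a hashed tree of subdirectories so that no single directory ever holds more than a few thousand entries. A rough sketch; the two-level, 256 x 256 fan-out and the path scheme are just assumptions for illustration:

#include <cstddef>
#include <cstdio>
#include <functional>
#include <string>

// Map a record key to a path such as "store/3a/7f/<key>.rec".
std::string bucket_path(const std::string& key) {
    std::size_t h = std::hash<std::string>()(key);
    char prefix[32];
    std::snprintf(prefix, sizeof prefix, "store/%02x/%02x/",
                  static_cast<unsigned>(h & 0xffu),
                  static_cast<unsigned>((h >> 8) & 0xffu));
    return std::string(prefix) + key + ".rec";
}

int main() {
    std::printf("%s\n", bucket_path("some-key").c_str());
    return 0;
}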

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
May 17 '07 #16
