One of the projects I'm working on currently requires ultra-large
maps, lists, vectors, etc. (basically STL containers).
Sizes might grow up to 1000 million entries. Since it is impossible to
hold all this data in memory, I'm planning to implement these
containers so that they hold data both in memory and on disk at the same time.
I'm not sure whether this can be achieved using custom allocators, and I'm
wondering if there are any such implementations.
Thank you
May 15 '07
On May 15, 8:42 am, Ian Collins <ian-n...@hotmail.com> wrote:
> CMOS wrote:
>> one of the projects I'm working on currently requires ultra-large
>> maps, lists, vectors, etc. (basically STL containers).
>> Sizes might grow up to 1000 million entries. Since it is impossible to
>> hold all this data in memory, I'm planning to implement these
>> containers to hold data both in memory and on disk at the same time.
>> I'm not sure whether this can be achieved using custom allocators, and I'm
>> wondering if there are any such implementations.
> The short answer is yes, but are you sure you want to?
Are you sure? It's probably possible to do something so that
parts of the map are loaded lazily, but functions like
map<>::operator[] and map<>::iterator::operator* return
references that are required to be real references, and are
guaranteed to be valid as long as the corresponding entry has
not been erased. I think that more or less pins any
accessed entry in memory, at its original address. Which means
that while you can load lazily (maybe), you cannot drop an entry
from memory once it has been accessed.
One solution that probably is possible, however, is to put the
map in shared memory, backed by a file, using mmap (or its
Windows equivalent). In theory, I think it is possible to even
allow loading it at an arbitrary address; in practice, the one
time I played this game, we loaded at a fixed address, and left
the pointer type a T*. We also designed the data structures so
that they only contained PODs: char[] instead of std::string,
for example. Of course, this still isn't optimized for disk; if
your data set is significantly larger than real memory, and you
start accessing randomly, you're going to page fault like crazy,
and probably end up significantly slower than a classical
database (which optimizes for disk accesses, taking into account the
difference in access times between real memory and disk).
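A minimal sketch of the mmap-backed idea (POSIX only; the MmapArena name
and the bump-allocation scheme are illustrative assumptions, and error
handling is omitted):

```cpp
// A file-backed arena: memory handed out here lives in a mapping that
// the kernel pages to and from the file on demand.
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

class MmapArena {
    char*       base_;
    std::size_t size_;
    std::size_t used_ = 0;
public:
    MmapArena(const char* path, std::size_t size) : size_(size) {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        ftruncate(fd, static_cast<off_t>(size));  // back the mapping with a file
        base_ = static_cast<char*>(
            mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
        close(fd);                                // the mapping keeps the file alive
    }
    // Bump allocation only -- no deallocate() in this sketch.
    void* allocate(std::size_t n) {
        void* p = base_ + used_;
        used_ += (n + 7) & ~std::size_t(7);       // keep 8-byte alignment
        return used_ <= size_ ? p : nullptr;
    }
    ~MmapArena() { munmap(base_, size_); }
};
```

A std::allocator adapter wrapping allocate() could then feed an STL
container, but as noted above, stored pointers are only meaningful
across runs if the file is mapped at the same address every time.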
--
James Kanze (GABI Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
James Kanze wrote:
> On May 15, 8:42 am, Ian Collins <ian-n...@hotmail.com> wrote:
>> CMOS wrote:
>>> one of the projects I'm working on currently requires ultra-large
>>> maps, lists, vectors, etc. (basically STL containers). [...]
>> The short answer is yes, but are you sure you want to?
> Are you sure?
I was thinking of the solution you propose in your second paragraph, and
the drawbacks you mention were my reason for suggesting a database.
--
Ian Collins.
Thanks for all the suggestions. I'll be looking at something like
Sybase while investigating the possibility of implementing a
specialized DB.
One other problem I'm facing in this project is having millions of
files in the same directory. This might go up to billions
(2000 million) as well.
Does anyone have any experience with this type of thing?
* CMOS:
> Thanks for all the suggestions. I'll be looking at something like
> Sybase while investigating the possibility of implementing a
> specialized DB.
> One other problem I'm facing in this project is having millions of
> files in the same directory. This might go up to billions
> (2000 million) as well.
> Does anyone have any experience with this type of thing?
Where do the files come from?
You're leaving us guessing.
I'd guess this is a design for storing collected measurements: some
sort of automated physical data acquisition. Is that right?
By the way, you should really be asking in e.g. [comp.programming],
since questions of design at that level are off-topic in clc++.
Follow-ups set accordingly.
--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
On May 17, 2:01 pm, CMOS <manu...@millenniumit.com> wrote:
> Thanks for all the suggestions. I'll be looking at something like
> Sybase while investigating the possibility of implementing a
> specialized DB.
> One other problem I'm facing in this project is having millions of
> files in the same directory. This might go up to billions
> (2000 million) as well.
> Does anyone have any experience with this type of thing?
Yes, but it's very system dependent. At least on some earlier
versions of Unix (and maybe still today; I'm not about to try
it), access becomes very, very slow for anything over a couple
of hundred files.
More generally, I don't think any file system is designed with
this kind of thing in mind. Any time you need more than a couple
of hundred elements in a flat structure, with rapid access, you
should be thinking in terms of a database.
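Where a real database isn't an option, a common workaround (a sketch;
the two-level, 256-way fan-out is an illustrative choice, not anything
prescribed above) is to hash each name into nested subdirectories so
that no single directory ever holds more than a few thousand entries:

```cpp
#include <cstdio>
#include <functional>
#include <string>

// Map a logical file name to "aa/bb/name" using two hash-derived
// levels: 256 * 256 = 65536 buckets, so even a billion files is only
// about 15000 entries per directory.
std::string bucket_path(const std::string& name) {
    std::size_t h = std::hash<std::string>{}(name);
    char buf[8];
    std::snprintf(buf, sizeof buf, "%02zx/%02zx/",
                  h & 0xff, (h >> 8) & 0xff);
    return std::string(buf) + name;
}
```

The bucket is recomputed from the name on every lookup, so no index of
file locations needs to be kept.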
--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34