Akyl Tulegenov wrote:
Dear All!
I've got a question about a rather system-specific case.
Suppose I am dealing with a large array of entities, roughly 4 GB in total.
The question is: which is faster, addressing the data through a binary file
created on disk, or using dynamic memory that is extended through a large
swap file (again on disk)? Note that the swap file is created on the same
disk partition.
I would be grateful if, in addition to your comments, you could point me to
a relevant source (a URL or otherwise).
Thanks,
Akyl.
A great topic for news:comp.programming (followups set).
As other people have stated, measure.
With memory access, you may get a "page fault", which tells
the operating system to bring the page in from backing store.
This is your worst-case timing.
When reading from a file, your worst case is that you have to
read from the disk without any caching; your best case is that
the data is already in a cache somewhere.
Another alternative is for you to allocate some memory and
read in a chunk of data. You can set the chunk size yourself.
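That chunked-read alternative can be sketched as follows; the chunk size is
the tuning knob you would experiment with, and the summing loop is a
hypothetical placeholder for whatever processing you do per chunk:

```cpp
#include <cstdio>
#include <vector>

// Read 'path' in fixed-size chunks and return the total bytes seen.
// Assumes chunk_size > 0; pick it yourself and measure.
long read_in_chunks(const char* path, std::size_t chunk_size)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return -1;
    std::vector<char> buf(chunk_size);
    long total = 0;
    std::size_t got;
    while ((got = std::fread(buf.data(), 1, buf.size(), f)) > 0)
        total += static_cast<long>(got);  // process buf[0..got) here
    std::fclose(f);
    return total;
}
```

With two buffers you can extend this to double buffering: process one chunk
while the next is being read in.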
As for references, search the web for:
"Operating Systems" "page fault"
or "double buffering"
or "spooling".
--
Thomas Matthews
C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq:
http://www.parashift.com/c++-faq-lite
C Faq:
http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.raos.demon.uk/acllc-c++/faq.html
Other sites:
http://www.josuttis.com -- C++ STL Library book