Bytes IT Community

I/O of a large number of files

Hi all:

I am processing a 3D bitmap (essentially ~1024 2D bitmaps of 1MB each).

If I want to read a large amount of random data from this series, how could I
buffer the files to get optimized performance? The machine runs WinXP Pro with
512MB of memory and no other big programmes running at the same time.

Cheers

David
Nov 15 '05 #1
5 Replies




David wrote:
Hi all:

I am processing a 3D bitmap (essentially ~1024 2D bitmaps of 1MB each).

If I want to read a large amount of random data from this series, how could I
buffer the files to get optimized performance?
If the access pattern is truly random there isn't much you
can do. Load as much as you can manage into memory and hope
your next random access is for something already loaded; if
it isn't, throw something away and load the piece you're
trying to get at. The selection of what to pre-load, what
to throw away, and how much to load for an out-of-memory
experience depends on the access patterns. Some patterns
will lend themselves to exploitation, others won't.

The C pieces you'll need will probably be fopen (with
"rb" mode, most likely), fseek, fread, and fclose, along
with the usual memory-management stuff. I mention fclose
because your system may not permit you to have 1024 file
streams open simultaneously; you may need to "multiplex"
the large number of files across a smaller number of FILE*
streams by closing and re-opening as necessary. The
FOPEN_MAX macro in <stdio.h> gives an approximation to the
number of streams you can open simultaneously, but the
value should be treated only as an approximation.
With WinXP Pro, 512MB of memory
and no other big programmes running at the same time.


Irrelevant detail. Well, highly relevant in some ways,
but not to your question (if there is one) about the C
programming language. Windows-oriented newsgroups may have
suggestions that go beyond what C itself can provide.

--
Er*********@sun.com

Nov 15 '05 #2

David wrote:
Hi all:

I am processing a 3D bitmap (essentially ~1024 2D bitmaps of 1MB each).

If I want to read a large amount of random data from this series, how could I
buffer the files to get optimized performance? The machine runs WinXP Pro with
512MB of memory and no other big programmes running at the same time.

Cheers

David

Map the whole file into memory (backed by the OS).
Read the docs for the MapViewOfFile API. This will not use
all the RAM of course, but the system will do the paging
for you, which is far more efficient than what you can do
yourself.

This is not a standard C function, and on other operating
systems you may have to use a different approach.

Using standard C functions you can do the same (as Eric
Sosman replied), but it will be less efficient and is not
trivial to develop.

A simpler approach using just standard C would be to allocate enough virtual
memory for the whole series (~1GB) with malloc(), then read all the files into
it. The system will do the paging for you in that case. Frequently used pages
will be kept in memory; less frequently used ones will eventually be paged out.

jacob
Nov 15 '05 #3

In article <dc**********@scorpius.csx.cam.ac.uk>,
David <xz***@noreply.com> wrote:

I am processing a 3D bitmap (essentially ~1024 2D bitmaps of 1MB each).

If I want to read a large amount of random data from this series, how could I
buffer the files to get optimized performance? The machine runs WinXP Pro with
512MB of memory and no other big programmes running at the same time.


I'll take a wild guess and say that whatever process is generating
these 3D bitmaps is pretty high-tech and not cheap...perhaps there is
room in the budget for another gig of RAM? Then you could load the
whole 3D image into RAM and still have the original 512 for the OS and
application. Next step would be to upgrade to a processor with large
amounts of cache.

Sure, it would be cool to optimize some graphics-intensive C code to
minimize thrashing but I hope you are being paid well enough that
1G RAM is cheaper than a couple days of your valuable time.
--
7842++
Nov 15 '05 #4

Thank you all for those informative and inspiring replies!

Maybe for my purpose, I may opt to buffer a number of files and use a
counter to record the times each file has been accessed. Once an
unbuffered file is required, it will replace the least-accessed file.
I would prefer this easy and platform-independent method... coz I know
little about APIs...
:( ...

Hope the random statistics will do the job themselves.

"David" <xz***@noreply.com> wrote in message
news:dc**********@scorpius.csx.cam.ac.uk...
Hi all:

I am processing a 3D bitmap (essentially ~1024 2D bitmaps of 1MB each).

If I want to read a large amount of random data from this series, how could I
buffer the files to get optimized performance? The machine runs WinXP Pro with
512MB of memory and no other big programmes running at the same time.

Cheers

David

Nov 15 '05 #5

In article <dc**********@scorpius.csx.cam.ac.uk>, David wrote:
Hi all:

I am processing a 3D bitmap (essentially ~1024 2D bitmaps of 1MB each).

If I want to read a large amount of random data from this series, how could I
buffer the files to get optimized performance? The machine runs WinXP Pro with
512MB of memory and no other big programmes running at the same time.


The answer is process-specific: it depends what you're doing with the bitmap.

The easiest solution is probably to put more RAM in your PC, if the OS can
handle it.

Otherwise you may need to break the bitmap up into chunks which can be
processed independently.
Bye.
Jasen
Nov 15 '05 #6
