Hi Jonathan,
assuming that you are talking about a 32-bit process here, on XP you only
have 2 GB of virtual address space accessible in user mode (3 GB only if
the system is booted with /3GB and the image is linked
/LARGEADDRESSAWARE). Even if you have more physical RAM and plenty of
paging file space, your process will not be able to use more than 2 GB
of VM.
Your modules, stacks and heaps are sprinkled throughout these 2 GB of
VM, so you have to assume that the address space is already quite
fragmented. Given these constraints, allocating 500 MB of *contiguous*
VM is very likely to fail, even when the total amount of free VM would
be sufficient.
So first you should ask yourself whether you really need such large
blocks of contiguous virtual memory, or whether you can split them into
chains of multiple, much smaller blocks (not all of which may even need
to be in memory at once).
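
To illustrate the idea (class and names are mine, just a sketch): store
the logical buffer as a chain of fixed-size chunks, so no single
allocation ever needs to be contiguous beyond the chunk size:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Illustrative sketch: a large logical buffer stored as a chain of
// fixed-size chunks. No allocation is ever larger than kChunk, so the
// allocator only needs small contiguous ranges of address space.
class ChunkedBuffer {
public:
    static constexpr std::size_t kChunk = 1u << 20; // 1 MB per chunk

    explicit ChunkedBuffer(std::size_t bytes) : size_(bytes) {
        std::size_t chunks = (bytes + kChunk - 1) / kChunk;
        for (std::size_t i = 0; i < chunks; ++i)
            chunks_.push_back(std::unique_ptr<unsigned char[]>(
                new unsigned char[kChunk]));
    }

    // Index the buffer as if it were one flat array.
    unsigned char& at(std::size_t i) {
        return chunks_[i / kChunk][i % kChunk];
    }

    std::size_t size() const { return size_; }

private:
    std::size_t size_;
    std::vector<std::unique_ptr<unsigned char[]>> chunks_;
};
```

A side benefit: allocations of this size are typically satisfied by the
heap as dedicated regions rather than from a free list, so they tend to
go back to the OS promptly when freed.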
Finally, if you cannot avoid using large contiguous blocks of VM,
consider reserving a generous range of VM early in the lifetime of your
process (VirtualAlloc with MEM_RESERVE) and committing parts of it as
you need them (MEM_COMMIT); MEM_DECOMMIT gives the pages back to the OS
while keeping the address range reserved.
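
A sketch of that reserve-then-commit pattern (the wrapper class is
illustrative; the VirtualAlloc/VirtualFree calls and flags on the
Windows branch are the real API). So the sketch also compiles on other
platforms, I've added a POSIX branch using the rough analogues
mmap(PROT_NONE)/mprotect:

```cpp
#include <cassert>
#include <cstddef>

#ifdef _WIN32
#  include <windows.h>
#else
#  include <sys/mman.h>
#endif

// Illustrative wrapper: grab a large range of address space up front
// (reserve), then back sub-ranges with real pages only when needed
// (commit). Offsets passed to commit/decommit must be page-aligned.
struct ReservedRegion {
    unsigned char* base;
    std::size_t    size;

    explicit ReservedRegion(std::size_t bytes) : size(bytes) {
#ifdef _WIN32
        base = static_cast<unsigned char*>(
            VirtualAlloc(nullptr, bytes, MEM_RESERVE, PAGE_NOACCESS));
#else
        // POSIX analogue: map inaccessible pages; nothing committed yet.
        void* p = mmap(nullptr, bytes, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        base = (p == MAP_FAILED) ? nullptr
                                 : static_cast<unsigned char*>(p);
#endif
    }

    // Make [base+offset, base+offset+bytes) readable and writable.
    bool commit(std::size_t offset, std::size_t bytes) {
#ifdef _WIN32
        return VirtualAlloc(base + offset, bytes,
                            MEM_COMMIT, PAGE_READWRITE) != nullptr;
#else
        return mprotect(base + offset, bytes,
                        PROT_READ | PROT_WRITE) == 0;
#endif
    }

    // Return the pages to the OS; the address range stays reserved.
    void decommit(std::size_t offset, std::size_t bytes) {
#ifdef _WIN32
        VirtualFree(base + offset, bytes, MEM_DECOMMIT);
#else
        mprotect(base + offset, bytes, PROT_NONE);
        madvise(base + offset, bytes, MADV_DONTNEED);
#endif
    }

    ~ReservedRegion() {
#ifdef _WIN32
        if (base) VirtualFree(base, 0, MEM_RELEASE);
#else
        if (base) munmap(base, size);
#endif
    }
};
```

Note that on Windows the reservation base is aligned to the allocation
granularity (64 KB), while commit granularity is the page size (4 KB).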
--Johannes
Jonathan Wilson wrote:
> I am working on some software which has to deal with data that could
> be as large as 500 MB or so. Currently I am using new[] and delete[]
> to manage this memory, but I find it is not ideal and sometimes gives
> out-of-memory errors if I open one large data item, free it, and then
> open another large data item, even though I have enough memory
> (including 2 GB of physical RAM and 100 GB of free disk space for the
> swap file). Are there any functions (either in the CRT or in the Win32
> API) that would be better for this job? Specifically, functions that
> are guaranteed to return the memory to the OS (so that Task Manager
> shows the memory is no longer in use) and that have as little overhead
> as possible.
> The app in question runs only on Windows XP, and only one thread
> touches this memory at all.
--
Johannes Passing -
http://int3.de/