Bytes | Software Development & Data Engineering Community
vector::reserve(): Why no fixed memory usage?

Hi,

why does the memory consumption of the program attached below increase
steadily during execution? Shouldn't vector::reserve() allocate one
large chunk of memory that doesn't change anymore?

CPU: Pentium III Coppermine (Celeron)
OS: Slackware LINUX 9.1 with kernel 2.4.22
Compiler: GCC 3.2.3
Compile command: g++ -O0 -o foo foo.cpp
Tools used to check memory consumption: top and xosview

Felix

foo.cpp:
#include <iostream>
#include <vector>
#include <cmath>
using namespace std;

int main() {
    vector<double> vect;
    vect.reserve(50L * 1000000L);

    for (int i = 0; i < 50; ++i) {
        for (long j = 0; j < 1000000; ++j)
            vect.push_back(sin(j * 0.34543));
        cout << "vect.capacity()=" << vect.capacity()
             << " vect.size()=" << vect.size()
             << endl;
    }
    return 0;
}

PS: To contact me off-list, don't reply; send mail to "felix.klee" at
the domain "inka.de". Otherwise your email to me might get automatically
deleted!
Jul 22 '05 #1
Felix E. Klee wrote in
news:20*************************************@gmx.net:
Hi,

why does the memory consumption of the program attached below increase
steadily during execution? Shouldn't vector::reserve() allocate one
large chunk of memory that doesn't change anymore?
[snip]


This is an OS issue, not a C++ issue. What is (probably) happening
is that when you call reserve(), it asks the OS for 400MB of
memory, but the OS grants this "virtually": the real (physical)
memory is only given to your app when it is first accessed. That
happens during your loop, so you see a steady increase in memory
usage by your programme.

Rob.
--
http://www.victim-prime.dsl.pipex.com/
Jul 22 '05 #2
On Thu, 19 Feb 2004 15:11:13 +0100, "Felix E. Klee"
<fe*************@gmx.net> wrote:
Hi,

why does the memory consumption of the program attached below increase
steadily during execution? Shouldn't vector::reserve() allocate one
large chunk of memory that doesn't change anymore?

[snip]


I suspect your OS isn't actually allocating the space when you make
the reserve() call, but only when it gets page faults from accessing
pages of virtual memory that haven't yet been backed by physical
memory. Read up on virtual memory.

This isn't conforming, since bad_alloc probably won't be correctly
thrown under some circumstances, but low-memory conditions are a
complex topic on modern OSes.

Tom

C++ FAQ: http://www.parashift.com/c++-faq-lite/
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
Jul 22 '05 #3
On Thu, 19 Feb 2004 15:37:19 +0000, tom_usenet
<to********@hotmail.com> wrote:
I suspect your OS isn't actually allocating the space when you make
the reserve call,


I meant to say that it is allocating virtual address space, but not
physical memory.

Tom

C++ FAQ: http://www.parashift.com/c++-faq-lite/
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
Jul 22 '05 #4
tom_usenet <to********@hotmail.com> wrote in message news:<ds********************************@4ax.com>...

I suspect your OS isn't actually allocating the space when you make
the reserve() call, but only when it gets page faults from accessing
pages of virtual memory that haven't yet been backed by physical
memory. Read up on virtual memory.

This isn't conforming, since bad_alloc probably won't be correctly
thrown under some circumstances, but low-memory conditions are a
complex topic on modern OSes.

Are you saying that after reserving memory, if you later try to access
that memory, it's possible that the OS might not be able to provide
physical memory, and hence crash your software? Surely OSes must make
some attempt to ensure that reserved blocks of virtual memory will be
available when required, or it would be basically impossible to write
software stable enough to cope with low-memory conditions. To me it
seems to be more an issue of OS memory-usage reporting: I can
understand distinguishing between "reserved but not yet accessed" and
"reserved and accessed/allocated" memory, but I would have thought
both figures would be available.

Dylan
Jul 22 '05 #5
tom_usenet <to********@hotmail.com> wrote in message news:<ds********************************@4ax.com>...

[snip]
This isn't conforming, since bad_alloc probably won't be correctly
thrown under some circumstances, but low-memory conditions are a
complex topic on modern OSes.


FWIW, I did a little test under Win2000 using:

#include <cstdlib>

int main() {
    char* p = (char*)std::malloc(10000000);
    p[0] = 1;
    p[10000000 - 1] = 2;
    p[5000000] = 3;
    p[2000000] = 4;
    p[7000000] = 5;
    std::free(p);
}

The first call sets the 'VM size'* for the process to 10M, but each
subsequent access causes the 'Mem usage' to go up (although never to
10M). It's the same with calloc(), which suggests it relies on the
OS to provide zero-initialized memory (HeapAlloc with
HEAP_ZERO_MEMORY).
Of course, in a _DEBUG build, because the CRT initializes the area
with 'landfill', the 'Mem usage' immediately jumps to 10M. Hence, if
what you say is true regarding bad_alloc, you'd get quite different
behaviour depending on whether you were using debug-enabled memory
allocators (you'd expect some slight difference perhaps, as most debug
allocators use extra space, but not that much).

Dylan
* Columns under Task Manager.
Jul 22 '05 #6
Dylan Nicholson wrote in
news:7d**************************@posting.google.com:
tom_usenet <to********@hotmail.com> wrote in message
news:<ds********************************@4ax.com>...

[snip]

FWIW, I did a little test under Win2000 using:

[snip code]

The first call sets the 'VM size'* for the process to 10M, but each
subsequent access causes the 'Mem usage' to go up (although never to
10M). It's the same with calloc(), which suggests it relies on the
OS to provide zero-initialized memory (HeapAlloc with
HEAP_ZERO_MEMORY).


This seems fine to me: the OS has reserved 10M of memory/page file
for the process, but only gives the process real memory when it
uses it. This leaves the real memory available for other processes
until it's actually used. Presumably, at that point the OS can swap
the real memory used by the other process(es) out into the page
file (VM) it has reserved, and give this process real memory.

Of course, in a _DEBUG build, because the CRT initializes the area
with 'landfill', the 'Mem usage' immediately jumps to 10M. Hence, if
what you say is true regarding bad_alloc, you'd get quite different
behaviour depending on whether you were using debug-enabled memory
allocators (you'd expect some slight difference perhaps, as most debug
allocators use extra space, but not that much).


The only difference should be that the debug build is slower, and
that it also slows down the OS and other apps by forcing the whole
allocation to be committed up front.
Rob.
--
http://www.victim-prime.dsl.pipex.com/
Jul 22 '05 #7
On 19 Feb 2004 14:11:38 -0800, wi******@hotmail.com (Dylan Nicholson)
wrote:
tom_usenet <to********@hotmail.com> wrote in message news:<ds********************************@4ax.com>...

[snip]
Are you saying that after reserving memory, if you later try to access
that memory, it's possible that the OS might not be able to provide
physical memory, and hence crash your software?


Yup. Normally paging will slow the system to a crawl first though.

Surely OSes must make some attempt to ensure that reserved blocks of
virtual memory will be available when required, or it would be
basically impossible to write software stable enough to cope with
low-memory conditions.

On modern OSes you can't rely on bad_alloc being thrown from new; you
have to keep track of memory usage some other way.

To me it seems to be more an issue of OS memory-usage reporting: I
can understand distinguishing between "reserved but not yet accessed"
and "reserved and accessed/allocated" memory, but I would have
thought both figures would be available.


With paging and virtual memory, it doesn't necessarily make sense to
back all allocations with physical memory at the time of allocation,
since that would prevent other processes from using it.

All this concern about new throwing bad_alloc is actually less of an
issue than it seems, since normally new won't throw unless you run
out of virtual address space (which is near-impossible on a 64-bit
machine); instead, some kind of platform-specific signal or exception
may be generated when you access the memory after running out of
physical memory and page-file space.

On the Windows test I just did, I got a message box telling me that
virtual memory was getting low. I don't recall seeing that in the spec
for new!

Tom

C++ FAQ: http://www.parashift.com/c++-faq-lite/
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
Jul 22 '05 #8
