Yes, my example code was in error; let me post the correct version and clean
up my question a little.
New example code:
struct base_class { };
struct derive_class : base_class { };
void foo(std::vector<base_class*>& vec)
{
    base_class* pBases = new derive_class[100];
    for(int i = 0; i < 100; ++i)
        vec.push_back(&pBases[i]);
}
void bar(std::vector<base_class*>& vec)
{
    for(int i = 0; i < vec.size(); ++i)
        delete vec[i];
    vec.clear();
}
// I don't give a crap that the main func has a bad prototype, so don't even bother
int main()
{
    std::vector<base_class*> vec;
    foo(vec);
    bar(vec);
    return 0;
}
Now, with this example only, is memory leaked? My intuition says yes,
because I allocated with new[] but didn't de-allocate with delete[], so
somewhere the compiler must have stored the size of the array, but that
bookkeeping never got cleaned up.
And just forget about the latter part of my original question; I didn't
provide enough context to make it clear, and doing so would require more time
and space than most would care to read, I believe.
Jeff
"Kevin Goodsell" <us*********************@neverbox.com> wrote in message
news:3f415089@shknews01...
Jeff Williams wrote:
Ok, everyone loves to talk about dynamic arrays and ptr's etc, they
provide endless conversation :) so here goes:
Will this leak memory (my intuition says yes):
As far as I can tell, it will not. But only because it won't compile.
void foo(vector<int*>& vec)
{
    int* pInts = new int[100];
    for(int i = 0; i < 100; i++)
        vec.push_back(pInts[i]);
}
vec holds int pointers. pInts[i] is an int. Implicit conversion from int
to int * is not allowed.
// It can be assumed that later on in the program vec will have
// delete called on each individual element
That would create an error, since each individual element is *not* a
pointer to something that was allocated with 'new'. You don't get to
make up the rules for how 'new' and 'delete' are used. They are:
Anything that is created with 'new' must eventually be destroyed by
applying 'delete' to it exactly once. Anything that is created with
'new[]' must eventually be destroyed by applying 'delete[]' to it
exactly once.
This example is to simulate a technique I am using involving more
complicated stuff, so yes, I realize there are better ways to do the above
in this simple case.
But also I will explain what I am trying to do in the bigger picture in
case anyone wishes to make suggestions.
I want to fill a container (a vector, but it shouldn't matter what) with
ptrs to instances of objects. But there will be millions of these objects,
and I wish to allocate them all at once and add them to the container. The
calling code doesn't really know the exact type of the object that will
go into the container; it only knows they will at least derive from the type
of the ptr in the container. BUT in reality they will all be the same
object, which is why I want to allocate them all at once... for speed and to
avoid fragmenting memory.
That's fine, but possibly misguided. Are you sure allocating all at once
will speed it up enough to be worth the effort? Also, what if you can't
get a single, huge chunk of memory large enough to hold all of the
objects, but you *can* get 2 or 3 smaller chunks that together are large
enough?
So the obvious solution of something like:
vector<basePtr*> vec;
vec.reserve(1000000);
for(int i = 0; i < 1000000; ++i)
{
    basePtr* pOb = CreateObject(); // virtual func
    vec.push_back(pOb);
What's wrong with
vec.push_back(CreateObject());
?
}
isn't the solution I want
Why not? It looks fine to me (except for the part about CreateObject
being virtual). CreateObject could simply return the next available
object in your massive allocation.
In fact, you could make it a bit better by doing something like this:
class Cache
{
public:
    Cache() : p(new SomeObject[1000000]) {}
    SomeBase *CreateObject();
    void ReleaseObject(SomeBase *);
    ~Cache() { delete [] p; }
private:
    SomeObject *p; // must match the new[] type, or delete [] is undefined
};
It needs work, but at least with this you can be sure that your objects
are deleted when the Cache object is destroyed. You can even change the
implementation of Cache to allow for other methods of organizing the
SomeObjects if you find that one huge chunk of memory doesn't work so
well. Better still, you can change it to create the objects one at a
time in the usual way so that you have something nice and simple to use
during testing and debugging, and then possibly stick with if you find
that performance is good enough.
-Kevin
--
My email address is valid, but changes periodically.
To contact me please use the address from a recent posting.