
stl vector problem

Hi,

I am trying to speed up the performance of an STL vector by
allocating/deallocating blocks of memory manually. One version of the
code crashes when I try to free the memory; the other version seems to
work. I would appreciate it if someone could comment on this.

Version 1 (crashes on deallocating)

#include <iostream>
#include <vector>
#include <stdlib.h>

using namespace std;

int main(int argc, char **argv)
{
    vector<double *> vec;

    double *ary1 = new double[5];
    for (size_t i = 0; i < 5; i++)
    {
        vec.push_back(&ary1[i]);
    }

    double *ary2 = new double[5];
    for (size_t i = 0; i < 5; i++)
    {
        vec.push_back(&ary2[i]);
    }

    for (size_t i = 0; i < vec.size(); i++)
        *vec[i] = double(i);

    for (size_t i = 0; i < vec.size(); i++)
        cout << *vec[i] << endl;

    // crash!!
    for (size_t i = 0; i < vec.size(); i++)
    {
        double *p = vec[i];
        delete p; // free each element, one at a time
    }
}

Version 2: seems to work, but is it good?

#include <iostream>
#include <vector>
#include <stdlib.h>

using namespace std;

int main(int argc, char **argv)
{
    vector<double *> tab;
    vector<double *> vec;

    double *ary1 = new double[5];
    for (size_t i = 0; i < 5; i++)
    {
        vec.push_back(&ary1[i]);
    }
    tab.push_back(ary1);

    double *ary2 = new double[5];
    for (size_t i = 0; i < 5; i++)
    {
        vec.push_back(&ary2[i]);
    }
    tab.push_back(ary2);

    for (size_t i = 0; i < vec.size(); i++)
        *vec[i] = double(i);

    for (size_t i = 0; i < vec.size(); i++)
        cout << *vec[i] << endl;

    for (size_t i = 0; i < tab.size(); i++)
        delete [] tab[i]; // delete as two arrays

    // vec still holds the now-dangling pointers below
    for (size_t i = 0; i < vec.size(); i++)
    {
        cout << vec[i] << endl;
    }

    for (size_t i = 0; i < vec.size(); i++)
    {
        cout << *vec[i] << endl;
    }
}
Jul 22 '05 #1
9 Replies


luigi wrote:

> I am trying to speed up the performance of an STL vector by
> allocating/deallocating blocks of memory manually. <snip>
>
> double *ary1 = new double[5];
new # 1
> <snip>
> double *ary2 = new double[5];
new # 2
> <snip>
> // crash!!
> for(i=0;i<vec.size();i++)
> {
>     double *p = vec[i];
>     delete p; // free each element, one at a time
???? delete 5 things ????
> }

new and delete need to match.

For Version 2:

> for(i=0;i<tab.size();i++)
>     delete [] tab[i]; // delete as two arrays
again - more deletes than news.
I have an idea for you - measure the performance BEFORE you start making
an optimization.

Jul 22 '05 #2

"luigi" <lc****@yahoo.com> wrote in message
news:27**************************@posting.google.com...
Hi,

I am trying to speed up the perfomance of stl vector by
allocating/deallocating blocks of memory manually. one version of the
code crashes when I try to free the memory. The other version seem to
work. I would appreciate someone to comment on this.


I think you are making your life unnecessarily hard. Assuming you have a
performance problem in the first place, and the performance problem is
caused by the way std::vector allocates memory(?), why not just use
std::vector::reserve() for more efficient memory allocation?

Version 1 has some very serious problems that indicate you do not fully
understand what you are doing. First, anything you allocate with
'new double[5];' must be deallocated with 'delete[] p;'. Second, you
cannot delete individual elements of an array allocated with
'new double[5];'; you can only delete the complete array. I see no way
you could turn Version 1 into something reasonable.

I recommend that you first try to gain a better understanding of C++
before you start doing tricky optimizations. My second recommendation is
to optimize code only if the performance of the application is
unacceptable and it is proven that the performance bottleneck is in the
code you are trying to optimize. I have seen too many futile
optimization attempts fail, resulting in code that is buggy, hard to
maintain, and little or no faster than the clean unoptimized version.

--
Peter van Merkerk
peter.van.merkerk(at)dse.nl

Jul 22 '05 #3

lc****@yahoo.com (luigi) wrote in message news:<27**************************@posting.google.com>...
Hi,

I am trying to speed up the perfomance of stl vector by
allocating/deallocating blocks of memory manually.


That won't stop the vector allocating and deallocating memory. Every
time you call push_back your vector might allocate another chunk of
memory. If you know how big you want your vector to be, just make it
that big when you create it. For example:

std::vector<double> vec(5);

vec is 5 elements long and each element contains 0.0 (the
default-initialisation value for double).

std::vector<double> vec2(5, 42.42);

vec is 5 elements long and each element contains 42.42.

This does everything I think you were trying to do. Also look at the
vector member functions reserve, capacity and resize.

<snip>

hth
GJD
Jul 22 '05 #4

Hi,

Thanks for the suggestions.

Any comments on the second version?

In version 2, vec.size() gives the right number of vector elements, and
the news and deletes are balanced, so the problem is not there.

Finally, why shouldn't (besides the fact that the books say you MUST NOT)

for(i=0;i<len;i++) delete &p[i];

be the same as delete [] p; ?

Would anyone care to explain what delete/delete[] "really" does?

"Peter van Merkerk" <me*****@deadspam.com> wrote in message news:<bp*************@ID-133164.news.uni-berlin.de>...
<snip>

Jul 22 '05 #5


"luigi" <lc****@yahoo.com> wrote in message news:27**************************@posting.google.com...
> Finally, why not (besides what the books say you MUST)
> for(i=0;i<len;i++) delete &p[i]; be the same as delete []; ?

The standard says you must. Those are the rules to which programs are
written and compilers are implemented. The fact that the standard lists
doing otherwise as undefined behavior means that a compiler implementer
can ignore the consequences of someone doing that.

> Anyone care to explain what delete/delete[] "really" does?


delete calls the destructor (for classes) and then the appropriate
deallocation function. The deallocation function for delete[] is
different from the one for delete. There is no requirement that the
compiler, or anybody who overloads these functions, handle mismatches.
Jul 22 '05 #6

> Thanks for the suggestions.
>
> Any comments on the second version?

I didn't bother to study the second version, because the optimization
attempt seemed rather futile to me.

> vec.size() gives the right number of vector elements, so the
> news and deletes are balanced. The problem is not there.
They are not. You allocate a double array with 5 elements, then push
the address of every element into the vector (which negates any
performance benefit you are looking for, because the vector needs to
allocate memory for the pointers - now you do two allocations instead
of one!). When you get to the point where you do the clean-up, you try
to delete those elements. However, since you allocated an array you
must delete an array. It is not possible to delete an individual
element of an array; it is all or nothing. That is the way C++ works,
period.

You could fix the clean-up code like this:

for (size_t i = 0; i < vec.size(); i += 5)
{
    double *p = vec[i];
    delete[] p; // free each *array*, one at a time
}

Though I see no reason why one would want to fix that code in the first
place. Your code will be slower than the straightforward alternative
below, it is much more error-prone and high-maintenance, and it is not
exception safe.

BTW: the straightforward way would look like this (which is certainly
faster than the code you posted, and exception safe):

#include <iostream>
#include <vector>

using namespace std;

int main(int argc, char **argv)
{
    vector<double> vec;
    vec.reserve(10); // Avoids reallocations when filling vec
    for (size_t i = 0; i < 10; ++i)
    {
        vec.push_back(double(i));
    }

    for (size_t i = 0; i < vec.size(); ++i)
        cout << vec[i] << endl;
}

> Finally, why not (besides what the books say you MUST)

Because if you don't, it is not guaranteed to work (even though you may
get away with it on some compilers).

> for(i=0;i<len;i++) delete &p[i]; be the same as delete []; ?
>
> Anyone care to explain what delete/delete[] "really" does?


You can find that in the FAQ (which is an interesting read anyway):
http://www.parashift.com/c++-faq-lit...es.html#faq-37.7
http://www.parashift.com/c++-faq-lit...html#faq-16.13

--
Peter van Merkerk
peter.van.merkerk(at)dse.nl

Jul 22 '05 #7

Hi,

Thanks again for the good comments.

These are just made-up test cases. Please consider the case when the
elements are pointers to huge structures with their own dynamic
allocation. The vector size could also vary widely. Would your
comments on the performance still hold?
de*********@hotmail.com (Gavin Deane) wrote in message news:<6d**************************@posting.google.com>...
<snip>
Jul 22 '05 #8

luigi wrote:
> These are just made-up test cases. Please consider the case when the
> elements are pointers to huge structures with their own dynamic
> allocation. The vector size could also vary widely. Would your
> comments on the performance still hold?


There is too little information here to tell what the right answer is.

I suspect that:

std::vector< std::vector<double> >

will probably work with acceptable performance.

However, I don't know what you're trying to do.

Jul 22 '05 #9

lc****@yahoo.com (luigi) wrote in message news:<27**************************@posting.google.com>...

Please don't top-post. Put your reply below the previous message. It
makes the thread easier to follow. Thanks.

<rearranged>
de*********@hotmail.com (Gavin Deane) wrote in message news:<6d**************************@posting.google.com>...
<snip>

> Hi,
>
> Thanks again for the good comments.
>
> These are just made-up test cases. Please consider the case when the
> elements are pointers to huge structures with their own dynamic
> allocation. The vector size could also vary widely. Would your
> comments on the performance still hold?


I hope I've understood your question and that what I've written is
relevant to you.

There is nothing to be gained by trying to take low-level control of
the vector's internal storage. Use of the appropriate vector
constructors, as well as member functions like resize, reserve and
capacity will give you all the control you need. Very simple example
(look up the other vector member functions for further reference):

#include <vector>

int main()
{
    std::vector<int> vi;
    vi.reserve(10); // vi can hold ten ints before needing to
                    // allocate more memory.

    for (int i = 0; i < 5; ++i)
    {
        vi.push_back(i); // 5 push_back calls, but no further dynamic
                         // allocation inside the vector is necessary.
    }

    return 0;
}

Now, you have a huge structure, instances of which will be created
dynamically, something like

struct my_struct { /* huge */ };
my_struct* s = new my_struct;
// later ...
delete s;

This is a completely separate issue from the memory management going
on inside the vector. If you have a std::vector<my_struct*>, it is
that vector's responsibility to manage dynamic storage for enough
pointers-to-my_struct. And you have some control over that through the
vector class interface.

It is your responsibility to remember to delete any my_struct objects
you allocated with new. Another simple example:

#include <vector>

struct my_struct { /* huge */ };

int main()
{
    std::vector<my_struct*> vec;

    my_struct* s0 = new my_struct;
    my_struct* s1 = new my_struct;

    // vec is responsible for managing the dynamic storage for
    // enough pointers-to-my_struct here.
    vec.push_back(s0);
    vec.push_back(s1);

    // You have to do this bit.
    delete vec[0]; // or delete s0;
    delete vec[1]; // or delete s1;

    return 0;

    // At the end of the function, vec is destroyed. During destruction
    // it cleans up its dynamic pointers-to-my_struct storage.
}

The vector can't help you with deleting the actual my_struct objects,
though, because it has no way of knowing whether the contained
pointers were created in a new expression or not. For example, you
could store pointers to dynamically allocated objects in the same
vector as pointers to local objects. I would not recommend this at
all; it's just to illustrate why the vector cannot be responsible for
deleting the objects it holds pointers to.

#include <vector>

struct my_struct { /* huge */ };

int main()
{
    std::vector<my_struct*> vec;

    my_struct* s0 = new my_struct;
    my_struct local_object;
    my_struct* s1 = &local_object;

    // vec is responsible for managing the dynamic storage for
    // enough pointers-to-my_struct here.
    vec.push_back(s0);
    vec.push_back(s1);

    // The vector cannot do this bit because it cannot know
    // which pointers point to objects allocated with new.
    // You have to do this.
    delete vec[0];
    // You must not do this.
    //delete vec[1];

    return 0;

    // At the end of the function, vec is destroyed. During destruction
    // it cleans up its dynamic pointers-to-my_struct storage.
}

--
hth
GJD
Jul 22 '05 #10
