
dynamic allocation vs array

Hi all,

I was told this in one of the university courses I was taking.

In C we may expect good performance for:

double a[N][N], c[N][N], d;
for (i=0; i<N; i++)
    for (j=0; j<N; j++)
        a[i][j] = a[i][j] + c[i][j] * d;

But this is another matter:

double *a[N], *c[N], d;
for (i=0; i<N; i++) {
    a[i] = (double *) malloc(N*sizeof(double));
    c[i] = (double *) malloc(N*sizeof(double));
}

for (i=0; i<N; i++)
    for (j=0; j<N; j++)
        a[i][j] = a[i][j] + c[i][j] * d;
It seems that we should expect some performance hit if we use dynamic
memory allocation of some sort. But it is not standard ANSI C++ to have
an array whose size is determined at runtime. So is there any good
recommendation for minimizing this performance hit, or avoiding it
entirely through some other method? I would expect a linked list to be
even worse. Any recommendations? Thanks.
Thuan Seah

Jul 22 '05 #1
12 Replies


On Fri, 10 Sep 2004 14:47:41 +1000, Tan Thuan Seah <u2******@anu.edu.au> wrote:
[original question quoted in full; snipped]


You are using arrays with non-constant size (assuming N is not a
constant) in both examples. C99 allows that, but C++ does not, so
neither example will compile as standard C++.

What were the reasons cited for getting worse performance?

If it's the cost of the malloc, then you are doing it at setup time
anyway, and it's unavoidable without resorting to platform-specific
stack allocation (such as alloca).
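
For what it's worth, that would look something like this (a sketch
only: alloca is not standard C++, the header varies by platform, and a
large n will simply overflow the stack):

#include <alloca.h>   // non-standard header; on some platforms it is <malloc.h>
#include <cstddef>

double sum_row(std::size_t n, double d)
{
    // stack allocation: released automatically when the function returns
    double *row = (double *)alloca(n * sizeof(double));
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        row[i] = i * d;
        sum += row[i];
    }
    return sum;
}

int main()
{
    return sum_row(8, 0.5) > 0.0 ? 0 : 1;   // runtime-sized "array", no malloc/free
}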

If it's the extra pointer dereference, then that's pretty much
unavoidable, though you've managed to add two of them; the "flattened"
code below avoids those as well.

If it's that the memory isn't "localised" and hence cache misses are
more frequent you could use "flattened" arrays:

double *a = new double[N*N];
double *c = new double[N*N];
double d;

/* ... */

for (int i=0; i<N; i++)
    for (int j=0; j<N; j++)
        a[i*N+j] += c[i*N+j] * d;

Have you benchmarked to see if it matters? I get an 8% difference (76
milliseconds) with everything as globals; with the arrays on the stack
it crashes, since two million doubles don't fit on the stack on my
machine.

Is speed or not crashing more important?

--
Sam Holden
Jul 22 '05 #2

Tan Thuan Seah posted:
[original question quoted in full; snipped]


You could allocate one big chunk of memory and then chop it up into
rows, like so.
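
Roughly something like this (an untested sketch; it assumes N is a
compile-time constant, as in your declarations):

const int N = 100;

int main()
{
    double *block = new double[N*N];   // one contiguous chunk
    double *a[N];                      // row pointers into that chunk

    for (int i = 0; i < N; i++)
        a[i] = block + i*N;            // a[i][j] now indexes row i, column j

    a[3][2] = 4.0;                     // same syntax as the two-malloc version

    delete[] block;                    // one deallocation frees every "row"
}

You keep the a[i][j] syntax, but the data stays in one block, which is
friendlier to the cache and needs only one allocation and one
deallocation.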

-JKop
Jul 22 '05 #3

"Tan Thuan Seah" <u2******@anu.edu.au> wrote in message news:<41********@clarion.carno.net.au>...
[original question quoted; snipped]


Try using a vector of vectors. With a modern implementation it may not
be as slow as you expect; certainly faster than the malloc version
(which doesn't compile as standard C++ anyway: you can't declare the
arrays a[] and c[] with a runtime N). If this is still too slow, you
could look for a custom Matrix class that someone has already written
efficiently.
Jul 22 '05 #4

Tan Thuan Seah wrote:
[original question quoted in full; snipped]

Example code:

#include <vector>

int main()
{
    using namespace std;

    int n = 8;

    vector<vector<int> > someVector(n, vector<int>(n));
    someVector[3][2] = 4;
}

If you have to, you may use a valarray. Also check this:

http://www23.brinkster.com/noicys/cppfaq.htm
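
For the flat-matrix update in the original post, valarray also lets you
drop the explicit loops; a minimal sketch (not from the discussion
above; N and d are arbitrary example values):

#include <cstddef>
#include <valarray>

int main()
{
    const std::size_t N = 1024;

    // flat N*N storage, indexed as a[i*N + j]
    std::valarray<double> a(N*N), c(N*N);
    double d = 2.0;

    a += c * d;   // element-wise update, no hand-written loops
}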


--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #5

Thanks for everyone's contributions. Actually I am currently working on
a project on sparse matrix ordering, which involves quite a fair bit of
graphs, sets and such. I am looking for an efficient way to implement
the sets and the graph. After Old Wolf suggested using vectors, I came
across the set in the STL. Has anyone had good experiences using a
combination of vectors and sets? Do they go well together?

The last bit would be to figure out a way to store the graph. The book I
am reading (Computer Solution of Large Sparse Positive Definite Systems
by Alan George and Joseph W. Liu) briefly mentions storing the graph
with 2 arrays. But since the input matrices will differ in size, I need
a way to allocate the arrays dynamically. Vector seems to be a potential
candidate; a rough sketch of what I have in mind is below. I am still
open to other suggestions though. Thanks.
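
(A rough sketch of the two-array idea, sized at runtime with vectors;
the xadj/adjncy names and layout are my reading of the book's format
and may not match it exactly:)

#include <vector>

int main()
{
    using std::vector;

    // Triangle graph 0-1, 0-2, 1-2, with each edge stored in both directions.
    const int xadj_init[]   = { 0, 2, 4, 6 };          // n+1 offsets into adjncy
    const int adjncy_init[] = { 1, 2,  0, 2,  0, 1 };  // concatenated neighbour lists

    vector<int> xadj(xadj_init, xadj_init + 4);
    vector<int> adjncy(adjncy_init, adjncy_init + 6);

    // Neighbours of vertex v are adjncy[xadj[v]] .. adjncy[xadj[v+1]-1].
    int degree_sum = 0;
    for (int v = 0; v < 3; ++v)
        degree_sum += xadj[v+1] - xadj[v];   // degree of v in O(1)

    return degree_sum == 6 ? 0 : 1;
}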

Thuan Seah

"Ioannis Vranos" <iv*@guesswh.at.grad.com> wrote in message
news:ch***********@ulysses.noc.ntua.gr...
Tan Thuan Seah wrote:
Hi all,

I was told this in one of the university course I was doing.

In C we may expect good performance for:
double a[N][N], c[N][N], d;
for (i=0; i<N; i++)
for(j=0; j<N; j++)
a[i][j] = a[i][j] + c[i][j] *d;

But this is another matter:
double *a[N], *c[N], d;
for(i=0; i<N; i++) {a[i] = (double *) malloc(N*sizeof(double));
c[i] = (double *) malloc(N*sizeof(double)); }

for(i=0; i<N; i++)
for(j=0; j<N; j++)
a[i][j] = a[i][j] + c[i][j] * d;
It seems that we would expect some performance hit if we were to use dynamic memory allocation of some sort. But it's not a standard ANSI C++ to have
array with the size determined during runtime. So is there any good
recommendation to minimize this performance hit or totally avoiding it
through some other method? I would expect a linked list to be even worse. Any recommendation? Thanks.

Example code:
#include <vector>
int main()
{
using namespace std;

int n=8;

vector<vector<int> > someVector(n, vector<int>(n));
someVector[3][2]=4;

}

If you have to, you may use a valarray. Also check this:

http://www23.brinkster.com/noicys/cppfaq.htm


--
Ioannis Vranos

http://www23.brinkster.com/noicys

------------ And now a word from our sponsor ----------------------
For a quality mail server, try SurgeMail, easy to install,
fast, efficient and reliable. Run a million users on a standard
PC running NT or Unix without running out of power, use the best!
---- See http://netwinsite.com/sponsor/sponsor_surgemail.htm ----
Jul 22 '05 #6


"Tan Thuan Seah" <u2******@anu.edu.au> wrote in message
news:41********@clarion.carno.net.au...
[original question quoted in full; snipped]


I think operations on a[N][N] would most likely perform better than the
dynamically allocated a[N]* version because of memory caching: a[N][N]
has all its memory in one chunk, while the a[N]* rows could have bits
spread around in memory, decreasing cache performance.

PKH


Jul 22 '05 #7

That's my guess also. But there seems to be no way around that unless
you know the size of the array you need in advance. I will take a look
at the STL vector; I wonder whether its performance is as good as an
array's or as bad as dynamic memory allocation's.

Thuan Seah
"PKH" <no************@online.no> wrote in message
news:yW******************@news4.e.nsc.no...

"Tan Thuan Seah" <u2******@anu.edu.au> wrote in message
news:41********@clarion.carno.net.au...
Hi all,

I was told this in one of the university course I was doing.

In C we may expect good performance for:
double a[N][N], c[N][N], d;
for (i=0; i<N; i++)
for(j=0; j<N; j++)
a[i][j] = a[i][j] + c[i][j] *d;

But this is another matter:
double *a[N], *c[N], d;
for(i=0; i<N; i++) {a[i] = (double *) malloc(N*sizeof(double));
c[i] = (double *) malloc(N*sizeof(double)); }

for(i=0; i<N; i++)
for(j=0; j<N; j++)
a[i][j] = a[i][j] + c[i][j] * d;
It seems that we would expect some performance hit if we were to use
dynamic
memory allocation of some sort. But it's not a standard ANSI C++ to have
array with the size determined during runtime. So is there any good
recommendation to minimize this performance hit or totally avoiding it
through some other method? I would expect a linked list to be even worse. Any recommendation? Thanks.
Thuan Seah
I think most likely operations on a[N][N] would/could perform better than
the dynamically a[N]* allocated
because of memory-cacheing. a[N][N] would have all its memory in one

chunk, while a[N]* could have
bits be spread around in memory, decreasing cache-performance.

PKH

Jul 22 '05 #8

"Tan Thuan Seah" <u2******@anu.edu.au> wrote:
[...] Has anyone had good experiences using a combination of vectors
and sets? Do they go well together?


'set' is usually implemented as a tree. If this suits your
performance requirements, feel free to use it. You can always
replace the STL set with some other tree that's hand-tuned to
your requirements, at a later point in time.
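
For instance (a minimal sketch; the O(log n) costs assume the usual
balanced-tree implementation):

#include <set>

int main()
{
    std::set<int> s;

    s.insert(3);                    // O(log n) insert in a tree-based set
    s.insert(3);                    // duplicate key: silently ignored

    bool found = s.count(3) != 0;   // O(log n) lookup
    return found ? 0 : 1;
}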

If you need graphs in general, you could try the Boost Graph Library
at www.boost.org .
Jul 22 '05 #9

Tan Thuan Seah wrote:
[original code snipped]

It seems that we would expect some performance hit
if we were to use dynamic memory allocation of some sort.
Let's try this:
> cat main.cc
#include <cstdlib>

int main(int argc, char* argv[]) {

    if (1 < argc) {
        const size_t N = 1024;

#ifdef YNAMIC
        double *a[N], *c[N], d = 1.0;

        a[0] = (double*)malloc(N*N*sizeof(double));
        c[0] = (double*)malloc(N*N*sizeof(double));

        for (size_t i = 1; i < N; ++i) {
            a[i] = a[0] + i*N;
            c[i] = c[0] + i*N;
        }
#else //YNAMIC
        double a[N][N], c[N][N], d = 1.0;
#endif//YNAMIC

        for (size_t i = 0; i < N; ++i)
            for (size_t j = 0; j < N; ++j) {
                a[i][j] = N*i + j;
                c[i][j] = N*i + j;
            }

        const size_t trips = atoi(argv[1]);
        for (size_t trip = 0; trip < trips; ++trip)
            for (size_t i = 0; i < N; ++i)
                for (size_t j = 0; j < N; ++j)
                    a[i][j] += c[i][j] * d;
    }
    return EXIT_SUCCESS;
}

> g++ -Wall -ansi -pedantic -O2 -o main main.cc
> time ./main 60
3.854u 0.130s 0:03.98 100.0%  0+0k 0+0io 0pf+0w
> g++ -DYNAMIC -Wall -ansi -pedantic -O2 -o main main.cc
> time ./main 60
3.865u 0.129s 0:03.99 99.7%
> g++ --version
g++ (GCC) 3.4.1

On my machine, it doesn't seem to matter much
whether the memory is allocated from automatic or free storage.
[...] "I would expect a linked list to be even worse."

Have you tested it?

"Any recommendation?"


You may be a victim of Premature Optimization:

http://c2.com/cgi/wiki?PrematureOptimization
Jul 22 '05 #10

Tan Thuan Seah wrote:
[...] I will take a look at the STL vector; I wonder whether its
performance is as good as an array's or as bad as dynamic memory
allocation's.


If you are under *severe* time constraints, you must use valarray. It is
also a good idea to read TC++PL3 chapter 17 for the time costs of each
standard library container.


--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #11


"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote in message
news:ci**********@nntp1.jpl.nasa.gov...
Have you tested it?


Hm.. not quite. I just based it on the facts about computers that I
know. Having a pointer is always slower, isn't it? A linked list is
constructed using pointers, so I reckon it would be slow also.

Thuan Seah
Jul 22 '05 #12

Tan Thuan Seah wrote:
"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote in message
news:ci**********@nntp1.jpl.nasa.gov...
Have you tested it?

Hm.. not quite. I just based it on the facts about computers that I
know. Having a pointer is always slower, isn't it? A linked list is
constructed using pointers, so I reckon it would be slow also.

Slower than built-in arrays? What would be the time cost difference
between these two for loops?

int main()
{
    int array[100], *p = new int[100];

    for (unsigned i = 0; i < 100; ++i)
        array[i] = 1;

    for (unsigned i = 0; i < 100; ++i)
        p[i] = 1;

    delete[] p;
}

--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #13
