
Filling 2d array in less than O(n^2) time

standard 2d array filling with increasing numbers for rows and columns:

for(int i=0;i<n;i++)
for(int j=0;j<n;j++)
a[i][j] = i + j;

problem is it's O(n^2). I'm looking for a method to decrease the time,
any suggestions? I'm googling for dynamic programming solutions, but
not coming up with much.

Nov 22 '05 #1
12 Replies


pj*****@gmail.com wrote:
standard 2d array filling with increasing numbers for rows and columns:

for(int i=0;i<n;i++)
for(int j=0;j<n;j++)
a[i][j] = i + j;

problem is it's O(n^2). I'm looking for a method to decrease the time,
any suggestions? I'm googling for dynamic programming solutions, but
not coming up with much.


I don't think it's possible, since you *must* assign n^2 elements.
Nov 22 '05 #2


red floyd wrote:
pj*****@gmail.com wrote:
standard 2d array filling with increasing numbers for rows and columns:

for(int i=0;i<n;i++)
for(int j=0;j<n;j++)
a[i][j] = i + j;

problem is it's O(n^2). I'm looking for a method to decrease the time,
any suggestions? I'm googling for dynamic programming solutions, but
not coming up with much.


I don't think it's possible, since you *must* assign n^2 elements.


Yes, this algorithm is not O(n^2). It is O(n), and as efficient
(theoretically) as can be.

Ryan

Nov 22 '05 #3

rwf_20 wrote:
red floyd wrote:
pj*****@gmail.com wrote:
standard 2d array filling with increasing numbers for rows and columns:

for(int i=0;i<n;i++)
for(int j=0;j<n;j++)
a[i][j] = i + j;

problem is it's O(n^2). I'm looking for a method to decrease the time,
any suggestions? I'm googling for dynamic programming solutions, but
not coming up with much.


I don't think it's possible, since you *must* assign n^2 elements.

Yes, this algorithm is not O(n^2). It is O(n), and as efficient
(theoretically) as can be.


Would you care to explain how it's O(n), and not O(n^2)?
Nov 22 '05 #4

Hi

rwf_20 wrote:

red floyd wrote:
I don't think it's possible, since you *must* assign n^2 elements.


Yes, this algorithm is not O(n^2). It is O(n), and as efficient
(theoretically) as can be.


Huh? Linear in what? If, as given in the example, n is to denote the number
of columns and rows and you count the number of assignments, then of course
there are exactly n^2 \in O(n^2) of them.

Nevertheless it is obviously true that this is optimal.

Markus

Nov 22 '05 #5

In article <_7*******************@newssvr11.news.prodigy.com> ,
red floyd <no*****@here.dude> wrote:
rwf_20 wrote:


Yes, this algorithm is not O(n^2). It is O(n), and as efficient
(theoretically) as can be.


Would you care to explain how it's O(n), and not O(n^2)?


Because 'n' typically represents the number of elements. There are
rows*columns elements. One operation per element. Hence O(n)
where n is the number of elements in the array.
--
Mark Ping
em****@soda.CSUA.Berkeley.EDU
Nov 22 '05 #6
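
[Editorial sketch, not from the thread: one way to make this way of counting concrete is to perform the same fill with a single loop over all m = rows*cols elements, one assignment each. The flat buffer, row-major indexing, and names below are illustrative assumptions.]

// Sketch: the same fill as one pass over a flat buffer of
// m = rows*cols elements; one assignment per element, hence
// linear in the element count m.
#include <cstddef>
#include <vector>

std::vector<int> make_table(std::size_t rows, std::size_t cols)
{
    std::vector<int> a(rows * cols);
    for (std::size_t k = 0; k < a.size(); ++k)
        a[k] = static_cast<int>(k / cols + k % cols);  // i + j for element (i, j)
    return a;
}

The count of assignments is the same either way; what changes is whether "n" names the side length or the total element count.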

pj*****@gmail.com wrote:
standard 2d array filling with increasing numbers for rows and columns:

for(int i=0;i<n;i++)
for(int j=0;j<n;j++)
a[i][j] = i + j;

problem is it's O(n^2). I'm looking for a method to decrease the time,
any suggestions? I'm googling for dynamic programming solutions, but
not coming up with much.


Initialize it statically, and then it will be filled at load-time
rather than run-time. At file/namespace scope, do this:

int a[][] = { { 0, 1, 2, 3 }, { 4, 5, 6, 7 } };

Of course this limits your flexibility.

Cheers! --M

Nov 22 '05 #7


"E. Mark Ping" <em****@soda.csua.berkeley.edu> wrote in message
news:dl***********@agate.berkeley.edu...
In article <_7*******************@newssvr11.news.prodigy.com> ,
red floyd <no*****@here.dude> wrote:
rwf_20 wrote:


Yes, this algorithm is not O(n^2). It is O(n), and as efficient
(theoretically) as can be.


Would you care to explain how it's O(n), and not O(n^2)?


Because 'n' typically represents the number of elements. There are
rows*columns elements. One operation per element. Hence O(n)
where n is the number of elements in the array.
--


There are not necessarily any "elements" to speak of in the general case.

And in this case, he's speaking of n as the size of both the rows and the
columns, as you can see by the code. There are rows*columns operations, so
that means n*n operations, which makes it O(n^2). The cost grows by the
square of the value n.

-Howard

Nov 22 '05 #8

Howard wrote:
"E. Mark Ping" <em****@soda.csua.berkeley.edu> wrote in message
news:dl***********@agate.berkeley.edu...
In article <_7*******************@newssvr11.news.prodigy.com> ,
red floyd <no*****@here.dude> wrote:
rwf_20 wrote:
Yes, this algorithm is not O(n^2). It is O(n), and as efficient
(theoretically) as can be.

Would you care to explain how it's O(n), and not O(n^2)?
Because 'n' typically represents the number of elements. There are
rows*columns elements. One operation per element. Hence O(n)
where n is the number of elements in the array.
--

There are not necessarily any "elements" to speak of in the general case.


Big-Oh notation refers to the dependency of the time complexity of an
algorithm on the number of elements that it's applied to.
And in this case, he's speaking of n as the size of both the rows and the
columns, as you can see by the code. There are rows*columns operations, so
that means n*n operations, which makes it O(n^2). The cost grows by the
square of the value n.


The cost grows by the square of the value of n, but that's not the
meaning of big-Oh notation.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Nov 22 '05 #9

In article <11**********************@o13g2000cwo.googlegroups.com>,
mlimber <ml*****@gmail.com> wrote:
int a[][] = { { 0, 1, 2, 3 }, { 4, 5, 6, 7 } };


You can't have [][] like that even on a definition.
Only the first dimension can be []'d
--
Greg Comeau / Celebrating 20 years of Comeauity!
Comeau C/C++ ONLINE ==> http://www.comeaucomputing.com/tryitout
World Class Compilers: Breathtaking C++, Amazing C99, Fabulous C90.
Comeau C/C++ with Dinkumware's Libraries... Have you tried it?
Nov 22 '05 #10
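
[Editorial sketch, not from the thread: a form of that static initializer which does compile, using the OP's a[i][j] = i + j values for a small 2x4 instance; the shape is illustrative.]

int a[][4] = {        // only the leftmost dimension may be omitted
    { 0, 1, 2, 3 },   // row 0: 0 + j
    { 1, 2, 3, 4 }    // row 1: 1 + j
};

The load-time-versus-run-time trade-off mlimber describes is unchanged; only the declaration syntax differs.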

Even for very large arrays, the method you provide should only add a
small fixed overhead to your program's initialization.

If, however, you're continually modifying the array's values and then
restoring its initial state, you might consider using your method to
initially populate the array, and then creating a copy to work on with
memcpy. You could then restore the copy's state to the original values
each time with a single call to memcpy. You'll use twice the memory,
but it will be much faster than repeatedly using your nested for loops.

Nov 22 '05 #11
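
[Editorial sketch of that fill-once / restore-with-memcpy idea, not from the thread; the size, names, and global arrays are illustrative assumptions.]

#include <cstring>

const int n = 1024;            // illustrative size
static int master[n][n];       // filled once with the reference values
static int work[n][n];         // the copy that actually gets modified

void init()
{
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            master[i][j] = i + j;          // the original nested-loop fill, done once
    std::memcpy(work, master, sizeof master);
}

void reset()
{
    std::memcpy(work, master, sizeof master);  // one bulk copy instead of re-running the loops
}

The trade is memory for time: twice the storage in exchange for a single memcpy per reset.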


"Pete Becker" <pe********@acm.org> wrote in message
news:4q********************@rcn.net...
Howard wrote:
"E. Mark Ping" <em****@soda.csua.berkeley.edu> wrote in message
news:dl***********@agate.berkeley.edu...
In article <_7*******************@newssvr11.news.prodigy.com> ,
red floyd <no*****@here.dude> wrote:

rwf_20 wrote:

Yes, this algorithm is not O(n^2). It is O(n), and as efficient
(theoretically) as can be.

Would you care to explain how it's O(n), and not O(n^2)?

Because 'n' typically represents the number of elements. There are
are rows*columns elements. One operation per element. Hence O(n)
where n is the number of elements in the array.
--

There are not necessarily any "elements" to speak of in the general case.


Big-Oh notation refers to the dependency of the time complexity of an
algorithm on the number of elements that it's applied to.
And in this case, he's speaking of n as the size of both the rows and the
columns, as you can see by the code. There are rows*columns operations,
so that means n*n operations, which makes it O(n^2). The cost grows by
the square of the value n.


The cost grows by the square of the value of n, but that's not the meaning
of big-Oh notation.


I was about to argue with you, since it seems obvious to me that the
complexity (cost) increases by the square of the input variable. But I did
some "Googling" and found that it does indeed seem to be the convention
that for arrays, what matters is not that they have width and height,
but rather their total "size". That makes sense, since it is the size
that grows by the square of that "n" variable, not the computing cost
given that size.

-Howard

Nov 22 '05 #12


meagar wrote:
memcpy. You could then restore the copy's state to the original values
each time with a single call to memcpy. You'll use twice the memory,
but it will be much faster than repeatedly using your nested for loops.


The memcpy may be a single call, but the extra reads from memory
can make initialization noticeably slower in practice.

--
Vladimir

Nov 22 '05 #13
