Bytes | Developer Community
Quick way to zero array

Man, that title should be in a poem, but anyway... I have this program,
and it has an array of 40 million elements. The problem is that I have a
for loop that is continually doing stuff to this array, and on each
iteration I have to zero out the array. This is how I am currently doing
it:

// Zero out the lnscores
for( count=0; count < chunksize; count++ )
    lnscores[count] = 0;

Is there no quicker way to do this? I know there must be a trick, since
this array is one big contiguous chunk of memory, right?

May 12 '06 #1
memset( lnscores, 0, chunksize )

May 12 '06 #2
ro**********@gmail.com opined:
> I have this program, and it has an array of 40 million elements.
> [...] for each iteration, I have to zero out the array. [...]
> Is there no quicker way to do this? I know there must be a trick
> since this array is one big contiguous chunk of memory right?


If your array has file scope you don't even have to zero it yourself.
All such variables get zeroed at startup. Otherwise, declare it and
initialise thus:

int lnscores[WHATEVER_SIZE] = {0};

If you have to re-zero it afterwards, you could use `memset()`.
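Both points can be sketched together in a minimal, hypothetical example (the array name and size are borrowed from the thread; `reset_scores` is an invented helper):

```c
#include <string.h>

#define CHUNKSIZE 40000000

/* File scope: static storage duration, so the C standard guarantees
   the array starts out all zeros -- no initialisation loop needed. */
static int lnscores[CHUNKSIZE];

/* Re-zero between iterations.  For an array (not a pointer),
   sizeof lnscores is the size of the whole array in bytes, so no
   separate element count is required. */
static void reset_scores(void)
{
    memset(lnscores, 0, sizeof lnscores);
}
```

For `int`, all-bits-zero is guaranteed to represent the value 0, so `memset()` is safe here.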

--
Worlds are conquered, galaxies destroyed -- but a woman is always a
woman.
-- Kirk, "Conscience of the King", stardate unknown

<http://clc-wiki.net/wiki/Introduction_to_comp.lang.c>

May 12 '06 #3
On 2006-05-12, ro**********@gmail.com <ro**********@gmail.com> wrote:
> I have this program, and it has an array of 40 million elements.
> [...] This is how I am currently doing it:
> // Zero out the lnscores
> for( count=0; count < chunksize; count++ ) lnscores[count] = 0;
> Is there no quicker way to do this?


memset(lnscores, 0, chunksize * sizeof lnscores[0]);

Assuming 0 is represented by 0... what type is lnscores?
May 12 '06 #4
ro**********@gmail.com schrieb:
> I have this program, and it has an array of 40 million elements.
> [...] for each iteration, I have to zero out the array. [...]
> Is there no quicker way to do this? I know there must be a trick
> since this array is one big contiguous chunk of memory right?


As you did not tell us what type lnscores has, there is no
definite "quickest" way.
If lnscores[count] = 0 means that all bits of lnscores[count] are
set to zero, you can use memset() to zero out all bits.

Otherwise, you can use memcpy():
- zero out lnscores[0]
- memcpy() lnscores[0] to lnscores[1]
- memcpy() lnscores[0] and lnscores[1] to lnscores[2] and lnscores[3]
...
- memcpy() lnscores[0], ..., lnscores[16777215] to
  lnscores[16777216], ..., lnscores[33554431]
- Last step (no doubling):
  memcpy() lnscores[0], ..., lnscores[6445567] to
  lnscores[33554432], ..., lnscores[39999999]

This _may_ be quicker than the loop itself. It may be slower if
your compiler recognizes the "zero out" pattern and does something
intelligent.

Try it out and measure.
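The doubling scheme above can be sketched as a hypothetical helper; note that source and destination never overlap, because each copy is at most as long as the already-cleared prefix:

```c
#include <stddef.h>
#include <string.h>

/* Zero the first n elements of a by repeated doubling: clear one
   element, then keep memcpy()ing the already-zeroed prefix onto the
   next chunk, doubling the cleared region on each pass. */
static void zero_by_doubling(int *a, size_t n)
{
    size_t cleared;

    if (n == 0)
        return;
    a[0] = 0;
    for (cleared = 1; cleared < n; cleared *= 2) {
        /* last pass may copy less than a full doubling */
        size_t chunk = (cleared < n - cleared) ? cleared : n - cleared;
        memcpy(a + cleared, a, chunk * sizeof a[0]);
    }
}
```

As the post says, whether this beats the plain loop (or memset()) is something to measure, not assume.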

Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
May 12 '06 #5
If not all elements will be used, there are methods to zero out only as
many as needed.
<ro**********@gmail.com> wrote in message
news:11**********************@d71g2000cwd.googlegroups.com...
> I have this program, and it has an array of 40 million elements.
> [...] Is there no quicker way to do this?

May 12 '06 #6
ro**********@gmail.com wrote:
> I have this program, and it has an array of 40 million elements.
> The problem is that I have a for loop that continually is doing stuff
> to this array, and for each iteration, I have to zero out the array.
> [...] Is there no quicker way to do this?


Why do you have to zero the array? Are you only doing something to a
few elements at each iteration? While there may be a fast C function
that zeros arrays, unless you have a parallel machine you can't zero
all cells at once, so this is likely to be costly. I suggest you
seek a better algorithm--for example, whenever you update an element
in an iteration, before you leave that loop, zero that element, so
the whole array returns to the zeroed state.
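That suggestion might look like this in outline (hypothetical names, scaled down; the "search" pass consumes each score and zeroes the element in the same visit):

```c
#include <stddef.h>

#define NSCORES 1024  /* stand-in for the 40-million-element array */

static int lnscores[NSCORES];

/* One pass: use each score, then zero it in place, so the array is
   already reset when the pass ends -- no separate clearing loop. */
static long consume_and_reset(void)
{
    long total = 0;  /* stands in for "do something with the scores" */
    size_t i;

    for (i = 0; i < NSCORES; i++) {
        total += lnscores[i];
        lnscores[i] = 0;
    }
    return total;
}
```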

--
Julian V. Noble
Professor Emeritus of Physics
University of Virginia
May 12 '06 #7
Julian V. Noble wrote:
> Why do you have to zero the array?

Zeroing out any piece of allocated memory is necessary to avoid
unforeseen results, which are annoying most of the time.

> Are you only doing something to a few elements at each iteration?
> [...] I suggest you seek a better algorithm--for example, whenever
> you update an element in an iteration, before you leave that loop,
> zero that element, so the whole array returns to the zeroed state.

That's a good idea.

You can use calloc() for allocating the first time; it already returns
a memory region filled with zeros. After that, memset() or bzero() are
also good options. I mostly use bzero for such things.
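A sketch of that combination (hypothetical helper names): calloc() for the first, already-zeroed allocation, then memset() for later resets.

```c
#include <stdlib.h>
#include <string.h>

/* First allocation: calloc() returns memory set to all bits zero,
   which for int means every element is 0.  Returns NULL on failure;
   caller must free() the result. */
static int *make_scores(size_t n)
{
    return calloc(n, sizeof(int));
}

/* Later passes: re-zero the same buffer in place. */
static void reset_scores_dyn(int *scores, size_t n)
{
    memset(scores, 0, n * sizeof scores[0]);
}
```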

BTW, 40 million elements array, are u doing image processing or some
other kind of signal processing?

cheers,

--Himanshu

--
+-----------------------------------+
| Himanshu Singh Chauhan |
| MCA (Final Year) |
| I.G. National Open University |
| Delhi (India) |
| |
| Contact: hs********@gmail.com |
+-----------------------------------+
May 12 '06 #8
Himanshu Chauhan <hs********@gmail.com> writes:
> Julian V. Noble wrote:
>> Why do you have to zero the array?
> Zeroing out any piece of allocated memory is necessary to avoid
> unforeseen results, which are annoying most of the time.

It's not always necessary. For example, if you can keep track of
which elements of the array you're currently using, you don't have to
worry about what's in the other elements. The best approach depends
on what you're actually doing with the array.

[...]
> You can use "calloc" for allocating for the first time. It already
> returns memory region filled with zero. Then memset or bzero are
> also good options. I mostly use bzero for such things.
calloc() and memset() will set the array contents to all-bits-zero.
If you have an array of integers, that sets each element to 0. If
it's an array of some pointer or floating-point type, it will
*probably* do so, but it's not guaranteed (the language doesn't
guarantee that a floating-point 0.0 is represented as all-bits-zero).

bzero() is non-standard, and might not exist on all systems.

It's not obvious that memset() will be faster than a loop explicitly
setting each element to zero. You should measure the performance of
the code.
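For element types where all-bits-zero might not represent zero, the portable reset is an explicit assignment loop; a minimal sketch (hypothetical helper):

```c
#include <stddef.h>

/* Assigning 0.0 always yields a true floating-point zero, whereas
   memset() only gives all-bits-zero, which the standard does not
   promise is 0.0 (in practice it is on IEEE 754 machines). */
static void zero_doubles(double *a, size_t n)
{
    size_t i;

    for (i = 0; i < n; i++)
        a[i] = 0.0;
}
```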

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
May 12 '06 #9
Himanshu Chauhan wrote:
> Julian V. Noble wrote:
>> Why do you have to zero the array?
> Zeroing out any piece of allocated memory is necessary to avoid
> unforeseen results, which are annoying most of the time.

Depends on whether all bits zero is a valid initialiser. It is not
necessarily valid for floats, doubles or pointers.

>> Are you only doing something to a few elements at each iteration?
>> [...] I suggest you seek a better algorithm [...]
>
> That's a good idea.
>
> You can use "calloc" for allocating for the first time. It already
> returns memory region filled with zero. Then memset or bzero are
> also good options. I mostly use bzero for such things.

Why choose the non-standard bzero when there is a perfectly standard
memset?

> BTW, 40 million elements array, are u doing image processing or some
> other kind of signal processing?

Please don't use stupid contractions like u for you.
--
Flash Gordon, living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro:
http://clc-wiki.net/wiki/Intro_to_clc

May 12 '06 #10
> BTW, 40 million elements array, are u doing image processing or some
> other kind of signal processing?

Close. I'm doing some quadratic field sieve work. The program is
actually quite fast. I factored a 203-bit number in about 9 hours on a
2.8GHz P4.

>> Why do you have to zero the array?

It has to do with keeping a score (hence the name "lnscore") of the
residues modulo n. I process 40 million each go-round and then look at
the scores to see if they are high enough to progress through the
algorithm. I usually get 1,000 that pass out of 40 million residues,
which are then narrowed down to around 10 numbers that I am actually
looking for. It will find these 10 in around 11 seconds. Granted, I
actually have to find tens of thousands of these before I can factor
anything.

>> I suggest you seek a better algorithm--for example, whenever you
>> update an element in an iteration, before you leave that loop, zero
>> that element, so the whole array returns to the zeroed state.

No, this will not work. I have thought about it, and there is just no
way to do it. Like I said, my program spits out the scores into
lnscores (this is actually the "sieve" part of the QFS...it's pretty
fast), and then I must search through the scores to take out the best
ones. Only then can I return everything to zero.

Oh, and I tried using memset(), but it was not any faster than the
loop I had, and it hung after one iteration. So, still looping for
now. Thanks for all the replies. I didn't expect so many :)

May 12 '06 #11

<ro**********@gmail.com> wrote in message
news:11**********************@i40g2000cwc.googlegroups.com...
> > BTW, 40 million elements array, are u doing image processing or
> > some other kind of signal processing?
>
> <corrected header...> "Julian V. Noble" <jv*@virginia.edu> wrote in
> message news:e4**********@murdoch.acc.Virginia.EDU...
> > I suggest you seek a better algorithm--for example, whenever you
> > update an element in an iteration, before you leave that loop,
> > zero that element, so the whole array returns to the zeroed state.
>
> No, this will not work. I have thought about it, and there is just
> no way to do it. Like I said, my program spits out the scores into
> lnscores (this is actually the "sieve" part of the QFS...it's pretty
> fast), and then I must search through the scores to take out the
> best ones. Only then can I return everything to zero. Oh, and I
> tried using memset(), but it was not any faster than the loop I had,
> and it hung after one iteration.

In regards to:
> I must search through the scores to take out the best ones.

You didn't read Keith's post. A simple approach (I'll mention a better
one further down) is to use a second array of unsigned char. As you
assign a value to elements in array lnscores[], set an "in use"
indicator in the second array. Then you only need to clear and search
through the in_use array to find valid lnscores[]. Let's say your
array is 40 instead of 40 million:

unsigned long lnscores[40];
unsigned char in_use[40];

memset(in_use, 0, 40); /* clear in-use array */

/* in loop finding potential residues */
lnscores[index] = somevalue;
in_use[index] = 1; /* all non-used in_use values are 0 */

/* in loop searching for potential residues */
if (in_use[index]) /* if not in use, skip */
{
    /* look at lnscores[index] */
}

This will reduce clearing 40 million large values to clearing 40
million small values.

Now that you (hopefully) understood that, a better approach is to use
a hash function or a stack to store just the indexes of the values you
modified. This would reduce the number of values you need to clear to
the peak number of residues.
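The stack-of-indexes variant might be sketched like this (hypothetical names, scaled down; it assumes scores written during a pass are nonzero):

```c
#include <stddef.h>

#define NSCORES 1024  /* scaled down from 40 million */

static unsigned long lnscores[NSCORES];
static size_t touched[NSCORES];  /* indexes written this pass */
static size_t ntouched;

/* Record each first write so the reset only revisits modified slots. */
static void set_score(size_t i, unsigned long v)
{
    if (lnscores[i] == 0)        /* first nonzero write this pass */
        touched[ntouched++] = i;
    lnscores[i] = v;
}

/* Clearing cost is proportional to the number of residues actually
   scored, not to the size of the whole array. */
static void reset_touched(void)
{
    while (ntouched > 0)
        lnscores[touched[--ntouched]] = 0;
}
```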
Rod Pemberton
May 12 '06 #12
aha
there are more efficient methods
u know TAOCP, by Knuth
there is an amazing method like this:

suppose the array u will use is A[N], where N may be very large, such
as 40 million. Then u just need another two arrays, such as B[N] and
C[N], all of them of integer type:

#define N (40 * 1024 * 1024)
int A[N], B[N], C[N];
int cnt = 0;

//u only need to reset the counter *cnt*

//when u need to refer to an element such as A[i]
//u first check
int t = B[i];
if(t >= 0 && t < N && C[t] == i) {
    //that's ok, this element has been accessed before, so it must be good
}else{
    //this element has not been initialized
    A[i] = 0;
    C[cnt] = i;
    B[i] = cnt++;
}

//so, this is really maybe the best, *when u need, u do that!*

"Rod Pemberton" <do*********@bitfoad.cmm> wrote in message
news:e4**********@defiant.vmunix.org...
> [snip -- post #12 quoted in full]

May 13 '06 #13

"daizisheng" <da********@gmail.com> wrote in message
news:e4**********@news.cn99.com...
aha
there are more efficient methods
u know TAOCP, by Knuth
there is a mazing method as this:

suppose the array u will use is A[N], where N maybe very large, such as 40
Million
then u just need another two array, such as B[N] & C[N], all of them are of integer type

#define N (40 * 1024 * 1024)
int A[N], B[N], C[N];
int cnt = 0;

//u just only need to reset the counter *cnt*

//when u need to refer an element such as A[i]
//u first check
int t = B[i];
if(t >= 0 && t < N && C[t] == i) { sorry for a mistake
the condition is
if(t >= 0 && t < cnt && C[t] == i) //that's ok, this element has been accessed before, so it must be goood
}else{
//this element has not been initalized
A[i] = 0;
C[cnt] = i;
B[i] = cnt++;
}

//so, this is really maybe the best, *when u need, u do that!*
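With the corrected condition, the sparse-initialisation trick can be packaged as a self-contained sketch (hypothetical names, scaled down; because the two-way check protects against stale entries, B and C never need clearing, and "zeroing" the whole array is just cnt = 0):

```c
#include <stddef.h>

#define N 1024  /* scaled down from 40 million for illustration */

static int A[N];     /* the data array -- never bulk-cleared         */
static size_t B[N];  /* B[i]: position of i's stamp in C, if stamped */
static size_t C[N];  /* C[t]: the t-th index initialised this pass   */
static size_t cnt;   /* number of stamps so far                      */

/* A[i] counts as initialised only if B and C vouch for each other;
   a stale or garbage entry cannot pass both tests at once. */
static int is_valid(size_t i)
{
    return B[i] < cnt && C[B[i]] == i;
}

/* Read A[i], lazily zeroing it on first touch since the last reset. */
static int get(size_t i)
{
    if (!is_valid(i)) {
        A[i] = 0;
        C[cnt] = i;
        B[i] = cnt++;
    }
    return A[i];
}

static void set(size_t i, int v)
{
    (void)get(i);  /* ensure the slot is stamped */
    A[i] = v;
}

/* O(1) "clear the whole array": forget all stamps. */
static void clear_all(void)
{
    cnt = 0;
}
```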

"Rod Pemberton" <do*********@bitfoad.cmm> wrote in message
news:e4**********@defiant.vmunix.org...
> [snip -- post #12 quoted in full]

May 13 '06 #14
"Flash Gordon" <sp**@flash-gordon.me.uk> wrote
>> I mostly use bzero for such things.
>
> Why choose the non-standard bzero when there is a perfectly standard
> memset?

If memset isn't fast enough, probably the only way is to go for a
non-standard function.

Actually bzero is probably just a slightly different memset(), but
many machines have hardware for clearing memory buffers in parallel.
There's no easy way to give an ANSI C interface to this sort of
operation, but normally you would set the operation going, do
something else, and then check a flag to see if it had completed.

The other solution is to use an algorithm that doesn't require the
array to be initialised.
--
www.personal.leeds.ac.uk/~bgy1mm


May 13 '06 #15
