Bytes IT Community

How to "reduce" a numpy array using a custom binary function

I know there must be a simple method to do this.

I have implemented this function for calculating a checksum based on
ones' complement addition:

def complement_ones_checksum(ints):
    """
    Returns a ones' complement checksum of
    a specified numpy.array of dtype=uint16.
    """
    result = 0x0
    for i in ints:
        result += int(i)  # Python int, so the sum is not truncated to 16 bits
        result = (result + (result >> 16)) & 0xFFFF
    return result

It works, but is of course inefficient. My profiler says this is the
99.9% bottleneck in my application.

What is the efficient numpy way to do this?

No need to delve into fast inlining of C code or Fortran and the like,
although that may give further performance improvements.
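(For later readers, a fully vectorized sketch, not code from the thread: because end-around-carry addition lets the carries be folded after the fact, the whole array can be summed in a wide dtype with a single numpy call, folding the accumulated carries only once at the end. This is the standard Internet-checksum trick described in RFC 1071.)

```python
import numpy as np

def complement_ones_checksum(ints):
    """Ones' complement checksum of a numpy.array of dtype=uint16.

    Vectorized sketch: sum everything in a wide dtype first, then
    fold the accumulated carries back into the low 16 bits.
    """
    total = int(np.sum(ints, dtype=np.uint64))  # one C-level loop, no Python iteration
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total
```

This gives the same result as the per-element loop because the order in which the carries are folded does not change the final 16-bit sum.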
Nov 13 '08 #1
3 Replies


It is always good to ask yourself a question.
I had forgotten about the reduce function.

I guess this implementation

from numpy import *

def compl_add_uint16(a, b):
    c = int(a) + int(b)  # Python ints, so the carry bit is not lost to uint16 wraparound
    c += c >> 16
    return c & 0xFFFF

def compl_one_checksum(uint16s):
    return reduce(compl_add_uint16, uint16s, 0x0000)

is somewhat better?

But is it the best way to do it with numpy?

In [2]: hex(compl_add_uint16(0xF0F0, 0x0F0F))
Out[2]: '0xffff'

In [3]: hex(compl_add_uint16(0xFFFF, 0x0001))
Out[3]: '0x1'

In [5]: hex(compl_one_checksum(array([], dtype=uint16)))
Out[5]: '0x0'

In [6]: hex(compl_one_checksum(array([0xF0F0, 0x0F0F, 0x0001],
dtype=uint16)))
Out[6]: '0x1L'
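(An intermediate option, my sketch rather than code from the thread: np.frompyfunc wraps the binary function as a numpy ufunc, whose .reduce() method performs the same fold without calling the builtin reduce, which moved to functools in Python 3. The function body still runs in Python, so expect only a modest speedup.)

```python
import numpy as np

def compl_add_uint16(a, b):
    # end-around-carry addition: fold the carry bit back into the low 16 bits
    c = int(a) + int(b)
    c += c >> 16
    return c & 0xFFFF

# frompyfunc(func, nin, nout) builds an object-dtype ufunc; the Python
# body is still called per element, so the win over an explicit loop is small.
compl_add = np.frompyfunc(compl_add_uint16, 2, 1)

checksum = compl_add.reduce(np.array([0xF0F0, 0x0F0F, 0x0001], dtype=np.uint16))
```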
Nov 13 '08 #2

Slaunger wrote:
It is always good to ask yourself a question.
I had forgotten about the reduce function.

I guess this implementation

from numpy import *

def compl_add_uint16(a, b):
    c = int(a) + int(b)
    c += c >> 16
    return c & 0xFFFF

def compl_one_checksum(uint16s):
    return reduce(compl_add_uint16, uint16s, 0x0000)

is somewhat better?

But is it the best way to do it with numpy?

It's not too bad, if you only have 1D arrays to worry about (or you are only
concerned with reducing down the first axis). With a Python-implemented
function, there isn't much that will get you faster.

My coworker Ilan Schnell came up with a neat way to use PyPy's RPython->C
translation scheme and scipy.weave's ad-hoc extension module-building
capabilities to generate new numpy ufuncs (which have a .reduce() method)
implemented in pure RPython.

http://conference.scipy.org/proceedi.../full_text.pdf
http://svn.scipy.org/svn/scipy/branches/fast_vectorize/

If you have more numpy questions, please join us on the numpy mailing list.

http://www.scipy.org/Mailing_Lists

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

Nov 13 '08 #3

P: n/a
On 13 Nov., 22:48, Robert Kern <robert.k...@gmail.com> wrote:
Slaunger wrote:
It is always good to ask yourself a question.
I had forgotten about the reduce function.
I guess this implementation
from numpy import *
def compl_add_uint16(a, b):
    c = int(a) + int(b)
    c += c >> 16
    return c & 0xFFFF
def compl_one_checksum(uint16s):
    return reduce(compl_add_uint16, uint16s, 0x0000)
is somewhat better?
But is it the best way to do it with numpy?

It's not too bad, if you only have 1D arrays to worry about (or you are only
concerned with reducing down the first axis). With a Python-implemented
function, there isn't much that will get you faster.
Yes, I only have 1D arrays in this particular problem.
My coworker Ilan Schnell came up with a neat way to use PyPy's RPython->C
translation scheme and scipy.weave's ad-hoc extension module-building
capabilities to generate new numpy ufuncs (which have a .reduce() method)
implemented in pure RPython.

http://conference.scipy.org/proceedi.../full_text.pdf
http://svn.scipy.org/svn/scipy/branches/fast_vectorize/
OK. Thanks. I am still a rather inexperienced SciPy and Python
programmer, and I must admit that right now this seems to be at the
advanced end for me. But now that you mention weave, I have considered
reimplementing my binary complement-add function shown above using
weave, if my profiler says that is where I should be spending my time
optimizing.

If you have more numpy questions, please join us on the numpy mailing list.

http://www.scipy.org/Mailing_Lists
Thank you for directing me to that numpy specific mailing list.
I have been on scipy.org many times, but apparently overlooked
that very prominent link to mailing lists.

Slaunger
Nov 14 '08 #4
