numpy numbers converted wrong

In Gnuplot (Gnuplot.utils) the input array is converted to a Numeric float array as shown below. When I pass a numpy array into Gnuplot this way, numbers like 7.44 are truncated to 7.0.
Why does this happen, and what should I do? Is this a bug in numpy or in Numeric?
[Dbg]>>m #numpy array
array([[ 9.78109200e+08, 7.44000000e+00],
[ 9.78454800e+08, 7.44000000e+00],
[ 9.78541200e+08, 8.19000000e+00],
...,
[ 1.16162280e+09, 8.14600000e+01],
[ 1.16170920e+09, 8.10500000e+01],
[ 1.16179560e+09, 8.16800000e+01]])
[Dbg]>>Numeric.asarray(m, Numeric.Float32)[:10]
array([[ 9.78109184e+008, 7.00000000e+000],
[ 9.78454784e+008, 7.00000000e+000],
[ 9.78541184e+008, 8.00000000e+000],
[ 9.78627584e+008, 8.00000000e+000],
[ 9.78713984e+008, 8.00000000e+000],
[ 9.78973184e+008, 8.00000000e+000],
[ 9.79059584e+008, 8.00000000e+000],
[ 9.79145984e+008, 8.00000000e+000],
[ 9.79232384e+008, 9.00000000e+000],
[ 9.79318784e+008, 8.00000000e+000]],'f')
[Dbg]>>Numeric.asarray(m, Numeric.Float)[:10]
array([[ 9.78109200e+008, 7.00000000e+000],
[ 9.78454800e+008, 7.00000000e+000],
[ 9.78541200e+008, 8.00000000e+000],
[ 9.78627600e+008, 8.00000000e+000],
[ 9.78714000e+008, 8.00000000e+000],
[ 9.78973200e+008, 8.00000000e+000],
[ 9.79059600e+008, 8.00000000e+000],
[ 9.79146000e+008, 8.00000000e+000],
[ 9.79232400e+008, 9.00000000e+000],
[ 9.79318800e+008, 8.00000000e+000]])
[Dbg]>>>
Also, what is this type, and why do I get it here:

[Dbg]>>m[0,1]
7.44
[Dbg]>>type(_)
<type 'numpy.float64'>
[Dbg]>>>
Does this also slow down Python math computations?
Is it better to stay away from numpy at the current stage of its development?
I remember that with numarray there were no such problems.

-robert
PS: in Gnuplot.utils:

def float_array(m):
    """Return the argument as a Numeric array of type at least 'Float32'.

    Leave 'Float64' unchanged, but upcast all other types to
    'Float32'. Allow also for the possibility that the argument is a
    python native type that can be converted to a Numeric array using
    'Numeric.asarray()', but in that case don't worry about
    downcasting to single-precision float.

    """

    try:
        # Try Float32 (this will refuse to downcast)
        return Numeric.asarray(m, Numeric.Float32)
    except TypeError:
        # That failure might have been because the input array was
        # of a wider data type than Float32; try to convert to the
        # largest floating-point type available:
        return Numeric.asarray(m, Numeric.Float)
Oct 26 '06 #1
robert wrote:
> In Gnuplot (Gnuplot.utils) the input array is converted to a Numeric
> float array as shown below. When I pass a numpy array into Gnuplot this
> way, numbers like 7.44 are truncated to 7.0.
> Why does this happen, and what should I do? Is this a bug in numpy or in Numeric?
> [...array output snipped; see the original post above...]
The problem is with the version of Numeric you are using. I can replicate this
problem with Numeric 24.0 but not with 24.2.
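
If upgrading is an option, checking which Numeric you have is the quickest test. Here is a minimal sketch (Python 2 style, illustrative only; the round-trip through a Python list is just one possible workaround if you cannot upgrade, not the official Gnuplot.py fix):

    import Numeric
    import numpy

    # The truncation was seen with Numeric 24.0 but not with 24.2.
    print Numeric.__version__

    m = numpy.array([[9.781092e8, 7.44],
                     [9.784548e8, 7.44]])

    # Possible workaround: build the Numeric array from a plain Python list,
    # so Numeric constructs it itself instead of reinterpreting numpy's buffer.
    m_numeric = Numeric.array(m.tolist(), Numeric.Float)
    print m_numeric[0, 1]    # 7.44, fractional part preserved
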
> Also, what is this type, and why do I get it here:
>
> [Dbg]>>m[0,1]
> 7.44
> [Dbg]>>type(_)
> <type 'numpy.float64'>
> [Dbg]>>>
It is a scalar object. numpy supports more number types than Python does so the
scalar results of indexing operations need representations beyond the standard
int, float, complex types. These scalar objects also support the array
interface, so it's easier to write generic code that may operate on arrays or
scalars. Their existence also resolves the long-standing problem of maintaining
the precision of arrays even when performing operations with scalars. In
Numeric, adding the scalar 2.0 to a single precision array would return a
double-precision array. Worse, if a and b are single precision arrays, (a+b[0])
would give a double-precision result because b[0] would have to be represented
as a standard Python float.
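
For example (a hypothetical session, not from the original post), numpy keeps the single-precision dtype where Numeric would have upcast:

    import numpy

    a = numpy.array([1.5, 2.5], dtype=numpy.float32)
    b = numpy.array([0.5, 1.0], dtype=numpy.float32)

    # b[0] is a numpy.float32 array scalar, not a Python float, so the sum
    # stays single precision instead of being upcast to double.
    print (a + b[0]).dtype    # float32
    print (a + 2.0).dtype     # float32: a plain Python float scalar does not upcast either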

The _Guide to NumPy_ has a discussion of these in Chapter 2, part of the sample
chapters:

http://numpy.scipy.org/numpybooksample.pdf
> Does this also slow down Python math computations?
If you do a whole lot of computations with scalar values coming out of arrays,
then yes, somewhat. You can forestall that by casting to Python floats or ints
if that is causing problems for you.
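
For instance (an illustrative snippet, not part of the original reply), an explicit cast hands you back a built-in Python float before a scalar-heavy loop:

    # m[0, 1] is a numpy.float64 array scalar; float() converts it to the
    # plain Python type, which is slightly faster in pure-Python arithmetic.
    x = float(m[0, 1])
    print type(x)    # <type 'float'>
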
> Is it better to stay away from numpy at the current stage of its development?
> I remember that with numarray there were no such problems.
Not really, no.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

Oct 26 '06 #2
robert wrote:
> In Gnuplot (Gnuplot.utils) the input array is converted to a Numeric
> float array as shown below. When I pass a numpy array into Gnuplot this
> way, numbers like 7.44 are truncated to 7.0.
> Why does this happen, and what should I do? Is this a bug in numpy or in Numeric?
> [...array output snipped; see the original post above...]
This is odd, but we need to know the version numbers of both packages to
help further. For one, I'm surprised that you can use Numeric.asarray
to force-cast to Numeric.Float32 without raising an error.

Also, you can ask on nu**************@lists.sourceforge.net to reach an
audience more directly able to help.
> [Dbg]>>>
> Also, what is this type, and why do I get it here:
>
> [Dbg]>>m[0,1]
> 7.44
> [Dbg]>>type(_)
> <type 'numpy.float64'>
> [Dbg]>>>
> Does this also slow down Python math computations?
No, not necessarily (depends on what you mean).

Python floats are still Python floats. NumPy provides, in addition, an
array scalar for every "kind" of data that a NumPy array can be composed
of. This avoids the problem of being unable to find an appropriate
Python scalar for a given data-type. Where possible, the NumPy scalar
inherits from the Python one.
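
As a quick illustration (not from the original post), the float64 array scalar really is a subclass of the built-in float:

    import numpy

    # numpy.float64 inherits from Python's float, so it can be passed to any
    # code that expects a plain float.
    x = numpy.float64(7.44)
    print isinstance(x, float)    # True
    print x + 1                   # 8.44, ordinary arithmetic works as usual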

By default, the NumPy scalars have their own math defined which uses the
error-mode setting capabilities of NumPy to handle errors. Right now,
these operations are a bit slower than Python's built-ins because of the
way that "mixed" calculations are handled.
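
The error-mode machinery referred to here is numpy.seterr; a small, illustrative sketch of how it is used (the specific modes chosen are just examples):

    import numpy

    # Ask numpy to warn on floating-point overflow.
    old_settings = numpy.seterr(over='warn')

    x = numpy.float32(1e38)
    print x * 10                  # overflows float32: prints inf and emits a RuntimeWarning

    numpy.seterr(**old_settings)  # restore the previous error modes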

For the data types that overlap with Python scalars, you can set things
up so that NumPy scalars use the Python math instead if you want. But,
again, NumPy does nothing to change the way that Python numbers are
calculated.

> Is it better to stay away from numpy at the current stage of its development?
No, definitely not. Don't stay away. NumPy 1.0 is out.

Oct 26 '06 #3
