Bytes IT Community

Single precision floating point calcs?

I'm pretty sure the answer is "no", but before I give up on the
idea, I thought I'd ask...

Is there any way to do single-precision floating point
calculations in Python?

I know the various array modules generally support arrays of
single-precision floats. I suppose I could turn all my
variables into single-element arrays, but that would be way
ugly...

--
Grant Edwards (grante at visi.com)   Yow! -- I have seen the FUN --
May 9 '07 #1
8 Replies



"Grant Edwards" <gr****@visi.com> wrote in message
news:13*************@corp.supernews.com...
| I'm pretty sure the answer is "no", but before I give up on the
| idea, I thought I'd ask...

| Is there any way to do single-precision floating point
| calculations in Python?

Make your own Python build from altered source. And run it on an ancient
processor/OS/C compiler combination that does not automatically convert C
floats to double when doing any sort of calculation.

Standard CPython does not have C single-precision floats.

The only point I can think of for doing this with single numbers, as
opposed to arrays of millions, is to show that there is no point. Or do
you have something else in mind?

Terry Jan Reedy

May 9 '07 #2

Grant Edwards wrote:
I'm pretty sure the answer is "no", but before I give up on the
idea, I thought I'd ask...

Is there any way to do single-precision floating point
calculations in Python?

I know the various array modules generally support arrays of
single-precision floats. I suppose I could turn all my
variables into single-element arrays, but that would be way
ugly...
We also have scalar types of varying precisions in numpy:
In [9]: from numpy import *

In [10]: float32(1.0) + float32(1e-8) == float32(1.0)
Out[10]: True

In [11]: 1.0 + 1e-8 == 1.0
Out[11]: False
If you can afford to be slow, I believe there is an ASPN Python Cookbook recipe
for simulating floating point arithmetic of any precision.
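One way to simulate reduced precision in pure Python (a rough sketch of the idea, not the ASPN recipe itself) is the standard decimal module, which lets you set the working precision. Note that it models decimal digits rather than binary bits, so it only approximates IEEE single-precision behavior:

```python
from decimal import Decimal, getcontext

# IEEE single precision carries roughly 7 significant decimal digits
getcontext().prec = 7

one = Decimal(1)
tiny = Decimal("1e-8")

# The tiny addend is lost after rounding to 7 digits, just as it
# would be lost in 32-bit float arithmetic
print(one + tiny == one)   # True
```

The analogy is loose (decimal rounding boundaries differ from binary ones), but it demonstrates the same loss-of-precision effects without any extra packages.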

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

May 9 '07 #3

On 2007-05-09, Terry Reedy <tj*****@udel.edu> wrote:
>| I'm pretty sure the answer is "no", but before I give up on the
| idea, I thought I'd ask...
|
| Is there any way to do single-precision floating point
| calculations in Python?

Make your own Python build from altered source. And run it on
an ancient processor/OS/C compiler combination that does not
automatically convert C floats to double when doing any sort
of calculation.
It wouldn't have to be that ancient. The current version of
gcc supports 32-bit doubles on quite a few platforms -- though
it doesn't seem to for IA32 :/

Simply storing intermediate and final results as
single-precision floats would probably be sufficient.
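That store-and-round step can be emulated in pure Python by packing through the struct module. A sketch of the idea (f32 is a made-up helper name):

```python
import struct

def f32(x):
    # Round a Python float (a C double) to the nearest IEEE-754
    # single-precision value, then widen it back to double.
    return struct.unpack('f', struct.pack('f', x))[0]

# The 1e-8 addend vanishes once the sum is rounded to single
# precision, but survives in ordinary double arithmetic.
print(f32(f32(1.0) + f32(1e-8)) == 1.0)   # True
print(1.0 + 1e-8 == 1.0)                  # False
```

The arithmetic itself still happens in double precision; only the stored results are rounded to single, which matches the "store intermediates as floats" behavior described above.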
Standard CPython does not have C single-precision floats.
I know.
The only point I can think of for doing this with single numbers, as
opposed to arrays of millions, is to show that there is no point.
I use Python to test algorithms before implementing them in C.
It's far, far easier to do experimentation/prototyping in
Python than in C. I also like to have two sort-of independent
implementations to test against each other (it's a good way to
catch typos).

In the C implementations, the algorithms will be
implemented in single precision, so doing my Python prototyping
in as close to single precision as possible would be "a good
thing".
Or do you have something else in mind?
--
Grant Edwards (grante at visi.com)   Yow! Is my fallout shelter termite proof?
May 10 '07 #4

On 2007-05-09, Robert Kern <ro*********@gmail.com> wrote:
Grant Edwards wrote:
>I'm pretty sure the answer is "no", but before I give up on the
idea, I thought I'd ask...

Is there any way to do single-precision floating point
calculations in Python?

I know the various array modules generally support arrays of
single-precision floats. I suppose I could turn all my
variables into single-element arrays, but that would be way
ugly...

We also have scalar types of varying precisions in numpy:

In [9]: from numpy import *

In [10]: float32(1.0) + float32(1e-8) == float32(1.0)
Out[10]: True
Very interesting. Converting a few key variables and
intermediate values to float32 and then back to CPython floats
each time through the loop would probably be more than
sufficient.
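As a sketch of that convert-every-iteration idea (using a made-up first-order IIR low-pass filter purely for illustration, and assuming numpy is available):

```python
import numpy as np

def lowpass_f32(samples, alpha):
    # Hypothetical first-order low-pass prototype: force the state and
    # each intermediate result back to float32 every pass through the
    # loop, mimicking a single-precision C implementation.
    a = np.float32(alpha)
    y = np.float32(0.0)
    out = []
    for x in samples:
        y = np.float32(a * np.float32(x) + (np.float32(1.0) - a) * y)
        out.append(float(y))   # back to an ordinary CPython float
    return out

# Step response converges toward 1.0; these values are exact in binary
print(lowpass_f32([1.0] * 8, 0.5)[-1])   # 0.99609375
```

The rest of the script keeps working with plain CPython floats; only the values that matter for the precision comparison take the float32 round trip.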

So far as I know, I haven't run into any cases where the
differences between 64-bit prototype calculations in Python and
32-bit production calculations in C have been significant. I
certainly try to design the algorithms so that it won't make
any difference, but it's a nagging worry...
In [11]: 1.0 + 1e-8 == 1.0
Out[11]: False

If you can afford to be slow,
Yes, I can afford to be slow.

I'm not sure I can afford the decrease in readability.
I believe there is an ASPN Python Cookbook recipe for
simulating floating point arithmetic of any precision.
Thanks, I'll go take a look.

--
Grant Edwards (grante at visi.com)   Yow! It's the RINSE CYCLE!! They've ALL IGNORED the RINSE CYCLE!!
May 10 '07 #5

Grant Edwards <gr****@visi.com> wrote:
>In the C implementations, the algorithms will be done
implemented in single precision, so doing my Python prototyping
in as close to single precision as possible would be "a good
thing".
Something like numpy might give you reproducible IEEE 32-bit floating
point arithmetic, but you may find it difficult to get that out of an
IA-32 C compiler. IA-32 compilers set the x87 FPU's precision to
either 64 or 80 bits, and only round results down to 32 bits when
storing values in memory. If you can target CPUs that support SSE,
then the compiler can use SSE math to do most single-precision operations
in single precision, although the compiler may not set the required SSE
flags for full IEEE compliance.

In other words, since you're probably going to have to allow for some
small differences in results anyway, it may not be worth the trouble
of trying to get Python to use 32-bit floats.

(You might also want to consider whether you want to use single
precision in your C code to begin with; on IA-32 CPUs it seldom makes
a difference in performance.)

Ross Ridge

--
l/ // Ross Ridge -- The Great HTMU
[oo][oo] rr****@csclub.uwaterloo.ca
-()-/()/ http://www.csclub.uwaterloo.ca/~rridge/
db //
May 10 '07 #6

On May 9, 6:51 pm, Grant Edwards <gra...@visi.com> wrote:
Is there any way to do single-precision floating point
calculations in Python?
Yes, use numpy.float32 objects.
I know the various array modules generally support arrays of
single-precision floats. I suppose I could turn all my
variables into single-element arrays, but that would be way
ugly...

Numpy has scalars as well.
>>> import numpy
>>> a = numpy.float32(2.0)
>>> b = numpy.float32(8.0)
>>> c = a + b
>>> print c
10.0
>>> type(c)
<type 'numpy.float32'>


May 10 '07 #7

On 2007-05-10, Ross Ridge <rr****@caffeine.csclub.uwaterloo.ca> wrote:
Grant Edwards <gr****@visi.com> wrote:
>>In the C implementations, the algorithms will be done
implemented in single precision, so doing my Python prototyping
in as close to single precision as possible would be "a good
thing".

Something like numpy might give you reproducible IEEE 32-bit
floating point arithmetic, but you may find it difficult to
get that out of an IA-32 C compiler.
That's OK, I don't run the C code on an IA32. The C target is
a Hitachi H8/300.
(You might also want to consider whether you want to use
single precision in your C code to begin with; on IA-32 CPUs
it seldom makes a difference in performance.)
Since I'm running the C code on a processor without HW floating
point support, using single precision makes a big difference.

--
Grant Edwards (grante at visi.com)   Yow! I have many CHARTS and DIAGRAMS..
May 10 '07 #8

Off-topic, but maybe as practical as "[making] your own Python build
from altered source." ---

Fortran 95 (and earlier versions) has single- and double-precision
floats. One could write Fortran code with variables declared REAL,
and compilers will by default treat the REALs as single precision, but
most compilers have an option to promote single-precision variables to
double. In Fortran 90+ one can specify the KIND of a REAL, so one can
declare variables as

REAL (kind=rp) :: x,y,z

throughout the code with rp being a global parameter, and then
switch from single to double by changing rp from 4 to 8. G95 is a
good, free compiler. F95 has most but not all of the array operations
of NumPy.

May 10 '07 #9
