Bytes IT Community

finding out the precision of floats

Hi all,

I want to know the precision (number of significant digits) of a float
in a platform-independent manner. I have scoured through the docs but
I can't find anything about it!

At the moment I use this terrible substitute:

FLOAT_PREC = repr(1.0/3).count('3')

How can I do this properly, or where is the relevant part of the docs?

Thanks

--
Arnaud

Feb 25 '07 #1
23 Replies


On Feb 25, 9:57 pm, "Arnaud Delobelle" <arno...@googlemail.com> wrote:
Hi all,

I want to know the precision (number of significant digits) of a float
in a platform-independent manner. I have scoured through the docs but
I can't find anything about it!

At the moment I use this terrible substitute:

FLOAT_PREC = repr(1.0/3).count('3')

I'm a little puzzled:

You don't seem to want a function that will tell you the actual number
of significant decimal digits in a particular number e.g.

nsig(12300.0) -> 3
nsig(0.00123400) -> 4
etc.

You appear to be trying to determine what is the maximum number of
significant decimal digits afforded by the platform's implementation
of Python's float type. Is Python implemented on a platform that
*doesn't* use IEEE 754 64-bit FP as the in-memory format for floats?

Cheers,
John

Feb 25 '07 #2

On Feb 25, 11:20 am, "John Machin" <sjmac...@lexicon.net> wrote:
[...]
I'm a little puzzled:

You don't seem to want a function that will tell you the actual number
of significant decimal digits in a particular number e.g.

nsig(12300.0) -> 3
nsig(0.00123400) -> 4
etc.

You appear to be trying to determine what is the maximum number of
significant decimal digits afforded by the platform's implementation
of Python's float type.
Yes you are correct.
Is Python implemented on a platform that
*doesn't* use IEEE 754 64-bit FP as the in-memory format for floats?
I had no knowledge of IEEE 754 64-bit FP. The Python docs say that
floats are implemented using the C 'double' data type, but I didn't
realise there was a standard for this across platforms.

Thanks for clarifying this. As my question shows I am not versed in
floating point arithmetic!

Looking at the definition of IEEE 754, the mantissa is made of 53
significant binary digits, which means
53*log10(2) = 15.954589770191003 significant decimal digits
(I got 16 with my previous dodgy calculation).

Does it mean it is safe to assume that this would hold on any
platform?
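To double-check the arithmetic above, here is a small sketch (assuming the usual IEEE-754 double, which the rest of the thread confirms for CPython on mainstream platforms):

```python
import math

# A 53-bit mantissa corresponds to 53*log10(2) ~ 15.95 decimal digits,
# so 15 full decimal digits are always safe, while 16 are not always.
mantissa_bits = 53
decimal_digits = mantissa_bits * math.log10(2)
assert 15.95 < decimal_digits < 15.96

# 2**53 = 9007199254740992 is only just above 10**15, which is why
# some 16-digit decimals collide while 15-digit ones never do:
assert 2.0**52 != 2.0**52 + 1  # 53 significant bits: still exact
assert 2.0**53 == 2.0**53 + 1  # would need a 54th bit: neighbours collide
```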

--
Arnaud

Feb 25 '07 #3

On Feb 25, 11:06 pm, "Arnaud Delobelle" <arno...@googlemail.com> wrote:
On Feb 25, 11:20 am, "John Machin" <sjmac...@lexicon.net> wrote:
[...]
I'm a little puzzled:
You don't seem to want a function that will tell you the actual number
of significant decimal digits in a particular number e.g.
nsig(12300.0) -> 3
nsig(0.00123400) -> 4
etc.
You appear to be trying to determine what is the maximum number of
significant decimal digits afforded by the platform's implementation
of Python's float type.

Yes you are correct.
Is Python implemented on a platform that
*doesn't* use IEEE 754 64-bit FP as the in-memory format for floats?

I had no knowledge of IEEE 754 64-bit FP. The Python docs say that
floats are implemented using the C 'double' data type, but I didn't
realise there was a standard for this across platforms.

Thanks for clarifying this. As my question shows I am not versed in
floating point arithmetic!

Looking at the definition of IEEE 754, the mantissa is made of 53
significant binary digits, which means
53*log10(2) = 15.954589770191003 significant decimal digits
(I got 16 with my previous dodgy calculation).

Does it mean it is safe to assume that this would hold on any
platform?

Evidently not; here's some documentation we both need(ed) to read:

http://docs.python.org/tut/node16.html
"""
Almost all machines today (November 2000) use IEEE-754 floating point
arithmetic, and almost all platforms map Python floats to IEEE-754
"double precision".
"""
I'm very curious to know what the exceptions were in November 2000 and
if they still exist. There is also the question of how much it matters
to you. Presuming the representation is 64 bits, even taking 3 bits
off the mantissa and donating them to the exponent leaves you with
15.05 decimal digits -- perhaps you could assume that you've got at
least 15 decimal digits.

While we're waiting for the gurus to answer, here's a routine that's
slightly less dodgy than yours:

>>> for n in range(200):
...     if (1.0 + 1.0/2**n) == 1.0:
...         print n, "bits"
...         break
...
53 bits

At least this method has no dependency on the platform's C library.
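For readers finding this later: Python 2.6 (released after this thread) added sys.float_info, which exposes these constants directly and makes the probing loop unnecessary:

```python
import sys

# sys.float_info (Python 2.6+) reports the C double's parameters directly.
print(sys.float_info.mant_dig)  # mantissa bits; 53 on IEEE-754 doubles
print(sys.float_info.dig)       # decimal digits that always round-trip; 15
print(sys.float_info.epsilon)   # smallest x with 1.0 + x != 1.0; 2**-52
```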
Note carefully the closing words of that tutorial section:
"""
(well, will display on any 754-conforming platform that does best-
possible input and output conversions in its C library -- yours may
not!).
"""

I hope some of this helps ...
Cheers,
John

Feb 25 '07 #4

On Feb 25, 1:31 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Feb 25, 11:06 pm, "Arnaud Delobelle" <arno...@googlemail.com> wrote:
[...]
Evidently not; here's some documentation we both need(ed) to read:

http://docs.python.org/tut/node16.html
Thanks for this link.
I'm very curious to know what the exceptions were in November 2000 and
if they still exist. There is also the question of how much it matters
to you. Presuming the representation is 64 bits, even taking 3 bits
off the mantissa and donating them to the exponent leaves you with
15.05 decimal digits -- perhaps you could assume that you've got at
least 15 decimal digits.
It matters to me because I prefer to avoid making assumptions in my
code!
Moreover the reason I got interested in this is because I am creating a
Dec class (decimal numbers) and I would like that:
* given a float x, float(Dec(x)) == x for as many values of x as
possible
* given a decimal d, Dec(float(d)) == d for as many values of d as
possible.

(and I don't want the standard Decimal class :)
While we're waiting for the gurus to answer, here's a routine that's
slightly less dodgy than yours:

>>> for n in range(200):
...     if (1.0 + 1.0/2**n) == 1.0:
...         print n, "bits"
...         break
...
53 bits
Yes I have a similar routine:

def fptest():
    "Gradually fill the mantissa of a float with 1s until running out of bits"
    ix = 0
    fx = 0.0
    for i in range(200):
        fx = 2*fx + 1
        ix = 2*ix + 1
        if ix != fx:
            return i

>>> fptest()
53

I guess it would be OK to use this. It's just that this 'poking into
floats' seems a bit strange to me ;)
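If poking at floats through arithmetic feels strange, the bits can also be inspected directly with the standard struct module (a sketch; '>d' packs a big-endian IEEE-754 double on platforms that use that format, which is virtually all of them):

```python
import struct

def double_bits(x):
    """Return the 64 bits of x as a hex string (big-endian IEEE-754)."""
    packed = struct.pack('>d', x)
    return ''.join('%02x' % b for b in bytearray(packed))

# 1.0 is sign bit 0, biased exponent 0x3ff (= 1023), mantissa all zeros:
print(double_bits(1.0))   # '3ff0000000000000'
# 2.0 just bumps the exponent to 0x400; the 53rd mantissa bit is implicit:
print(double_bits(2.0))   # '4000000000000000'
```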

Thanks for your help

--
Arnaud

Feb 25 '07 #5

On 25 Feb 2007 06:11:02 -0800, Arnaud Delobelle <ar*****@googlemail.com> wrote:
Moreover the reason I got interested in this is because I am creating
a Dec class (decimal numbers)
Are you familiar with Python's Decimal library?
http://docs.python.org/lib/module-decimal.html

--
Jerry
Feb 25 '07 #6

On Feb 25, 3:59 pm, "Jerry Hill" <malaclyp...@gmail.com> wrote:
On 25 Feb 2007 06:11:02 -0800, Arnaud Delobelle <arno...@googlemail.com> wrote:
Moreover the reason I got interested in this is because I am creating
a Dec class (decimal numbers)

Are you familiar with Python's Decimal library? http://docs.python.org/lib/module-decimal.html
Read on in my post for a few more lines and you'll know that the answer
is yes :)

--
Arnaud

Feb 25 '07 #7

John Machin wrote:
Evidently not; here's some documentation we both need(ed) to read:

http://docs.python.org/tut/node16.html
"""
Almost all machines today (November 2000) use IEEE-754 floating point
arithmetic, and almost all platforms map Python floats to IEEE-754
"double precision".
"""
I'm very curious to know what the exceptions were in November 2000 and
if they still exist.
All Python interpreters use whatever the C double type is on their platform to
represent floats. Not all of those platforms use *IEEE-754* floating point types.
The most notable and still relevant example that I know of is Cray.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

Feb 25 '07 #8

Dennis Lee Bieber wrote:
On 25 Feb 2007 05:31:11 -0800, "John Machin" <sj******@lexicon.net>
declaimed the following in comp.lang.python:
>Evidently not; here's some documentation we both need(ed) to read:

http://docs.python.org/tut/node16.html
"""
Almost all machines today (November 2000) use IEEE-754 floating point
arithmetic, and almost all platforms map Python floats to IEEE-754
"double precision".
"""
I'm very curious to know what the exceptions were in November 2000 and
if they still exist. There is also the question of how much it matters

Maybe a few old Vaxes/Alphas running OpenVMS... Those machines had
something like four or five different floating point representations (F,
D, G, and H, that I recall -- single, double, double with extended
exponent range, and quad)
I actually used Python on an Alpha running OpenVMS a few years ago. IIRC, the
interpreter was built with IEEE floating point types rather than the other types.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

Feb 26 '07 #9

Arnaud Delobelle wrote:

(and I don't want the standard Decimal class :)
Why?
--
.. Facundo
..
Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/
Feb 27 '07 #10

On Feb 27, 1:36 pm, Facundo Batista <facu...@taniquetil.com.ar> wrote:
Arnaud Delobelle wrote:
(and I don't want the standard Decimal class :)

Why?
Why should you? It only gives you 28 significant digits, while a 64-bit
float (as in the 32-bit version of Python) gives you 53 significant
digits. Also note that on x86 the FPU uses 80-bit registers. And then
Decimal executes over 1500 times slower.
>>> from timeit import Timer
>>> t1 = Timer('(1.0/3.0)*3.0 - 1.0')
>>> t2 = Timer('(Decimal(1)/Decimal(3))*Decimal(3)-Decimal(1)',
...            'from decimal import Decimal')
>>> t2.timeit()/t1.timeit()
1621.7838879255889

If that's not enough to forget about Decimal, take a look at this:
>>> (Decimal(1)/Decimal(3))*Decimal(3) == Decimal(1)
False
>>> ((1.0/3.0)*3.0) == 1.0
True

Feb 27 '07 #11

On 27 Feb, 14:09, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 27, 1:36 pm, Facundo Batista <facu...@taniquetil.com.ar> wrote:
Arnaud Delobelle wrote:
(and I don't want the standard Decimal class :)
Why?

Why should you? It only gives you 28 significant digits, while a 64-bit
float (as in the 32-bit version of Python) gives you 53 significant
digits. Also note that on x86 the FPU uses 80-bit registers. And then
Decimal executes over 1500 times slower.
Actually 28 significant digits is just the default; it can be set to
anything you like. Moreover, 53 significant *bits* (which is what the 53
counts) is about 16 decimal digits.
>>> from timeit import Timer
>>> t1 = Timer('(1.0/3.0)*3.0 - 1.0')
>>> t2 = Timer('(Decimal(1)/Decimal(3))*Decimal(3)-Decimal(1)',
...            'from decimal import Decimal')
>>> t2.timeit()/t1.timeit()
1621.7838879255889
Yes. The internal representation of a Decimal is a tuple of one-digit
strings! This is one of the reasons (by no means the main one) why I
decided to write my own class.
If that's not enough to forget about Decimal, take a look at this:
>>> (Decimal(1)/Decimal(3))*Decimal(3) == Decimal(1)
False
>>> ((1.0/3.0)*3.0) == 1.0
True
OTOH float is not the panacea:
>>> 0.1+0.1+0.1 == 0.3
False
>>> 3*0.1 == 0.3
False

Decimals will behave better in this case.
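A quick illustration of the difference, using the standard decimal module (not Arnaud's own Dec class, which is not shown in the thread):

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so the sums drift:
assert 0.1 + 0.1 + 0.1 != 0.3
assert 3 * 0.1 != 0.3

# Decimal stores base-10 digits, so the same identities hold exactly:
assert Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3')
assert 3 * Decimal('0.1') == Decimal('0.3')
```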

Cheers

--
Arnaud

Feb 27 '07 #12

Why should you? It only gives you 28 significant digits, while a 64-bit
float (as in the 32-bit version of Python) gives you 53 significant
digits. Also note that on x86 the FPU uses 80-bit registers. And then
Decimal executes over 1500 times slower.
64-bit floating point only gives you 53 binary bits, not 53 decimal digits.
That's approximately 16 decimal digits. And anyway, Decimal can be
configured to support more than 28 digits.
>>> from timeit import Timer
>>> t1 = Timer('(1.0/3.0)*3.0 - 1.0')
>>> t2 = Timer('(Decimal(1)/Decimal(3))*Decimal(3)-Decimal(1)',
...            'from decimal import Decimal')
>>> t2.timeit()/t1.timeit()
1621.7838879255889

If that's not enough to forget about Decimal, take a look at this:

>>> (Decimal(1)/Decimal(3))*Decimal(3) == Decimal(1)
False
>>> ((1.0/3.0)*3.0) == 1.0
True
Try ((15.0/11.0)*11.0) == 15.0. Decimal is actually returning the
correct result. Your example was just lucky.

Decimal was intended to solve a different class of problems. It
provides predictable arithmetic using "decimal" floating point.
IEEE-754 provides predictable arithmetic using "binary" floating
point.
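To make the earlier point concrete: the 28 digits are only the default context precision, and raising it is a one-liner (a sketch using the standard decimal API):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # significant digits; the default is 28
third = Decimal(1) / Decimal(3)
assert str(third).count('3') == 50   # fifty 3s, as requested

# The 1/3 identity still fails, as it must in *any* finite decimal
# precision; that part is inherent to base 10, not a Decimal flaw:
assert third * Decimal(3) != Decimal(1)
```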

casevh

Feb 27 '07 #13

On Feb 27, 7:58 pm, "Arnaud Delobelle" <arno...@googlemail.com> wrote:
On 27 Feb, 14:09, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 27, 1:36 pm, Facundo Batista <facu...@taniquetil.com.ar> wrote:
Arnaud Delobelle wrote:
(and I don't want the standard Decimal class :)
Why?
Why should you? It only gives you 28 significant digits, while a 64-bit
float (as in the 32-bit version of Python) gives you 53 significant
digits. Also note that on x86 the FPU uses 80-bit registers. And then
Decimal executes over 1500 times slower.

Actually 28 significant digits is just the default; it can be set to
anything you like. Moreover, 53 significant *bits* (which is what the 53
counts) is about 16 decimal digits.
My mistake.
>>> from timeit import Timer
>>> t1 = Timer('(1.0/3.0)*3.0 - 1.0')
>>> t2 = Timer('(Decimal(1)/Decimal(3))*Decimal(3)-Decimal(1)',
...            'from decimal import Decimal')
>>> t2.timeit()/t1.timeit()
1621.7838879255889

Yes. The internal representation of a Decimal is a tuple of one-digit
strings! This is one of the reasons (by no means the main one) why I
decided to write my own class.
Why not GMP?
If that's not enough to forget about Decimal, take a look at this:
>>> (Decimal(1)/Decimal(3))*Decimal(3) == Decimal(1)
False
>>> ((1.0/3.0)*3.0) == 1.0
True

OTOH float is not the panacea:
My point is that neither is Decimal. It doesn't solve the problem, and
it creates an additional problem with efficiency.
>>> 0.1+0.1+0.1 == 0.3
False
>>> 3*0.1 == 0.3
False

Decimals will behave better in this case.
Decimal will work fine as long as you deal only with decimal numbers
and the most basic arithmetic. Any rational number whose denominator is
not a power of 10, and any irrational number, won't have an exact
representation. So as long as you're dealing with something like
invoices, Decimal does just fine. When you start real calculations,
not only scientific but even financial ones[1], it doesn't do any
better than binary float, and it's bloody slow.

[1] e.g. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0). That's not a rational number (unless days
happens to be a multiple of 365), and therefore has no exact
representation. BTW, math.* functions do not return Decimals.
Feb 28 '07 #14

On Feb 28, 10:38 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
[1] e.g. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0).
In what jurisdiction for what types of transactions? I would have
thought/hoped that the likelihood that any law, standard or procedure
manual would define an interest calculation in terms of the C stdlib
would be somewhere in the region of math.pow(epsilon, HUGE), not
"often".

More importantly, the formula you give is dead wrong. The correct
formula for converting an annual rate of interest to the rate of
interest to be used for n days (on the basis of 365 days per year) is:

(1 + annual_rate) ** (n / 365.0) - 1.0
or
math.pow(1 + annual_rate, n / 365.0) - 1.0
if you prefer.
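A worked example of the two formulas for a 10% annual rate over half a year (just the compound-interest arithmetic; actual day-count and rounding conventions vary by jurisdiction):

```python
import math

annual_rate = 0.10
days = 182.5   # half of a 365-day year

# Correct conversion: compound the growth factor, then subtract 1.
period_rate = math.pow(1 + annual_rate, days / 365.0) - 1.0
assert abs(period_rate - 0.0488088) < 1e-6   # about 4.88%

# The formula as originally quoted confuses the rate with the factor:
wrong = math.pow(annual_rate, days / 365.0)
assert abs(wrong - 0.3162278) < 1e-6         # about 31.6%: nonsense
```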
BTW, math.* functions do not return Decimals.
For which Bell Labs be praised, but what's your point?

Cheers,
John

Feb 28 '07 #15

On Feb 28, 3:53 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Feb 28, 10:38 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
[1] e.g. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0).

In what jurisdiction for what types of transactions? I would have
thought/hoped that the likelihood that any law, standard or procedure
manual would define an interest calculation in terms of the C stdlib
would be somewhere in the region of math.pow(epsilon, HUGE), not
"often".
YPB? Have you ever heard of real-time systems?
More importantly, the formula you give is dead wrong. The correct
formula for converting an annual rate of interest to the rate of
interest to be used for n days (on the basis of 365 days per year) is:

(1 + annual_rate) ** (n / 365.0) - 1.0
or
math.pow(1 + annual_rate, n / 365.0) - 1.0
if you prefer.
YPB? Anyone with half a brain knows that you can either express the rate
as 0.07 and do all those ridiculous conversions above, or express it
as 1.07 and apply it directly.
BTW, math.* functions do not return Decimals.

For which Bell Labs be praised, but what's your point?
That Decimal is useless, for anything but invoices.


Feb 28 '07 #16

On 28 Feb, 11:38, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 27, 7:58 pm, "Arnaud Delobelle" <arno...@googlemail.com> wrote:
This is one of the reasons (by no means the main) why I decided to
write my own class.

Why not GMP?
I need decimals.
My point is that neither is Decimal. It doesn't solve the problem, and
it creates an additional problem with efficiency.
>>> 0.1+0.1+0.1 == 0.3
False
>>> 3*0.1 == 0.3
False
Decimals will behave better in this case.

Decimal will work fine as long as you deal only with decimal numbers
and the most basic arithmetic. Any rational number whose denominator is
not a power of 10, and any irrational number, won't have an exact
representation.
My problem is precisely to represent rational numbers whose
denominator is a power of 10 (aka decimals) accurately.
So as long as you're dealing with something like
invoices, Decimal does just fine. When you start real calculations,
not only scientific but even financial ones[1], it doesn't do any
better than binary float, and it's bloody slow.
I'm not doing 'real world' calculations, I'm making an app to help
teach children maths. I need numerical values that behave well as
decimals. I also need them to have an arbitrary number of significant
figures. Floats are great but they won't help me with either. Amongst
other things, I need 3.0*0.1 == 0.3 to be True.

Please do not make the assumption that I have chosen to use a decimal
type without some careful consideration.

--
Arnaud

Feb 28 '07 #17

On Feb 28, 6:34 pm, "Arnaud Delobelle" <arno...@googlemail.com> wrote:
So as long as you're dealing with something like
invoices, Decimal does just fine. When you start real calculations,
not only scientific but even financial ones[1], it doesn't do any
better than binary float, and it's bloody slow.

I'm not doing 'real world' calculations, I'm making an app to help
teach children maths.
Without divisions?
I need numerical values that behave well as
decimals. I also need them to have an arbitrary number of significant
figures. Floats are great but they won't help me with either. Amongst
other things, I need 3.0*0.1==0.3 to be True.
How about (1.0/3.0)*3.0 == 1.0? That doesn't need to be True?
Please do not make the assumption that I have chosen to use a decimal
type without some careful consideration.
Well, you didn't indicate this before. Anyway, in that case efficiency
has no impact at all, so you might as well use Decimal.
Feb 28 '07 #18

Arnaud Delobelle wrote:
I'm not doing 'real world' calculations, I'm making an app to help
teach children maths. I need numerical values that behave well as
decimals. I also need them to have an arbitrary number of significant
figures. Floats are great but they won't help me with either. Amongst
other things, I need 3.0*0.1==0.3 to be True.
I guess that speed is not at a premium in your application,
so you might try my continued fractions module,
advertised here a few weeks ago:
http://www-zo.iinf.polsl.gliwice.pl/...software/cf.py

It represents rational numbers exactly, and irrational numbers
with an arbitrary accuracy, i.e. unlike Decimal it implements
exact rather than multiple-precision arithmetic.
Once you create an expression, you can pull from it as many
decimal digits as you wish. The subexpressions dynamically
adjust their accuracy, so that you always get exact digits.
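Marcin's cf module is not reproduced here, but the same exact-arithmetic idea can be sketched with Python's rational type (the fractions module was added in Python 2.6, after this thread, and implements exact rationals rather than continued fractions):

```python
from fractions import Fraction

# Exact rational arithmetic: no representation error at all.
assert Fraction(1, 3) * 3 == 1
assert Fraction(1, 10) + Fraction(1, 10) + Fraction(1, 10) == Fraction(3, 10)

# Decimal digits can be pulled on demand, e.g. 1/3 to 10 places:
x = Fraction(1, 3)
digits = (x.numerator * 10**10) // x.denominator
print(digits)  # 3333333333
```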

Regards,
Marcin
Feb 28 '07 #19

On Feb 28, 7:28 pm, Marcin Ciura <marcin.ci...@poczta.NOSPAMonet.pl> wrote:
I guess that speed is not at a premium in your application,
so you might try my continued fractions module,
advertised here a few weeks ago: http://www-zo.iinf.polsl.gliwice.pl/...software/cf.py
Thanks for the link, I've had a quick glance and it seems great. I'll
have to take a closer look to see if I can use it for this app, but in
general it looks like a very useful piece of code.

--
Arnaud
Feb 28 '07 #20

On Mar 1, 4:19 am, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 28, 3:53 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Feb 28, 10:38 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
[1] e.g. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0).

More importantly, the formula you give is dead wrong. The correct
formula for converting an annual rate of interest to the rate of
interest to be used for n days (on the basis of 365 days per year) is:
(1 + annual_rate) ** (n / 365.0) - 1.0
or
math.pow(1 + annual_rate, n / 365.0) - 1.0
if you prefer.

YPB? Anyone with half a brain knows that you can either express the rate
as 0.07 and do all those ridiculous conversions above, or express it
as 1.07 and apply it directly.
A conversion involving an exponentiation is necessary. "All those"?? I
see only two.

Please re-read your original post, and note that there are *TWO* plus-
or-minus 1.0 differences between your formula and mine. For an annual
rate of 10%, yours would calculate the rate for 6 months (expressed as
182.5 days) as:
math.pow(0.10, 0.5) = 0.316... i.e. 31.6%

My formula produces:
math.pow(1.10, 0.5) - 1.0 = 0.0488... i.e. 4.88%.

Which answer is ridiculous?

Feb 28 '07 #21

On Feb 28, 10:29 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Mar 1, 4:19 am, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 28, 3:53 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Feb 28, 10:38 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
[1] e.g. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0).
More importantly, the formula you give is dead wrong. The correct
formula for converting an annual rate of interest to the rate of
interest to be used for n days (on the basis of 365 days per year) is:
(1 + annual_rate) ** (n / 365.0) - 1.0
or
math.pow(1 + annual_rate, n / 365.0) - 1.0
if you prefer.
YPB? Anyone with half a brain knows that you can either express the rate
as 0.07 and do all those ridiculous conversions above, or express it
as 1.07 and apply it directly.

A conversion involving an exponentiation is necessary. "All those"?? I
see only two.

Please re-read your original post, and note that there are *TWO* plus-
or-minus 1.0 differences between your formula and mine. For an annual
rate of 10%, yours would calculate the rate for 6 months (expressed as
182.5 days) as:
math.pow(0.10, 0.5) = 0.316... i.e. 31.6%
You're assuming that I'd store the annual rate as 0.1, while actually
it'd be stored as 1.1.
Which is logical and applicable directly.


Mar 1 '07 #22

On Mar 1, 9:33 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 28, 10:29 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Mar 1, 4:19 am, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 28, 3:53 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Feb 28, 10:38 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
[1] e.g. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0).
More importantly, the formula you give is dead wrong. The correct
formula for converting an annual rate of interest to the rate of
interest to be used for n days (on the basis of 365 days per year) is:
(1 + annual_rate) ** (n / 365.0) - 1.0
or
math.pow(1 + annual_rate, n / 365.0) - 1.0
if you prefer.
YPB? Anyone with half a brain knows that you can either express the rate
as 0.07 and do all those ridiculous conversions above, or express it
as 1.07 and apply it directly.
A conversion involving an exponentiation is necessary. "All those"?? I
see only two.
Please re-read your original post, and note that there are *TWO* plus-
or-minus 1.0 differences between your formula and mine. For an annual
rate of 10%, yours would calculate the rate for 6 months (expressed as
182.5 days) as:
math.pow(0.10, 0.5) = 0.316... i.e. 31.6%

You're assuming that I'd store the annual rate as 0.1, while actually
it'd be stored as 1.1.
Which is logical and applicable directly.
Storing 1.1 and using it in calculations may save you a few
microseconds a day in your real-time apps. However, the annual rate of
interest is 10% aka 0.1; naming 1.1 as "anualRate" (sic) is utterly
ludicrous.

Mar 1 '07 #23

John Machin wrote:
Storing 1.1 and using it in calculations may save you a few
microseconds a day in your real-time apps.
The main advantage would be clarity of code.
naming 1.1 as "anualRate" (sic) is utterly ludicrous.
So call it annualMultiplicationFactor or something
in the code.

--
Greg
Mar 2 '07 #24

This discussion thread is closed

Replies have been disabled for this discussion.