In Python 2.3 (IDLE 1.0.3) running under Windows 95, I get the
following types of errors whenever I do simple arithmetic:
1st example:
>>> 12.10 + 8.30
20.399999999999999
>>> 1.1 - 0.2
0.90000000000000013
2nd example (no errors here):
>>> bool(130.0 - 129.0 == 1.0)
True
3rd example:
>>> a = 0.013
>>> b = 0.0129
>>> c = 0.0001
>>> [a, b, c]
[0.012999999999999999, 0.0129, 0.0001]
>>> bool((a - b) == c)
False
This sort of error is no big deal in most cases, but I'm sure it could
become a problem under certain conditions, particularly the 3rd
example, where I'm using truth testing. The same results occur in all
cases whether I define variables a, b, and c, or enter the values
directly into the bool statement. Also, it doesn't make a difference
whether "a = 0.013" or "a = 0.0130".
I haven't checked this under windows 2000 or XP, but I expect the same
thing would happen. Any suggestions for a way to fix this sort of
error?
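(One standard workaround for the truth-testing case: since most decimal fractions have no exact binary representation, compare floats against a small tolerance instead of testing exact equality. A minimal sketch, using the poster's own numbers:)

```python
a = 0.013
b = 0.0129
c = 0.0001

# Exact equality fails: 0.013 and 0.0129 are not exactly representable
# in binary floating point, so the subtraction result is slightly off.
print((a - b) == c)        # False on typical IEEE 754 hardware

# Comparing against a tolerance sidesteps the representation error:
eps = 1e-9
print(abs((a - b) - c) < eps)
```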
Jul 18 '05
Paul Rubin <http://ph****@NOSPAM.invalid> wrote in message news:<7x************@ruckus.brouhaha.com>...

Gary Herron <gh*****@islandtraining.com> writes:
> Any representation of the infinity of numbers on a finite computer *must* necessarily be unable to represent some (actually infinitely many) of those numbers. The inaccuracies stem from that fact.

Well, finite computers can't even represent all the integers, but we can reasonably think of Python as capable of doing exact integer arithmetic. The issue here is that Python's behavior confuses the hell out of some new users. There is a separate area of confusion, that
a = 2 / 3
sets a to 0,
That may be confusing for non-C programmers, but it's easy to explain.
The real flaw of old-style division is that code like
def mean(seq):
    return sum(seq) / len(seq)

subtly fails when seq happens to contain all integers, and you can't
even correctly use:

def mean(seq):
    return 1.0 * sum(seq) / len(seq)
because it could lose accuracy if seq's elements were of a custom
high-precision numeric type that is closed under integer division but
gets coerced to float when multiplied by a float.
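(For the plain-integer case, Python 2.2+ already lets a module opt in to true division; this doesn't address the custom-numeric-type coercion problem just described, but it fixes the common one. A minimal sketch:)

```python
from __future__ import division  # '/' becomes true division in Python 2

def mean(seq):
    return sum(seq) / len(seq)

print(mean([1, 2, 4]))   # 2.333..., not 2
print(mean([1.5, 2.5]))  # 2.0
```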
That doesn't solve the also very common confusion that (1.0/3.0)*3.0 = 0.99999999.
What problem?

>>> (1.0 / 3.0) * 3.0
1.0
The rounding error of multiplying 1/3 by 3 happens to exactly cancel
out that of dividing 1 by 3. It's an accident, but you can use it as
a quick argument against the "decimal arithmetic is always more
accurate" crowd.
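(That cancellation is easy to check, and the decimal module, new in 2.4, makes a neat counterexample: in base-10 arithmetic the same computation does not round-trip. A sketch:)

```python
from decimal import Decimal, getcontext

# Binary: the two rounding errors happen to cancel exactly.
print((1.0 / 3.0) * 3.0)                 # 1.0

# Decimal (default 28 significant digits): they don't.
getcontext().prec = 28
print((Decimal(1) / Decimal(3)) * 3)     # 0.9999999999999999999999999999
```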
Rational arithmetic can solve that.
Yes, it can, and imho it would be a good idea to use rational
arithmetic as the default for integer division (but _not_ as a general
replacement for float).
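(Rational arithmetic of exactly this kind later landed in the standard library as the fractions module, added in Python 2.6 rather than made the default. A sketch of how it behaves on the examples from this thread:)

```python
from fractions import Fraction

# (1/3) * 3 is exact in rational arithmetic:
print(Fraction(1, 3) * 3 == 1)               # True

# The original poster's sum, done exactly:
total = Fraction(1210, 100) + Fraction(830, 100)
print(total)                                  # 102/5
print(float(total))                           # 20.4
```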
the problem with BCD or other 'decimal' computations is that it either
doesn't have the dynamic range of binary floating point (~ ±10**310)
or if it has unlimited digits then there is a LOT of software cranking
to do the math, whereas binary floating point is in the hardware. If
you want the language to use binary floating point (fast) but do the
rounding for you, then fine, but then you will have problems using it
for any real numerical task because the issue of rounding is very
important to numerical analysis, and is done different ways in
different cases. Every time the language runtime rounds for you, it is
introducing errors to your computations that you may or may not want.
There is a large body of knowledge surrounding the use of IEEE 754
floating point representation and if the language diverges from that
then users who want to do numerical analysis won't use it.
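(The point that rounding is done in different ways in different cases is easy to demonstrate with the decimal module, where the rounding mode is part of the context; a sketch:)

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_DOWN

x = Decimal('2.665')   # exactly halfway between 2.66 and 2.67

# The same value rounds differently depending on the context's mode;
# Context.plus() applies the context's precision and rounding to x.
for mode in (ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_DOWN):
    print(mode, Context(prec=3, rounding=mode).plus(x))
```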
another question: do you want the math package to round for you, or do
you want the IO package to do it only when you print? You will get
different results from each. I could imagine a language runtime could
have a switch that tells it to automatically round the results for
you, either in the math or the IO.
[Paul Rubin] I don't know that it's generally tractable to do exact computation on constructive reals. How do you implement comparison (<, >, ==)?
Equality of constructive reals is undecidable. In practice, CR
packages allow specifying a "number of digits of evidence" parameter
N, so that equality is taken to mean "provably don't differ by more
than a unit in the N'th digit".
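(The comparison convention can be caricatured in a few lines. This is only an illustration of the "don't differ by more than a unit in the N'th digit" idea on ordinary floats; real constructive-real packages work on lazy digit streams, and the helper name here is made up:)

```python
def close_to_n_digits(x, y, n):
    # Toy version of the "N digits of evidence" convention: treat x and
    # y as equal if they don't differ by more than one unit in the n'th
    # decimal digit.
    return abs(x - y) <= 10.0 ** (-n)

print(close_to_n_digits(0.1 + 0.2, 0.3, 12))     # True: differ by ~1e-17
print(close_to_n_digits(0.1 + 0.2, 0.3001, 12))  # False: differ by ~1e-4
```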
[Chris S.] Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this arithmetic is meant for.
That's absurd.  pi is 3, and nothing but grief comes from listening to
fancy-pants so-called "mathematicians" trying to convince you that
their inability to find integer results is an intellectual failing you
should share <wink>.
On Mon, 20 Sep 2004 15:16:07 +1200, Paul Foley <se*@below.invalid> wrote: On 19 Sep 2004 15:24:31 -0700, Dan Bishop wrote:
There are, of course, reasonably accurate rational approximations of pi. For example, 355/113 (accurate to 6 decimal places), 312689/99532 (9 decimal places), or 3126535/995207 (11 decimal places). Also, the IEEE 754 double-precision representation of pi is equal to the rational number 4503599627370496/281474976710656.
I hope not! That's equal to 16. (The double float closest to) pi is 884279719003555/281474976710656
Amazingly, that is _exactly_ equal to math.pi:

>>> from ut.exactdec import ED
>>> import math
>>> ED('884279719003555/281474976710656')
ED('3.141592653589793115997963468544185161590576171875')
>>> ED(math.pi,'all')
ED('3.141592653589793115997963468544185161590576171875')
>>> ED('884279719003555/281474976710656') == ED(math.pi,'all')
True
>>> ED('884279719003555/281474976710656').astuple()
(3141592653589793115997963468544185161590576171875L, 1L, 48)
>>> ED(math.pi,'all').astuple()
(3141592653589793115997963468544185161590576171875L, 1L, 48)

So it's also equal to the rational number
3141592653589793115997963468544185161590576171875 / 10**48

>>> ED('3141592653589793115997963468544185161590576171875'
...    '/1000000000000000000000000000000000000000000000000')
ED('3.141592653589793115997963468544185161590576171875')

or

>>> ED('3141592653589793115997963468544185161590576171875') / ED(10**48)
ED('3.141592653589793115997963468544185161590576171875')
Regards,
Bengt Richter
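(Later Pythons, 2.6 and up, expose this directly, so the exact rational behind math.pi can be read off without a third-party class; a quick check against Bengt's fraction:)

```python
import math

# The exact rational value of the IEEE 754 double closest to pi:
print(math.pi.as_integer_ratio())
# (884279719003555, 281474976710656)
```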
[Bengt Richter] ... If you add a small Decimal delta repeatedly, will it get rounded away like the floating point version,
Decimal *is* a floating-point type, containing most of IEEE 854 (the
radix-generalized variant of IEEE 754).  It's got infinities, signed
zeroes, NaNs, ..., all that FP hair. Decimal specifies unnormalized
fp, though, so there's no special class of "denormal" values in
Decimal.
or will accuracy get promoted,
No, but the number of digits of precision is user-specifiable.  In all
places this makes sense, the result of an operation is the exact
(infinite precision) mathematical result, rounded once to the current
context precision, according to the current context rounding mode.  If
you want 100 digits, ask for 100 digits -- but you have to ask in
advance.
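(Asking in advance looks like this with the 2.4 decimal module; a minimal sketch:)

```python
from decimal import Decimal, getcontext

getcontext().prec = 100          # ask for 100 significant digits...
root = Decimal(2).sqrt()         # ...before doing the operation
print(root)                                 # 100-digit sqrt(2)
print(len(str(root).replace('.', '')))      # 100
```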
On 19 Sep 2004 15:24:31 -0700, da*****@yahoo.com (Dan Bishop) wrote: Also, the IEEE 754 double-precision representation of pi is equal to the rational number 4503599627370496/281474976710656.
I know the real uses of a precise pi are not that many... but
isn't that a quite raw approximation ? that fraction equals 16...
Base 10 _is_ more accurate for monetary amounts, and for this reason I agreed with the addition of a decimal class. But it would be a mistake to use decimal arithmetic, which has a performance disadvantage with no accuracy advantage, in the general case.
For monetary computation why not use fixed point instead
(i.e. integers representing the number of thousandths of cents,
for example)?  IMO using floating point instead of something
like arbitrary precision integers is looking for trouble in
that area, as often what is required is accuracy up to a
specified fraction of the unit.
Andrea
PS: A study seems to show that 75.7% of people tend to believe
more in messages that contain precise numbers (like 75.7%).
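(The fixed-point idea above is just integer arithmetic with an agreed scale factor; the scale is whatever convention you pick. A minimal sketch using whole cents:)

```python
# Store money as an integer number of cents; only convert for display.
subtotal = 1210 + 830            # $12.10 + $8.30, in cents
print(subtotal)                   # 2040 -- exactly $20.40, no 20.399...

# Rounding happens only where you decide it should, e.g. 5% tax,
# rounded half-up to the nearest cent:
tax = (subtotal * 5 + 50) // 100
print("$%d.%02d" % divmod(subtotal + tax, 100))
```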
On Mon, 20 Sep 2004 01:07:03 -0400, Tim Peters <ti********@gmail.com>
wrote: [Chris S.] Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this arithmetic is meant for.
That's absurd.  pi is 3, and nothing but grief comes from listening to fancy-pants so-called "mathematicians" trying to convince you that their inability to find integer results is an intellectual failing you should share <wink>.
This is from the Bible...
007:023 And he made a molten sea, ten cubits from the one brim to the
other: it was round all about, and his height was five cubits:
and a line of thirty cubits did compass it round about.
So it's clear that pi must be 3
Andrea
Uncle Tim: That's absurd. pi is 3
Personally I've found that pie is usually round, though
if you're talking price I agree  I can usually get a
slice for about $3, more like $3.14 with tax. I like
mine apple, with a bit of ice cream.
Strange spelling though.
Andrew da***@dalkescientific.com
Andrea: This is from the Bible...
007:023 And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it round about.
So it's clear that pi must be 3
Or that the walls were 0.25 cubits thick, if you're talking
inner diameter vs. outer. ;)
Andrew da***@dalkescientific.com
Andrew Dalke <ad****@mindspring.com> wrote: Uncle Tim: That's absurd. pi is 3
Personally I've found that pie is usually round, though if you're talking price I agree  I can usually get a slice for about $3, more like $3.14 with tax. I like mine apple, with a bit of ice cream.
Strange spelling though.
Yeah, everybody knows it's spelled "py"!
Alex
On Mon, 20 Sep 2004 01:07:03 -0400, Tim Peters wrote: [Chris S.] Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this arithmetic is meant for.
That's absurd. pi is 3,
Except in Indiana, where it's 4, of course.

Don't worry about people stealing your ideas. If your ideas are any good,
you'll have to ram them down people's throats.
 Howard Aiken
(setq reply-to
  (concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))
On 20 Sep 2004 02:08:54 +0200, Johan Ur Riise wrote: There is not much of a precision/speed tradeoff in Common Lisp; you can use fractional numbers (which give you exact results with the operations +, -, * and /) internally and round them off to decimal before display. With the OP's example:

(+ 1210/100 830/100)
102/5
(coerce * 'float)
20.4
Integers can have an unlimited number of digits, but the precision of floats and reals is still limited to what the hardware can do, so if
Most CL implementations only support the hardware float types, that's
true, but it's not required by the spec.
CLISP's long-float has arbitrary precision (set by the user in
advance).
[And the Common Lisp type named "real" is the union of floats and
rationals; they're certainly not limited by hardware support]

Don't worry about people stealing your ideas. If your ideas are any good,
you'll have to ram them down people's throats.
 Howard Aiken
(setq reply-to
  (concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))

al*****@yahoo.com (Alex Martelli) wrote in news:1gkdncx.kyq0oz1excwtyN% al*****@yahoo.com: Nothing strange there -- HP's calculators were squarely aimed at scientists and engineers, who are supposed to know what they're doing when it comes to numeric computation (they mostly _don't_, but they like to kid themselves that they do!).
Oi!!! I resemble that remark !
;)
Frithiof Andreas Jensen
<fr*************@diespammerdie.jensen.tdcadsl.dk> wrote: al*****@yahoo.com (Alex Martelli) wrote in news:1gkdncx.kyq0oz1excwtyN% al*****@yahoo.com:
Nothing strange there -- HP's calculators were squarely aimed at scientists and engineers, who are supposed to know what they're doing when it comes to numeric computation (they mostly _don't_, but they like to kid themselves that they do!).
Oi!!! I resemble that remark !
;)
OK, I should have used first person plural to count myself in, since,
after all, I _am_ an engineer...: _we_ mostly don't, but we like to kid
ourselves that we do!)
Alex
On Sunday, 19 September 2004 19:41, Alex Martelli wrote: gmpy (or to be more precise the underlying GMP library) runs optimally on AMD Athlon 32-bit processors, which happen to be dirt cheap these days, so a cleverly-purchased 300-dollar desktop Linux PC using such an Athlon chip would no doubt let you use way more than these humble couple thousand bits for such interactive computations while maintaining a perfectly acceptable interactive response time.
But still, no algorithm implemented in software will ever beat the
FADD/FMUL/FDIV/FPOW/FSIN/FCOS etc. instructions in runtime, that was my
point... And error calculation is always possible, so that you can give
bounds to your result, even when using normal floating point arithmetic. And,
even when using GMPy, you have to know about the underlying limitations of
binary floating point so that you can reorganize your code if need be to add
precision (because one calculation might be much less precise if done in some
way than in another).
Heiko.

bo**@oz.net (Bengt Richter) wrote in message news:<ci*************************@theriver.com>... On 19 Sep 2004 15:24:31 -0700, da*****@yahoo.com (Dan Bishop) wrote: [...] There are, of course, reasonably accurate rational approximations of pi. For example, 355/113 (accurate to 6 decimal places), 312689/99532 (9 decimal places), or 3126535/995207 (11 decimal places). Also, the IEEE 754 double-precision representation of pi is equal to the rational number 4503599627370496/281474976710656.

>>> divmod(4503599627370496, 281474976710656)
(16L, 0L)

a little glitch somewhere ? ;-)

Oops. I meant 884279719003555/281474976710656.
Heiko Wundram <he*****@ceosg.de> wrote: On Sunday, 19 September 2004 19:41, Alex Martelli wrote: gmpy (or to be more precise the underlying GMP library) runs optimally on AMD Athlon 32-bit processors, which happen to be dirt cheap these days, so a cleverly-purchased 300-dollar desktop Linux PC using such an Athlon chip would no doubt let you use way more than these humble couple thousand bits for such interactive computations while maintaining a perfectly acceptable interactive response time. But still, no algorithm implemented in software will ever beat the FADD/FMUL/FDIV/FPOW/FSIN/FCOS etc. instructions in runtime, that was my
Yep, the hardware would have to be designed in a very lousy way for its
instructions to run slower than software running on the same CPU ;-).
If you're not using some "vectorized" package such as Numeric or
numarray, though, it's unlikely that you care about speed  and if you
_are_ using Numeric or numarray, it doesn't matter to you what type
Python itself uses for some literal such as 3.17292  it only matters
(speedwise) what your computational package is using (single precision,
double precision, whatever).
point... And error calculation is always possible, so that you can give bounds to your result, even when using normal floating point arithmetic. And,
Sure!  Your problems come when the bounds you compute are not good
enough for your purposes (given how deucedly loose error-interval
computations tend to be, that's going to happen more often than actual
accuracy loss in your computations... try an interval-arithmetic package
some day, to see what I mean...).
even when using GMPy, you have to know about the underlying limitations of binary floating point so that you can reorganize your code if need be to add precision (because one calculation might be much less precise if done in some way than in another).
Sure.  Throwing more precision at a badly analyzed and structured
algorithm is putting a band-aid on a wound.  I _have_ taught numeric
analysis to undergrads and nobody could have passed my course unless
they had learned to quote that "party line" back at me, obviously.
In the real world, the band-aid stops the blood loss often enough that
few practising engineers and scientists are seriously motivated to
remember and apply all they've learned in their numeric analysis courses
(assuming they HAVE taken some: believe it or not, it IS quite possible
to get a degree in engineering, physics, etc, in most places, without
even getting ONE course in numeric analysis! the university where I
taught was an exception only for _some_ of the degrees they granted 
you couldn't graduate in _materials_ engineering without that course,
for example, but you COULD graduate in _buildings_ engineering while
bypassing it...).
Yes, this IS a problem. But I don't know what to do about it  after
all, I _am_ quite prone to taking such shortcuts myself... if some
computation is giving me results that smell wrong, I just do it over
with 10 or 100 times more bits... yeah, I _do_ know that will only work
99.99% of the time, leaving a serious problem, possibly hidden and
unsuspected, more often than one can be comfortable with. In my case, I
have excuses  I'm more likely to have fallen into some subtle trap of
_statistics_, making my precise computations pretty meaningless anyway,
than to be doing perfectly correct statistics in numerically smelly ways
(hey, I _have_ been brought up, as an example of falling into traps, in
"American Statistician", but not yet, AFAIK, in any journal dealing with
numerical analysis...:).
Alex
Andrew Dalke <ad****@mindspring.com> wrote in message news:<QV*****************@newsread1.news.pas.earth link.net>... Andrea: This is from the Bible...
007:023 And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it round about.
So it's clear that pi must be 3
Or that the walls were 0.25 cubits thick, if you're talking inner diameter vs. outer. ;)
Or it could be 9.60 cubits across and 30.16 cubits around, and the
numbers are rounded to the nearest cubit.
Also, I've heard that the original Hebrew uses an uncommon spelling of
the word for "line" or "circumference". Perhaps that affects the
meaning.
On 2004-09-20, david h <da***@dmh2000.com> wrote: the problem with BCD or other 'decimal' computations is that it either doesn't have the dynamic range of binary floating point (~ ±10**310)
Huh? Why would BCD floating point have any less range than
binary floating point? Due to the space inefficiencies of BCD,
it would take a few more bits to cover the same range, but I
don't see your point.

Grant Edwards grante Yow! Hey, LOOK!! A pair of
at SIZE 9 CAPRI PANTS!! They
visi.com probably belong to SAMMY
DAVIS, JR.!!
On 2004-09-20, Andrea Griffini <ag****@tin.it> wrote: This is from the Bible...
007:023 And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it round about.
So it's clear that pi must be 3
If you've only got 1 significant digit in your measured values,
then Pi == 3 is a perfectly reasonable value to use.

Grant Edwards grante Yow! Why is everything
at made of Lycra Spandex?
visi.com
On 20 Sep 2004 14:34:03 GMT, Grant Edwards <gr****@visi.com> declaimed
the following in comp.lang.python: On 2004-09-20, david h <da***@dmh2000.com> wrote:
the problem with BCD or other 'decimal' computations is that it either doesn't have the dynamic range of binary floating point (~ ±10**310) Huh? Why would BCD floating point have any less range than binary floating point? Due to the space inefficiencies of BCD, it would take a few more bits to cover the same range, but I don't see your point.
There /was/ an "or" in that sentence, which you trimmed out...
Though working with numbers that are stored in >150 bytes
doesn't interest me.  Uhm, actually, to handle the +/- exponent range,
make that 300+ bytes (150+ bytes before the decimal, and the same after
it).  As soon as you start storing an exponent as a separate component
you introduce a loss of precision in computations.
--
wl*****@ix.netcom.com  |  Wulfraed Dennis Lee Bieber  KD6MOG
wu******@dm.net        |  Bestiaria Support Staff
Home Page: <http://www.dm.net/~wulfraed/>
Overflow Page: <http://wlfraed.home.netcom.com/>
On 2004-09-20, Dennis Lee Bieber <wl*****@ix.netcom.com> wrote: On 20 Sep 2004 14:34:03 GMT, Grant Edwards <gr****@visi.com> declaimed the following in comp.lang.python:
On 2004-09-20, david h <da***@dmh2000.com> wrote:
> the problem with BCD or other 'decimal' computations is that it either
> doesn't have the dynamic range of binary floating point (~ ±10**310)
Huh? Why would BCD floating point have any less range than binary floating point? Due to the space inefficiencies of BCD, it would take a few more bits to cover the same range, but I don't see your point.
There /was/ an "or" in that sentence, which you trimmed out...
Sorry about that, but I wasn't addressing the other complaint,
just the lack of range part.
Though working with numbers that are stored in >150 bytes doesn't interest me. Uhm, actually, to handle the +/- exponent range, make that 300+ bytes (150+ bytes before the decimal, and the same after it).
To get the same range and precision as a 32-bit IEEE float, you need
4 bytes for the mantissa and 2 for the exponent.  That's 6 bytes,
not 300.
As soon as you start storing an exponent as a separate component you introduce a loss of precision in computations.
I thought you were complaining about range and storage required
for BCD vs. binary.
Floating point BCD can have the same range and precision as
binary floating point with about a 50% penalty in storage
space.
If you're going to compare fixed point verses floating point,
that's a completely separate (and orthogonal) issue.

Grant Edwards grante Yow! Let's send the
at Russians defective
visi.com lifestyle accessories!
"Chris S." <ch*****@NOSPAM.udel.edu> wrote in message news:<70b3d.1822$uz1.747@trndny03>... I just find it funny how a $20 calculator can be more accurate than Python running on a $1000 Intel machine.
Actually, if you look at Intel's track record, it isn't that surprising.
How many Intel Pentium engineers does it take to change a light bulb?
Three. One to screw in the bulb, and one to hold the ladder.

CARL BANKS
On 2004-09-21, Carl Banks <im*****@aerojockey.com> wrote: "Chris S." <ch*****@NOSPAM.udel.edu> wrote in message news:<70b3d.1822$uz1.747@trndny03>... I just find it funny how a $20 calculator can be more accurate than Python running on a $1000 Intel machine.
Actually, if you look at Intel's track record, it isn't that surprising.
How many Intel Pentium engineers does it take to change a light bulb? Three. One to screw in the bulb, and one to hold the ladder.
Intel, where quality is Job 0.9999999997.

Grant Edwards grante Yow! My CODE of ETHICS
at is vacationing at famed
visi.com SCHROON LAKE in upstate
New York!!
Grant Edwards said unto the world upon 2004-09-21 16:12: On 2004-09-21, Carl Banks <im*****@aerojockey.com> wrote:
"Chris S." <ch*****@NOSPAM.udel.edu> wrote in message news:<70b3d.1822$uz1.747@trndny03>...
I just find it funny how a $20 calculator can be more accurate than Python running on a $1000 Intel machine.
Actually, if you look at Intel's track record, it isn't that surprising.
How many Intel Pentium engineers does it take to change a light bulb? Three. One to screw in the bulb, and one to hold the ladder.
Intel, where quality is Job 0.9999999997.
Since we're playing:
Why'd Intel call it the Pentium chip?
'Cause they added 100 to 486 and got 585.999999999989
Brian vdB
Peter Otten wrote: Paul Rubin wrote:
I haven't tried 2.4 yet. After
The author is currently working on an installer, but just dropping it into 2.3's site-packages should work, too.
I just dropped decimal.py from 2.4's Lib dir into 2.3.4's Lib dir.
Seems to work. Any gotchas with this route?
By the way, I got decimal.py revision 1.24 from CVS several days ago
and noted a speedup of over an order of magnitude -- almost
twenty-five times faster with this simple snippet calculating a square
root to 500 decimal places. :)
[On Win98SE:]

  from time import clock
  from decimal import *

  a = Decimal('18974018374087403187404701740918.7481704084710473048017483047104')
  t = clock()
  b = a.sqrt(Context(prec=500))

  print "Time: ", clock()-t
  print "b =", b
With decimal.py from 2.4a3.2 dropped into 2.3.4's Lib dir:

  IDLE 1.0.3
  >>> ================================ RESTART ================================
  >>>
  Time:  7.40197958397
  b = 4355917627100793.0054682072286...[elided]...67722472416430409564807807874919604463
  >>>

With decimal.py from CVS (revision 1.24) in 2.3.4's Lib dir:

  IDLE 1.0.3
  >>> ================================ RESTART ================================
  >>>
  Time:  0.300008380965
  b = 4355917627100793.0054682072286...[elided]...67722472416430409564807807874919604463
  >>>
For a check, I did:

  >>> setcontext(Context(prec=500))
  >>> b * b
  Decimal("18974018374087403187404701740918.748170408471047304801748304710400...[lotsa zeroes]...00")

Pretty damn impressive! -- Try it, you'll like it!
Good job to the crew for Decimal and the latest optimizations!
now-I-just-need-atan[2]()-ly y'rs,
Richard Hanson

sick<PERI0D>old<P0INT>fart<PIEDEC0SYMB0L>newsguy<MARK>com
Richard Hanson <me@privacy.net> wrote: Peter Otten wrote:
Paul Rubin wrote:
I haven't tried 2.4 yet. After
The auther is currently working on an installer, but just dropping it into 2.3's sitepackages should work, too.
I just dropped decimal.py from 2.4's Lib dir into 2.3.4's Lib dir. Seems to work. Any gotchas with this route?
None that I know of.  Indeed, the author originally wanted to have that
approach as the one and only way to use decimal with 2.3, I just made
myself a nuisance to him insisting that many inexperienced Pythonistas
would have been frightened to even try that, so finally he packaged
things up when he realized it would take less of his time than putting
up with yet another whining email from me ;-).
Alex
Alex Martelli wrote: Richard Hanson <me@privacy.net> wrote:
Peter Otten wrote:
The auther is currently working on an installer, but just dropping it into 2.3's sitepackages should work, too. I just dropped decimal.py from 2.4's Lib dir into 2.3.4's Lib dir. Seems to work. Any gotchas with this route?
None that I know of.
Good to hear. My interest (besides my Pathfinder project which I
introduced in another [OT] thread :) ), also is in developing
continuing improvements to my COGO software (despite my being retired
for many years and having no personal need for such).
(Now that I note the substantial speedup in CVS of Decimal's sqrt(), I
am getting interested once again in getting back to my COGO-in-Python
project [when I get a chance :) ], and see just what success I have
working up a relatively fast arctangent function.[1] Besides an
arctangent function, a COGO-using-Decimal also needs to have pi
available to an arbitrary number of decimal places. My sketches
heretofore used precalculated values of pi to a ridiculous number of
decimal places in string form, from which a simple slice would give
the requisite value as the current Context may require. [Decimal's
help files include snippets for some simpler versions of sine and
cosine, and thus, tangent. If I get the chance -- and the ability to
comprehend the state-of-the-art as referenced in my footnote -- to
implement a fast arctangent function in Decimal, similarly implemented
algorithms should also speed up sine and cosine.][2])
Indeed, the author originally wanted to have that approach as the one and only way to use decimal with 2.3, I just made myself a nuisance to him insisting that many inexperienced Pythonistas would have been frightened to even try that, so finally he packaged things up when he realized it would take less of his time than putting up with yet another whining email from me ;-).
Heh. I, for one, have greatly enjoyed reading your posts over the
years. I think that you are correct in that *non*-computer folks --
arguably part of a target audience for Python -- can be easily
frightened by the complexity of the abstractions common in compsci. It
is part of Guido's genius that he recognizes such; he has aimed to
keep Python accessible to non-programmer types (Donald Knuth estimated
that only about one percent of humans even have the proper brain
organization to be potential programmers, but I digest. :) ) even as
fancier, but quite powerful, complexity-controlling enhancements such
as generators are added to the language.
In any event, thanks for the comments! Keep up the good work and the
posting!
enjoyingreadingthepostings'ly y'rs,
Richard Hanson
_____________________________________
[1] I've skimmed R. P. Brent's work, but haven't yet found the time to
understand all of it well enough to develop the requisite trig
functions in Python's Decimal.
[2] Traditionally, the precision of generic eight-byte floating point
types has been sufficient for COGO. However, now, with the advancement
in cheap-but-powerful computing power, and with the prevalence of GPS,
such things as the Kalman Filter and other iterative matrix algorithms
now require BIGNUM decimal places to avoid degenerate solutions near
singularities and such.[3]
[3] I caution that I am an autodidact, and may sound more educated
than I actually am. ;)

sick<PERI0D>old<P0INT>fart<PIEDEC0SYMB0L>newsguy<MARK>com
Radioactive Man wrote: Thanks to all for info here. Sorry for inadvertently creating such a long thread.
Please don't feel you need to apologize. These guys enjoy discussing
such abstract, theoretical complexities  that's why they're so good at
what they do!
Hope that you did get your question answered along the way. If you have
more questions, please post them. You may also consider posting to the
Python Tutor mailing list <tu***@python.org>, where they are geared more
specifically to answering newbie questions (rather than debating the
intricacies of Pi).
Welcome and enjoy Python!
Anna

Whaddya mean  Pie are squared?
Pie aren't square  pie are round.
*Cake* are square.
Note: I posted a response yesterday, but it apparently never appeared (I
was having some trouble with my newsreader) so I'm posting this now. My
apologies if it is a duplicate.
Alex Martelli wrote: Paul Rubin <http://ph****@NOSPAM.invalid> wrote: ...
The issue here is that Python's behavior confuses the hell out of some new users. There is a separate area of confusion, that
a = 2 / 3
sets a to 0, and to clear that up, the // operator was introduced and Python 3.0 will supposedly treat / as floating-point division even when both operands are integers. That doesn't solve the also very common confusion that (1.0/3.0)*3.0 = 0.99999999. Rational arithmetic can solve that.
Yes, but applying rational arithmetic by default might slow some computations far too much for beginners' liking! My favourite for Python 3.0 would be to have decimals by default, with special notations to request floats and rationals (say '1/3r' for a rational, '1/3f' for a float, '1/3' or '1/3d' for a decimal with some default parameters such as number of digits). This is because my guess is that most naive users would _expect_ decimals by default...
I agree. Naive (eg, non-CS, non-Mathematician/Engineer) users who grew
up with calculators and standard math courses in school may have never
even heard of floats! (I made it as far as Calculus 2 in college, but
still had never heard of them.)
This brings me to another issue. Often c.l.py folks seem surprised that
people don't RTFM about floats before they ask about why their math
calculations aren't working. Most of the folks asking have no idea they
are *doing* float arithmetic, so when they try to google for the answer,
or look in the docs for the answer, and skip right past the "Float
Arithmetic" section of the FAQ and the Tutorial, it's because they're
not DOING float arithmetic  that they know of... So, of course they
won't read those sections to look for their answer, any more than they'd
read the Complex Number calculations section... People who know about
floats con't need that section  the ones who do need it, con't know
they need it.
If you want people to find those sections when they are looking for
answers to why their math calculations aren't working  I suggest you
remove the "FLOAT" from the title. Something in the FAQ like: "Why are
my math calculations giving weird or unexpected results?" would attract
a lot more of the people you WANT to read it. Once you've roped them in,
*then* you can explain to them about floats...
Anna Martelli Ravenscroft
Anna Martelli Ravenscroft wrote: If you want people to find those sections when they are looking for answers to why their math calculations aren't working -- I suggest you remove the "FLOAT" from the title. Something in the FAQ like: "Why are my math calculations giving weird or unexpected results?" would attract a lot more of the people you WANT to read it. Once you've roped them in, *then* you can explain to them about floats...
Excellent point.
(Or, "+1" as the "oldbies" say. ;) )
Nice to "meet" you, too -- welcome! (Even if I'm primarily only a
lurker.)
(Alex mentioned you have a Fujitsu LifeBook -- I do, too, and like it
very much!)
--
[Note: I am having equipment and connectivity problems. I'll be back
as I can when I get things sorted out better, and as appropriate (or
inappropriate ;) ). Thanks to you and to all for the civil
and fun discussions!]
Richard Hanson
--
sick<PERI0D>old<P0INT>fart<PIEDEC0SYMB0L>newsguy<MARK>com
Richard Hanson <me@privacy.net> wrote:
... (Alex mentioned you have a Fujitsu LifeBook  I do, too, and like it very much!)
There are many 'series' of such "Lifebooks" nowadays -- it's become as
undescriptive as Sony's "Vaio" brand or IBM's "Thinkpad". Anna's is a
P-Series -- 10.5" wide-form screen, incredibly tiny, light, VERY
long-lasting batteries. It was the _only_ non-Apple computer around at
the local MacDay (I'm a Mac fan, and she attended too, to keep an eye on
me I suspect...;-), yet it got nothing but admiring "ooh!"s from the
crowd of design-obsessed Machies (Apple doesn't make any laptop smaller
than 12", sigh...).
OBCLPY: Python runs just as wonderfully on her tiny P-Series as on my
iBook, even though only Apple uses it within the OS itself ;-)
Alex
[Connection working again...?]
Alex Martelli wrote: Richard Hanson <me@privacy.net> wrote: ... (Alex mentioned you have a Fujitsu LifeBook -- I do, too, and like it very much!) There are many 'series' of such "Lifebooks" nowadays -- it's become as undescriptive as Sony's "Vaio" brand or IBM's "Thinkpad". Anna's is a P-Series -- 10.5" wide-form screen, incredibly tiny, light, VERY long-lasting batteries.
Ahem. As I said ;-) in my reply to your post mentioning Anna's P2000
(in my MID: <lg********************************@4ax.com>), and in
earlier postings re 2.4x installation difficulties, mine is a Fujitsu
LifeBook P1120. (Sorry, Alex! I definitely *should* have mentioned the
model again -- I'm just beginning to appreciate the difficulty of even
*partially* keeping up with c.l.py. I'm learning, though. :) )
In any event, the Fujitsu LifeBook P1120 has an 8.9" wide-format
screen, is 2.2-lbs.-light with the smaller *very* long-lasting battery
and 2.5-lbs.-light with the very, *very* long-lasting battery, and has
-- what tipped the scales, as it were, for my needs -- a touchscreen
and stylus.
It was the _only_ non-Apple computer around at the local MacDay (I'm a Mac fan, and she attended too, to keep an eye on me I suspect...;-), yet it got nothing but admiring "ooh!"s from the crowd of design-obsessed Machies (Apple doesn't make any laptop smaller than 12", sigh...).
I can feel your pain. I would switch to Apple in a second if they had
such light models (and if I had the bucks ;-) ). I need a very light
machine for reasons specified earlier. (Okay, slightly reluctantly:
Explicit may be better even with *this* particular info -- I have
arthritis [ankylosing spondylitis] and need very light laptops to read
and write with. :) )
OBCLPY: Python runs just as wonderfully on her tiny P-Series as on my iBook, even though only Apple uses it within the OS itself ;-)
ObC.l.py-Followup: Python also runs very well on my tinier ;-) P1120
with the Transmeta Crusoe TM5800 processor running at 800MHz and with
256MB RAM and a 256KB L2 on-chip cache -- even using Win2k. :) It's
really nice not needing a fan on a laptop, as well -- even when
calculating Decimal's sqrt() to thousands of decimal places. ;-)
ObExplicit-meta-comment: I'm only attempting a mixture of info *and*
levity. :)
what?-men-arguing-about-whose-is-*tinier*?!-'ly y'rs,
Richard Hanson
--
sick<PERI0D>old<P0INT>fart<PIEDEC0SYMB0L>newsguy<MARK>com
In article <ad********************************@4ax.com>,
Richard Hanson <me@privacy.net> wrote: [Connection working again...?]
Alex Martelli wrote:
Richard Hanson <me@privacy.net> wrote: ... > (Alex mentioned you have a Fujitsu LifeBook -- I do, too, and like it > very much!)
Cameron Laird wrote: In article <ad********************************@4ax.com>, Richard Hanson <me@privacy.net> wrote [comparing Anna Martelli Ravenscroft's Fujitsu LifeBook P2000 to my (Richard Hanson's) Fujitsu LifeBook P1120]:
[...]
In any event, the Fujitsu LifeBook P1120 has an 8.9" wide-format screen, is 2.2-lbs.-light with the smaller *very* long-lasting battery and 2.5-lbs.-light with the very, *very* long-lasting battery, and has -- what tipped the scales, as it were, for my needs -- a touchscreen and stylus.
[...]
Alex Martelli wrote:
OBCLPY: Python runs just as wonderfully on her tiny P-Series as on my iBook, even though only Apple uses it within the OS itself ;-) ObC.l.py-Followup: Python also runs very well on my tinier ;-) P1120 with the Transmeta Crusoe TM5800 processor running at 800MHz and with 256MB RAM and a 256KB L2 on-chip cache -- even using Win2k. :) It's really nice not needing a fan on a laptop, as well -- even when calculating Decimal's sqrt() to thousands of decimal places. ;-) . . . Is Linux practical on these boxes?
I've found on the web accounts of two people, at least, getting the
P1120 working with Linux and with at least partial functionality of
the touchscreen -- one individual claimed full functionality. (I found
some accounts of success with getting Linux working on the P2000, as
well.) I'm currently waiting to purchase a new hard drive for my P1120
to see for myself if I can get Linux installed with the touchscreen
fully functioning -- which, as I mentioned in my post, is particularly
important to me.
How do touch-typists like them?
I've been touch-typing since I was about nine-years-old. When I was
looking for a very light laptop for reasons mentioned in my post, I
was concerned that I wouldn't be able to touch-type on the ~85% (16mm
pitch) keyboard. I went to a local "big box" computer store (who shall
remain nameless) and tried one of the P1120s -- within seconds I
realized I could easily adapt and subsequently ordered one from
Fujitsu.
I would estimate that I was typing *faster* and with substantially
*fewer* errors inside of several weeks -- and occasional uses of the
standard-sized keyboard on my HP Omnibook 900B made me feel like a
Munchkin. :)
Now that I'm temporarily back on the standard-pitch Omnibook 900B, I
have adapted to the what-had-come-to-seem-a-humongous keyboard, once
again. I most definitely prefer the P1120's keyboard.
I note that on the P1120, I could reach difficult key combinations
much more easily, and also, that I could often hold down two keys of a
three-key combo, say, with one finger or thumb.
Your mileage may vary, as they say, but I now prefer smaller
keyboards.
The "instant on-off" works very well, too. I highly recommend the
P1120 for anyone who isn't put off by the smaller keyboard. (Drawing
on the screen with the stylus is pretty trick, as well.)
Richard Hanson
--
sick<PERI0D>old<P0INT>fart<PIEDEC0SYMB0L>newsguy<MARK>com
Cameron Laird <cl****@lairds.us> wrote:
... Is Linux practical on these boxes?
Never got 'sleep' to work (there's supposed to be a 'hibernate' thingy,
but I haven't found it to work reliably either). AFAIMC, that's the
biggie; everything else is fine.
How do touch-typists like them?
Just fine (the 10.5" P2000 -- can't speak for the even-smaller P1000s).
Alex
Cameron Laird wrote: In article <ad********************************@4ax.com>, Richard Hanson <me@privacy.net> wrote:
[Connection working again...?]
Alex Martelli wrote:
Richard Hanson <me@privacy.net> wrote: ...
(Alex mentioned you have a Fujitsu LifeBook -- I do, too, and like it very much!)
. . .
Ahem. As I said ;-) in my reply to your post mentioning Anna's P2000 (in my MID: <lg********************************@4ax.com>), and in earlier postings re 2.4x installation difficulties, mine is a Fujitsu LifeBook P1120. (Sorry, Alex! I definitely *should* have mentioned the model again -- I'm just beginning to appreciate the difficulty of even *partially* keeping up with c.l.py. I'm learning, though. :) )
In any event, the Fujitsu LifeBook P1120 has an 8.9" wide-format screen, is 2.2-lbs.-light with the smaller *very* long-lasting battery and 2.5-lbs.-light with the very, *very* long-lasting battery, and has -- what tipped the scales, as it were, for my needs -- a touchscreen and stylus.
. . .
I can feel your pain. I would switch to Apple in a second if they had such light models (and if I had the bucks ;-) ). I need a very light machine for reasons specified earlier. (Okay, slightly reluctantly: Explicit may be better even with *this* particular info -- I have arthritis [ankylosing spondylitis] and need very light laptops to read and write with. :) )
OBCLPY: Python runs just as wonderfully on her tiny P-Series as on my iBook, even though only Apple uses it within the OS itself ;-)
ObC.l.py-Followup: Python also runs very well on my tinier ;-) P1120 with the Transmeta Crusoe TM5800 processor running at 800MHz and with 256MB RAM and a 256KB L2 on-chip cache -- even using Win2k. :) It's really nice not needing a fan on a laptop, as well -- even when calculating Decimal's sqrt() to thousands of decimal places. ;-)
. . . Is Linux practical on these boxes? How do touch-typists like them?
Well, mine is dual-boot. I'm currently experimenting with Ubuntu on my
Linux partition... I'm really REALLY hoping for a Linux kernel with a
decent 'sleep' function to come up RSN because I despise having to work
in Windoze XP instead of Linux. Ah well, at least the XP hasn't been too
terrible to work on -- it runs surprisingly smoothly, particularly with
Firefox and Thunderbird for browsing and email...
And I can touch-type just fine -- except for the damn capslock key (there
is NO purpose whatsoever for a capslock key as a standalone key on a
modern keyboard, imho). I've had only minor problems with the touch
typing that I do -- and that, only due to the slightly different layout
of the SHIFT key on the right side compared to where I'd normally expect
to find it: keyboard layout is a common bugbear on laptops though,
regardless of size....
Anna
Anna Martelli Ravenscroft wrote:
[This post primarily contains solutions to Anna's problem with the
Fujitsu LifeBook P2000's key locations. But, there's also some 2.4x
MSI Installer anecdotal info in my footnote.] Cameron Laird wrote:
Is Linux practical on these boxes? How do touch-typists like them? Well, mine is dual-boot. I'm currently experimenting with Ubuntu on my Linux partition... I'm really REALLY hoping for a Linux kernel with a decent 'sleep' function to come up RSN because I despise having to work in Windoze XP instead of Linux. Ah well, at least the XP hasn't been too terrible to work on -- it runs surprisingly smoothly, particularly with Firefox and Thunderbird for browsing and email...
My Fujitsu LifeBook P1120 is (was) only single-booting Win2k, so I
can't help with the Linux "sleep" function as yet -- I'll be working
on dual-booting Win2k and Linux on the P1120 as soon as I get the
requisite hardware to rebuild things. The "sleep" function is a *very*
high priority for me, so if and when I find a solution, I'll post it
if you're still needing such -- may well work for your P2000 as well.
And I can touch-type just fine -- except for the damn capslock key (there is NO purpose whatsoever for a capslock key as a standalone key on a modern keyboard, imho).
It seems *many* folks agree; read below.
I've had only minor problems with the touch typing that I do -- and that, only due to the slightly different layout of the SHIFT key on the right side compared to where I'd normally expect to find it: keyboard layout is a common bugbear on laptops though, regardless of size....
[I lost all my recent archives in a recent series of "crashes" -- so I
re-googled this morning for the info herein.]
On Win2k, and claimed for WinXP, one can manually edit the registry to
remap any of the keys. I originally did this on my P1120 with Win2k.
Worked just fine.
(I had saved the manual regedit values to disc before a Win98SE crash
just a few minutes ago ;-). If you're interested in 'em you may
post here or contact me off-group. The email addie below works if
unmunged -- ObExplicit: replace the angle-bracketed items with the
appropriate symbol.)
Also, there are tools available from both MS, and for those who don't
like to visit MS ;), free from many other helpful folks.
If my memory serves, I liked best the (freeware, I believe) tool
KeyTweak:
<http://webpages.charter.net/krumsick/KeyTweak_install.exe>
available from this page:
<http://webpages.charter.net/krumsick>

MS's tool is Remapkey.exe. (NB: I have not tried this tool --
*usually* my firewall blocks MS :) [which required an unblocking to
install 2.4ax because of the new MSI Installer[1] :) ].) This tool
may already be on one of your MS CDs in the reskit dirs (I haven't
looked in mine).
In any event, one webpage:
<http://www.annoyances.org/exec/forum/winxp/t1014389848>
describes Remapkey.exe as:
"... a nifty tool put out by microsoft (sic). Make sure you get the
correct version for your OS. Not resource intensive like other dll
apps."
The page has these links (quoted herein):
For individual downloads:
<http://www.dynawell.com/support/ResKit/winxp.asp>
Free from Microsoft site, for full downloads
<http://www.microsoft.com/downloads/details.aspx?familyid=9d467a6957ff4ae796eeb18c4790cffd&displaylang=en>
or shorter link:
<http://www.petri.co.il/download_windows_2003_reskit_tools.htm>

I also have links to a few other freeware (some opensource) tools for
all versions of Win32. I won't add them now, but repost or contact me
if you want more info from my research.

Additionally, I found many solutions for Linux, but haven't
investigated those as (as I said) I have not yet installed Linux on my
Fujitsu LifeBook P1120. Again, if you have trouble locating a Linux
key-remapping method, let me know, as I found lots of links for the
better OS :), as well.
(I do note that, after several reinstalls on the P1120, I was
finally used to the capslock and shift key locations well enough to
avoid wrongly hitting them very often. As they say, though, your
mileage may vary.)
Richard Hanson
___________________________________________
[1] On this HP Omnibook 900B even after downloading the requisite MSI
Install file, I experienced multiple errors trying to install 2.4a3.2
on Win98SE. I finally got 2.4x working, but I note that the helpfiles
are still missing the navigation icons. I have the MSI Installer error
messages if Martin or anyone is interested.
--
sick<PERI0D>old<P0INT>fart<PIEDEC0SYMB0L>newsguy<MARK>com
Alex Martelli wrote: Cameron Laird <cl****@lairds.us> wrote:
How do touchtypists like them
Just fine (the 10.5" P2000  can't speak for the evensmaller P1000s).
I commented on my P1120 -- works better for me than the standard-sized
keyboards. See my MID:
<14********************************@4ax.com>
Richard Hanson
--
sick<PERI0D>old<P0INT>fart<PIEDEC0SYMB0L>newsguy<MARK>com