
I'd like to turn off ZeroDivisionError. I'd like 0./0. to just give NaN,
and when output, just print 'NaN'. I notice fpconst has the required
constants. I don't want to significantly slow floating point math, so I
don't want to just trap the exception.
If I use C code to turn off the hardware signal, will that stop Python from
detecting the exception, or is Python checking for a 0 denominator on its
own (I hope not; that would waste cycles)?

Would a wrapper function be out of the question here?
def MyDivision(num, denom):
    if denom == 0:
        return float('nan')   # prints as 'nan'; return the string "NaN" if you want that exact text
    else:
        return num / denom
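If the goal is a real NaN rather than a string, float('nan') has the nice property that it propagates quietly through later arithmetic, so callers never see an exception (a sketch; assumes a platform whose C library accepts 'nan', true almost everywhere these days):

```python
# NaN propagates through arithmetic instead of raising, and prints as 'nan'.
nan = float('nan')
print(nan)          # nan
print(nan + 1.0)    # nan: NaN propagates
print(nan != nan)   # True: NaN is the one float unequal to itself
```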
On Feb 9, 5:03 pm, Neal Becker <ndbeck...@gmail.com> wrote:
If I use C code to turn off the hardware signal, will that stop python from
detecting the exception, or is python checking for 0 denominator on its
own (hope not, that would waste cycles).
Yes, Python does do an explicit check for a zero denominator. Here's
an excerpt from float_div in Objects/floatobject.c:
if (b == 0.0) {
PyErr_SetString(PyExc_ZeroDivisionError, "float division");
return NULL;
}
This is probably the only sane way to deal with differences in
platform behaviour when doing float divisions.   
Mark Dickinson wrote:
On Feb 9, 5:03 pm, Neal Becker <ndbeck...@gmail.com> wrote:
>If I use C code to turn off the hardware signal, will that stop python from detecting the exception, or is python checking for 0 denominator on its own (hope not, that would waste cycles).
Yes, Python does do an explicit check for a zero denominator. Here's
an excerpt from float_div in Objects/floatobject.c:
if (b == 0.0) {
PyErr_SetString(PyExc_ZeroDivisionError, "float division");
return NULL;
}
This is probably the only sane way to deal with differences in
platform behaviour when doing float divisions.
Are you sure?
It could very well be that 1/(smallest possible number) > (greatest
possible number). So I would also trap any errors besides trapping for
the obvious zero division.   
Mark Dickinson:
This is probably the only sane way to deal with differences in
platform behaviour when doing float divisions.
What Python runs on a CPU that doesn't handle NaN correctly?
Bye,
bearophile   
On 2008-02-10, Mark Dickinson <di******@gmail.com> wrote:
On Feb 9, 5:03 pm, Neal Becker <ndbeck...@gmail.com> wrote:
>If I use C code to turn off the hardware signal, will that stop python from detecting the exception, or is python checking for 0 denominator on its own (hope not, that would waste cycles).
Yes, Python does do an explicit check for a zero denominator. Here's
an excerpt from float_div in Objects/floatobject.c:
if (b == 0.0) {
PyErr_SetString(PyExc_ZeroDivisionError, "float division");
return NULL;
}
This is probably the only sane way to deal with differences in
platform behaviour when doing float divisions.
I've always found that check to be really annoying. Every time
anybody asks about floating point handling, the standard
response is that "Python just does whatever the underlying
platform does". Except it doesn't in cases like this. All my
platforms do exactly what I want for division by zero: they
generate a properly signed INF. Python chooses to override
that (IMO correct) platform behavior with something surprising.
Python doesn't generate exceptions for other floating point
"events" -- why the inconsistency with divide by zero?

--
Grant Edwards  grante at visi.com
Dikkie Dik wrote:
Mark Dickinson wrote:
>On Feb 9, 5:03 pm, Neal Becker <ndbeck...@gmail.com> wrote:
>>If I use C code to turn off the hardware signal, will that stop python from detecting the exception, or is python checking for 0 denominator on its own (hope not, that would waste cycles).
Yes, Python does do an explicit check for a zero denominator. Here's an excerpt from float_div in Objects/floatobject.c:
if (b == 0.0) { PyErr_SetString(PyExc_ZeroDivisionError, "float division"); return NULL; }
This is probably the only sane way to deal with differences in platform behaviour when doing float divisions.
Are you sure?
It could very well be that 1/(smallest possible number)>(greatest
possible number). So I would also trap any errors besides trapping for
the obvious zero division.
What's so special about one? You surely don't expect the Python code to
check for all possible cases of overflow before allowing the hardware to
proceed with a division?
regards
Steve

Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/   
Grant Edwards wrote:
I've always found that check to be really annoying. Every time
anybody asks about floating point handling, the standard
response is that "Python just does whatever the underlying
platform does". Except it doesn't in cases like this. All my
platforms do exactly what I want for division by zero: they
generate a properly signed INF. Python chooses to override
that (IMO correct) platform behavior with something surprising.
Python doesn't generate exceptions for other floating point
"events" -- why the inconsistency with divide by zero?
I'm aware the result is arguable and professional users may prefer +INF for
1./0. However, Python does the least surprising thing: it raises an
exception because everybody has learned at school that 1/0 is not allowed.
From the PoV of a mathematician Python does the right thing, too. 1/0 is
not defined; only lim(1/x) for x -> 0+ is +INF. From the PoV of a
numerics guy it's surprising.
Do you suggest that 1./0. results into +INF [1]? What should be the
result of 1/0?
Christian
[1] http://en.wikipedia.org/wiki/Divisio...ter_arithmetic   
On 2008-02-10, Christian Heimes <li***@cheimes.de> wrote:
Grant Edwards wrote:
>I've always found that check to be really annoying. Every time anybody asks about floating point handling, the standard response is that "Python just does whatever the underlying platform does". Except it doesn't in cases like this. All my platforms do exactly what I want for division by zero: they generate a properly signed INF. Python chooses to override that (IMO correct) platform behavior with something surprising. Python doesn't generate exceptions for other floating point "events" -- why the inconsistency with divide by zero?
I'm aware the result is arguable and professional users may prefer
+INF for 1/0. However Python does the least surprising thing.
It appears that you and I are surprised by different things.
It raises an exception because everybody has learned at school
1/0 is not allowed.
You must have gone to a different school than I did. I learned
that for IEEE floating point operations a/0. is INF with the
same sign as a (except when a==0, then you get a NaN).
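For what it's worth, the IEEE behaviour Grant describes can be seen from Python today via numpy, whose scalars follow 754 semantics (a sketch; assumes numpy is installed):

```python
import numpy as np

# errstate suppresses numpy's warnings, mirroring IEEE non-stop mode
with np.errstate(divide='ignore', invalid='ignore'):
    print(np.float64(1.0) / 0.0)    # inf
    print(np.float64(-1.0) / 0.0)   # -inf
    print(np.float64(0.0) / 0.0)    # nan
```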
>From the PoV of a mathematician Python does the right thing, too. 1/0 is not defined; only lim(1/x) for x -> 0+ is +INF. From the PoV of a numerics guy it's surprising.
Do you suggest that 1./0. results into +INF [1]?
That's certainly what I expected after being told that Python
doesn't do anything special with floating point operations and
leaves it all up to the underlying hardware. Quoting from the
page you linked to, it's also what the IEEE standard specifies:
The IEEE floating-point standard, supported by almost all
modern processors, specifies that every floating-point
arithmetic operation, including division by zero, has a
well-defined result. In IEEE 754 arithmetic, a/0 is positive
infinity when a is positive, negative infinity when a is
negative, and NaN (not a number) when a = 0.
I was caught completely off guard when I discovered that Python
goes out of its way to violate that standard, and it resulted
in my program not working correctly.
What should be the result of 1/0?
I don't really care. An exception is OK with me, but I don't
write code that does integer divide by zero operations.

On Feb 10, 3:29 pm, Grant Edwards <gra...@visi.com> wrote:
platform does". Except it doesn't in cases like this. All my
platforms do exactly what I want for division by zero: they
generate a properly signed INF. Python chooses to override
that (IMO correct) platform behavior with something surprising.
Python doesn't generate exceptions for other floating point
"events" -- why the inconsistency with divide by zero?
But not everyone wants 1./0. to produce an infinity; some people
would prefer an exception.
Python does try to generate exceptions for floating-point events
at least some of the time -- e.g. generating ValueErrors for
sqrt(-1.) and log(-1.) and OverflowError for exp(large_number).
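The existing exceptions Mark mentions are easy to reproduce directly (the syntax below is modern; the thread predates `except ... as ...`):

```python
import math

try:
    math.sqrt(-1.0)
except ValueError as e:
    print("sqrt(-1.):", e)     # math domain error

try:
    math.exp(1000000)
except OverflowError as e:
    print("exp(big):", e)      # math range error
```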
I agree that the current situation is not ideal. I think the ideal
would be to have a floating-point environment much like Decimal's,
where the user has control over whether floating-point exceptions
are trapped (producing Python exceptions) or not (producing
infinities and nans). The main difficulty is in writing reliable
ANSI C that can do this across platforms. It's probably not
impossible, but it is a lot of work.
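The Decimal environment Mark alludes to already works exactly this way: each context carries trap flags that choose between raising an exception and returning an infinity or NaN. A sketch:

```python
# Per-context traps decide between exceptions and inf/nan results.
from decimal import Decimal, localcontext, DivisionByZero, InvalidOperation

with localcontext() as ctx:
    ctx.traps[DivisionByZero] = 0     # untrapped: 1/0 -> Infinity
    ctx.traps[InvalidOperation] = 0   # untrapped: 0/0 -> NaN
    print(Decimal(1) / Decimal(0))    # Infinity
    print(Decimal(0) / Decimal(0))    # NaN

# Outside the context the default traps are restored:
try:
    Decimal(1) / Decimal(0)
except DivisionByZero:
    print("raises DivisionByZero again")
```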
Grant Edwards wrote:
You must have gone to a different school than I did. I learned
that for IEEE floating point operations a/0. is INF with the
same sign as a (except when a==0, then you get a NaN).
I'm not talking about CS and IEEE floating point ops. I was referring to
plain good old math. Python targets both newbies and professionals.
That's the reason for two math modules (math and cmath).
That's certainly what I expected after being told that Python
doesn't do anything special with floating point operations and
leaves it all up to the underlying hardware. Quoting from the
page you linked to, it's also what the IEEE standard specifies:
The IEEE floating-point standard, supported by almost all
modern processors, specifies that every floating-point
arithmetic operation, including division by zero, has a
well-defined result. In IEEE 754 arithmetic, a/0 is positive
infinity when a is positive, negative infinity when a is
negative, and NaN (not a number) when a = 0.
I was caught completely off guard when I discovered that Python
goes out of its way to violate that standard, and it resulted
in my program not working correctly.
Python's a/0 outcome doesn't violate the standards because Python
doesn't promise to follow the IEEE 754 standard in the first place. Mark
and I are working hard to make math in Python more reliable across
platforms. So far we have fixed a lot of problems but we haven't
discussed the a/0 matter.
The best we could give you is an option that makes Python's floats more
IEEE 754 like:
>>> from somemodule import ieee754
>>> with ieee754:
...     r = a/0
...     print r
inf
Christian   
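To be clear, `somemodule` and `ieee754` don't exist; Christian's snippet is hypothetical. One way to picture the proposed semantics in pure Python is a flag plus a division helper (every name below is made up for illustration, and signed zeros are ignored for brevity; Python's real `/` can't be switched from pure Python):

```python
from contextlib import contextmanager

_nonstop = [False]  # stands in for the proposed thread-local FP state

@contextmanager
def ieee754():
    # hypothetical context manager: enable IEEE non-stop mode temporarily
    old = _nonstop[0]
    _nonstop[0] = True
    try:
        yield
    finally:
        _nonstop[0] = old

def fdiv(a, b):
    """Divide like `/`, but return IEEE inf/nan inside `with ieee754():`."""
    if b == 0.0 and _nonstop[0]:
        if a == 0.0:
            return float('nan')                       # 0/0 -> NaN
        return float('inf') if a > 0 else float('-inf')
    return a / b                                      # default: may raise

with ieee754():
    print(fdiv(1.0, 0.0))    # inf
    print(fdiv(0.0, 0.0))    # nan
```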
Christian Heimes <li***@cheimes.de> writes:
Python targets both newbies and professionals.
That's the reason for two math modules (math and cmath).
Ehhh??? cmath is for complex-valued functions, nothing to do with
newbies vs. professionals.   
On Feb 10, 4:56 pm, Grant Edwards <gra...@visi.com> wrote:
Exactly. Especially when Python supposedly leaves floating
point ops up to the platform.
There's a thread at http://mail.python.org/pipermail/pyt...ly/329849.html
that's quite relevant to this discussion. See especially the
exchanges between Michael
Hudson and Tim Peters in the later part of the thread. I like this
bit, from Tim:
"I believe Python should raise exceptions in these cases by default,
because, as above, they correspond to the overflow and
invalid-operation signals respectively, and Python should raise
exceptions on the overflow, invalid-operation, and divide-by-0
signals by default. But I also believe Python _dare not_ do so unless
it also supplies sane machinery for disabling traps on specific
signals (along the lines of the relevant standards here). Many
serious numeric programmers would be livid, and justifiably so, if
they couldn't get non-stop mode back. The most likely x-platform
accident so far is that they've been getting non-stop mode in Python
since its beginning."
Mark   
Grant Edwards wrote:
A more efficient implementation? Just delete the code that
raises the exception and the HW will do the right thing.
Do you really think that the hardware and the C runtime library will do
the right thing? Python runs on lots of platforms and architectures. Some
of the platforms don't have a FPU or don't support hardware acceleration
for floating point ops for user space applications. Some platforms don't
follow IEEE 754 semantics at all.
It took us a lot of effort to get consistent results for edge cases of
basic functions like sin and atan on all platforms. Simply removing
those lines and praying that it works won't do it.
Christian   
Christian Heimes wrote:
Grant Edwards wrote:
>A more efficient implementation? Just delete the code that raises the exception and the HW will do the right thing.
Do you really think that the hardware and the C runtime library will do
the right thing? Python runs on lots of platforms and architectures. Some
of the platforms don't have a FPU or don't support hardware acceleration
for floating point ops for user space applications. Some platforms don't
follow IEEE 754 semantics at all.
It took us a lot of effort to get consistent results for edge cases of
basic functions like sin and atan on all platforms. Simply removing
those lines and praying that it works won't do it.
Christian
I think, ideally, that on a platform that has proper IEEE 754 support we
would rely on the hardware, and only on platforms that don't would we add
extra software emulation.
With proper hardware support, the default would be a hardware
floating-point exception, which Python would translate.
If the user wanted, she should be able to turn it off during some
calculation (but that would not be the default).   
On Feb 10, 5:50 pm, Ben Finney <bignose+hatess...@benfinney.id.au>
wrote:
Mark Dickinson <dicki...@gmail.com> writes:
On Feb 10, 3:29 pm, Grant Edwards <gra...@visi.com> wrote:
platform does". Except it doesn't in cases like this. All my
platforms do exactly what I want for division by zero: they
generate a properly signed INF. Python chooses to override
that (IMO correct) platform behavior with something surprising.
Python doesn't generate exceptions for other floating point
"events" -- why the inconsistency with divide by zero?
But not everyone wants 1./0. to produce an infinity; some people
would prefer an exception.
Special cases aren't special enough to break the rules.
Most people would not want this behaviour either::
>>> 0.1
0.10000000000000001
But the justification for this violation of surprise is "Python just
does whatever the underlying hardware does with floating-point
numbers". If that's the rule, it shouldn't be broken in the special
case of division by zero.
Do you recall what the very next Zen after "Special cases aren't
special enough to break the rules" is?
that's-why-they-call-it-Zen-ly yr's,
Carl Banks   
On 2008-02-10, Ben Finney <bi****************@benfinney.id.au> wrote:
Mark Dickinson <di******@gmail.com> writes:
>>platform does". platforms do exactly what I want for division by zero: they generate a properly signed INF. Python chooses to override that (IMO correct) platform behavior with something surprising. Python doesn't generate exceptions for other floating point "events" -- why the inconsistency with divide by zero?
But not everyone wants 1./0. to produce an infinity; some people would prefer an exception.
Special cases aren't special enough to break the rules.
Most people would not want this behaviour either::
>>> 0.1
0.10000000000000001
But the justification for this violation of surprise is
"Python just does whatever the underlying hardware does with
floating-point numbers". If that's the rule, it shouldn't be
broken in the special case of division by zero.
My feelings exactly.
That's the rule that's always quoted to people asking about
various FP weirdness, but apparently the rule only applies
when/where certain people feel like it.

On Feb 10, 3:29 pm, Grant Edwards <gra...@visi.com> wrote:
On 2008-02-10, Mark Dickinson <dicki...@gmail.com> wrote:
On Feb 9, 5:03 pm, Neal Becker <ndbeck...@gmail.com> wrote:
If I use C code to turn off the hardware signal, will that stop python from
detecting the exception, or is python checking for 0 denominator on its
own (hope not, that would waste cycles).
Yes, Python does do an explicit check for a zero denominator. Here's
an excerpt from float_div in Objects/floatobject.c:
if (b == 0.0) {
PyErr_SetString(PyExc_ZeroDivisionError, "float division");
return NULL;
}
This is probably the only sane way to deal with differences in
platform behaviour when doing float divisions.
I've always found that check to be really annoying. Every time
anybody asks about floating point handling, the standard
response is that "Python just does whatever the underlying
platform does". Except it doesn't in cases like this. All my
platforms do exactly what I want for division by zero: they
generate a properly signed INF. Python chooses to override
that (IMO correct) platform behavior with something surprising.
Python doesn't generate exceptions for other floating point
"events" -- why the inconsistency with divide by zero?
I understand your pain, but Python, like any good general-purpose
language, is a compromise. For the vast majority of programming,
division by zero is a mistake and not merely a degenerate case, so
Python decided to treat it like one.
Carl Banks   
On Feb 10, 7:08 pm, Carl Banks <pavlovevide...@gmail.com> wrote:
I understand your pain, but Python, like any good general-purpose
language, is a compromise. For the vast majority of programming,
division by zero is a mistake and not merely a degenerate case, so
Python decided to treat it like one.
Agreed. For 'normal' users, who haven't encountered the ideas of
infinities and NaNs, floating-point numbers are essentially a
computational model for the real numbers, and operations that are
illegal in the reals (square root of -1, division by zero) should
produce Python exceptions rather than send those users hurrying to
comp.lang.python to complain about something called #IND appearing on
their screens.
But for numerically-aware users it would be nice if it were possible
to do non-stop IEEE arithmetic with infinities and NaNs.
Any suggestions about how to achieve the above-described state of
affairs are welcome!
Mark   
Grant Edwards wrote:
That would be great.
I'm looking forward to reviewing your patch anytime soon. :)
Christian   
On Feb 10, 7:07 pm, Grant Edwards <gra...@visi.com> wrote:
On 2008-02-10, Christian Heimes <li...@cheimes.de> wrote:
>>> from somemodule import ieee754
>>> with ieee754:
...     r = a/0
...     print r
inf
That would be great.
Seriously, in some of my crazier moments I've considered trying to
write a PEP on this, so I'm very interested in figuring out exactly
what it is that people want. The devil's in the details, but the
basic ideas would be:
(1) aim for consistent behaviour across platforms in preference to
exposing differences between platforms
(2) make default arithmetic raise Python exceptions in preference to
returning infs and nans. Essentially, ValueError would be raised
anywhere that IEEE 754(r) specifies raising the divide-by-zero or
invalid signals, and OverflowError would be raised anywhere that IEEE
754(r) specifies raising the overflow signal. The underflow and
inexact signals would be ignored.
(3) have a thread-local floating-point environment available from
Python to make it possible to turn non-stop mode on or off, with the
default being off. Possibly make it possible to trap individual
flags.
Any thoughts on the general directions here? It's far too late to
think about this for Python 2.6 or 3.0, but 3.1 might be a possibility.   
Christian Heimes <li***@cheimes.de> writes:
The two functions are exposed to Python code as math.set_ieee754 and
math.get_ieee754.
Or, better, as a property, 'math.ieee754'.

\ "My, your, his, hers, ours, theirs, its. I'm, you're, he's, 
`\ she's, we're, they're, it's."  Anonymous, 
_o__) alt.sysadmin.recovery 
Ben Finney   
Ben Finney wrote:
Or, better, as a property, 'math.ieee754'.
No, it won't work. It's not possible to have a module property.
Christian   
Christian Heimes wrote:
Mark Dickinson wrote:
>Any suggestions about how to achieve the above-described state of affairs are welcome!
I have worked out a suggestion in three parts.
[snip]
I've implemented my proposal and submitted it to the experimental math
branch: http://svn.python.org/view?rev=60724&view=rev
Christian   
Christian Heimes wrote:
I'm not talking about CS and IEEE floating point ops. I was referring to
plain good old math. Python targets both newbies and professionals.
Maybe there should be another division operator for
use by FP professionals?
/    -- mathematical real division
//   -- mathematical integer division
///  -- IEEE floating-point division (where supported)

Greg   
Christian Heimes wrote:
The state is to be stored and fetched from Python's thread state object.
This could slow down floats a bit because every time f/0. occurs the
state has to be looked up in the thread state object.
An alternative implementation might be to leave zero division
traps turned on, and when one occurs, consult the state to
determine whether to raise an exception or retry that
operation with trapping turned off.
That would only incur the overhead of changing the hardware
setting when a zero division occurs, which presumably is a
relatively rare occurrence.

Greg   
Mark Dickinson wrote:
On Feb 10, 7:07 pm, Grant Edwards <gra...@visi.com> wrote:
>On 2008-02-10, Christian Heimes <li...@cheimes.de> wrote:
>>> from somemodule import ieee754
>>> with ieee754:
...     r = a/0
...     print r
inf
That would be great.
Seriously, in some of my crazier moments I've considered trying to
write a PEP on this, so I'm very interested in figuring out exactly
what it is that people want. The devil's in the details, but the
basic ideas would be:
(1) aim for consistent behaviour across platforms in preference to
exposing differences between platforms
(2) make default arithmetic raise Python exceptions in preference to
returning infs and nans. Essentially, ValueError would be raised
anywhere that IEEE 754(r) specifies raising the divide-by-zero or
invalid signals, and OverflowError would be raised anywhere that IEEE
754(r) specifies raising the overflow signal. The underflow and
inexact signals would be ignored.
(3) have a thread-local floating-point environment available from
Python to make it possible to turn non-stop mode on or off, with the
default being off. Possibly make it possible to trap individual
flags.
Any thoughts on the general directions here? It's far too late to
think about this for Python 2.6 or 3.0, but 3.1 might be a possibility.
You also need to think about how conditionals interact with
quiet NANs. Properly, comparisons like ">" have three possibilities:
True, False, and "raise". Many implementations don't do that well,
which means that you lose trichotomy. "==" has issues; properly,
"+INF" is not equal to itself.
If you support quiet NANs, you need the predicates like "isnan".
I've done considerable work with code that handled floating
point exceptions in complex ways. I've done animation simulations
(see "www.animats.com") where floating point overflow could occur,
but just meant that part of the computation had to be rerun with a
smaller time step. So I'm painfully familiar with the interaction
of IEEE floating point, Windows FPU exception modes, and C++ exceptions.
On x86, with some difficulty, you can turn an FPU exception into a
C++ exception using Microsoft's compilers. But that's not portable.
x86 has exact exceptions, but most other superscalar machines
(PowerPC, Alpha, if anybody cares) do not.
For Python, I'd suggest throwing a Python exception on all errors
recognized by the FPU, except maybe underflow. If you're doing
such serious numbercrunching that you really want to handle NANs,
you're probably not writing in Python anyway.
John Nagle   
On Feb 14, 11:09 pm, John Nagle <na...@animats.com> wrote:
You also need to think about how conditionals interact with
quiet NANs. Properly, comparisons like ">" have three possibilities:
True. There was a recent change to Decimal to make comparisons (other
than !=, ==) with NaNs do the "right thing": that is, raise a Python
exception, unless the Invalid flag is not trapped, in which case they
return False (and also raise the Invalid flag). I imagine something
similar would make sense for floats.
True, False, and "raise". Many implementations don't do that well,
which means that you lose trichotomy. "==" has issues; properly,
"+INF" is not equal to itself.
I don't understand: why would +INF not be equal to itself? Having
INF == INF be True seems like something that makes sense both
mathematically and computationally.
If you support quiet NANs, you need the predicates like "isnan".
They're on their way! math.isnan and math.isinf will be in Python
2.6.
C++ exception using Microsoft's compilers. But that's not portable.
x86 has exact exceptions, but most other superscalar machines
(PowerPC, Alpha, if anybody cares) do not.
Interesting. What do you mean by 'exact exception'?
For Python, I'd suggest throwing a Python exception on all errors
recognized by the FPU, except maybe underflow.
Yes: I think this should be the default behaviour, at least. It was
agreed quite a while ago amongst the Python demigods that the IEEE
overflow, invalid and divide-by-zero signals should ideally raise
Python exceptions, while underflow and inexact should be ignored. The
problem is that that's not what Python does at the moment, and some
people rely on being able to get NaNs and infinities the old ways.
If you're doing
such serious numbercrunching that you really want to handle NANs,
you're probably not writing in Python anyway.
If you're worried about speed, then I agree you probably shouldn't be
writing in Python. But I can imagine there are use-cases for non-stop
arithmetic with nans and infs where speed isn't the topmost concern.
Mark   
On 2008-02-15, Mark Dickinson <di******@gmail.com> wrote:
>If you're doing such serious number-crunching that you really want to handle NANs, you're probably not writing in Python anyway.
I disagree completely. I do a lot of number crunching in
Python where I want IEEE NaN and Inf behavior. Speed is a
completely orthogonal issue.
If you're worried about speed, then I agree you probably
shouldn't be writing in Python.
Even if you are worried about speed, using tools like
numpy can do some pretty cool stuff.
But I can imagine there are use-cases for non-stop arithmetic
with nans and infs where speed isn't the topmost concern.
Frankly, I don't see that speed has anything to do with it at
all. I use Python for number-crunching because it's easy to
program in. When people complain about not getting the right
results, replying with "if you want something fast, don't use
Python" makes no sense.

On Feb 15, 1:38 pm, Grant Edwards <gra...@visi.comwrote:
On 20080215, Mark Dickinson <dicki...@gmail.comwrote:
If you're doing such serious number-crunching that you really
want to handle NANs, you're probably not writing in Python
anyway.
Some dodgy quoting here: that wasn't me!
I disagree completely. I do a lot of number crunching in
Python where I want IEEE NaN and Inf behavior. Speed is a
completely orthogonal issue.
Exactly.
Mark   
On 2008-02-15, Mark Dickinson <di******@gmail.com> wrote:
On Feb 15, 1:38 pm, Grant Edwards <gra...@visi.com> wrote:
>On 2008-02-15, Mark Dickinson <dicki...@gmail.com> wrote:
>If you're doing such serious number-crunching that you really want to handle NANs, you're probably not writing in Python anyway.
Some dodgy quoting here: that wasn't me!
Yup. That's indicated by the extra level of ">". Sorry if that
misled anybody -- I accidentally deleted the nested attribution
line when I was trimming things.
>I disagree completely. I do a lot of number crunching in Python where I want IEEE NaN and Inf behavior. Speed is a completely orthogonal issue.
Exactly.

Mark Dickinson wrote:
On Feb 14, 11:09 pm, John Nagle <na...@animats.com> wrote:
> You also need to think about how conditionals interact with quiet NANs. Properly, comparisons like ">" have three possibilities:
True. There was a recent change to Decimal to make comparisons (other
than !=, ==) with NaNs do the "right thing": that is, raise a Python
exception, unless the Invalid flag is not trapped, in which case they
return False (and also raise the Invalid flag). I imagine something
similar would make sense for floats.
>True, False, and "raise". Many implementations don't do that well, which means that you lose trichotomy. "==" has issues; properly, "+INF" is not equal to itself.
I don't understand: why would +INF not be equal to itself? Having
INF == INF be True seems like something that makes sense both
mathematically and computationally.
[...]
There are an uncountable number of infinities, all different.
regards
Steve

Steve Holden wrote:
Mark Dickinson wrote:
>On Feb 14, 11:09 pm, John Nagle <na...@animats.com> wrote:
>> You also need to think about how conditionals interact with quiet NANs. Properly, comparisons like ">" have three possibilities:
True. There was a recent change to Decimal to make comparisons (other than !=, ==) with NaNs do the "right thing": that is, raise a Python exception, unless the Invalid flag is not trapped, in which case they return False (and also raise the Invalid flag). I imagine something similar would make sense for floats.
>>True, False, and "raise". Many implementations don't do that well, which means that you lose trichotomy. "==" has issues; properly, "+INF" is not equal to itself.
I don't understand: why would +INF not be equal to itself? Having INF == INF be True seems like something that makes sense both mathematically and computationally. [...]
There are an uncountable number of infinities, all different.
+ALEPH0?   
On Feb 14, 11:09 pm, John Nagle <na...@animats.com> wrote:
You also need to think about how conditionals interact with
quiet NANs. Properly, comparisons like ">" have three possibilities:
True, False, and "raise". Many implementations don't do that well,
which means that you lose trichotomy. "==" has issues; properly,
"+INF" is not equal to itself.
I'm pretty sure it is. It certainly is on my machine at the moment:
>>> float(3e300*3e300) == float(2e300*4e300)
True
Are you confusing INF with NAN, which is specified to be not equal to
itself (and, IIRC, is the only thing specified to be not equal to
itself, such that one way to test for NAN is x != x)?
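Carl's x != x test is easy to confirm, and it's also the basis of math.isnan (added in Python 2.6):

```python
import math

inf = float('inf')
nan = float('nan')

print(inf == inf)    # True: +INF is equal to itself
print(nan == nan)    # False: NaN is not

def isnan(x):
    # NaN is the only float unequal to itself, so x != x suffices
    return x != x

print(isnan(nan), isnan(inf), isnan(1.0))   # True False False
print(math.isnan(nan))                      # the stdlib spelling
```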
For Python, I'd suggest throwing a Python exception on all errors
recognized by the FPU, except maybe underflow. If you're doing
such serious number-crunching that you really want to handle NANs,
you're probably not writing in Python anyway.
Even if that were entirely true, there are cases where (for example)
you're using Python to glue together numerical routines in C, but you
need to do some preliminary calculations in Python (where there's no
edit/compile/run cycle but there is slicing and array ops), but want
the same floating point behavior.
IEEE conformance is not an unreasonable thing to ask for, and "you
should be using something else" isn't a good answer to "why not?".
Carl Banks   
On Feb 15, 2:35 pm, Steve Holden <st...@holdenweb.com> wrote:
There are an uncountable number of infinities, all different.
If you're talking about infinite cardinals or ordinals in set theory,
then yes. But that hardly seems relevant to using floating-point as a
model for the doubly extended real line, which has exactly two
infinities.
Mark   
Mark Dickinson wrote:
On Feb 15, 2:35 pm, Steve Holden <st...@holdenweb.com> wrote:
>There are an uncountable number of infinities, all different.
If you're talking about infinite cardinals or ordinals in set theory,
then yes. But that hardly seems relevant to using floating-point as a
model for the doubly extended real line, which has exactly two
infinities.
True enough, but aren't they of indeterminate magnitude? Since infinity
== infinity + delta for any delta, comparison for equality seems a
little specious.
regards
Steve

Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/   
On Feb 15, 5:27 pm, Steve Holden <st...@holdenweb.com> wrote:
True enough, but aren't they of indeterminate magnitude? Since infinity
== infinity + delta for any delta, comparison for equality seems a
little specious.
The equality is okay; it's when you start trying to apply arithmetic
laws like
a+c == b+c implies a == b
that you get into trouble. In other words, the doubly-extended real
line is a perfectly welldefined and wellbehaved *set*, and even a
nice (compact) topological space with the usual topology. It's just
not a field, or a group under addition, or ...
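Mark's point (equality of infinities is fine, but the usual cancellation laws fail) can be seen directly with Python floats, using `math.inf`:

```python
import math

a, b, c = 1.0, 2.0, math.inf

# Equality of infinities is well defined...
print(math.inf == math.inf)  # True

# ...but a + c == b + c no longer implies a == b:
print(a + c == b + c)  # True: both sums are inf
print(a == b)          # False

# Likewise inf has no additive inverse in the extended reals:
print(math.inf - math.inf)  # nan
```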
Mark   
On Thu, 14 Feb 2008 20:09:38 -0800, John Nagle wrote:
For Python, I'd suggest throwing a Python exception on all errors
recognized by the FPU, except maybe underflow. If you're doing such
serious number-crunching that you really want to handle NANs, you're
probably not writing in Python anyway.
Chicken, egg.
The reason people aren't writing in Python is because Python doesn't
support NANs, and the reason Python doesn't support NANs is because the
people who want support for NANs aren't using Python.
Oh, also because it's hard to do it in a portable fashion. But maybe
Python doesn't need to get full platform independence all in one go?
# pseudocode
if sys.platform == "whatever":
    float = IEEE_float
else:
    warnings.warn("no support for NANs, beware of exceptions")
There are usecases for NANs that don't imply the need for full C speed.
Number-crunching doesn't necessarily imply that you need to crunch
billions of numbers in the minimum time possible. Being able to do that
sort of "crunch-lite" in Python would be great.
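For the record, CPython still raises rather than returning NaN on float division by zero (the explicit denominator check quoted earlier in the thread), while a NaN obtained by other means propagates quietly; a small sketch:

```python
# 0.0 / 0.0 raises, per the explicit check in Objects/floatobject.c:
try:
    0.0 / 0.0
except ZeroDivisionError as exc:
    print("raised:", exc)

# A NaN produced without dividing propagates silently instead:
nan = float('inf') - float('inf')  # inf - inf is NaN
print(nan + 1.0)   # nan
print(nan != nan)  # True
```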

Steven   
On Fri, 15 Feb 2008 14:35:34 -0500, Steve Holden wrote:
>I don't understand: why would +INF not be equal to itself? Having INF == INF be True seems like something that makes sense both mathematically and computationally. [...]
There are an uncountable number of infinities, all different.
But the IEEE standard only supports one of them, aleph(0).
Technically two: plus and minus aleph(0).

Steven   
On Feb 15, 7:59 pm, Steven D'Aprano <st...@REMOVETHIS cybersource.com.au> wrote:
On Fri, 15 Feb 2008 14:35:34 -0500, Steve Holden wrote:
I don't understand: why would +INF not be equal to itself? Having INF
== INF be True seems like something that makes sense both
mathematically and computationally.
[...]
There are an uncountable number of infinities, all different.
But the IEEE standard only supports one of them, aleph(0).
Technically two: plus and minus aleph(0).
Not sure that alephs have anything to do with it. And unless I'm
missing something, minus aleph(0) is nonsense. (How do you define the
negation of a cardinal?)
From the fount of all wisdom: (http://en.wikipedia.org/wiki/Aleph_number)
"""The aleph numbers differ from the infinity (∞) commonly found in
algebra and calculus. Alephs measure the sizes of sets; infinity, on
the other hand, is commonly defined as an extreme limit of the real
number line (applied to a function or sequence that "diverges to
infinity" or "increases without bound"), or an extreme point of the
extended real number line. While some alephs are larger than others, ∞
is just ∞."""
Mark   
Paul Rubin wrote:
Mark Dickinson <di******@gmail.com> writes:
>>But the IEEE standard only supports one of them, aleph(0). Technically two: plus and minus aleph(0).
Not sure that alephs have anything to do with it.
They really do not. The extended real line can be modelled in set
theory, but the "infinity" in it is not a cardinal as we would
normally treat them in set theory.
Georg Cantor disagrees. Whether Aleph 1 is the cardinality of the set
of real numbers is provably undecidable. http://mathworld.wolfram.com/ContinuumHypothesis.html   
Jeff Schwab <je**@schwabcenter.com> writes:
They really do not. The extended real line can be modelled in set
theory, but the "infinity" in it is not a cardinal as we would
normally treat them in set theory.
Georg Cantor disagrees. Whether Aleph 1 is the cardinality of the set
of real numbers is provably undecidable.
You misunderstand, the element called "infinity" in the extended real
line has nothing to do with the cardinality of the reals, or of
infinite cardinals as treated in set theory. It's just an element of
a structure that can be described in elementary terms or can be viewed
as sitting inside of the universe of sets described by set theory.
See: http://en.wikipedia.org/wiki/Point_at_infinity
Aleph 1 didn't come up in the discussion earlier either. FWIW, it's
known (provable from the ZFC axioms) that the cardinality of the reals
is an aleph; ZFC just doesn't determine which particular aleph it is.
The Wikipedia article about CH is also pretty good: http://en.wikipedia.org/wiki/Continuum_hypothesis
the guy who proved CH is independent also expressed a belief that it
is actually false.   
On Fri, 15 Feb 2008 17:31:51 -0800, Mark Dickinson wrote:
On Feb 15, 7:59 pm, Steven D'Aprano <st...@REMOVETHIS cybersource.com.au> wrote:
>On Fri, 15 Feb 2008 14:35:34 -0500, Steve Holden wrote:
>I don't understand: why would +INF not be equal to itself? Having INF == INF be True seems like something that makes sense both mathematically and computationally. [...]
There are an uncountable number of infinities, all different.
But the IEEE standard only supports one of them, aleph(0).
Technically two: plus and minus aleph(0).
Not sure that alephs have anything to do with it. And unless I'm
missing something, minus aleph(0) is nonsense. (How do you define the
negation of a cardinal?)
*shrug* How would you like to?
The natural numbers (0, 1, 2, 3, ...) are cardinal numbers too. 0 is the
cardinality of the empty set {}; 1 is the cardinality of the set
containing only the empty set {{}}; 2 is the cardinality of the set
containing a set of cardinality 0 and a set of cardinality 1 {{}, {{}}}
.... and so on.
Since we have generalized the natural numbers to the integers
... -3 -2 -1 0 1 2 3 ...
without worrying about what set has cardinality 1, I see no reason why
we shouldn't generalize negation to the alephs. The question of what set,
if any, has cardinality aleph(0) is irrelevant. Since the traditional
infinity of the real number line comes in a positive and negative
version, and we identify positive ∞ as aleph(0) [see below for why], I
don't believe there's anything wrong with identifying -aleph(0) as -∞.
Another approach might be to treat the cardinals as ordinals. Subtraction
isn't directly defined for ordinals, ordinals explicitly start counting
at zero and only increase, never decrease. But one might argue that since
all ordinals are surreal numbers, and subtraction *is* defined for
surreals, we might identify aleph(0) as the ordinal omega ω; then the
negative of aleph(0) is just -ω, or {{ ... -4, -3, -2, -1 }}. Or in
English... -aleph(0) is the number more negative than every negative
integer, which gratifyingly matches our intuition about negative infinity.
There's lots of handwaving there. I expect a real mathematician could
make it all vigorous. But a lot of it is really missing the point, which
is that the IEEE standard isn't about ordinals, or cardinals, or surreal
numbers, but about floating point numbers as a discrete approximation to
the reals. In the reals, there are only two infinities that we care
about, a positive and negative, and apart from the sign they are
equivalent to aleph(0).
From the fount of all wisdom: (http://en.wikipedia.org/wiki/Aleph_number)
"""The aleph numbers differ from the infinity (∞) commonly found in
algebra and calculus. Alephs measure the sizes of sets; infinity, on the
other hand, is commonly defined as an extreme limit of the real number
line (applied to a function or sequence that "diverges to infinity" or
"increases without bound"), or an extreme point of the extended real
number line. While some alephs are larger than others, ∞ is just ∞."""
That's a very informal definition of infinity. Taken literally, it's also
nonsense, since the real number line has no limit, so talking about the
limit of something with no limit is meaningless. So we have to take it
loosely.
In fact, it isn't true that "∞ is just ∞" even in the two examples they
discuss. There are TWO extended real number lines: the projectively
extended real numbers, and the affinely extended real numbers. In the
projective extension to the reals, there is only one ∞ and it is
unsigned. In the affine extension, there are +∞ and -∞.
If you identify ∞ as "the number of natural numbers", that is, the number
of numbers in the sequence 0, 1, 2, 3, 4, ... then that's precisely what
aleph(0) is. If there's a limit to the real number line in any sense at
all, it is the same limit as for the integers (since the integers go all
the way to the end of the real number line).
(But note that there are more reals between 0 and ∞ than there are
integers, even though both go to the same limit: the reals are more
densely packed.)

Steven   
On Feb 16, 7:08 pm, Steven D'Aprano <st...@REMOVETHIS cybersource.com.au> wrote:
On Fri, 15 Feb 2008 17:31:51 0800, Mark Dickinson wrote:
Not sure that alephs have anything to do with it. And unless I'm
missing something, minus aleph(0) is nonsense. (How do you define the
negation of a cardinal?)
*shrug* How would you like to?
Since we have generalized the natural numbers to the integers
... -3 -2 -1 0 1 2 3 ...
without worrying about what set has cardinality 1, I see no reason why
we shouldn't generalize negation to the alephs.
The reason is that it doesn't give a useful result. There's a natural
process for turning a commutative monoid into a group (it's the
adjoint to the forgetful functor from groups to commutative monoids).
Apply it to the "set of cardinals", leaving aside the settheoretic
difficulties with the idea of the "set of cardinals" in the first
place, and you get the trivial group.
There's lots of handwaving there. I expect a real mathematician could
make it all vigorous.
Rigorous? Yes, I expect I could.
And surreal numbers are something entirely different again.
That's a very informal definition of infinity. Taken literally, it's also
nonsense, since the real number line has no limit, so talking about the
limit of something with no limit is meaningless. So we have to take it
loosely.
The real line, considered as a topological space, has limit points.
Two of them.
Mark   
On Feb 16, 7:30 pm, Mark Dickinson <dicki...@gmail.com> wrote:
The real line, considered as a topological space, has limit points.
Two of them.
Ignore that. It was nonsense. A better statement: the completion (in
the sense of lattices) of the real numbers is (isomorphic to) the
doubly-extended real line. It's in this sense that +infinity and
-infinity can be considered limits.
I've no clue where your (Steven's) idea that 'all ordinals are surreal
numbers' came from. They're totally unrelated.
Sorry. I haven't had any dinner. I get tetchy when I haven't had any
dinner.
Usenet'ly yours,
Mark   
On Sat, 16 Feb 2008 17:47:39 -0800, Mark Dickinson wrote:
I've no clue where your (Steven's) idea that 'all ordinals are surreal
numbers' came from. They're totally unrelated.
Tell that to John Conway.
[quote]
Just as the *real* numbers fill in the gaps between the integers, the
*surreal* numbers fill in the gaps between Cantor's ordinal numbers. We
get them by generalizing our use of the {} notation for the ordinal
numbers.
[...]
The ordinal numbers are those where there aren't any numbers to the right
of the bar:
{} = 0, the simplest number of all
{0} = 1, the simplest number greater than 0
{0,1} = 2, the simplest number greater than 1 (and 0)
and so on.
[end quote]
"The Book of Numbers", John W Conway and Richard K Guy, Copernicus Books,
1996, p.283.
I trust I don't have to explain this to Mark, but for the benefit of
anyone else reading, Conway invented surreal numbers.

Steven   
On Feb 16, 9:39 pm, Steven D'Aprano <st...@REMOVETHIS cybersource.com.au> wrote:
On Sat, 16 Feb 2008 17:47:39 -0800, Mark Dickinson wrote:
I've no clue where your (Steven's) idea that 'all ordinals are surreal
numbers' came from. *They're totally unrelated.
Tell that to John Conway.
Apparently I also get stupid when I haven't had any dinner. Or
perhaps dinner has nothing to do with it. I was thinking of the
nonstandard reals.
You're absolutely right, and I hereby forfeit my Ph.D. (for the second
time today, as it happens).
Mark