
Python's "only one way to do it" philosophy isn't good?

I've just read an article "Building Robust Systems" by Gerald Jay
Sussman. The article is here:
http://swiss.csail.mit.edu/classes/s...st-systems.pdf

In it there is a footnote which says:
"Indeed, one often hears arguments against building flexibility into an
engineered system. For example, in the philosophy of the computer
language Python it is claimed: 'There should be one -- and preferably
only one -- obvious way to do it.'[25] Science does not usually proceed
this way: In classical mechanics, for example, one can construct
equations of motion using Newtonian vectoral mechanics, or using a
Lagrangian or Hamiltonian variational formulation.[30] In the cases
where all three approaches are applicable they are equivalent, but each
has its advantages in particular contexts."

I'm not sure how reasonable this statement is and personally I like
Python's simplicity, power and elegance. So I put it here and hope to
see some inspiring comments.

Jun 9 '07 #1
On Sat, 09 Jun 2007 02:49:03 -0300, WaterWalk <to********@163.com>
wrote:
I've just read an article "Building Robust Systems" by Gerald Jay
Sussman. The article is here:
http://swiss.csail.mit.edu/classes/s...st-systems.pdf

In it there is a footnote which says:
"Indeed, one often hears arguments against building flexibility into an
engineered system. For example, in the philosophy of the computer
language Python it is claimed: 'There should be one -- and preferably
only one -- obvious way to do it.'[25] Science does not usually proceed
this way: In classical mechanics, for example, one can construct
equations of motion using Newtonian vectoral mechanics, or using a
Lagrangian or Hamiltonian variational formulation.[30] In the cases
where all three approaches are applicable they are equivalent, but each
has its advantages in particular contexts."

I'm not sure how reasonable this statement is and personally I like
Python's simplicity, power and elegance. So I put it here and hope to
see some inspiring comments.
I think the key is the word you omitted in the subject: "obvious". There
should be one "obvious" way to do it. From what I can remember of my first
love (Physics): if you have a small ball moving inside a spherical cup, it
would be almost crazy to use cartesian orthogonal coordinates and Newton's
laws to solve it - the "obvious" way would be to use spherical coordinates
and the Lagrangian formulation (or at least I hope so - surely
knowledgeable people will find it more "obvious" which way is right).
All classical mechanics formulations may be equivalent, but in certain
cases one is much more suited than the others.
--
Gabriel Genellina

Jun 9 '07 #2

"WaterWalk" <to********@163.comwrote in message
news:11**********************@r19g2000prf.googlegr oups.com...
| I've just read an article "Building Robust Systems" by Gerald Jay
| Sussman. The article is here:
|
http://swiss.csail.mit.edu/classes/s...st-systems.pdf
|
| In it there is a footnote which says:
| "Indeed, one often hears arguments against building flexibility into an
| engineered system. For example, in the philosophy of the computer
| language Python it is claimed:

For him to imply that Python is anti-flexibility is wrong. Very wrong.
He should look in a mirror. See below.

| 'There should be one -- and preferably only one -- obvious way to do
| it.'[25] Science does not usually proceed this way: In classical
| mechanics, for example, one can construct equations of motion using
| Newtonian vectoral mechanics, or using a Lagrangian or Hamiltonian
| variational formulation.[30] In the cases where all three approaches
| are applicable they are equivalent, but each has its advantages in
| particular contexts."

And in those contexts, one would hope that the method with advantages is
somehow the obvious way to do it. Otherwise beginners might become like
Buridan's ass.

So I dispute that science is as different as he claims. And I do not see
any real value in the statement in that I do not see it saying anything
useful to the reader, at least not in this snippet.

| I'm not sure how reasonable this statement is and personally I like
| Python's simplicity, power and elegance. So I put it here and hope to
| see some inspiring comments.

How much has Mr. Sussman actually programmed in Python and what actual
problems did he find with the *implementation* of the philosophy? Without
something concrete, the complaint is rather bogus.

But here is what I find funny (and a bit maddening): G. J. Sussman is one
of the inventors of the Lisp dialect Scheme, a minimalist language that for
some things has only one way to do it, let alone one obvious way. Scheme
would be like physics with only one of the three ways. After all, if they
are equivalent, only one is needed.

For example, consider scanning the items in a collection. In Python, you
have a choice of recursion (normal or tail), while loops, and for
statements. For statements are usually the obvious way, but the other two
are available for personal taste and for special situations such as
walking a tree (where one might use recursion to write the generator that
can then be used by for loops). In Scheme, I believe you just have
recursion. Since iteration and recursion are equivalent, why have both?
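
For illustration, a sketch of that tree-walking case (my example, not
anything from Sussman's paper): recursion writes the generator, and an
ordinary for statement consumes it.

def walk(tree):
    # recursion does the work of flattening the nested structure...
    for node in tree:
        if isinstance(node, list):
            for leaf in walk(node):
                yield leaf
        else:
            yield node

# ...and a plain for loop is the obvious way to consume it
for leaf in walk([1, [2, [3, 4]], 5]):
    print leaf        # 1 2 3 4 5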

Terry Jan Reedy

Jun 9 '07 #3
Terry Reedy wrote:
In Python, you have a choice of recursion (normal or tail)
Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not. Does python have fully optimized tail recursion as
described in the tail recursion Wikipedia entry? Under what
circumstances can one count on the python interpreter recognizing the
possibility for optimized tail recursion?

James
=====

Disclaimer: Mention of more than one programming language in post does
not imply author's desire to begin language v. language holy battle. The
author does not program in [some or all of the other languages mentioned
aside from the language topical to the newsgroup] and has no opinions on
the merits or shortcomings of said language or languages.

=====
Jun 9 '07 #4
....
In scheme, I believe you just have recursion.
....
Cousin TJR ....

I'm a total scheme rookie starting only about 3 days ago
and one of the mechanisms I went looking for was a technique
for iteration ....

Found in the scheme docs about iteration supplied
via the reduce package ....

"Iterate and reduce are extensions of named-let
for writing loops that walk down one or more sequences
such as the elements of a list or vector, the characters
read from a port, or arithmetic series .... "

The following scheme session illustrates a trivial example ....

> , open reduce

> ( define ( list_loop this_list )
      ( iterate loop
          ( ( list* this_item this_list ) )               ; iterate expression
          ( ( new_list '( ) ) )                           ; state expression
          ( loop ( cons ( * 2 this_item ) new_list ) )    ; body expression
          ( reverse new_list ) ) )                        ; final expression
; no values returned

> ( define L '( 1 2 3 4 5 ) )
; no values returned

> ( define result_i ( list_loop L ) )
; no values returned

> result_i
'(2 4 6 8 10)

However, just as in Python the map function
might be both easier to code and more readable
in many cases ....

> ( define ( x2 n ) ( * 2 n ) )
; no values returned

> ( define result_m ( map x2 L ) )
; no values returned

> result_m
'(2 4 6 8 10)

Note ....

No lambdas in my scheme code either .... ;-)
--
Stanley C. Kitching
Human Being
Phoenix, Arizona
Jun 9 '07 #5
Gabriel Genellina wrote:
From what I can
remember of my first love (Physics): if you have a small ball
moving inside a spherical cup, it would be almost crazy to use
cartesian orthogonal coordinates and Newton's laws to solve it -
the "obvious" way would be to use spherical coordinates and the
Lagrangian formulation (or at least I hope so
Yep, that's right.
- surely knowledgeable people will find more "obviously" which is
the right way).
No, this case is IMHO almost classical. Movement with planar
constraints can be solved quite easily using Lagrange.
All classical mechanics formulations may be equivalent, but
in certain cases one is much more suited than the others.
Or: Lagrange is the only obvious way to describe movement with
constraints.

Regards,
Björn

--
BOFH excuse #80:

That's a great computer you have there; have you considered how it
would work as a BSD machine?

Jun 9 '07 #6
On Jun 9, 12:16 pm, James Stroud <jstr...@mbi.ucla.edu> wrote:
Terry Reedy wrote:
In Python, you have a choice of recursion (normal or tail)

Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not.
Proof by rumour? You can use first class continuations in Ruby to
eliminate tail calls and to define higher order function wrappers
(like Python decorators). But I wouldn't call this "fully
optimized".
Does python have fully optimized tail recursion as
described in the tail recursion Wikipedia entry?
No.

Jun 9 '07 #7

"James Stroud" <js*****@mbi.ucla.eduwrote in message
news:E5***************@newssvr17.news.prodigy.net. ..
| Terry Reedy wrote:
| In Python, you have a choice of recursion (normal or tail)
|
| Please explain this.

I am working on a paper for Python Papers that will. It was inspired by
the question 'why doesn't Python do tail-recursion optimization'.

tjr

Jun 9 '07 #8

"Cousin Stanley" <co***********@hotmail.comwrote in message
news:11**************@sp12lax.superfeed.net...
| In scheme, I believe you just have recursion.

I was referring to the original minimalist core language developed by Guy
Steele and Sussman and as I remember it being used in the original edition
of SICP (see Wikipedia). I also remember statements explaining (truthfully)
that builtin iteration is not needed because it can be defined in terms of
tail recursion, which in Scheme is required to be optimized to be just as
space efficient.

I see in Wikipedia that Scheme has do loops (all versions?), but I do not
know if that was original or added. If the former, it was de-emphasized.
Hence my belief, even if mistaken.

| Cousin TJR ....
|
| I'm a total scheme rookie starting only about 3 days ago
| and one of the mechanisms I went looking for was a technique
| for iteration ....
|
| Found in the scheme docs about iteration supplied
| via the reduce package ....

Right. An add-on library package, not part of the core;-)

In Python, modules can add functions (and classes, etc), but not statement
syntax, so adding while statements defined in terms of recursion is not
possible.
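
To illustrate: you can mimic the statement with a plain function (a
sketch with made-up names, not anything from the thread), but you cannot
make it syntax, and without tail-call optimization each pass costs a
stack frame:

def while_(cond, body):
    # a "while statement" built from recursion; the recursive call is in
    # tail position, but CPython still keeps one frame per iteration
    if cond():
        body()
        while_(cond, body)

counter = [0]
def step():
    print counter[0]
    counter[0] += 1

while_(lambda: counter[0] < 3, step)   # prints 0, 1, 2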

Scheme is quite elegant and worth learning at least the basics of. My only
point was that Sussman is an odd person to be criticizing (somewhat
mistakenly) Python for being minimalist.

tjr

Jun 9 '07 #9
Bjoern Schliessmann wrote:
Gabriel Genellina wrote:

>>For what I can
remember of my first love (Physics): if you have a small ball
moving inside a spherical cup, it would be almost crazy to use
cartesian orthogonal coordinates and Newton's laws to solve it -
the "obvious" way would be to use spherical coordinates and the
Lagrangian formulation (or at least I hope so
Having actually solved that problem in simulation, I can report
that it's easier in Cartesian coordinates. I used to use this as
a test of Falling Bodies, one of the first physics engines that
really worked on the hard cases.

Spherical coordinates seem attractive until you have to deal
with friction between the ball and cup. The ball is rotating, too,
and may be slipping with respect to the cup. Then the simple
Physics 101 approach isn't so simple any more.

John Nagle
Animats
Jun 9 '07 #10
James Stroud wrote:
Terry Reedy wrote:
>In Python, you have a choice of recursion (normal or tail)

Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not. Does python have fully optimized tail recursion as
described in the tail recursion Wikipedia entry? Under what
circumstances can one count on the python interpreter recognizing the
possibility for optimized tail recursion?
Note that Terry said that you could do normal or tail recursion, he
didn't claim that either were optimized. As for why tail calls are not
optimized out, it was decided that being able to have the stack traces
(with variable information, etc.) was more useful than offering tail
call optimization (do what I say).
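
A concrete picture of the trade-off (plain CPython behaviour; the
countdown function is my example, not Josiah's):

import sys

def countdown(n):
    if n == 0:
        return 'done'
    return countdown(n - 1)    # a tail call, but CPython keeps every frame

print countdown(10)            # 'done'
print sys.getrecursionlimit()  # 1000 by default
countdown(100000)              # RuntimeError: maximum recursion depth exceeded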

- Josiah
Jun 9 '07 #11
Josiah Carlson <jo************@sbcglobal.net> writes:
James Stroud wrote:
>Terry Reedy wrote:
>>In Python, you have a choice of recursion (normal or tail)

Please explain this. I remember reading on this newsgroup that an advantage
of ruby (wrt python) is that ruby has tail recursion, implying that python
does not. Does python have fully optimized tail recursion as described in
the tail recursion Wikipedia entry? Under what circumstances can one count
on the python interpreter recognizing the possibility for optimized tail
recursion?

Note that Terry said that you could do normal or tail recursion, he didn't
claim that either were optimized.
Well yeah, but without the implication how do the two words "or tail" add to
the information content of the sentence?
As for why tail calls are not optimized out, it was decided that being able
to have the stack traces (with variable information, etc.) was more useful
than offering tail call optimization
I don't buy this. What's more important, making code not fail arbitrarily (and
thus making approaches to certain problems feasible that otherwise wouldn't
be) or making it a bit easier to debug code that will fail arbitrarily?
Why not only do tail-call optimization in .pyo files and get the best of both
worlds?
(do what I say).
Where did you say run out of memory and fail? More importantly how do you say
"don't run out of memory and fail"?

'as
Jun 9 '07 #12
Alexander Schmolck wrote:
Josiah Carlson <jo************@sbcglobal.net> writes:
>James Stroud wrote:
>>Terry Reedy wrote:
In Python, you have a choice of recursion (normal or tail)
Please explain this. I remember reading on this newsgroup that an advantage
of ruby (wrt python) is that ruby has tail recursion, implying that python
does not. Does python have fully optimized tail recursion as described in
the tail recursion Wikipedia entry? Under what circumstances can one count
on the python interpreter recognizing the possibility for optimized tail
recursion?
Note that Terry said that you could do normal or tail recursion, he didn't
claim that either were optimized.

Well yeah, but without the implication how do the two words "or tail" add to
the information content of the sentence?
Normal and tail recursion are different, based upon whether or not one
can technically be considered to be done with the stack frame.

def normal(arg):
    # body recursion: the multiply happens after the recursive call
    # returns, so the frame is still needed
    if arg == 1:
        return 1
    return arg * normal(arg-1)

def tail(arg, default=1):
    # tail recursion: the recursive call is the last thing evaluated
    if arg == 1:
        return arg * default
    return tail(arg-1, default*arg)

>As for why tail calls are not optimized out, it was decided that being able
to have the stack traces (with variable information, etc.) was more useful
than offering tail call optimization

I don't buy this. What's more important, making code not fail arbitrarily (and
thus making approaches to certain problems feasible that otherwise wouldn't
be) or making it a bit easier to debug code that will fail arbitrarily?
Why not only do tail-call optimization in .pyo files and get the best of both
worlds?
I didn't make the decisions, I'm just reporting what was decided upon.

Personally, I have never found the lack of tail call optimization an
issue for two reasons. The first is because I typically write such
programs in an iterative fashion. And generally for recursion for which
tail call optimization doesn't work (the vast majority of recursive
algorithms I use), I typically write the algorithm recursively first,
verify its correctness, then convert it into an iterative version with
explicit stack. I find it is good practice, and would be necessary
regardless of whether Python did tail call optimization or not.
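
A sketch of that workflow (my example, not Josiah's code): write the
algorithm recursively, then swap the call stack for an explicit one.

# recursive version first: sum the leaves of a nested-list tree
def sum_tree(node):
    if isinstance(node, list):
        return sum(sum_tree(child) for child in node)
    return node

# then the same algorithm with an explicit stack; no recursion limit,
# bounded only by available memory
def sum_tree_iter(root):
    total, stack = 0, [root]
    while stack:
        node = stack.pop()
        if isinstance(node, list):
            stack.extend(node)
        else:
            total += node
    return total

print sum_tree([1, [2, [3, 4]], 5]), sum_tree_iter([1, [2, [3, 4]], 5])  # 15 15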

>(do what I say).

Where did you say run out of memory and fail? More importantly how do you say
"don't run out of memory and fail"?
By virtue of Python's underlying implementation, Python "does what I
say", it doesn't "do what I mean". While I may not have explicitly
stated "run out of stack space", the underlying implementation *has*
limited stack space. You are stating that when you write a tail
recursive program, you want Python to optimize it away by destroying the
stack frames. And that's fine. There are tail-call optimization
decorators available, and if you dig into sourceforge, there should even
be a full patch to make such things available in a previous Python.

However, Python is not Lisp and is not partially defined by infinite
recursion (see sys.setrecursionlimit() ). Instead, it is limited by the
C call stack (in CPython), and makes guarantees regarding what will
always be available during debugging (the only thing that optimization
currently does in Python at present is to discard docstrings). If you
want to change what is available for debugging (to make things like tail
call optimization possible), you are free to write and submit a PEP.

In the meantime, you may need to do some source conversion.
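
The decorators Josiah mentions typically use a trampoline; a minimal
sketch of the idea (made-up names, not any particular published recipe;
it assumes the function's real return values are never callable):

def trampoline(func):
    def wrapper(*args):
        result = func(*args)
        while callable(result):   # keep bouncing until a real value comes back
            result = result()
        return result
    return wrapper

def _fact(n, acc=1):
    if n <= 1:
        return acc
    return lambda: _fact(n - 1, acc * n)   # return a thunk instead of recursing

fact = trampoline(_fact)
print len(str(fact(5000)))   # works; plain recursion would hit the limit first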
- Josiah
Jun 9 '07 #13
On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:
>As for why tail calls are not optimized out, it was decided that being able
to have the stack traces (with variable information, etc.) was more useful
than offering tail call optimization

I don't buy this.
Do you mean you don't believe the decision was made, or you don't agree
with the decision?

What's more important, making code not fail arbitrarily (and
thus making approaches to certain problems feasible that otherwise
wouldn't be) or making it a bit easier to debug code that will fail
arbitrarily? Why not only do tail-call optimization in .pyo files and
get the best of both worlds?
Are you volunteering? If you are, I'm sure your suggestion will be
welcomed gratefully.

>(do what I say).

Where did you say run out of memory and fail? More importantly how do
you say "don't run out of memory and fail"?
If we can live with a certain amount of "arbitrary failures" in simple
arithmetic, then the world won't end if tail recursion isn't optimized
away by the compiler. You can always hand-optimize it yourself.

dont_run_out_of_memory_and_fail = 10**(10**100) # please?
--
Steven.

Jun 10 '07 #14
On Sat, 09 Jun 2007 22:52:32 +0000, Josiah Carlson wrote:
the only thing that optimization
currently does in Python at present is to discard docstrings
Python, or at least CPython, does more optimizations than that. Aside from
run-time optimizations like interned strings etc., there are a small
number of compiler-time optimizations done.

Running Python with the -O (optimize) flag tells Python to ignore
assert statements. Using -OO additionally removes docstrings.

Regardless of the flag, in function (and class?) definitions like the
following:

def function(args):
    "Doc string"
    x = 1
    s = "this is a string constant"
    "and this string is treated as a comment"
    return s*x

The string-comment is ignored by the compiler just like "real" comments.
(The same doesn't necessarily hold for other data types.)
Some dead code is also optimized away:
>>> import dis
>>> def function():
...     if 0:
...         print "dead code"
...     return 2
...
>>> dis.dis(function)
  4           0 LOAD_CONST               1 (2)
              3 RETURN_VALUE
Lastly, in recent versions (starting with 2.5 I believe) Python includes a
peephole optimizer that implements simple constant folding:

# Python 2.4.3
>>> dis.dis(lambda: 1+2)
  1           0 LOAD_CONST               1 (1)
              3 LOAD_CONST               2 (2)
              6 BINARY_ADD
              7 RETURN_VALUE

# Python 2.5
>>> dis.dis(lambda: 1+2)
  1           0 LOAD_CONST               2 (3)
              3 RETURN_VALUE
The above all holds for CPython. Other Pythons may implement other
optimizations.

--
Steven.

Jun 10 '07 #15
Kay Schluehr wrote:
On Jun 9, 12:16 pm, James Stroud <jstr...@mbi.ucla.edu> wrote:
>Terry Reedy wrote:
>>In Python, you have a choice of recursion (normal or tail)
Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not.

Proof by rumour?
"Proof" if you define "proof" by asking for clarification about a vague
recollection of an even more vague posting. I think now I'll prove
Fermat's Last Theorem by hailing a cab.
Jun 10 '07 #16
On Jun 9, 12:16 pm, James Stroud <jstr...@mbi.ucla.edu> wrote:
Terry Reedy wrote:
In Python, you have a choice of recursion (normal or tail)

Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not. Does python have fully optimized tail recursion as
described in the tail recursion Wikipedia entry? Under what
circumstances can one count on the python interpreter recognizing the
possibility for optimized tail recursion?
I'm afraid Terry is wrong here, at least if he meant that CPython had
tail recursion *optimization*.

(and just for those who don't know yet, it's not a shortcoming, it's a
design choice.)
Jun 10 '07 #17
Steven D'Aprano wrote:
On Sat, 09 Jun 2007 22:52:32 +0000, Josiah Carlson wrote:
>the only thing that optimization
currently does in Python at present is to discard docstrings

Python, or at least CPython, does more optimizations than that. Aside from
run-time optimizations like interned strings etc., there are a small
number of compiler-time optimizations done.

Running Python with the -O (optimize) flag tells Python to ignore
assert statements. Using -OO additionally removes docstrings.
Oh yeah, asserts. I never run with -O, and typically don't use asserts,
so having or not having either isn't a big deal for me.
Regardless of the flag, in function (and class?) definitions like the
following:

def function(args):
    "Doc string"
    x = 1
    s = "this is a string constant"
    "and this string is treated as a comment"
    return s*x

The string-comment is ignored by the compiler just like "real" comments.
(The same doesn't necessarily hold for other data types.)
I would guess it is because some other data types may have side-effects.
On the other hand, a peephole optimizer could be written to trim out
unnecessary LOAD_CONST/POP_TOP pairs.
Some dead code is also optimized away:
Obviously dead code removal happens regardless of optimization level in
current Pythons.
Lastly, in recent versions (starting with 2.5 I believe) Python includes a
peephole optimizer that implements simple constant folding:
Constant folding happens regardless of optimization level in current
Pythons.
So really, assert and docstring removals. Eh.

- Josiah
Jun 10 '07 #18
br*****************@gmail.com wrote:
On Jun 9, 12:16 pm, James Stroud <jstr...@mbi.ucla.edu> wrote:
>>Terry Reedy wrote:
>>>In Python, you have a choice of recursion (normal or tail)

Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not. Does python have fully optimized tail recursion as
described in the tail recursion Wikipedia entry? Under what
circumstances can one count on the python interpreter recognizing the
possibility for optimized tail recursion?


I'm afraid Terry is wrong here, at least if he meant that CPython had
tail recursion *optimization*.

(and just for those who don't know yet, it's not a shortcoming, it's a
design choice.)

Jun 10 '07 #19
Josiah Carlson wrote:
Steven D'Aprano wrote:
>On Sat, 09 Jun 2007 22:52:32 +0000, Josiah Carlson wrote:
>>the only thing that optimization currently does in Python at present
is to discard docstrings


Python, or at least CPython, does more optimizations than that. Aside
from run-time optimizations like interned strings etc., there are a
small number of compiler-time optimizations done.

Running Python with the -O (optimize) flag tells Python to ignore
assert statements. Using -OO additionally removes docstrings.
....
>
I would guess it is because some other data types may have side-effects.
On the other hand, a peephole optimizer could be written to trim out
unnecessary LOAD_CONST/POP_TOP pairs.
>Some dead code is also optimized away:

Obviously dead code removal happens regardless of optimization level in
current Pythons.
>Lastly, in recent versions (starting with 2.5 I believe) Python
>includes a peephole optimizer that implements simple constant folding:

Constant folding happens regardless of optimization level in current
Pythons.
So really, assert and docstring removals. Eh.
It's hard to optimize Python code well without global analysis.
The problem is that you have to make sure that a long list of "weird
things", like modifying code or variables via getattr/setattr, aren't
happening before doing significant optimizations. Without that,
you're doomed to a slow implementation like CPython.

ShedSkin, which imposes some restrictions, is on the right track here.
The __slots__ feature is useful but doesn't go far enough.

I'd suggest defining "simpleobject" as the base class, instead of "object",
which would become a derived class of "simpleobject". Objects descended
directly from "simpleobject" would have the following restrictions:

- "getattr" and "setattr" are not available (as with __slots__)
- All class member variables must be initialized in __init__, or
  in functions called by __init__. The effect is like __slots__,
  but you don't have to explicitly write declarations.
- Class members are implicitly typed with the type of the first
  thing assigned to them. This is the ShedSkin rule. It might
  be useful to allow assignments like

      self.str = None(string)

  to indicate that a slot holds strings, but currently has the null
  string.
- Function members cannot be modified after declaration. Subclassing
  is fine, but replacing a function member via assignment is not.
  This allows inlining of function calls to small functions, which
  is a big win.
- Private function members (self._foo and self.__foo) really are
  private and are not callable outside the class definition.

You get the idea. This basically means that "simpleobject" objects have
roughly the same restrictions as C++ objects, for which heavy compile-time
optimization is possible. Most Python classes already qualify for
"simpleobject". And this approach doesn't require un-Pythonic stuff like
declarations or extra "decorators".
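
For comparison, the existing __slots__ feature the proposal builds on
(standard Python; the Point class is my example):

class Point(object):
    __slots__ = ('x', 'y')      # fixed attribute set; no per-instance __dict__
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1.0, 2.0)
p.z = 3.0   # AttributeError: 'Point' object has no attribute 'z'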

With this, the heavy optimizations are possible. Strength reduction. Hoisting
common subexpressions out of loops. Hoisting reference count updates out of
loops. Keeping frequently used variables in registers. And elimination of
many unnecessary dictionary lookups.

Python could get much, much faster. Right now CPython is said to be 60X slower
than C. It should be possible to get at least an order of magnitude over
CPython.

John Nagle
Jun 10 '07 #20

<br*****************@gmail.com> wrote in message
news:11**********************@m36g2000hse.googlegroups.com...
| Terry Reedy wrote:
| In Python, you have a choice of recursion (normal or tail)

[snip Stroud questions]

| I'm afraid Terry is wrong here, at least if he meant that CPython had
| tail recursion *optimization*.

NO!!!
I did not mean that or imply that in any way.

| (and just for those who don't know yet, it's not a shortcoming, it's a
| design choice.)

And I already noted in a followup that I am working on a Python Papers
paper explaining that choice, including Guido's claim that 'for statements
are better'.

So frankly I am a little annoyed that you dragged my name into your answer
to Stroud when you should have succinctly said 'No, never', or better,
nothing at all, as someone else already did say that. Read more of the
thread before jumping in and ascribing ignorance to people.

Terry Jan Reedy

Jun 10 '07 #21
Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> writes:
On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:
>>As for why tail calls are not optimized out, it was decided that being able
to have the stack traces (with variable information, etc.) was more useful
than offering tail call optimization

I don't buy this.

Do you mean you don't believe the decision was made, or you don't agree
with the decision?
Neither. I don't believe the rationale stated in this thread to be the true
reason.
Are you volunteering? If you are, I'm sure your suggestion will be welcomed
gratefully.
I rather doubt it. Guido has stated quite clearly that he's not interested in
incorporating this feature.
>>(do what I say).

Where did you say run out of memory and fail? More importantly how do
you say "don't run out of memory and fail"?

If we can live with a certain amount of "arbitrary failures" in simple
arithmetic,
I prefer not to, and thus when possible avoid using languages where ``a +
b`` is liable to fail arbitrarily (such as C, where the behavior will often be
undefined).
then the world won't end if tail recursion isn't optimized away by the
compiler.
I'd personally much rather have arithmetic working properly. Unfortunately
this is not an option at the moment, although quite a few smart people are
working on it and although it is an immensely costly problem.
You can always hand-optimize it yourself.
Not tail calls, in general, no.

'as
Jun 11 '07 #22
On Mon, 11 Jun 2007 01:28:09 +0100, Alexander Schmolck wrote:
Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> writes:
>On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:
>>>As for why tail calls are not optimized out, it was decided that being able
to have the stack traces (with variable information, etc.) was more useful
than offering tail call optimization

I don't buy this.

Do you mean you don't believe the decision was made, or you don't agree
with the decision?

Neither. I don't believe the rationale stated in this thread to be the true
reason.

Don't keep us in suspense. What do you believe is the true reason?

>Are you volunteering? If you are, I'm sure your suggestion will be welcomed
gratefully.

I rather doubt it. Guido has stated quite clearly that he's not
interested in incorporating this feature.
He's not the only one who gets to make these decisions. But even if he
uses his veto to prevent tail-recursion optimization from being put into
the main branch, there are other options.
>>>(do what I say).

Where did you say run out of memory and fail? More importantly how do
you say "don't run out of memory and fail"?

If we can live with a certain amount of "arbitrary failures" in simple
arithmetic,

I prefer not to, and thus when possible avoid using languages where
``a + b`` is liable to fail arbitrarily (such as C, where the behavior
will often be undefined).
That's not the sort of arbitrary failure I was discussing, but for that
matter Python is one of those languages. Perhaps Python is not the
language for you?

Correct me if I'm wrong, but surely it is C++ that can have arbitrary
behaviour for "a + b", not C?
>then the world won't end if tail recursion isn't optimized away by the
compiler.

I'd personally much rather have arithmetic working properly.
Unfortunately this is not an option at the moment, although quite a few
smart people are working on it and although it is an immensely costly
problem.
>You can always hand-optimize it yourself.

Not tail calls, in general, no.
Sorry, how does that work? You're suggesting that there is an algorithm
which the compiler could follow to optimize away tail-recursion, but human
beings can't follow the same algorithm?

Now I'm confused.
--
Steven.

Jun 11 '07 #23
Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> writes:
Not tail calls, in general, no.

Sorry, how does that work? You're suggesting that there is an algorithm
which the compiler could follow to optimize away tail-recursion, but human
beings can't follow the same algorithm?

Now I'm confused.
The usual compiler method is to translate the code into
continuation-passing style and thereby gain tail-recursion
optimization automagically. Of course a human could do the same thing
in principle, but it would result in insanely difficult-to-write,
unreadable and unmaintainable code. At a higher level, there are no
Python features whatsoever, either existing or proposed, that couldn't
be eliminated and left to the human. Iterators? While loops? Who
needs 'em? We could all go back to programming in machine code. But
Python is supposed to make programming easier, not harder.
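
To make the CPS idea concrete, here is factorial rewritten in that style
(my sketch; in Python, which does not eliminate tail calls, large n still
overflows the stack, which is exactly the point about what a compiler
could do instead):

def fact_cps(n, k=lambda r: r):
    # every call is a tail call: "the rest of the work" travels along
    # as the continuation k instead of living on the call stack
    if n <= 1:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

print fact_cps(10)   # 3628800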
Jun 11 '07 #24
On Jun 11, 12:43 am, Steve Howell <showel...@yahoo.com> wrote:
To the extent that some of these optimizations could
be achieved by writing better Python code, it would
nice for optimization tools to have a "suggest" mode.
Is there anyone out there who uses MS Word and doesn't deactivate the
"suggest" mode, i.e. Clippy? Maybe someone should write a lucid blog
entry about the failure of "suggest" modes in general, or point me to
one. Autocompletion as a typing aid might be considered a counter
example, but only because you don't really have a choice and will not
be confronted with nonsensical AI guesses.

Jun 11 '07 #25
John Nagle wrote:
Josiah Carlson wrote:
[snip]
>Constant folding happens regardless of optimization level in current
Pythons.
>So really, assert and docstring removals. Eh.

It's hard to optimize Python code well without global analysis.
The problem is that you have to make sure that a long list of "weird
things", like modifying code or variables via getattr/setattr, aren't
happening before doing significant optimizations. Without that,
you're doomed to a slow implementation like CPython.

ShedSkin, which imposes some restrictions, is on the right track here.
The __slots__ feature is useful but doesn't go far enough.
[snip]
Python could get much, much faster. Right now CPython is said to be 60X
slower than C. It should be possible to get at least an order of
magnitude over CPython.
Don't get me wrong; I'm all for adding optimizations, I was merely
expressing that currently, 'python -OO' doesn't really do a whole lot.
I've a long-time user of psyco, have mucked about with
scipy.weave.inline, and have been a heavy user of Pyrex and C. If there
was a method of offering some simple optimization cues to the Python
runtime to improve its speed, I would be happy to add them when I really
care about speed (I already do more than that when writing Pyrex). The
real question is whether we can get a practical implementation of these
optimizations as easy to use as psyco.
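
(For readers who haven't met it, psyco's interface really is about this
small; the function is my example, and psyco must be installed:)

import psyco

def hot_function(n):
    total = 0
    for i in xrange(n):
        total += i * i
    return total

psyco.bind(hot_function)   # JIT-specialize just this function...
# psyco.full()             # ...or let psyco compile everything it can
print hot_function(10 ** 6)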
- Josiah
Jun 11 '07 #26
Paul Rubin wrote:
Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> writes:
>>Not tail calls, in general, no.
Sorry, how does that work? You're suggesting that there is an algorithm
which the compiler could follow to optimize away tail-recursion, but human
beings can't follow the same algorithm?

Now I'm confused.

The usual compiler method is to translate the code into
continuation-passing style and thereby gain tail-recursion
optimization automagically. Of course a human could do the same thing
in principle, but it would result in insanely difficult-to-write,
unreadable and unmaintainable code. At a higher level, there are no
Python features whatsoever, either existing or proposed, that couldn't
be eliminated and left to the human. Iterators? While loops? Who
needs 'em? We could all go back to programming in machine code. But
Python is supposed to make programming easier, not harder.
Thanks to Richie Hindle, there exists a goto for Python implementation
that makes such things quite trivial (assuming one doesn't like
"abusing" break/continue/else). http://entrian.com/goto/ (which, by
the way, is the best April-fools joke ever)
- Josiah
Jun 11 '07 #27
It's hard to optimize Python code well without global analysis.
The problem is that you have to make sure that a long list of "weird
things", like modifying code or variables via getattr/setattr, aren't
happening before doing significant optimizations. Without that,
you're doomed to a slow implementation like CPython.

ShedSkin, which imposes some restrictions, is on the right track here.
The __slots__ feature is useful but doesn't go far enough.

[snip the "simpleobject" proposal and optimization list, quoted from the
previous post]

I won't give you the "prove it by doing it"-talk. It's too cheap.

Instead I'd like to say why I don't think that this will buy you much
performance-wise: it's a local optimization only. All it can and will do
is optimize lookups and storage of attributes - either functions or
values - and calls to methods from within one simpleobject. As long as
expressions stay in their own "soup", things might be ok.

The very moment you mix this with "regular", no-strings-attached python
code, you have to have the full dynamic machinery in place + you need
tons of guarding statements in the optimized code to prevent access
violations.

So in the end, I seriously doubt the performance gains are noticeable.
Instead I'd rather take the pyrex-road, which can go even further
optimizing with some more declarations. But then I at least know exactly
where the boundaries are. As does the compiler.
Python could get much, much faster. Right now CPython is said to be 60X
slower than C. It should be possible to get at least an order of
magnitude over CPython.

Regardless of the possibility of speeding it up - why should one want
this? Coding speed is more important than execution speed in 90%+ of all
cases. The other ones - well, if you _really_ want speed, assembler is
the way to go. I'm serious about that. There is one famous mathematical
library author who codes in assembler - because in the end, it's
all about processor architecture and careful optimization for that. [1]

The same is true for e.g. the new Cell architecture, or the
altivec-optimized code in photoshop that still beats the crap out of
Intel processors on PPC-machines.

I'm all for making python faster if it doesn't suffer
functionality-wise. But until there is a proof that something really
speeds up python w/o crippling it, I'm more than skeptical.

Diez

[1] http://math-atlas.sourceforge.net/faq.html#auth

"""
Kazushige Goto
His ev5/ev6 GEMM is used directly by ATLAS if the user answers
"yes" to its use during the configuration procedure on an alpha
processor. This results in a significant speedup over ATLAS's own GEMM
codes, and is the fastest ev5/ev6 implementation we are aware of.
"""
Jun 11 '07 #28
Terry Reedy wrote:
<br*****************@gmail.com> wrote in message
news:11**********************@m36g2000hse.googlegroups.com...
| Terry Reedy wrote:
| In Python, you have a choice of recursion (normal or tail)

[snip Stroud questions]

| I'm afraid Terry is wrong here, at least if he meant that CPython had
| tail recursion *optimization*.

NO!!!
I did not mean that or imply that in any way.
I understand you didn't mean it, but since the whole point of
tail-recursion is allowing optimisation (else tail-recursion is nothing
else than a subset of recursion), you somehow implied it, even while
that was not your intention.
| (and just for those who don't know yet, it's not a shortcoming, it's a
| design choice.)

And I already noted in a followup that I am working on a Python Papers
paper explaining that choice, including Guido's claim that 'for statements
are better'.

So frankly I am a little annoyed that you dragged my name into your answer
to Stroud when you should have succinctly said 'No, never', or better,
nothing at all, as someone else already did say that. Read more of the
thread before jumping in and ascribing ignorance to people.
You're right on the fact that I should have read more of the thread
before posting this (which I usually do), and I do apologize for this.
But please note the second half of the sentence - which puts a strong
precondition on the validity of the first part.
Jun 11 '07 #29
On 2007-06-09, Terry Reedy <tj*****@udel.edu> wrote:
>
"WaterWalk" <to********@163.comwrote in message
news:11**********************@r19g2000prf.googlegr oups.com...
| I've just read an article "Building Robust Systems" by Gerald Jay
| Sussman. The article is here:
|
http://swiss.csail.mit.edu/classes/s...st-systems.pdf
|
| In it there is a footnote which says:
| "Indeed, one often hears arguments against building flexibility into an
| engineered system. For example, in the philosophy of the computer
| language Python it is claimed:

For him to imply that Python is anti-flexibility is wrong. Very wrong.
He should look in a mirror. See below.
My impression is that python supporters often enough show
some anti-flexibility attitude.
>| 'There should be one -- and preferably only one -- obvious way to do
| it.'[25] Science does not usually proceed this way: In classical
| mechanics, for example, one can construct equations of motion using
| Newtonian vectoral mechanics, or using a Lagrangian or Hamiltonian
| variational formulation.[30] In the cases where all three approaches
| are applicable they are equivalent, but each has its advantages in
| particular contexts."

And in those contexts, one would hope that the method with advantages is
somehow the obvious way to do it. Otherwise beginners might become like
Buridan's ass.

So I dispute that science is as different as he claims. And I do not see
any real value in the statement in that I do not see it saying anything
useful to the reader, at least not in this snippet.
Yes, science is different. The difference is the following: if science
knew only Newtonian vectoral mechanics and someone came up with the
Lagrangian approach, nobody would protest against the new approach by
remarking that there should be only one obvious approach - implying that
by introducing a second approach you give people a choice, which they
will have to think about, so that their decision of what to use is no
longer obvious, as it is when there is only one option.

Yet these kinds of remarks are made often enough when someone suggests a
change to Python.

--
Antoon Pardon
Jun 11 '07 #30
Diez B. Roggisch wrote:
>Regardless of the possibility of speeding it up - why should one want
this? Coding speed is more important than execution speed in 90%+ of all
cases.
When you have to start buying more servers for the server farm,
it's a real pain. I'm actually facing that because Python's HTML
parsing is so slow.

John Nagle
Jun 11 '07 #31
|| Terry Reedy wrote:
|| In Python, you have a choice of recursion (normal or tail)

Bruno
| I'm afraid Terry is wrong here, at least if he meant that CPython had
| tail recursion *optimization*.
| Terry Reedy a écrit :
| NO!!!
| I did not mean that or imply that in any way.

Bruno
| I understand you didn't mean it, but since the whole point of
| tail-recursion is allowing optimisation

Wrong again. That is a reason, even a primary reason, for some people some
of the time, but not at all the only reason anyone would ever write linear
recursion in tail rather than body (my term) form. So nothing follows from
this false premise.

| (else tail-recursion is nothing else than a subset of recursion)

So? Body-recursion (non-tail-recursion) is also nothing else than a subset
of recursion.

| you somehow implied it, even while that was not your intention.

False in its own right. Any language that allows recursion allows the
subset that happens to constitute tail recursion. Most do not *mandate*
that compilers specially recognize and optimize that subset. So there is
pragmatically no implication that the latter follows from the former.

The reason I specifically mentioned tail recursion is because Prof.
Sussman, who complained about
There should be one-- and preferably only one --obvious way to do it.
-- to quote Brother Tim accurately -- co-developed Scheme. To me, Scheme
promotes tail recursion as the one true way as much as or more than
anything is similarly promoted in Python. That mention had nothing in
itself to do with the separate issue of optimizing tail calls.

What Sussman apparently missed is that Tim's main point is that there
should be some rather than no obvious way to do things. The parenthetical
secondary desideratum is just that: optional and secondary.

Terry Jan Reedy

Jun 11 '07 #32

"Antoon Pardon" <ap*****@forel.vub.ac.bewrote in message
news:sl********************@rcpc42.vub.ac.be...
| On 2007-06-09, Terry Reedy <tj*****@udel.eduwrote:
| For him to imply that Python is anti-flexibility is wrong. Very wrong.
| He should look in a mirror. See below.
|
| My impression is that python supporters often enough show
| some anti-flexibility attitude.

More so than supporters of most other languages, in particular Scheme?

Here's the situation. Python is making inroads at MIT, Scheme's home turf.
The co-developer of Scheme, while writing about some other subject, tosses
in an off-the-wall slam against Python. Someone asks what we here think.
I think that the comment is a crock and the slam better directed, for
instance, at Scheme itself. Hence 'he should look in a mirror'.

| Yes, science is different. The difference is the following: if science
| knew only Newtonian vectoral mechanics and someone came up with the
| Lagrangian approach, nobody would protest against the new approach by
| remarking that there should be only one obvious approach,

The history of science is a history of innovation and resistance to
innovation. Do you have information that the introduction of the
Lagrangian approach was exceptional? Do you really think that no college
student has ever groused about having to learn another approach that is
only equivalent to what he already knows?

| Yet these kinds of remarks are made often enough when someone suggests a
| change to Python.

So? Tim wrote 'There should be one-- and preferably only one --obvious way
to do it'. The primary clause is that there should be at least one. The
secondary clause is that once there is a good and obvious way to do
something, we take a hard look before adding another. As it is, there are
already multiple ways to do many things. And there are probably at least
10 suggested innovations for every one accepted.

tjr

Jun 11 '07 #33
In article <ma***************************************@python.org>,
Terry Reedy <tj*****@udel.edu> wrote:
>
"James Stroud" <js*****@mbi.ucla.eduwrote in message
news:E5***************@newssvr17.news.prodigy.net ...
| Terry Reedy wrote:
| In Python, you have a choice of recursion (normal or tail)
|
| Please explain this.

I am working on a paper for Python Papers that will. It was inspired by
the question 'why doesn't Python do tail-recursion optimization'.
....with the proof contained in the margins?
--
Aahz (aa**@pythoncraft.com) <* http://www.pythoncraft.com/

"as long as we like the same operating system, things are cool." --piranha
Jun 12 '07 #34
On Jun 10, 6:43 pm, John Nagle <n...@animats.com> wrote:
[snip Josiah Carlson subthread and John Nagle's "simpleobject" proposal,
quoted in full from post #20]

Python could get much, much faster. Right now CPython is said to be 60X slower
than C. It should be possible to get at least an order of magnitude over
CPython.

John Nagle

This is already done in RPython:

http://codespeak.net/pypy/dist/pypy/...tricted-python

I was at the PyCon It conference the other day and one of the
PyPy people claimed that RPython is up to 300X faster than Python.

Michele Simionato

Jun 12 '07 #35
On 2007-06-11, Terry Reedy <tj*****@udel.edu> wrote:
>
"Antoon Pardon" <ap*****@forel.vub.ac.bewrote in message
news:sl********************@rcpc42.vub.ac.be...
| On 2007-06-09, Terry Reedy <tj*****@udel.eduwrote:
| For him to imply that Python is anti-flexibility is wrong. Very wrong.
| He should look in a mirror. See below.
|
| My impression is that python supporters often enough show
| some anti-flexibility attitude.

More so than supporters of most other languages, in particular Scheme?
Well to my knowledge (which could be vastly improved), scheme doesn't
have some Zen-rules that include something like this.

I tried to google for similar remarks in relation to scheme but I
got no results. Maybe your google skills are better.
Here's the situation. Python is making inroads at MIT, Scheme's home turf.
The co-developer of Scheme, while writing about some other subject, tosses
in an off-the-wall slam against Python. Someone asks what we here think.
I think that the comment is a crock and the slam better directed, for
instance, at Scheme itself. Hence 'he should look in a mirror'.

| Yes, science is different. The difference is the following: if science
| knew only Newtonian vectoral mechanics and someone came up with the
| Lagrangian approach, nobody would protest against the new approach by
| remarking that there should be only one obvious approach,

The history of science is a history of innovation and resistance to
innovation. Do you have information that the introduction of the
Lagrangian approach was exceptional? Do you really think that no college
student has ever groused about having to learn another approach that is
only equivalent to what he already knows?
Yes, the history of science is a history of innovation and resistance.
But the resistance, to my knowledge, has never used the argument that
there should (preferably) be only one obvious way to do things.

The student example is IMO not appropriate. There is a difference between
preferring not to have to learn something yourself and arguing that
something shouldn't be available in general.
>| Yet these kinds of remarks are made often enough when someone suggests a
| change to Python.

So? Tim wrote 'There should be one-- and preferably only one --obvious way
to do it'. The primary clause is that there should be at least one. The
secondary clause is that once there is a good and obvious way to do
something, we take a hard look before adding another. As it is, there are
already multiple ways to do many things. And there are probably at least
10 suggested innovations for every one accepted.
Yes, I know that. But that doesn't stop a lot of Python supporters in this
newsgroup from coming up with a variation that suggests that once there is
an obvious way to do something in Python, there really is no need any more
to look at ways that do it differently. And if my memory doesn't betray
me, corrections from others to such variations are a rather recent
occurrence.

--
Antoon Pardon
Jun 12 '07 #36
On 2007-06-12, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
On 2007-06-11, Terry Reedy <tj*****@udel.edu> wrote:
>More so than supporters of most other languages, in particular
Scheme?

Well to my knowledge (which could be vastly improved), scheme
doesn't have some Zen-rules that include something like this.

I tried to google for similar remarks in relation to scheme but
I got no results. Maybe your google skills are better.
It's in _The Revised^%d Report on Scheme_, Introduction:

Programming languages should be designed not by piling feature
on top of feature, but by removing the weaknesses and
restrictions that make additional features appear necessary.

Of course, that was written well before Scheme had most of its
current features.

--
Neil Cerutti
These people haven't seen the last of my face. If I go down, I'm going down
standing up. --Chuck Person
Jun 12 '07 #37
John Nagle wrote:
Diez B. Roggisch wrote:
>Regardless of the possibility of speeding it up - why should one want
this? Coding speed is more important than speed of coding in 90%+ of all
cases.

When you have to start buying more servers for the server farm,
it's a real pain. I'm actually facing that because Python's HTML
parsing is so slow.
I can't believe that this is a general "python is too slow" issue. After all,
lxml and cElementTree are _really_ fast - and written in C.

For example in TurboGears, there are two python-only templating systems -
KID & Genshi (apart from others). The latter is an order of magnitude faster
than the former. After all, O(n^3) is O(n^3), regardless of the language...

And if only the HTML parsing is slow, you might consider creating an
extension for that, using e.g. Pyrex.

Diez
Jun 12 '07 #38

"Antoon Pardon" <ap*****@forel.vub.ac.bewrote in message
news:sl********************@rcpc42.vub.ac.be...
| So? Tim wrote 'There should be one-- and preferably only one --obvious
| way to do it'. The primary clause is that there should be at least one.
| The secondary clause is that once there is a good and obvious way to do
| something, we take a hard look before adding another. As it is, there
| are already multiple ways to do many things. And there are probably at
| least 10 suggested innovations for every one accepted.
|
| Yes, I know that. But that doesn't stop a lot of Python supporters in
| this newsgroup from coming up with a variation which suggests that once
| there is an obvious way to do something in Python, there really is no
| need any more to look at ways that do it differently.

Try suggesting on a Lisp or Scheme group that having only one type of
syntax (prefix expressions) lacks something and that they should add
variety in the form of statement syntax ;-) Hint: some Lispers have
bragged here about the simplicity of 'one way to do it' and put Python down
for its mixed syntax. (Of course, this does not mean that some dialects
have not sneaked in lists of statements thru a back door ;-).

Would you really want Python to have a hundred new features every release?

tjr

Jun 12 '07 #39
Paul Rubin wrote:
Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> writes:
>>Not tail calls, in general, no.
Sorry, how does that work? You're suggesting that there is an algorithm
which the compiler could follow to optimize away tail-recursion, but human
beings can't follow the same algorithm?

Now I'm confused.

The usual compiler method is to translate the code into
continuation-passing style and thereby gain tail-recursion
optimization automagically.
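As an aside, for anyone who hasn't met the term: in continuation-passing
style every function takes an extra argument, the continuation ("what to do
with the result"), so every call ends up in tail position. A toy sketch in
Python - my own hypothetical example, nothing like the compiler-scale code
under discussion:

def fact_cps(n, k):
    # k is the continuation: it receives the eventual result.
    if n <= 1:
        return k(1)
    # The recursive call is a tail call; the multiply happens inside k.
    return fact_cps(n - 1, lambda r: k(r * n))

print(fact_cps(5, lambda r: r))   # prints 120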
There's no need to go into CPS just to optimise tail-recursion. After all,
compilers were optimising tail-calls decades before Appel's work on SML/NJ.

Converting tail-recursion to iteration is trivial, and perfectly reasonable for
a human to do by hand. You add an outer "while True"-loop, the recursive call
becomes a tuple assignment, and other code paths end with a break out of the
loop. Completely mechanical and the resulting code doesn't even look that bad.
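To make that concrete, here is a minimal sketch of the transformation on a
hypothetical tail-recursive factorial (my own example, not code from the
thread):

# Tail-recursive version: the recursive call is the last thing evaluated.
def fact(n, acc=1):
    if n <= 1:
        return acc
    return fact(n - 1, acc * n)

# Hand-optimised version: outer "while True" loop; the tail call has
# become a tuple assignment, and the base case leaves the loop.
def fact_iter(n, acc=1):
    while True:
        if n <= 1:
            return acc
        n, acc = n - 1, acc * n

fact_iter(100000) computes fine where fact(100000) would blow the stack.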

Like Steven said, tail-call optimisation is not necessary as you can always
hand-optimise it yourself.

- Anders

Jun 12 '07 #40
Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> writes:
On Mon, 11 Jun 2007 01:28:09 +0100, Alexander Schmolck wrote:
>Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> writes:
>>On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:

As for why tail calls are not optimized out, it was decided that being able
to have the stack traces (with variable information, etc.) was more useful
than offering tail call optimization

I don't buy this.

Do you mean you don't believe the decision was made, or you don't agree
with the decision?

Neither. I don't believe the rationale stated in this thread to be the true
reason.


Don't keep us in suspense. What do you believe is the true reason?
It's easier to spot that some rationalization is bogus than to uncover the
true underlying causes; I'm pretty sure it's more a Gestalt thing than a
compelling technical reason (I guess Guido's distaste for Scheme also plays a
role). Not that I discount that out of hand -- maybe all that's great about
Python is due to Guido being exceptionally good at making such judgements.
>>Are you volunteering? If you are, I'm sure your suggestion will be welcomed
gratefully.

I rather doubt it. Guido has stated quite clearly that he's not
interested in incorporating this feature.

He's not the only one who gets to make these decisions.
This is news to me. Who else does?
But even if he uses his veto to prevent tail-recursion optimization from
being put into the main branch, there are other options.
That don't involve abducting his kids?
>>>>(do what I say).

Where did you say run out of memory and fail? More importantly how do
you say "don't run out of memory and fail"?

If we can live with a certain amount of "arbitrary failures" in simple
arithmetic,

I prefer not to, and thus when possible avoid using languages where
``a + b`` is liable to fail arbitrarily (such as C, where the behavior
will often be undefined).

That's not the sort of arbitrary failure I was discussing, but for that
matter Python is one of those languages.
Apart from floating point arithmetic, simple arithmetic doesn't tend to fail
arbitrarily in python, as far as I'm aware. You can of course create your very
own classes specifically to get broken arithmetic but that doesn't strike me
as "simple" arithmetic anymore.
Perhaps Python is not the language for you?
Do you also happen to know what would be?
Correct me if I'm wrong, but surely it is C++ that can have arbitrary
behaviour for "a + b", not C?
``INT_MAX + 1`` can do precisely anything in C.
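For contrast, a quick sketch of the Python behaviour being defended here,
assuming a Python 2 interpreter (where sys.maxint exists): plain ints
silently promote to arbitrary-precision longs instead of overflowing.

import sys
print(sys.maxint)      # largest plain int, e.g. 2147483647 on a 32-bit build
print(sys.maxint + 1)  # 2147483648L - promoted to long, no wraparound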
>>You can always hand-optimize it yourself.

Not tail calls, in general, no.

Sorry, how does that work? You're suggesting that there is an algorithm
which the compiler could follow to optimize away tail-recursion, but human
beings can't follow the same algorithm? Now I'm confused.
Does it also confuse you that if I give you a 500x500 matrix A you won't be
able to figure out a single element in A^-1 by doing mental arithmetic (or
using pen and paper), although my computer manages just fine and I'm happy to
give you the algorithm it uses?

'as
Jun 13 '07 #41
"Anders J. Munch" <20**@jmunch.dkwrites:
Like Steven said, tail-call optimisation is not necessary as you can always
hand-optimise it yourself.
Care to demonstrate on some code written in CPS (a compiler or parser, say)?

'as
Jun 13 '07 #42
Alexander Schmolck wrote:
Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> writes:
>>On Mon, 11 Jun 2007 01:28:09 +0100, Alexander Schmolck wrote:
>>>Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> writes:
>>Don't keep us in suspense. What do you believe is the true reason?


It's easier to spot that some rationalization is bogus than to uncover the
true underlying causes; I'm pretty sure it's more a Gestalt thing than a
compelling technical reason.
There's a real reason. Remember, functions are dynamically
replaceable. The compiler would have to detect that the function
doesn't modify or replace itself while recursing for this optimization
to be valid. Worst case, another thread could replace the function
while it was recursing, invalidating the tail recursion optimization.
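A minimal sketch of that point (hypothetical names, my own example):

def shout(n):
    return "replaced"

def countdown(n):
    global countdown
    if n == 5:
        # Rebind the module-level name mid-recursion; another thread
        # could do the same thing at any moment.
        countdown = shout
    if n > 0:
        return countdown(n - 1)   # uses whatever 'countdown' is bound to now
    return "done"

print(countdown(10))   # prints "replaced", not "done"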

John Nagle
Jun 13 '07 #43
On 2007-06-12, Anders J. Munch <20**@jmunch.dk> wrote:
Paul Rubin wrote:
>Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> writes:
>>>Not tail calls, in general, no.
Sorry, how does that work? You're suggesting that there is an
algorithm which the compiler could follow to optimize away
tail-recursion, but human beings can't follow the same
algorithm?

Now I'm confused.

The usual compiler method is to translate the code into
continuation-passing style and thereby gain tail-recursion
optimization automagically.

There's no need to go into CPS just to optimise tail-recursion.
After all, compilers were optimising tail-calls decades before
Appel's work on SML/NJ.

Converting tail-recursion to iteration is trivial, and
perfectly reasonable for a human to do by hand.
For simple recursive tail calls, yeah, it can be. Translating a
tail-recursive Factorial function into a while loop is easy. But
tail-call optimization technically works for any tail-call,
including mutual recursion, and non-recursive tail-calls. You
can't reasonably hand-optimize away the stack frame for all
tail-calls.

def foo(x):
    bar(x)

The only way to hand-optimize the call to bar is to inline it
yourself.
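And for mutual recursion, a minimal sketch (hypothetical is_even/is_odd,
assuming n >= 0) of tail calls that the simple while-loop rewrite doesn't
reach, because each tail call targets a different function:

def is_even(n):
    if n == 0:
        return True
    return is_odd(n - 1)    # tail call into the other function

def is_odd(n):
    if n == 0:
        return False
    return is_even(n - 1)   # tail call back again

Without tail-call optimization each bounce adds a stack frame, so
is_even(100000) hits Python's recursion limit.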

--
Neil Cerutti
Will the highways on the Internet become more few? --George W. Bush
Jun 13 '07 #44
On 2007-06-13, Steve Howell <sh*******@yahoo.com> wrote:
You would just change the language definition to say that once
you enter f(), any call to f() from within f() behaves as if
the recursively called f() still points to the originally bound
version of f. To want any other behavior would be absurd,
anyhow.
There's a reason it's generally refered to as "tail-call"
optimization and not "tail-recursive" optimization. The former is
more general, and, I believe, easier to implement than the
latter.

--
Neil Cerutti
The peace-making meeting scheduled for today has been cancelled due to a
conflict. --Church Bulletin Blooper
Jun 13 '07 #45
Neil Cerutti wrote:
On 2007-06-12, Anders J. Munch <20**@jmunch.dk> wrote:
>Converting tail-recursion to iteration is trivial, and
perfectly reasonable for a human to do by hand.

For simple recursive tail calls, yeah, it can be. Translating a
tail-recursive Factorial function into a while loop is easy. But
tail-call optimization technically works for any tail-call,
including mutual recursion, and non-recursive tail-calls. You
can't reasonably hand-optimize away the stack frame for all
tail-calls.
I may have misunderstood, I thought we were talking about tail recursion only.
The general tail-call optimisation, where all leaf calls become jumps and the
called function usurps the current stack frame, is a different ballgame
entirely. There's no pure-Python transformation for that, but that still
doesn't mean you need CPS.

General tail-call optimisation is of course completely out-of-bounds for
Python, because it ruins tracebacks - unlike tail recursion, which could
use recursion counters.

- Anders

Jun 13 '07 #46
Alexander Schmolck wrote:
"Anders J. Munch" <20**@jmunch.dkwrites:
>Like Steven said, tail-call optimisation is not necessary as you can always
hand-optimise it yourself.

Care to demonstrate on some code written in CPS (a compiler or parser, say)?
I meant tail recursion, not tail-call, sorry, that was just my fingers trying to
save typing.

- Anders
Jun 13 '07 #47
On 2007-06-13, Anders J. Munch <20**@jmunch.dk> wrote:
General tail-call optimisation is of course completely
out-of-bounds for Python, because it ruins tracebacks. Unlike
tail recursion, which could use recursion counters.
Is it really ruined? To use a similar example:

def foo(x):
    bar(x+1)

def bar(x):
    if x > 10:
        raise ValueError
    else:
        foo(x+2)

Today, when I call foo(4), I get something like:

C:\WINNT\system32\cmd.exe /c python temp.py
Traceback (most recent call last):
  File "temp.py", line 529, in <module>
    foo(4)
  File "temp.py", line 521, in foo
    bar(x+1)
  File "temp.py", line 527, in bar
    foo(x+2)
  File "temp.py", line 521, in foo
    bar(x+1)
  File "temp.py", line 527, in bar
    foo(x+2)
  File "temp.py", line 521, in foo
    bar(x+1)
  File "temp.py", line 525, in bar
    raise ValueError
ValueError
shell returned 1

With tail-call optimization you'd get something like:

C:\WINNT\system32\cmd.exe /c python temp.py
Traceback (most recent call last):
  File "temp.py", line 529, in <module>
    foo(4)
  File "temp.py", line 525, in bar
    raise ValueError
ValueError
shell returned 1

What makes the latter harder to work with?

--
Neil Cerutti
Jun 13 '07 #48
On Wed, 2007-06-13 at 18:22 +0000, Neil Cerutti wrote:
On 2007-06-13, Anders J. Munch <20**@jmunch.dk> wrote:
General tail-call optimisation is of course completely
out-of-bounds for Python, because it ruins tracebacks. Unlike
tail recursion, which could use recursion counters.

Is it really ruined? To use a similar example:

[snip example code and tracebacks, quoted above]

What makes the latter harder to work with?
The fact that you don't see how many call levels down your algorithm got
before throwing an exception. This may be an important clue in debugging
a recursive algorithm.

--
Carsten Haese
http://informixdb.sourceforge.net
Jun 13 '07 #49
On 2007-06-13, Neil Cerutti <ho*****@yahoo.com> wrote:
On 2007-06-13, Anders J. Munch <20**@jmunch.dk> wrote:
>General tail-call optimisation is of course completely
out-of-bounds for Python, because it ruins tracebacks. Unlike
tail recursion, which could use recursion counters.

Is it really ruined? To use a similar example:
I found some interesting notes by Alex Martelli pertaining to
tail-call optimisation, and my assumption that tail-call
optimization is easier to implement than tail-recursive
optimization may have been naive. ;)

http://groups.google.com/group/comp....3c1bd70?hl=en&

Moreover, there are (or were) technical reasons that you can't do
tail-call optimization in Python, which can't even recognize
tail-calls at compile time. According to Tim Peters:

http://groups.google.com/group/comp....aefb828?hl=en&

--
Neil Cerutti
Jun 13 '07 #50
