
Python syntax in Lisp and Scheme

I think everyone who has used Python will agree that its syntax is
the best thing going for it. It is very readable and easy
for everyone to learn. But Python does not have very good
macro capabilities, unfortunately. I'd like to know if it may
be possible to add a powerful macro system to Python, while
keeping its amazing syntax, and if it could be possible to
add Pythonistic syntax to Lisp or Scheme, while keeping all
of the functionality and convenience. If the answer is yes,
would many Python programmers switch to Lisp or Scheme if
they were offered indentation-based syntax?
Jul 18 '05
699 Replies


Erann Gat wrote:
...
But if you focus on examples like this you really miss the point. Imagine
that you wanted to be able to write this in Python:

def vector_fill(v, x):
    for i from 0 to len(v)-1:
        v[i] = x

You can't do it because Python doesn't support "for i from ... to ...",
only "for i in ...". What's more, you can't as a user change the language
so that it does support "for i from ... to ...". (That's why the xrange
hack was invented.)
Almost right, except that xrange is not a hack. Since in Python you cannot
change the language to suit your whims, you USE the language (designed
by a pretty good language designer) -- by coding an iterator that is
suitable to put where the ... are in "for i in ...".
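Alex's iterator point can be sketched concretely. Here is a minimal illustration (the `from_to` name is invented for this sketch; it is not anything from the thread) of plugging a custom iterator into the existing "for i in ..." slot instead of changing the language:

```python
def from_to(start, stop):
    # a generator standing in for the hypothetical "for i from ... to ...";
    # yields start, start+1, ..., stop inclusive
    i = start
    while i <= stop:
        yield i
        i += 1

def vector_fill(v, x):
    # same effect as the pseudocode above, via the ordinary "for i in ..."
    for i in from_to(0, len(v) - 1):
        v[i] = x

v = [0, 0, 0, 0]
vector_fill(v, 9)
print(v)  # [9, 9, 9, 9]
```

No new syntax is required: the iterator is an ordinary value dropped into the existing construct.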
In Lisp you can. If Lisp didn't already have LOOP or DOTIMES as part of
the standard you could add them yourself, and the way you do it is by
writing a macro.
Good summary: if you fancy yourself as a language designer, go for Lisp;
if you prefer to use a language designed by somebody else, without you
_or any of the dozens of people working with you on the same project_
being able to CHANGE the language, go for Python.
That's what macros are mainly good for, adding features to the language in
ways that are absolutely impossible in any other language. S-expression
syntax is the feature that enables users to do this quickly and easily.
Doesn't Dylan do a pretty good job of giving essentially the same
semantics (including macros) without S-expression syntax? That was
my impression, but I've never used Dylan in production.
For example, imagine you want to be able to traverse a binary tree and do
an operation on all of its leaves. In Lisp you can write a macro that
lets you write:

(doleaves (leaf tree) ...)

You can't do that in Python (or any other language).
Well, in Ruby, or Smalltalk, you would pass your preferred code block
to the call to the doleaves iterator, giving something like:

doleaves(tree) do |leaf|
  ...
end

while in Python, where iterators are "the other way around" (they
get relevant items out rather than taking a code block in), it would be:

for leaf in doleaves(tree):
    ...

In either case, it may not be "that" (you are not ALTERING the syntax
of the language, just USING it for the same purpose), but it's sure close.
(In Dylan, I do believe you could ``do that'' -- except the surface
syntax would not be Lisp-ish, of course).
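For concreteness, the Python side of that comparison can be filled in with an ordinary generator. This is a sketch under an assumed representation (a binary tree encoded as nested 2-tuples -- the encoding is my choice, not anything from the thread):

```python
def doleaves(tree):
    # recursively yield the leaves of a tree of nested tuples;
    # anything that is not a tuple counts as a leaf
    if isinstance(tree, tuple):
        for child in tree:
            for leaf in doleaves(child):
                yield leaf
    else:
        yield tree

tree = ((1, 2), (3, (4, 5)))
print([leaf for leaf in doleaves(tree)])  # [1, 2, 3, 4, 5]
```

The traversal logic lives in a function rather than a macro, and the caller uses the language's one iteration construct unchanged.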

Here's another example of what you can do with macros in Lisp:

(with-collector collect
  (do-file-lines (l some-file-name)
    (if (some-property l) (collect l))))

This returns a list of all the lines in a file that have some property.
DO-FILE-LINES and WITH-COLLECTOR are macros, and they can't be implemented
any other way because they take variable names and code as arguments.


If you consider that giving e.g. the variable name as an argument to
do-file-lines is the crucial issue here, then it's probably quite true
that this fundamental (?) feature "cannot be implemented any other way";
in Ruby, e.g., the variable name would not be an argument to dofilelines,
it would be a parameter at the start of the block receiving & using it:

dofilelines(somefilename) do |l|
  collect l if someproperty? l
end
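And the Python counterpart of the with-collector example is just an iterator plus a comprehension. A sketch, using a throwaway temporary file and a placeholder predicate standing in for some-property (both invented for the illustration):

```python
import os
import tempfile

def do_file_lines(name):
    # yield the lines of a file without their trailing newlines
    with open(name) as f:
        for line in f:
            yield line.rstrip("\n")

def some_property(line):
    # placeholder predicate for the sketch
    return "x" in line

# write a throwaway file just so the snippet runs end to end
path = os.path.join(tempfile.mkdtemp(), "some-file")
with open(path, "w") as f:
    f.write("axe\nbore\noxen\n")

collected = [l for l in do_file_lines(path) if some_property(l)]
print(collected)  # ['axe', 'oxen']
```

The "collector" disappears into the list comprehension; no variable name needs to be passed anywhere as data.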

However, it appears to me that the focus on where variable names are
to be determined may be somewhat misplaced. The key distinction does
seem to be: if you're happy using a language as it was designed (e.g.,
in this example, respecting the language designer's concept that the
names for the control variables of a block must appear within | vertical
bars | at the start of the block -- or, in Python, the reversed concept
that they must appear between the 'for' and 'in' of the "for ... in ...:"
statement), macros are not relevant; if you do want to design and use
your own language (including, for example, placing variable names in
new and interesting places) then macros can let you do that, while
other constructs would be insufficiently powerful.

If you dream of there being "preferably only one obvious way to do it",
as Pythonistas do (try "import this" at an interactive Python prompt),
macros are therefore a minus; if you revel in the possibilities of there
being many ways to do it, even ones the language designer had never even
considered (or considered and rejected in disgust:-), macros then become
a huge plus.

Therefore, I entirely agree that people who pine for macros should
use them in a language that accommodates them quite well, is designed
for them, cherishes and nurtures and exalts them, like some language
of the Lisp family (be it Common, ISO, Scheme, ...), or perhaps Dylan
(which often _feels_ as if "of the Lisp family" even though it does
not use S-expressions), rather than trying to shoehorn them willy-nilly
into a language to whose overall philosophy they are SO utterly foreign,
like Python (Ruby, and even more Perl, may be a different matter;
google for "Lingua Latina Perligata" to see what Perl is already able
to do in terms of within-the-language language design and syntax
alteration, even without anything officially deemed to be 'macros'...
it IS, after all, a language CENTERED on "more than one way to do it").
Alex

Jul 18 '05 #51

MetalOne wrote:
If a set of macros could be written to improve LISP
syntax, then I think that might be an amazing thing. An
interesting question to me is why hasn't this already been
done.


I think the issue is the grandeur of the Lisp vision. More
ambitious projects require larger code bases. Ambition is
hard to quantify. Nevertheless one must face the issue of
scaling. Does code size go as the cube of ambition, or is it
the fourth power of ambition? Or something else entirely.

Lisp aspires to change the exponent, not the constant
factor. The constant factor is very important. That is why
CL has FILL :-) but shrinking the constant factor has been
done (and with C++ undone).

Macros can be used to abbreviate code. One can spot that one
is typing similar code over and over again. One says
"whoops, I'm typing macro expansions". Do you use macros to
tune the syntax, so that you type N/2 characters instead of
N characters, or do you capture the whole concept in a macro
and eliminate the repetition altogether?

The point is that there is nowhere very interesting to go
with syntax tuning. It is the bolder goal of changing the
exponent, and thus seriously enlarging the realm of the
possible, that excites.

Alan Crowe
Jul 18 '05 #52

Alex Martelli <al***@aleax.it> writes:
Good summary: if you fancy yourself as a language designer, go for
Lisp; if you prefer to use a language designed by somebody else,
without you _or any of the dozens of people working with you on the
same project_ being able to CHANGE the language, go for Python.


I believe it is very unfortunate to view lisp macros as something that
is used to "change the language". Macros allow syntactic abstraction
the same way functions allow functional abstraction, and is almost as
important a part of the programmer's toolchest. While macros _can_ be
used to change the language in the sense of writing your own
general-purpose iteration construct or conditional operator, I believe
this is an abuse of macros, precisely because of the implications this
has for the readability of the code and for the language's user
community.

--
Frode Vatvedt Fjeld
Jul 18 '05 #53

On 03 Oct 2003 14:44:36 +0300, Toni Nikkanen <to**@tuug.fi> wrote:
It'd be interesting to know where people got the idea of learning
Scheme/LISP from (apart from compulsory university courses)?


Emacs. I've noticed over the years that people don't really get Emacs
religion until they've started hacking elisp. I know that the frustration
of having almost-but-not-quite the behavior I wanted on top of having all
that source code was a powerful incentive for me to learn Lisp. Of course
my appreciation of Emacs only increased as I went...

The thing that sealed it for me was re-programming SCWM's behavior so that
I could use X w/no mouse &cet. That got me hooked on Scheme (I had been
hacking SML at roughly the same time while looking for the foundations of
OOP), which was really just about perfect semantically.

david rush
--
(\x.(x x) \x.(x x)) -> (s i i (s i i))
-- aki helin (on comp.lang.scheme)
Jul 18 '05 #54

bo**@oz.net (Bengt Richter) wrote in message news:<bl**********@216.39.172.122>...
Do you like this better?
>>> def foo(n):
...     box = [n]
...     def foo(i): box[0] += i; return box[0]
...     return foo
...


It's still a hack that shows an area where Python has unnecessary
limitations, isn't it?
As Paul Graham says (<URL:http://www.paulgraham.com/icad.html>):
Python users might legitimately ask why they can't just write

def foo(n):
    return lambda i: return n += i

or even

def foo(n):
    lambda i: n += i


Cheers,
-- Grzegorz
Jul 18 '05 #55

Frode Vatvedt Fjeld <fr****@cs.uit.no> writes:
Alex Martelli <al***@aleax.it> writes:
Good summary: if you fancy yourself as a language designer, go for
Lisp; if you prefer to use a language designed by somebody else,
without you _or any of the dozens of people working with you on the
same project_ being able to CHANGE the language, go for Python.


I believe it is very unfortunate to view lisp macros as something that
is used to "change the language". Macros allow syntactic abstraction
the same way functions allow functional abstraction, and is almost as
important a part of the programmer's toolchest. While macros _can_ be
used to change the language in the sense of writing your own
general-purpose iteration construct or conditional operator, I believe
this is an abuse of macros, precisely because of the implications this
has for the readability of the code and for the language's user
community.


But syntactic abstractions *are* a change to the language, it just
sounds fancier.

I agree that injudicious use of macros can destroy the readability of
code, but judicious use can greatly increase the readability. So
while it is probably a bad idea to write COND1 that assumes
alternating test and consequence forms, it is also a bad idea to
replicate boilerplate code because you are eschewing macros.

Jul 18 '05 #56

Grzegorz Chrupala wrote:
...
>>> def foo(n):
...     box = [n]
...     def foo(i): box[0] += i; return box[0]
...     return foo
...


It's still a hack that shows an area where Python has unnecessary
limitations, isn't it?


Debatable, and debated. See the "Rebinding names in enclosing
scopes" section of http://www.python.org/peps/pep-0227.html .

Essentially, Guido prefers classes (and instances thereof) to
closures as a way to bundle state and behavior; thus he most
emphatically does not want to add _any_ complication at all,
when the only benefit would be to have "more than one obvious
way to do it".

Guido's generally adamant stance for simplicity has been the
key determinant in the evolution of Python. Guido is also on
record as promising that the major focus in the next release
of Python where he can introduce backwards incompatibilities
(i.e. the next major-number-incrementing release, 3.0, perhaps,
say, 3 years from now) will be the _elimination_ of many of
the "more than one way to do it"s that have accumulated along
the years mostly for reasons of keeping backwards compatibility
(e.g., lambda, map, reduce, and filter, which Guido mildly
regrets ever having accepted into the language).

As Paul Graham says (<URL:http://www.paulgraham.com/icad.html>):
Python users might legitimately ask why they can't just write

def foo(n):
    return lambda i: return n += i
The rule Python currently uses to determine whether a variable
is local is maximally simple: if the name gets bound (assigned
to) in local scope, it's a local variable. Making this rule
*any* more complicated (e.g. to allow assignments to names in
enclosing scopes) would just allow "more than one way to do
it" (making closures a viable alternative to classes in more
cases) and therefore it just won't happen. Python is about
offering one, and preferably only one, obvious way to do it,
for any value of "it". And another key principle of the Zen
of Python is "simple is better than complex".

Anybody who doesn't value simplicity and uniformity is quite
unlikely to be comfortable with Python -- and this should
amply answer the question about the motivations for reason
number 1 why the above foo is unacceptable in Python (the
lambda's body can't rebind name n in an enclosing scope).
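The rule is easy to demonstrate. A small sketch in modern Python syntax (note that the `nonlocal` statement, added in Python 3 well after this thread, later relaxed exactly this restriction; at the time, the mutable-container trick was the workaround):

```python
def counter(n):
    def bump(i):
        # merely *reading* the enclosing n is fine
        return n + i
    return bump

c = counter(10)
print(c(5))   # 15

def broken_counter(n):
    def bump(i):
        n += i        # assignment makes n local to bump under the rule
        return n      # above, so reading it first raises UnboundLocalError
    return bump

try:
    broken_counter(10)(5)
except UnboundLocalError:
    print("rebinding an enclosing name this way fails")
```

The failure is purely a consequence of the "assigned in local scope => local" rule, not of any runtime state.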

Python draws a firm distinction between expressions and
statements. Again, the deep motivation behind this key
distinction can be found in several points in the Zen of
Python, such as "flat is better than nested" (doing away
with the expression/statement separation allows and indeed
encourages deep nesting) and "sparse is better than dense"
(that 'doing away' would encourage expression/statements
with a very high density of operations being performed).

This firm distinction should easily explain other reasons
why the above foo is unacceptable in Python: n+=i is a
statement (not an expression) and therefore it cannot be
held by a 'return' keyword; 'return' is a statement and
therefore cannot be in the body of a 'lambda' keyword.
or even

def foo(n):
    lambda i: n += i


And this touches on yet another point of the Zen of Python:
explicit is better than implicit. Having a function
implicitly return the last expression it computes would
violate this point (and is in fact somewhat error-prone,
in my experience, in the several languages that adopt
this rule).

Somebody who is unhappy with this drive for explicitness,
simplicity, uniformity, and so on, cannot be happy with
Python. If he wants a very similar language from most
points of view, BUT with very different philosophies, he
might well be quite happy with Ruby. Ruby does away with
any expression/statement distinction; it makes the 'return'
optional, as a method returns the last thing it computes;
it revels in "more than one way to do it", clever and cool
hacks, not perhaps to the extent of Perl, but close enough.

In Ruby, the spaces of methods and data are separate (i.e.,
most everything is "an object" -- but, differently from
Python, methods are not objects in Ruby), and I do not
think, therefore, that you can write a method that builds
and returns another method, and bind the latter to a name --
but you can return an object with a .call method, a la:

def outer(a) proc do |b| a+=b end end

x = outer(23)
puts x.call(100) # emits 123
puts x.call(100) # emits 223

[i.e., I can't think of any way you could just use x(100)
at the end of such a snippet in Ruby -- perhaps somebody
more expert of Ruby than I am can confirm or correct...?]
but apart from this it seems closer to what the above
quotes appear to be probing for. In particular, it lets
you be MUCH, MUCH denser, if that is your purpose in life,
easily squeezing that outer function into a (short) line.
Python is NOT about making code very dense, indeed, as
above mentioned, it sees _sparseness_ as a plus; a typical
Pythonista would cringe at the density of that 'outer'
and by contrast REVEL at the "sparsity" and "explicitness"
(due to the many names involved:-) of, e.g.:

def make_accumulator(initial_value):
    accumulator = Bunch(value=initial_value)
    def accumulate(addend):
        accumulator.value += addend
        return accumulator.value
    return accumulate

accumulate = make_accumulator(23)
print accumulate(100) # emits 123
print accumulate(100) # emits 223
(using the popular Bunch class commonly defined as:
class Bunch(object):
    def __init__(self, **kwds):
        self.__dict__.update(kwds)
). There is, of course, a cultural gulf between this
verbose 6-liner [using an auxiliary class strictly for
reasons of better readability...!] and the terse Ruby
1-liner above, and no doubt most practitioners of both
languages would in practice choose intermediate levels,
such as un-densifying the Ruby function into:
def outer(a)
  proc do |b|
    a += b
  end
end

or shortening/densifying the Python one into:

def make_accumulator(a):
    value = [a]
    def accumulate(b):
        value[0] += b
        return value[0]
    return accumulate

but I think the "purer" (more extreme) versions are
interesting "tipizations" for the languages, anyway.
Alex

Jul 18 '05 #57

Frode Vatvedt Fjeld wrote:
Alex Martelli <al***@aleax.it> writes:
Good summary: if you fancy yourself as a language designer, go for
Lisp; if you prefer to use a language designed by somebody else,
without you _or any of the dozens of people working with you on the
same project_ being able to CHANGE the language, go for Python.
I believe it is very unfortunate to view lisp macros as something that
is used to "change the language". Macros allow syntactic abstraction


Maybe "enhance" can sound more positive? An enhancement, of course,
IS a change -- and if one were to perform any change, he'd surely be
convinced it WAS going to be an enhancement. (Whether it really
turned out to be one is another issue).
the same way functions allow functional abstraction, and is almost as
important a part of the programmer's toolchest. While macros _can_ be
used to change the language in the sense of writing your own
general-purpose iteration construct or conditional operator, I believe
this is an abuse of macros, precisely because of the implications this
has for the readability of the code and for the language's user
community.


Sure, but aren't these the examples that are being presented? Isn't
"with-collector" a general purpose iteration construct, etc? Maybe
only _special_ purpose ones should be built with macros (if you are
right that _general_ purpose ones should not be), but the subtleness
of the distinction leaves me wondering about the practice.
Alex
Jul 18 '05 #58

Thanks for everybody's responses. I found them quite informative.
Jul 18 '05 #59

Alex Martelli wrote:
Essentially, Guido prefers classes (and instances thereof) to
closures as a way to bundle state and behavior; thus he most
emphatically does not want to add _any_ complication at all,
when the only benefit would be to have "more than one obvious
way to do it".

Guido's generally adamant stance for simplicity has been the
key determinant in the evolution of Python.


The following is taken from "All Things Pythonic - News from Python UK"
written by Guido van Rossum, April 17, 2003:
<http://www.artima.com/weblogs/viewpost.jsp?thread=4550>

During Simon's elaboration of an example (a type-safe printf function)
I realized the problem with functional programming: there was a simple
programming problem where a list had to be transformed into a
different list. The code to do this was a complex two-level lambda
expression if I remember it well, and despite Simon's lively
explanation (he was literally hopping around the stage making
intricate hand gestures to show how it worked) I failed to "get" it. I
finally had to accept that it did the transformation without
understanding how it did it, and this is where I had my epiphany about
loops as a higher level of abstraction than recursion - I'm sure that
the same problem would be easily solved by a simple loop in Python,
and would leave no-one in the dark about what it did.

Hmm.

--
Jens Axel Søgaard

Jul 18 '05 #60

On Fri, 3 Oct 2003 09:36:32 -0400, Terry Reedy <tj*****@udel.edu> wrote:
... Lispers posting here have gone to pains to state that Scheme is
not a dialect of Lisp but a separate Lisp-like language. Could you
give a short listing of the current main differences (S vs. CL)?


Do you even begin to appreciate how inflammatory such a request is when
posted to both c.l.l and c.l.s?

Anyway, as a fairly heavily biased Schemer:

Scheme vs Common Lisp

1 name space vs multiple name spaces
This is a bigger issue than it seems on the surface, BTW

#f vs nil
In Scheme an empty list is not considered to be the same
thing as boolean false

emphasis on all values being first-class vs ad-hoc values
Scheme tries to achieve this, Lisp is by conscious design a
compromise system design, for both good and bad

small semantic footprint vs large semantic footprint
Scheme seems relatively easier to keep in mind as an
additional language. CL appears to have several sub-languages
embedded in it. This cuts both ways, mind you.

Those are the most obvious surface issues. My main point is that it is
pretty much silly to consider any of the above in isolation. Both languages
make a lot of sense in their design context. I vastly prefer Scheme because
it suits my needs (small semantic footprint, powerful toolkit) far better
than CL (everything is there if you have the time to look for it). I should
point out that I build a lot of funny data structures (suffix trees and
other IR magic) for which pre-built libraries are both exceedingly rare and
incorrectly optimized for the specific application.

I also like the fact that Scheme hews rather a lot closer to the
theoretical foundations of CS than CL, but then again that's all part
of the small semantic footprint for me.

david rush
--
(\x.(x x) \x.(x x)) -> (s i i (s i i))
-- aki helin (on comp.lang.scheme)
Jul 18 '05 #61

gr******@pithekos.net (Grzegorz Chrupala) wrote previously:
|shocked at how awkward Paul Graham's "accumulator generator" snippet is
|in Python:
|class foo:
|    def __init__(self, n):
|        self.n = n
|    def __call__(self, i):
|        self.n += i
|        return self.n

Me too. The way I'd do it is probably a lot closer to the way Schemers
would do it:
>>> def foo(i, accum=[0]):
...     accum[0] += i
...     return accum[0]
...
>>> foo(1)
1
>>> foo(3)
4

Shorter, and without an awkward class.

Yours, David...

--
Buy Text Processing in Python: http://tinyurl.com/jskh
---[ to our friends at TLAs (spread the word) ]--------------------------
Echelon North Korea Nazi cracking spy smuggle Columbia fissionable Stego
White Water strategic Clinton Delta Force militia TEMPEST Libya Mossad
---[ Postmodern Enterprises <me***@gnosis.cx> ]--------------------------
Jul 18 '05 #62

Alex Martelli wrote:
Guido's generally adamant stance for simplicity has been the
key determinant in the evolution of Python. Guido is also on
record as promising that the major focus in the next release
of Python where he can introduce backwards incompatibilities
(i.e. the next major-number-incrementing release, 3.0, perhaps,
say, 3 years from now) will be the _elimination_ of many of
the "more than one way to do it"s that have accumulated along
the years mostly for reasons of keeping backwards compatibility
(e.g., lambda, map, reduce, and filter, which Guido mildly
regrets ever having accepted into the language).


I have some doubts about the notion of simplicity which you (or Guido) seem
to be taking for granted. I don't think it is that straightforward to agree
about what is simpler, even if you do agree that simpler is better. Unless
you objectivize this concept you can argue that a "for" loop is simpler than
a "map" function and I can argue to the contrary and we'll be talking past
each other: much depends on what you are more familiar with and similar
random factors.

As an example of how subjective this can be, most of the features you
mention as too complex for Python to support are in fact standard in Scheme
(true lexical scope, implicit return, no expression/statement distinction)
and yet Scheme is widely regarded as one of the simplest programming
languages out there, more so than Python.

Another problem with simplicity is that introducing it in one place may
increase complexity in another place.
Specifically consider the simple (simplistic?) rule you cite that Python
uses to determine variable scope ("if the name gets bound (assigned to) in
local scope, it's a local variable"). That probably makes the implementor's
job simpler, but it at the same time makes it more complex and less
intuitive for the programmer to code something like the accumulator
generator example -- you need to use a trick of wrapping the variable in a
list.

As for Ruby, I know and quite like it. Based on what you tell me about
Python's philosophy, perhaps Ruby makes more pragmatic choices in where to
make things simple and for whom than Python.

--
Grzegorz
http://pithekos.net
Jul 18 '05 #63

Lulu of the Lotus-Eaters wrote:
gr******@pithekos.net (Grzegorz Chrupala) wrote previously:
|shocked at how awkward Paul Graham's "accumulator generator" snippet is
|in Python:
|class foo:
|    def __init__(self, n):
|        self.n = n
|    def __call__(self, i):
|        self.n += i
|        return self.n

Me too. The way I'd do it is probably a lot closer to the way Schemers
would do it:
>>> def foo(i, accum=[0]):
...     accum[0] += i
...     return accum[0]
...
>>> foo(1)
1
>>> foo(3)
4

Shorter, and without an awkward class.


There's an important difference: with your approach, you cannot just
instantiate multiple independent accumulators like with the other --
a = foo(10)
b = foo(23)
in the 'class foo' approach, just as in all of those where foo returns an
inner-function instance, a and b are now totally independent accumulator
callables -- in your approach, 'foo' itself is the only 'accumulator
callable', and a and b after these two calls are just two numbers.

Making a cookie, and making a cookie-cutter, are quite different issues.
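The cookie/cookie-cutter difference is easy to see side by side. A quick sketch (modern print syntax) contrasting the two approaches from the quoted posts:

```python
class Foo(object):
    # class-based accumulator: each instance is an independent "cookie"
    def __init__(self, n):
        self.n = n
    def __call__(self, i):
        self.n += i
        return self.n

a = Foo(10)
b = Foo(23)
print(a(1), b(1))      # 11 24 -- two independent accumulators

def foo(i, accum=[0]):
    # default-argument version: ONE shared list across all calls
    accum[0] += i
    return accum[0]

print(foo(1), foo(1))  # 1 2 -- a single shared accumulator
```

The default argument is evaluated once at definition time, so every call mutates the same list; the class (or a closure-returning factory) is what actually mints fresh accumulators.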
Alex

Jul 18 '05 #64

In article <ma**********************************@python.org >,
Lulu of the Lotus-Eaters <me***@gnosis.cx> wrote:
gr******@pithekos.net (Grzegorz Chrupala) wrote previously:
|shocked at how awkward Paul Graham's "accumulator generator" snippet is
|in Python:
|class foo:
|    def __init__(self, n):
|        self.n = n
|    def __call__(self, i):
|        self.n += i
|        return self.n

Me too. The way I'd do it is probably a lot closer to the way Schemers
would do it:
>>> def foo(i, accum=[0]):
...     accum[0] += i
...     return accum[0]
...
>>> foo(1)
1
>>> foo(3)
4

Shorter, and without an awkward class.


There's an important difference between these two: the object-based
solution (and the solutions with two nested functions and a closure)
allow more than one accumulator to be created. Yours only creates a
one-of-a-kind accumulator.

I happen to like the object-based solution better. It expresses more
clearly to me the intent of the code. I don't find the class awkward;
to me, a class is what you use when you want to keep some state around,
which is exactly the situation here. "Explicit is better than
implicit." Conciseness is not always a virtue.

--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Jul 18 '05 #65



Alex Martelli wrote:
record as promising that the major focus in the next release
of Python where he can introduce backwards incompatibilities
(i.e. the next major-number-incrementing release, 3.0, perhaps,
say, 3 years from now) will be the _elimination_ of many of
the "more than one way to do it"s that have accumulated along
the years mostly for reasons of keeping backwards compatibility
(e.g., lambda, map, reduce, and filter,
Oh, goodie, that should win Lisp some Pythonistas. :) I wonder if Norvig
will still say Python is the same as Lisp after that.
Python draws a firm distinction between expressions and
statements. Again, the deep motivation behind this key
distinction can be found in several points in the Zen of
Python, such as "flat is better than nested" (doing away
with the expression/statement separation allows and indeed
encourages deep nesting) and "sparse is better than dense"
(that 'doing away' would encourage expression/statements
with a very high density of operations being performed).


In Lisp, all forms return a value. How simple is that? Powerful, too,
because a rule like "flat is better than nested" is flat out dumb, and I
mean that literally. It is a dumb criterion in that it does not consider
the application.

Take a look at the quadratic formula. Is that flat? Not. Of course
Python allows nested math (hey, how come!), but non-mathematical
computations are usually trees, too.

I was doing an intro to Lisp when someone brought up the question of
reading deeply nested stuff. It occurred to me that, if the computation
is indeed the moral equivalent of the quadratic formula, calling various
lower-level functions instead of arithmetic operators, then it is
/worse/ to be reading a flattened version in which subexpression results
are pulled into local variables, because then one has to mentally
decipher the actual hierarchical computation from the bogus flat sequence.

So if we have:

(defun some-vital-result (x y z)
  (finally-decide
   (if (serious-concern x)
       (just-worry-about x z)
       (whole-nine-yards x
                         (composite-concern y z)))))

...well, /that/ visually conveys the structure of the algorithm, almost
as well as a flowchart (as well if one is accustomed to reading Lisp).
Unwinding that into an artificial flattening /hides/ the structure.
Since when is that "more explicit"? The structure then becomes implicit
in the temp variable bindings and where they get used and in what order
in various steps of a linear sequence forced on the algorithm.
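The same contrast can be rendered in Python. A sketch with invented placeholder bodies (every helper below is made up purely so the snippet runs; only the shape mirrors the Lisp example):

```python
# invented stand-ins for the functions in the Lisp example
def serious_concern(x):      return x > 0
def just_worry_about(x, z):  return x + z
def composite_concern(y, z): return y * z
def whole_nine_yards(x, c):  return x - c
def finally_decide(r):       return r

def some_vital_result(x, y, z):
    # nested form, mirroring the structure of the Lisp version
    return finally_decide(
        just_worry_about(x, z) if serious_concern(x)
        else whole_nine_yards(x, composite_concern(y, z)))

def some_vital_result_flat(x, y, z):
    # the artificially flattened version the post argues against:
    # the tree of the computation must be reconstructed in one's head
    composite = composite_concern(y, z)
    if serious_concern(x):
        partial = just_worry_about(x, z)
    else:
        partial = whole_nine_yards(x, composite)
    return finally_decide(partial)

print(some_vital_result(1, 2, 3))       # 4
print(some_vital_result_flat(1, 2, 3))  # 4
```

Both compute the same thing; the nested form shows the call tree directly, while the flat form smears it across temporaries.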

I do not know what Zen is, but I do know that is not Zen.

Yes, the initial reaction of a COBOL programmer to a deeply nested form
is "whoa! break it down for me!". But that is just lack of familiarity.
Anyone in a reasonable amount of time can get used to and then benefit
from reading nested code. Similarly with every form returning a
value...the return statement looks silly in pretty short order if one
spends any time at all with a functional language.
kenny

Jul 18 '05 #66

[comp.lang.functional removed]
Peter Seibel <pe***@javamonkey.com> writes:
which seems pretty similar to the Python version.

(If of course we didn't already have the FILL function that does just
that.)


Just for the record, in python all you'd write is: v[:] = a

'as
Jul 18 '05 #67



Alexander Schmolck wrote:
pr***********@comcast.net writes:

mi*****@ziplip.com writes:

I think everyone who used Python will agree that its syntax is
the best thing going for it.


I've used Python. I don't agree.

I'd be interested to hear your reasons. *If* you take the sharp distinction
that python draws between statements and expressions as a given, then python's
syntax, in particular the choice to use indentation for block structure, seems
to me to be the best choice among what's currently on offer (i.e. I'd claim
that python's syntax is objectively much better than that of the C and Pascal
descendants -- comparisons with smalltalk, prolog or lisp OTOH are an entirely
different matter).


The best choice for code indentation in any language is M-C-q in Emacs.

Cheers
--
Marco

Jul 18 '05 #68

Since no one has done a point-by-point correction of the errors w/rt
Scheme...

On 03 Oct 2003 11:25:31 -0400, Jeremy H. Brown <jh*****@ai.mit.edu> wrote:
Here are a few of the (arguably) notable differences:

                   Scheme                  Common Lisp
Philosophy         minimalism,             comprehensiveness,
                   orthogonality           compromise
Namespaces         one                     two (functions, variables)

More than two, actually.

Continuations      yes                     no
Object system      no                      yes

It really depends on how you define 'object system' as to whether or not
Scheme has one. I personally think it does, but you have to be prepared
to crawl around the foundations of OOP (and CS generally) before this
becomes apparent. It helps if you've ever lived with unconventional
object systems like Self.

Exceptions         no                      yes

Yes, via continuations, which reify the fundamental control operators in
all languages.

Macro system       syntax-rules            defmacro

Most Schemes provide defmacro-style macros as well, since they are
relatively easy to implement correctly (easier than syntax-rules, anyway).

Implementations    >10                     ~4

Too many to count. The FAQ lists over twenty. IMO there are about 9
'major' implementations which have relatively complete compliance to
R5RS and/or significant extension libraries.

Performance        "worse"                 "better"

This is absolutely wrong. Scheme actually boasts one of the most
efficient compilers on the planet in the Stalin (Static Language
Implementation) Scheme system. Larceny, Bigloo, and Gambit are also all
quite zippy when compiled.

Standards          IEEE                    ANSI

Hrmf. 'Scheme' and 'Standard' are slightly skewed terms. This is
probably both the greatest weakness of the language and also its
greatest strength. R5RS is more of a description to programmers of how
to write portable code than it is a constraint on implementors. Scheme
is probably more of a "family" of languages than Lisp is at that.

Anyway, nobody really pays much attention to IEEE, although that may
change since it's being reworked this year. The real standard thus far
has been the community consensus document called R5RS, the Revised^5
Report on the Algorithmic Language Scheme. There is a growing consensus
that it needs work, but nobody has yet figured out how to make a new
version happen. (And I believe that the IEEE effort is just bringing
IEEE up to date w/R5RS.)

Reference name     R5RS                    CLTL2
Reference length   50pp                    1029pp
Standard libraries "few"                   "more"

Well, we're up to SRFI-45 (admittedly a number of them have been
withdrawn, but the code and specification are still available) and
there's very little overlap. Most of the SRFIs have highly portable
implementations.

Support Community  Academic                Applications writers

In outlook, perhaps, but the academic component has dropped fairly
significantly over the years. The best implementations still come out
of academia, but the better libraries are starting to come from people
in the industry.

There is also an emphasis on heavily-armed programming which is sadly
lacking in other branches of the IT industry. Remember - there is no
Scheme Underground.

david rush
--
(\x.(x x) \x.(x x)) -> (s i i (s i i))
-- aki helin (on comp.lang.scheme)
Jul 18 '05 #69

P: n/a

jc*@iteris.com (MetalOne) writes:
I have tried on 3 occasions to become a LISP programmer, based upon
the constant touting of LISP as a more powerful language and that
ultimately S-exprs are a better syntax. Each time, I have been
stopped because the S-expr syntax makes me want to vomit.
:-)

Although people are right when they say that S-exprs are simpler, and
once you get used to them they are actually easier to read, I think
the visual impact they have on those not used to it is often
underestimated.

And to be honest, trying to deal with all these parenthesis in an
editor which doesn't help you is not an encouraging experience, to say
the least. You need at least a paren-matching editor, and it is a real
big plus if it also can reindent your code properly. Then, very much
like in python, the indent level tells you exactly what is happening,
and you pretty much don't see the parens anymore.

Try it! In emacs, or Xemacs, open a file ending in .lisp and
copy/paste this into it:

;; Split a string at whitespace.
(defun splitatspc (str)
  (labels ((whitespace-p (c)
             (find c '(#\Space #\Tab #\Newline))))
    (let* ((posnew -1)
           (posold 0)
           (buf (cons nil nil))
           (ptr buf))
      (loop while (and posnew (< posnew (length str))) do
            (setf posold (+ 1 posnew))
            (setf posnew (position-if #'whitespace-p str
                                      :start posold))
            (let ((item (subseq str posold posnew)))
              (when (< 0 (length item))
                (setf (cdr ptr) (list item))
                (setf ptr (cdr ptr)))))
      (cdr buf))))

Now place the cursor on the paren just in front of the defun in the
first line, and hit ESC followed by <ctrl-Q>.
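(For what it's worth, the Python counterpart of splitatspc is just the
built-in str.split, which with no arguments splits on any run of
whitespace and drops the empty pieces:)

```python
text = "foo  bar\tbaz\nquux"
parts = text.split()   # no argument: split on runs of whitespace
assert parts == ['foo', 'bar', 'baz', 'quux']
```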
If a set of macros could be written to improve LISP syntax, then I
think that might be an amazing thing. An interesting question to me
is why hasn't this already been done.


Because they are so damned regular. After some time you do not even
think about the syntax anymore.

Jul 18 '05 #70

P: n/a
Grzegorz Chrupała wrote:
...
I have some doubts about the notion of simplicity which you (or Guido)
seem to be taking for granted. I don't think it is that straightforward
to agree about what is simpler, even if you do agree that simpler is
better. Unless you objectivize this concept you can argue that a "for"
loop is simpler than a "map" function and I can argue to the contrary
and we'll be talking past each other: much depends on what you are more
familiar with and similar random factors.
I have both learned, and taught, many different languages -- and my
teaching was both to people already familiar with programming, and to
others who were not programmers but had some experience and practice
of "more rigorous than ordinary" thinking (in maths, physics, etc),
and to others yet, of widely varying ages, who lacked any such practice.

I base my notions of what is simple, first and foremost, on the experience
of what has proved easy to teach, easy to learn, and easy for learners to
use. Secondarily, on the experience of helping experienced programmers
design, develop and debug their code (again in many languages, though
nowhere as wide a variety as for the learning and teaching experience).

None of this (like just about nothing in human experiential knowledge about
such complicated issues as the way human beings think and behave) can be
remotely described as "objective".

As an example of how subjective this can be, most of the features you
mention as too complex for Python to support are in fact standard in
Scheme (true lexical scope, implicit return, no expression/statement
Tut-tut. You are claiming, for example, that I mentioned the lack
of distinction between expressions and statements as "too complex for
Python to support": I assert your claim is demonstrably false, and
that I NEVER said that it would be COMPLEX for Python to support such
a lack. What I *DID* say on the subject, and I quote, was:

"""
Python draws a firm distinction between expressions and
statements. Again, the deep motivation behind this key
distinction can be found in several points in the Zen of
Python, such as "flat is better than nested" (doing away
with the expression/statement separation allows and indeed
encourages deep nesting) and "sparse is better than dense"
(that 'doing away' would encourage expression/statements
with a very high density of operations being performed).
"""

Please read what I write rather than putting in my mouth words that
I have never written, thank you. To reiterate, it would have been
quite simple to design Python without any distinction between
expressions and statement; HOWEVER, such a lack of distinction
would have encouraged programs written in Python by others to
break the Python principles that "flat is better than nested"
(by encouraging nesting) and "sparse is better than dense" (by
encouraging high density).
distinction) and yet Scheme is widely regarded as one of the simplest
programming languages out there, more so than Python.
But it does encourage nesting and density. Q.E.D.
Another problem with simplicity is that introducing it in one place may
increase complexity in another place.
It may (which is why "practicality beats purity", yet another Zen of
Python principle...), therefore it becomes important to evaluate the
PRACTICAL IMPORTANCE, in the language's environment, of that "other
place". All engineering designs (including programming languages)
are a rich tapestry of trade-offs. I think Python got its trade-offs
more nearly "right" (for my areas of interest -- particularly for large
multi-author application programs and frameworks, and for learning
and teaching) than any other language I know.
Specifically consider the simple (simplistic?) rule you cite that Python
uses to determine variable scope ("if the name gets bound (assigned to) in
local scope, it's a local variable"). That probably makes the
implementor's job simpler, but it at the same time makes it more complex
and less intuitive for the programmer to code something like the
accumulator generator example -- you need to use a trick of wrapping the
variable in a list.
It makes the _learner_'s job simple (the rule he must learn is simple),
and it makes the _programmer_'s job simple (the rule he must apply to
understand what will happen if he codes in way X is simple) -- those
two are at least as important as simplifying the implementor's job (and
thus making implementations smaller and more bug-free). If the inability
to re-bind outer-scope variables encourages all programmers to use
classes whenever they have to decide how to bundle some code and some
data, i.e. if it makes classes the "one obvious way to do it" for such
purposes, the resulting greater uniformity in Python programs is deemed
to be a GOOD thing in the Python viewpoint. (In practice, there are of
course always "other ways to do it" -- as long as they're "non-obvious",
that's presumably tolerable, even if not ideal:-).
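To make the "just use a class" alternative concrete, here is a minimal
sketch (the name Accumulator is mine) of the accumulator example done
with a callable class, bundling the data with the code instead of
rebinding an outer-scope name:

```python
class Accumulator(object):
    """Bundle the running total (data) with the accumulation (code)."""
    def __init__(self, value):
        self.value = value
    def __call__(self, addend):
        self.value += addend
        return self.value

acc = Accumulator(23)
```

Each call acc(100) then adds to the stored total, just like the closure
versions discussed below.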

As for Ruby, I know and quite like it. Based on what you tell me about
Python's philosophy, perhaps Ruby makes more pragmatic choices in where to
make things simple and for whom than Python.


I thought the total inability to nest method definitions (while in Python
you get perfectly normal lexical closures, except that you can't _rebind_
outer-scope names -- hey, in functional programming languages you can't
rebind ANY name, yet nobody ever claimed that this means they "don't have
true lexical closures"...!-), and more generally the deep split between
the space of objects and that of methods (a split that's simply not there
in Python), would have been show-stoppers for a Schemer, but it's always
nice to learn otherwise. I think, however, that deeming the set of
design trade-offs in Ruby as "more pragmatic" than those in Python is
a distorted vision, because it fails to consider the context. If my main
goal in programming was to develop experimental designs in small groups,
I would probably appreciate certain features of Ruby (such as the ability
to change *ANY* method of existing built-in classes); thinking of rather
large teams developing production applications and frameworks, the same
features strike me as a _negative_ aspect. The language and cultural
emphasis towards clarity, simplicity and uniformity, against cleverness,
terseness, density, and "more than one way to do-ity", make Python by
far the most practical language for me to teach, and in which to program
the kind of application programs and frameworks that most interest me --
but if my interest was instead to code one-liner scripts for one-off
system administration tasks, I might find that emphasis abominable...!
Alex

Jul 18 '05 #71

P: n/a
On Sat, 04 Oct 2003 17:02:41 GMT, Alex Martelli <al***@aleax.it> wrote:
[...]

def make_accumulator(initial_value):
accumulator = Bunch(value=initial_value)
def accumulate(addend):
accumulator.value += addend
return accumulator.value
return accumulate

accumulate = make_accumulator(23)
print accumulate(100) # emits 123
print accumulate(100) # emits 223
(using the popular Bunch class commonly defined as:
class Bunch(object):
def __init__(self, **kwds):
self.__dict__.update(kwds)
). There is, of course, a cultural gulf between this
verbose 6-liner [using an auxiliary class strictly for
reasons of better readability...!] and the terse Ruby
1-liner above, and no doubt most practitioners of both
languages would in practice choose intermediate levels,
such as un-densifying the Ruby function into:
I like the Bunch class, but the name suggests vegetables to me ;-)

Since the purpose (as I see it) is to create a unique object with
an attribute name space, I'd prefer a name that suggests that, e.g., NS,
or NSO or NameSpaceObject, so I am less likely to need a translation.
BTW, care to comment on a couple of close variants of Bunch with per-object class dicts? ...

def mkNSC(**kwds): return type('NSC', (), kwds)()

or, stretching the one line a bit to use the instance dict,

def mkNSO(**kwds): o=type('NSO', (), {})(); o.__dict__.update(kwds); return o

I'm wondering how much space is actually wasted with a throwaway class. Is there a
lazy copy-on-write kind of optimization for class and instance dicts that prevents
useless proliferation? I.e.,
>>> type('',(),{}).__dict__
<dictproxy object at 0x00901570>
>>> type('',(),{}).__dict__.keys()
['__dict__', '__module__', '__weakref__', '__doc__']

seems like it could be synthesized by the proxy without a real dict
until one was actually needed to hold other state.

For qnd ns objects, I often do

nso = type('',(),{})()
nso.att = 'some_value'

and don't generally worry about the space issue anyway, since I don't make that many.

def outer(a)
  proc do |b|
    a+=b
  end
end

or shortening/densifying the Python one into:

def make_accumulator(a):
    value = [a]
    def accumulate(b):
        value[0] += b
        return value[0]
    return accumulate
Or you could make a one-liner (for educational purposes only ;-)
>>> def mkacc(a): return (lambda a,b: a.__setitem__(0,a[0]+b) or a[0]).__get__([a])
...
>>> acc = mkacc(100)
>>> acc(3)
103
>>> acc(5)
108

Same with defining Bunch (or even instanciating via a throwaway). Of course I'm not
suggesting these as a models of spelling clarity, but it is sometimes interesting to see
alternate spellings of near-if-not-identical functionality.
>>> Bunch = type('Bunch',(),{'__init__':lambda self,**kw:self.__dict__.update(kw)})
>>> bunch = Bunch(value='initial_value')
>>> bunch.value
'initial_value'

but I think the "purer" (more extreme) versions are
interesting "tipizations" for the languages, anyway.

Oh goody, a new word (for me ;-). Would you define "tipization"?

Regards,
Bengt Richter
Jul 18 '05 #72

P: n/a
pr***********@comcast.net writes:
But syntactic abstractions *are* a change to the language, it just
sounds fancier.
Yes, this is obviously true. Functional abstractions also change the
language, even if it's in a slightly different way. Any programming
language is, after all, a set of functional and syntactic
abstractions.
I agree that injudicious use of macros can destroy the readability
of code, but judicious use can greatly increase the readability. So
while it is probably a bad idea to write COND1 that assumes
alternating test and consequence forms, it is also a bad idea to
replicate boilerplate code because you are eschewing macros.


I suppose this is about the same differentiation I wanted to make by
the terms "syntactic abstraction" (stressing the idea of building a
syntax that matches a particular problem area or programming pattern),
and "changing the language" which is just that, not being part of any
particular abstraction other than the programming language itself.

--
Frode Vatvedt Fjeld
Jul 18 '05 #73

P: n/a
Bengt Richter wrote:
...
I like the Bunch class, but the name suggests vegetables to me ;-)
Well, I _like_ vegetables...
BTW, care to comment on a couple of close variants of Bunch with
per-object class dicts? ...

def mkNSC(**kwds): return type('NSC', (), kwds)()
Very nice (apart from the yecchy name;-).
or, stretching the one line a bit to use the instance dict,

def mkNSO(**kwds): o=type('NSO', (), {})(); o.__dict__.update(kwds); return o
I don't see the advantage of explicity using an empty dict and then
updating it with kwds, vs using kwds directly.
I'm wondering how much space is actually wasted with a throwaway class. Is
there a lazy copy-on-write kind of optimization for class and instance
dicts that prevents useless proliferation? I.e.,


I strongly doubt there's any "lazy copy-on-write" anywhere in Python.
The "throwaway class" will be its dict (which, here, you need -- that's
the NS you're wrapping, after all) plus a little bit (several dozen bytes
for the typeobject, I'd imagine); an instance of Bunch, probably a bit
smaller. But if you're going to throw either away soon, who cares?

but I think the "purer" (more extreme) versions are
interesting "tipizations" for the languages, anyway.

Oh goody, a new word (for me ;-). Would you define "tipization"?


I thought I was making up a word, and slipped by spelling it
as in Italiano "tipo" rather than English "type". It appears
(from Google) that "typization" IS an existing word (sometimes
mis-spelled as "tipization"), roughly in the meaning I intended
("characterization of types") -- though such a high proportion
of the research papers, institutes, etc, using "typization",
seems to come from Slavic or Baltic countries, that I _am_
left wondering...;-).
Alex

Jul 18 '05 #74

P: n/a
"David Rush" <dr***@aol.net> wrote in message
news:op**************@news.nscp.aoltw.net...
On Fri, 3 Oct 2003 09:36:32 -0400, Terry Reedy <tj*****@udel.edu> wrote:
... Lispers posting here have gone to pains to state that Scheme is
not a dialect of Lisp but a separate Lisp-like language. Could you
give a short listing of the current main differences (S vs. CL)?

(By 'here', I meant comp.lang.python.)

Do you even begin to appreciate how inflammatory such a request is
when posted to to both c.l.l and c.l.s?
As implied by 'here', I did not originally notice the cross-posting
(blush, laugh ;<). I am pleased with the straightforward, civil, and
helpful answers I have received, including yours, and have saved them
for future reference.

.... compromise system design, for both good and bad .... embedded in it. This cuts both ways, mind you.

....

I believe in neither 'one true religion' nor in 'one best
algorithm/computer language for all'. Studying Lisp has helped me
better understand Python and the tradeoffs embodied in its design. I
certainly better appreciate the issue of quoting and its relation to
syntax.

Terry J. Reedy
Jul 18 '05 #75

P: n/a
mi*****@ziplip.com writes:
I'd like to know if it may be possible to add a powerful macro system
to Python, while keeping its amazing syntax,


I fear it would not be. I can't say for certain but I found that the
syntax rules out nesting statements inside expressions (without adding
some kind of explicit bracketing, which rather defeats the point of
Python syntax) and you might run into similar difficulties if adding
macros. It's a very clean syntax (well, with a few anomalies) but
this is at the price of a rigid separation between statements and
expressions, which doesn't fit well with the Lisp-like way of doing
things.

Myself I rather like the option chosen by Haskell, to define an
indentation-based syntax which is equivalent to one with bracketing,
and let you choose either. You might do better to add a new syntax to
Lisp than to add macro capabilities to Python. Dylan is one Lisp
derivative with a slightly more Algol-like syntax, heh, Logo is
another; GNU proposed some thing called 'CTAX' which was a C-like
syntax for Guile Scheme, I don't know if it is usable.

If the indentation thing appeals, maybe you could preprocess Lisp
adding a new kind of bracket - say (: - which closes at the next line
of code on the same indentation level. Eg

(: hello
   there
(goodbye)

would be equivalent to

(hello
   there)
(goodbye)

I dunno, this has probably already been done.
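It has, in various forms; a minimal sketch of such a preprocessor takes
only a few lines of Python (the function name is mine, and I assume the
rule is that each "(:" form closes just before the next line indented at
or below its own level):

```python
def close_colon_parens(src):
    """Expand '(:' brackets: each one closes just before the next
    line of code indented at or below the '(:' line itself."""
    out, open_indents = [], []
    for line in src.split("\n"):
        stripped = line.strip()
        if stripped:
            indent = len(line) - len(line.lstrip())
            # close any '(:' forms this line falls outside of
            while open_indents and indent <= open_indents[-1]:
                open_indents.pop()
                out[-1] += ")"
            if stripped.startswith("(:"):
                open_indents.append(indent)
                line = line[:indent] + "(" + stripped[2:].lstrip()
        out.append(line)
    while open_indents:          # close anything still open at EOF
        open_indents.pop()
        out[-1] += ")"
    return "\n".join(out)
```

Feeding it the example above turns "(: hello / there / (goodbye)" into
"(hello / there) / (goodbye)".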

--
Ed Avis <ed@membled.com>
Jul 18 '05 #76

P: n/a
Alexander Schmolck wrote:
[comp.lang.functional removed]
Peter Seibel <pe***@javamonkey.com> writes:
which seems pretty similar to the Python version.

(If of course we didn't already have the FILL function that does just
that.)


Just for the record, in python all you'd write is: v[:] = a

'as


I suspect you may intend "v[:] = [a]*len(v)", although a good alternative
may also be "v[:] = itertools.repeat(a, len(v))".
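A quick illustration of the difference (v[:] = a with a non-iterable a
simply raises a TypeError, since slice assignment rebinds the slice to
the elements of an iterable):

```python
import itertools

v = [1, 2, 3, 4]
a = 0
try:
    v[:] = a                 # assigning a non-iterable to a slice fails
except TypeError:
    pass

v[:] = [a] * len(v)          # fill every slot with a
assert v == [0, 0, 0, 0]

w = [1, 2, 3]
w[:] = itertools.repeat(a, len(w))   # same effect via an iterator
assert w == [0, 0, 0]
```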
Alex

Jul 18 '05 #77

P: n/a
Alex Martelli <al***@aleax.it> writes:
Sure, but aren't these the examples that are being presented? Isn't
"with-collector" a general purpose iteration construct, etc? Maybe
only _special_ purpose ones should be built with macros (if you are
right that _general_ purpose ones should not be), but the subtleness
of the distinction leaves me wondering about the practice.


It is a subtle distinction, just like a lot of other issues in
programming are quite subtle. And I think this particular issue
deserves more attention than it has been getting (so far as I know).

As for the current practice, I know that I quite dislike code that
uses things like with-collector, and I especially dislike it when I
have to look at the macro's expansion to see what is going on, and I
know there are perfectly fine alternatives in the standard syntax. On
the other hand, I do like it when I see a macro call that reduces tens
or even hundreds of lines of code to just a few lines that make it
immediately apparent what's happening. And I know I'd never want to
use a language with anything less than lisp's macros.

--
Frode Vatvedt Fjeld
Jul 18 '05 #78

P: n/a
Sugar exists. http://redhog.org/Projects/Programming/Current/Sugar/
let
  group
    foo
      + 1 2
    bar
      + 3 4
  + foo bar
Jul 18 '05 #79

P: n/a
In comp.lang.scheme Grzegorz Chrupala <gr******@pithekos.net> wrote:
jc*@iteris.com (MetalOne) wrote in message news:<92**************************@posting.google. com>...
Scheme
(define vector-fill!
  (lambda (v x)
    (let ((n (vector-length v)))
      (do ((i 0 (+ i 1)))
          ((= i n))
        (vector-set! v i x)))))

Python
def vector_fill(v, x):
    for i in range(len(v)):
        v[i] = x

To me the Python code is easier to read, and I can't possibly fathom
how somebody could think the Scheme code is easier to read. It truly
boggles my mind.
Pick a construct your pet language has specialized support, write an
ugly equivalent in a language that does not specifically support it
and you have proved your pet language to be superior to the other
language. (I myself have never used the "do" macro in Scheme and my
impression is few people do. I prefer "for-each", named "let" or the
CL-like "dotimes" for looping).


While true, if solving a problem requires a lot of constructs that one
language provides and for which you have to do lots of extra work in
the other, one might as well take the pragmatic approach that the other
language is better for the given problem at hand.

Cheers,
--
Grzegorz


--
Sander

+++ Out of cheese error +++
Jul 18 '05 #80

P: n/a
"Alex Martelli" wrote:
....
def outer(a) proc do |b| a+=b end end

x = outer(23)
puts x.call(100) # emits 123
puts x.call(100) # emits 223

[i.e., I can't think of any way you could just use x(100)
at the end of such a snippet in Ruby -- perhaps somebody
more expert of Ruby than I am can confirm or correct...?]


Guy is probably thinking about something like this

---
def outer(sym,a)
  Object.instance_eval {
    private # define a private method
    define_method(sym) {|b| a+=b }
  }
end

outer(:x,24)

p x(100) # 124
p x(100) # 224
---
but there is no way to write a ``method returning method'' ::outer
in Ruby that could be used in the form

----
x = outer(24)
x(100)
----

On the other hand, using []-calling convention
and your original definition, you get - at least
visually - fairly close.

---
def outer(a) proc do |b| a+=b end end

x = outer(23)
puts x[100] # emits 123
puts x[100] # emits 223
---
/Christoph
Jul 18 '05 #81

P: n/a
In comp.lang.scheme David Rush <dr***@aol.net> wrote:
On 03 Oct 2003 14:44:36 +0300, Toni Nikkanen <to**@tuug.fi> wrote:
It's be interesting to know where people got the idea of learning
Scheme/LISP from (apart from compulsory university courses)?
Emacs. I've noticed over the years that people don't really get Emacs
religion until they've started hacking elisp. I know that the frustration
of having almost-but-not-quite the behavior I wanted on top of having all
that source code was a powerful incentive for me to learn Lisp. Of course
my apreciation of Emacs only increased as I went...


I have at times almost gnawed off my hand to avoid going down that path.
I'd rather write cobol than elisp...

The thing that sealed it for me was re-programming SCWM's behavior so that
I could use X w/no mouse &cet. That got me hooked on Scheme (I had been
hacking SML at roughly the same time while looking for the foundations of
OOP), which was really just about perfect semantically.

david rush


--
Sander

+++ Out of cheese error +++
Jul 18 '05 #82

P: n/a
Alex Martelli wrote:

Tut-tut. You are claiming, for example, that I mentioned the lack
of distinction between expressions and statements as "too complex for
Python to support": I assert your claim is demonstrably false, and
that I NEVER said that it would be COMPLEX for Python to support such
a lack. What I *DID* say on the subject, and I quote, was:
Sorry if I inadvertently distorted your words. What I meant by my
admittedly rhetorical statement was something like: "these features
either introduce too much complexity, or are messy, or are otherwise
incompatible with Python's philosophy, and for this reason the language
refuses to support them." Not necessarily too complex to *implement*.
I do realize that no-statements-just-expressions is not a particularly
challenging design issue.
It makes the _learner_'s job simple (the rule he must learn is simple),
That is plausible.
and it makes the _programmer_'s job simple (the rule he must apply to
understand what will happens if he codes in way X is simple)
This makes less sense. The rule may be simple but it also limits the
expressiveness of the language and forces the programmer to work around the
limitations in a contorted and far from "simple" way.

I thought the total inability to nest method definitions (while in Python
you get perfectly normal lexical closures, except that you can't _rebind_
outer-scope names -- hey, in functional programming languages you can't
rebind ANY name, yet nobody ever claimed that this means they "don't have
true lexical closures"...!-), and more generally the deep split between
the space of objects and that of methods (a split that's simply not there
in Python), would have been show-stoppers for a Schemer, but it's always
nice to learn otherwise.


I don't really feel quite qualified discuss Ruby's design decisions wrt the
relation between methods, procedures and objects, but I don't think the
split between methods and objects is as deep as you claim:

irb(main):011:0> meth="f-o-o".method(:split)
=> #<Method: String#split>
irb(main):012:0> meth.class
=> Method
irb(main):013:0> meth.kind_of?(Object)
=> true
irb(main):014:0> meth.call('-')
=> ["f", "o", "o"]
irb(main):015:0>

I do tend to think that Ruby would be better off with a more unified
treatment of blocks, procedures and methods, but my understanding of the
issues involved is very incomplete. Perhaps Smalltalk experts would be more
qualified to comment on this.

--
Grzegorz
http://pithekos.net

Jul 18 '05 #83

P: n/a


Sander Vesik wrote:
I have at times almost gnawed off my hand to avoid going down that path.
I'd rather write cobol than elisp...


Mileage does vary :): http://alu.cliki.net/RtL%20Emacs%20Elisp

That page lists people who actually cite Elisp as at least one way they
got turned on to Lisp. I started the survey when newbies started showing
up on the c.l.l. door in still small but (for Lisp) significantly larger
numbers. Paul Graham holds a commanding lead, btw.

kenny

Jul 18 '05 #84

P: n/a
Still have only made slight headway into learning Lisp since the
last discussion, so I've been staying out of this one. But

Kenny Tilton:
Take a look at the quadratic formula. Is that flat? Not. Of course
Python allows nested math (hey, how come!), but non-mathematical
computations are usually trees, too.
Since the quadratic formula yields two results, I expect most
people write it more like

droot = sqrt(b*b-4*a*c) # square root of the discriminant
x_plus = (-b + droot) / (2*a)
x_minus = (-b - droot) / (2*a)

possibly using a temp variable for the 2*a term, for a
slight bit better performance.
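As a runnable version of that flattened form (the function name is
mine; note the denominator is 2*a, per the standard formula, and I
assume a real, non-degenerate case):

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 (assumes a != 0 and a
    non-negative discriminant)."""
    droot = math.sqrt(b*b - 4*a*c)   # square root of the discriminant
    two_a = 2*a                      # shared denominator, hoisted once
    return (-b + droot) / two_a, (-b - droot) / two_a

# e.g. x**2 - 3*x + 2 = (x - 1)*(x - 2)
```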
It occurred to me that, if the computation
is indeed the moral equivalent of the quadratic formula, calling various
lower-level functions instead of arithmetic operators, then it is
/worse/ to be reading a flattened version in which subexpression results
are pulled into local variable, because then one has to mentally
decipher the actual hierarchical computation from the bogus flat sequence.
But isn't that flattening *exactly* what occurs in math. Let me pull
out my absolute favorite math textbook - Bartle's "The Elements
of Real Analysis", 2nd ed.

I opened to page 213, which is in the middle of the book.

29.1 Definition. If P is a partition of J, then a Riemann-Stieltjes sum
of f with respect to g and corresponding to P = (x_0, x_1, ..., x_n) is
a real number S(P; f, g) of the form

                   n
    S(P; f, g) = SIGMA f(eta_k) {g(x_k) - g(x_{k-1})}
                  k = 1

Here we have selected numbers eta_k satisfying

    x_{k-1} <= eta_k <= x_k    for k = 1, 2, ..., n

There's quite a bit going on here behind the scenes which are
the same flattening you talk about. For examples: the definiton
of "partition" is given elsewhere, the notations of what f and g
mean, and the nomenclature "SIGMA"
Let's try mathematical physics, so I pulled out Arfken's
"Mathematical Methods for Physicists", 3rd ed.

About 1/3rd of the way through the book this time, p 399

Exercise 7.1.1 The function f(z) expanded in a Laurent series exhibits
a pole of order m at z = z_0. Show that the coefficient of (z-z_0)**-1,
a_{-1}, is given by

                  1       d[m-1]
    a_{-1} = -------- * -------- ( (z-z_0)**m * f(z) )   evaluated as
              (m-1)!    dz[m-1]                          z -> z_0
This requires going back to get the definition of a Laurent series,
and of a pole, knowing how to evaluate a function at a limit point,
and remembering the bits of notation which are so hard to express
in 2D ASCII. (the d[m-1]/dz[m-1] is meant to be the d/dz operator
taken m-1 times).

In both cases, the equations are flattened. They aren't pure trees
nor are they absolutely flat. Instead, names are used to represent
certain ideas -- that is, flatten them. Yes, it requires people to
figure out what these names mean, but on the other hand, that's part
of training.

And part of that training is knowing which terms are important
enough to name, and the balance between using using old
names and symbols and creating new ones.
So if we have:

(defun some-vital-result (x y z)
  (finally-decide
    (if (serious-concern x)
        (just-worry-about x z)
        (whole-nine-yards x
                          (composite-concern y z)))))

...well, /that/ visually conveys the structure of the algorithm, almost
as well as a flowchart (as well if one is accustomed to reading Lisp).
Unwinding that into an artificial flattening /hides/ the structure.
"Flat is better than nested." does not mean nested is always and
forever wrong. "Better" means there's a balance.

"Readability counts." is another guideline.
I do not know what Zen is, but I do know that is not Zen.


The foreigner came to the monastery, to learn more of the
ways of Zen. He listened to the monks then sat cross-legged
for a day, in the manner of initiates. Afterwards he complained
to the master saying that it would be impossible for him to reach
Nirvana because of the pains in his legs and back. Replied the
master, "try using a comfy chair," but the foreigner returned
home to his bed.

The Zen of Python outlines a set of guidelines. They are not
orthogonal and there are tensions between them. You can
take one to an extreme but the others suffer. That balance
is different for different languages. You judge the Zen of Python
using the Zen of Lisp.

Andrew
da***@dalkescientific.com
Jul 18 '05 #85

P: n/a
Kenny Tilton <kt*****@nyc.rr.com> writes:
That page lists people who actually cite Elisp as at least one way
they got turned on to Lisp. I started the survey when newbies started
showing up on the c.l.l. door in still small but (for Lisp)
significantly larger numbers. Paul Graham holds a commanding lead, btw.


I'd fooled around with other lisp systems before using GNU Emacs, but
reading the Emacs source code was how I first got to really understand
how Lisp works.
Jul 18 '05 #86

In article <Ps*****************@newsread3.news.pas.earthlink.net>,
Andrew Dalke <ad****@mindspring.com> writes
.......
Since the quadratic formula yields two results, I expect most
people write it more like

droot = sqrt(b*b-4*a*c) # square root of the discriminant
x_plus = (-b + droot) / (2*a)
x_minus = (-b - droot) / (2*a)

possibly using a temp variable for the 2*a denominator, for a
slight bit better performance.


perhaps we should be using computer algebra as suggested in this paper
http://www.mmrc.iss.ac.cn/~ascm/ascm03/sample.pdf on computing the
solutions of quadratics.
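For a concrete baseline, here is a minimal runnable version of the computation
being quoted (the function name is mine; it assumes a != 0 and real roots,
i.e. a non-negative discriminant):

```python
import math

def quadratic_roots(a, b, c):
    # Roots of a*x**2 + b*x + c == 0, via the standard formula
    # (-b +/- sqrt(b*b - 4*a*c)) / (2*a).
    droot = math.sqrt(b*b - 4*a*c)  # square root of the discriminant
    return ((-b + droot) / (2*a), (-b - droot) / (2*a))

print(quadratic_roots(1, -3, 2))  # roots of x**2 - 3x + 2: (2.0, 1.0)
```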
--
Robin Becker
Jul 18 '05 #87

In article <cr******************@twister.nyc.rr.com>,
Kenny Tilton <kt*****@nyc.rr.com> wrote:
I have at times almost gnawed off my hand to avoid going down that path.
I'd rather write cobol than elisp...


Mileage does vary :): http://alu.cliki.net/RtL%20Emacs%20Elisp

That page lists people who actually cite Elisp as at least one way they
got turned on to Lisp. I started the survey when newbies started showing
up on the c.l.l. door in still small but (for Lisp) significantly larger
numbers. Paul Graham holds a commanding lead, btw.


Heh. Does that mean former TECO programmers will get turned on to Perl?
Hasn't had that effect for me yet...

--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Jul 18 '05 #88

In comp.lang.scheme David Rush <dr***@aol.net> wrote:
Exceptions       no    yes    yes, via continuations which reify the
                              fundamental control operators in all languages


the exceptions SRFI and saying it is there as an extension would imho be a
better answer.
Implementations  >10   ~4     too many to count. The FAQ lists over twenty.
                              IMO there are about 9 'major' implementations
                              which have relatively complete compliance to
                              R5RS and/or significant extension libraries


And the number is likely to continue to increase over the years. Scheme is
very easy to implement, including as an extension language inside the
runtime of something else. The same doesn't really hold for Common Lisp.
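A quick illustration of how easy the core is: a toy subset-of-Scheme
evaluator fits in a screenful of Python (a sketch only -- no closures,
defines, or error handling; all names are mine):

```python
import math
import operator as op

def tokenize(src):
    # Split "(+ 1 (* 2 3))" into individual tokens.
    return src.replace('(', ' ( ').replace(')', ' ) ').split()

def parse(tokens):
    # Read one expression off the token stream as nested lists.
    tok = tokens.pop(0)
    if tok == '(':
        lst = []
        while tokens[0] != ')':
            lst.append(parse(tokens))
        tokens.pop(0)  # discard ')'
        return lst
    try:
        return int(tok)
    except ValueError:
        return tok     # a symbol

ENV = {'+': op.add, '-': op.sub, '*': op.mul, '/': op.truediv,
       'sqrt': math.sqrt}

def evaluate(x, env=ENV):
    if isinstance(x, str):          # symbol: variable lookup
        return env[x]
    if not isinstance(x, list):     # literal number
        return x
    if x[0] == 'if':                # special form: one branch stays unevaluated
        _, test, conseq, alt = x
        return evaluate(conseq if evaluate(test, env) else alt, env)
    f = evaluate(x[0], env)         # ordinary application: evaluate all args
    return f(*[evaluate(arg, env) for arg in x[1:]])

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # prints 7
```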

david rush


--
Sander

+++ Out of cheese error +++
Jul 18 '05 #89

Lulu of the Lotus-Eaters wrote:
gr******@pithekos.net (Grzegorz Chrupala) wrote previously:
|shocked at how awkward Paul Graham's "accumulator generator" snippet is
|in Python:
|class foo:
| def __init__(self, n):
| self.n = n
| def __call__(self, i):
| self.n += i
| return self.n

Me too. The way I'd do it is probably a lot closer to the way Schemers
would do it:
>>> def foo(i, accum=[0]):
...     accum[0] += i
...     return accum[0]
...
>>> foo(1)
1
>>> foo(3)
4

Shorter, and without an awkward class.


Yah, but instead it abuses a relatively obscure Python feature... the fact that
default arguments are created when the function is created (rather than when it
is called). I'd rather have the class, which is, IMHO, a better way to
preserve state than closures. (Explicit being better than implicit and all
that... :-)
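The behavior being called obscure -- default arguments evaluated once, at
function definition time -- in a nutshell (function names are mine):

```python
def append_to(x, lst=[]):       # the default list is created once, at def time
    lst.append(x)
    return lst

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2] -- the same list object is reused across calls

def append_to_safe(x, lst=None):  # the usual idiom for a fresh list per call
    if lst is None:
        lst = []
    lst.append(x)
    return lst

print(append_to_safe(1))  # [1]
print(append_to_safe(2))  # [2]
```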

--
Hans (ha**@zephyrfalcon.org)
http://zephyrfalcon.org/

Jul 18 '05 #90

Grzegorz Chrupała wrote:
As an example of how subjective this can be, most of the features you
mention as too complex for Python to support are in fact standard in Scheme
(true lexical scope, implicit return, no expression/statement distinction)
and yet Scheme is widely regarded as one of the simplest programming
languages out there, more so than Python.


Scheme, as a language, is arguably simpler than Python... it takes a few core
concepts and rigorously applies them everywhere. This makes the Scheme
language definition simpler than Python's. However, whether *programming in
Scheme* is simpler than *programming in Python* is a different issue
altogether. To do everyday things, should you really have to grok recursion,
deeply nested expressions, anonymous functions, complex list structures, or
environments? Of course, Python has all this as well (more or less), but they
usually don't show up in Python 101.

--
Hans (ha**@zephyrfalcon.org)
http://zephyrfalcon.org/

Jul 18 '05 #91


"Grzegorz ChrupaÅ,a" <gr******@pithekos.net> wrote in message
news:bl**********@news.ya.com...
As an example of how subjective this can be, most of the features you mention as too complex for Python to support are in fact standard in Scheme (true lexical scope, implicit return, no expression/statement distinction)
Another problem with simplicity is that introducing it in one place may increase complexity in another place. [typos corrected]


[Python simplicity=>complexity example (scopes) snipped]

[I am leaving the reduced newsgroup list as is. If anything I write
below about Lisp does not apply to Scheme specifically, my apologies in
advance.]

There is a basic Lisp example that some Lispers tend to gloss over, I
think to the ultimate detriment of promoting that more people
understand and possibly use Lisp (in whatever version).

Specifically, the syntactic simplification of unifying functions and
statements as S-expressions aids, is made possible by, and comes at
the cost of semantic complexification of the meaning of 'function
call' (or S-expression evaluation).

The 'standard' meaning in the languages I am previously familiar with
(and remember) is simple and uniform: evaluate the argument
expressions and somehow 'pass' the resulting values to the function to
be matched with the formal parameters. The only complication is in
the 'how' of the passing.

Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them as
unevaluated expressions. In other words, arguments may be
*implicitly* quoted. Since, unlike as in Python, there is no
alternate syntax to flag the alternate argument protocol, one must, as
far as I know, memorize/learn the behavior for each function. The
syntactic unification masks but does not lessen the semantic
divergence. For me, it made learning Lisp (as far as I have gotten)
more complicated, not less, especially before I 'got' what was going on.

In Python, one must explicitly quote syntactic function arguments
either with quote marks (for later possible eval()ing) or 'lambda :'
(for later possible calling). Implicit quoting requires the alternate
syntax of either operator notation ('and' and 'or'-- but these are
exceptional for operators) or a statement. Most Python statements
implicitly quote at least part of the construct. (A print statement
implicitly stringifies its object values, but this too is special
handling.)

Question: Python has the simplicity of one unified assignment
statement for the binding of names, attributes, slot and slices, and
multiples thereof. Some Lisps have the complexity of different
functions for different types of targets: set, setq, putprop, etc.
What about Scheme ;-?
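(For reference, the single Python assignment statement described above
covers all of these target kinds with one syntax:)

```python
class Box(object):
    pass

b = Box()
lst = [0, 1, 2, 3]

name = 1            # plain name binding
b.attr = 2          # attribute
lst[0] = 3          # item ("slot")
lst[1:3] = [4, 5]   # slice
x, y = 6, 7         # multiple targets at once

assert (name, b.attr, lst, x, y) == (1, 2, [3, 4, 5, 3], 6, 7)
```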

Terry J. Reedy
Jul 18 '05 #92

"Terry Reedy" <tj*****@udel.edu> writes:
Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them as
unevaluated expressions. In other words, arguments may be
*implicitly* quoted. Since, unlike as in Python, there is no
alternate syntax to flag the alternate argument protocol, one must, as
far as I know, memorize/learn the behavior for each function. The
syntactic unification masks but does not lessen the semantic
divergence. For me, it made learning Lisp (as far as I have gotten)
more complicated, not less, especially before I 'got' what going on.
What you're talking about are called "special forms" and are definitely
not functions, and are used when it is semantically necessary to leave
something in an argument position unevaluated (such as in 'cond' or
'if', Lisp 'defun' or 'setq', or Scheme 'define' or 'set!').
Programmers create them using the macro facilities of Lisp or Scheme
rather than as function definitions. There are only a handful of
special forms one needs to know in routine programming, and each one has
a clear justification for being a special form rather than a function.
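The evaluation-order point can be mimicked in Python only by explicit delay:
an ordinary function evaluates every argument first, while wrapping branches
in 'lambda:' recovers what a special form gets by fiat (the function names
here are made up for illustration):

```python
def naive_if(cond, then_val, else_val):
    # An ordinary function: BOTH branches were already evaluated
    # by the time we get here.
    if cond:
        return then_val
    return else_val

def lazy_if(cond, then_thunk, else_thunk):
    # Caller passes thunks; only the chosen branch is ever evaluated,
    # like Lisp's IF special form.
    if cond:
        return then_thunk()
    return else_thunk()

print(lazy_if(True, lambda: "yes", lambda: 1 / 0))  # prints yes; no ZeroDivisionError
```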

Lisp-family languages have traditionally held to the notion that Lisp
programs should be easily representable using the list data structure,
making it easy to manipulate programs as data. This is probably the
main reason Lisp-family languages have retained the very simple syntax
they have, as well as why there is no separate syntax for functions
and special forms.
Question: Python has the simplicity of one unified assignment
statement for the binding of names, attributes, slot and slices, and
multiples thereof. Some Lisps have the complexity of different
functions for different types of targets: set, setq, putprop, etc.
What about Scheme ;-?


Scheme has 'define', 'set!', and 'lambda' for identifier bindings (from
which 'let'/'let*'/'letrec' can be derived), and a number of mutation
operations for composite data types: 'set-car!'/'set-cdr!' for pairs,
'vector-set!' for mutating elements of vectors, 'string-set!' for
mutating strings, and probably a few others I'm forgetting.

--
Steve VanDevender "I ride the big iron" http://jcomm.uoregon.edu/~stevev
st****@hexadecimal.uoregon.edu PGP keyprint 4AD7AF61F0B9DE87 522902969C0A7EE8
Little things break, circuitry burns / Time flies while my little world turns
Every day comes, every day goes / 100 years and nobody shows -- Happy Rhodes
Jul 18 '05 #93

On Sat, 04 Oct 2003 17:02:41 GMT
Alex Martelli <al***@aleax.it> wrote:
As Paul Graham says (<URL:http://www.paulgraham.com/icad.html>):
or even

def foo(n):
lambda i: n += i
And this touches on yet another point of the Zen of Python:
explicit is better than implicit. Having a function
implicitly return the last expression it computes would
violate this point (and is in fact somewhat error-prone,
in my experience, in the several languages that adopt
this rule).
I don't mean to start a flamewar....ah, who am I kidding, of *course* I
mean to start a flamewar. :-)

I just wish the Zen of Python (try "import this" on a Python interpreter
for those who haven't read it.) would make it clearer that "Explicit is
better than implicit" really means "Explicit is better than implicit _in
some cases_"

Look here:
>>> [x*x for x in [1,2,3]]
[1, 4, 9]

Good grief! How could someone who doesn't understand list comprehensions
*ever* read and understand this? We'd better do it the explicit way:

>>> ary = []
>>> for x in [1,2,3]:
...     ary.append(x*x)
...
>>> ary
[1, 4, 9]


*Much* better! Now you don't have to understand list comprehensions to
read this code!

</sarcasm>

Of course, nobody is going to seriously suggest that list
comprehensions be removed from Python. (I hope). The point here is that,
for any level of abstraction, you have to understand how it works. (I
think) what the author of "The Zen of Python" (Tim Peters) means when
he says "Explicit is better than implicit" is "Don't do weird crazy
things behind the programmer's back, like automatically having variables
initialized to different datatypes depending on how they are used, like
some programming language we could mention."

I agree with most of the rest of "The Zen of Python", except for the
"There should be one-- and preferably only one --obvious way to do it."
bit. I think it should be "There should be one, and preferably only one,
*easy* (and it should be obvious, if we can manage it) way to do it."

For instance, let us take the ternary operator. Ruby has at least two
constructs that will act like the ternary operator.

if a then b else c end

and

a ? b : c

The "if a then b else c end" bit works because of Ruby's "return value is
last expression" policy.

In a recent thread in comp.lang.ruby, you (Alex Martelli) said:

But for the life of me I just can't see why, when one has
"if a then b else c end" working perfectly as both an expression
and a control statement, one would WANT to weigh down the language
with an alternative but equivalent syntax "a?b:c".

<end quote>

The reason I would want to weigh down the language with an alternative
syntax is because sometimes a ? b : c is the *easy* way to do it.
Sometimes you don't want do say:

obj.method(arg1, (if boolean then goober else lala end))

Sometimes you just want to be able to say:

obj.method(arg1, arg2, boolean ? goober : lala)

But the Python folks seems to like having only one way to write
something, which I agree with, so long as we have at least one easy way
to write something.
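(For the record: Python later grew exactly this conditional expression,
`b if a else c`, in version 2.5 via PEP 308. At the time of this thread the
usual workarounds looked like the and/or trick below -- function names are
mine:)

```python
def pick(flag):
    # Modern Python (2.5+): a real conditional expression
    return "goober" if flag else "lala"

def pick_2003(flag):
    # The and/or trick of the era; wrapping in lists keeps it correct
    # even when the "true" value is itself falsy.
    return (flag and ["goober"] or ["lala"])[0]

print(pick(True), pick_2003(False))  # goober lala
```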

So there is a balance to be struck here. Some people like the way Python
does things; some people do not. This is why we all hate each other. :-)

No, really, that's why we have different languages.
In Ruby, the spaces of methods and data are separate (i.e.,
most everything is "an object" -- but, differently from
Python, methods are not objects in Ruby), and I do not
think, therefore, that you can write a method that builds
and returns another method, and bind the latter to a name --
but you can return an object with a .call method, a la:

def outer(a)
  proc do |b| a+=b end
end
I would probably define this as;

def outer(a)
proc { |b| a+=b }
end

I prefer the { } block syntax for one-line blocks like that. And I don't
like stick a whole function definition on one line like that. Makes it
harder to read, IMHO.
x = outer(23)
puts x.call(100) # emits 123
puts x.call(100) # emits 223

[i.e., I can't think of any way you could just use x(100)
at the end of such a snippet in Ruby -- perhaps somebody
more expert of Ruby than I am can confirm or correct...?]


I will go on a little ego trip here and assume I'm more of a Ruby expert
than you are. :-)

Yes, you are pretty much correct. There are some clever hacks you could
do, but for the most part, functional objects in Ruby come without
sugar.
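By contrast, the Python analogue really can be called as plain x(100), since
Python functions and closures are first-class objects. A sketch using Python
3's nonlocal, which postdates this thread:

```python
def outer(a):
    def inner(b):
        nonlocal a   # rebind the enclosing 'a' (Python 3 only)
        a += b
        return a
    return inner

x = outer(23)
print(x(100))  # 123
print(x(100))  # 223
```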

Jason Creighton
Jul 18 '05 #94

On Fri, Oct 03, 2003 at 04:02:13AM -0700, Mark Brady wrote:
....
have been helped by python. I learned python then got interested in
it's functional side and ended up learning Scheme and Common Lisp. A
lot of new Scheme and Common Lisp developers I talk to followed the
same route. Python is a great language and I still use it for some
things.


Python is a gateway drug to much more dangerous stuff. Just say no to
functions as first-class objects. Before you know it you will be
snorting a dozen closing parentheses in a row.

Oren

Jul 18 '05 #95

On Sat, 04 Oct 2003 13:11:49 +0100, David Rush <dr***@aol.net> wrote:
Emacs. I've noticed over the years that people don't really get Emacs
religion until they've started hacking elisp. I know that the frustration
of having almost-but-not-quite the behavior I wanted on top of having all
that source code was a powerful incentive for me to learn Lisp. Of course
my appreciation of Emacs only increased as I went...


hm. i really like LISP, but still don't get through emacs. After i
learned a bit LISP i wanted to try it again, and again i failed ;) i
know vim from the in- and out- side and just feel completely lost in
emacs.

i also like vim with gtk2 support more. not because of menu or toolbar,
which are usually switched off in my config, but because of antialiased
letters. I just don't like coding with bleeding eyes anymore ;)

*to me* vim just looks and feels much more smooth than emacs, so i don't
think that hacking LISP influences the choice of the editor much. it of
course makes people *try* Emacs because of its LISP support.

Rene
Jul 18 '05 #96

On Sat, 04 Oct 2003 19:48:54 GMT, Alex Martelli <al***@aleax.it> wrote:
Bengt Richter wrote:
...
I like the Bunch class, but the name suggests vegetables to me ;-)


Well, I _like_ vegetables...
BTW, care to comment on a couple of close variants of Bunch with
per-object class dicts? ...

def mkNSC(**kwds): return type('NSC', (), kwds)()


Very nice (apart from the yecchy name;-).
or, stretching the one line a bit to use the instance dict, ^^^^^^^^^^^^^^^^^^^^^^^^

def mkNSO(**kwds): o=type('NSO', (), {})(); o.__dict__.update(kwds); return o


I don't see the advantage of explicity using an empty dict and then
updating it with kwds, vs using kwds directly.

^^-- not the same dict, as you've probably thought of by now, but
glad to see I'm not the only one who misread that ;-)

I.e., as you know, the contents of the dict passed to type is used to
update the fresh class dict. It's not the same mutable dict object,
(I had to check)

>>> d = {'foo': 'to check id'}
>>> o = type('Using_d', (), d)()
>>> d['y'] = 'a y value'
>>> o.__class__.__dict__.keys()
['__dict__', '__module__', 'foo', '__weakref__', '__doc__']

(If d were serving as class dict, IWT y would have shown up in the keys).

and also the instance dict is only a glimmer in the trailing ()'s eye
at the point the kwd dict is being passed to type ;-)
>>> def mkNSC(**kwds): return type('NSC', (), kwds)()
...
>>> def mkNSO(**kwds): o=type('NSO', (), {})(); o.__dict__.update(kwds); return o
...
>>> class Bunch(object):
...     def __init__(self, **kw): self.__dict__.update(kw)
...
>>> for inst in [mk(x=mk.__name__+'_x_value') for mk in (mkNSC, mkNSO, Bunch)]:
...     cls = inst.__class__; classname = cls.__name__
...     inst.y = 'added %s instance attribute y' % classname
...     print '%6s: instance dict: %r' % (classname, inst.__dict__)
...     print '%6s class dict keys: %r' % ('', cls.__dict__.keys())
...     print '%6s instance attr x: %r' % ('', inst.x)
...     print '%6s instance attr y: %r' % ('', inst.y)
...     print '%6s class var x : %r' % ('', cls.__dict__.get('x','<x not there>'))
...     print
...
NSC: instance dict: {'y': 'added NSC instance attribute y'}
class dict keys: ['__dict__', 'x', '__module__', '__weakref__', '__doc__']
instance attr x: 'mkNSC_x_value'
instance attr y: 'added NSC instance attribute y'
class var x : 'mkNSC_x_value'

NSO: instance dict: {'y': 'added NSO instance attribute y', 'x': 'mkNSO_x_value'}
class dict keys: ['__dict__', '__module__', '__weakref__', '__doc__']
instance attr x: 'mkNSO_x_value'
instance attr y: 'added NSO instance attribute y'
class var x : '<x not there>'

Bunch: instance dict: {'y': 'added Bunch instance attribute y', 'x': 'Bunch_x_value'}
class dict keys: ['__dict__', '__module__', '__weakref__', '__doc__', '__init__']
instance attr x: 'Bunch_x_value'
instance attr y: 'added Bunch instance attribute y'
class var x : '<x not there>'

Note where x and y went. So NSC is nice and compact, but subtly different. E.g.,

>>> nsc = mkNSC(x='really class var')
>>> nsc.x
'really class var'
>>> del nsc.x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'NSC' object attribute 'x' is read-only

(Is that a new message with 2.3?)

>>> del nsc.__class__.x
>>> nsc.x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'NSC' object has no attribute 'x'

NS was for Name Space, and C vs O was for Class vs obj dict initialization ;-)

Regards,
Bengt Richter
Jul 18 '05 #97



Andrew Dalke wrote:
Still have only made slight headway into learning Lisp since the
last discussion, so I've been staying out of this one. But

Kenny Tilton:
Take a look at the quadratic formula. Is that flat? Not. Of course
Python allows nested math (hey, how come!), but non-mathematical
computations are usually trees, too.

Since the quadratic formula yields two results, ...


I started this analogy, didn't I? <g>

I expect most people write it more like

droot = sqrt(b*b-4*a*c) # square root of the discriminant
x_plus = (-b + droot) / (2*a)
x_minus = (-b - droot) / (2*a)
Not?:

(defun quadratic (a b c)
(let ((rad (sqrt (- (* b b) (* 4 a c)))))
(mapcar (lambda (plus-or-minus)
(/ (funcall plus-or-minus (- b) rad) (+ a a)))
'(+ -))))

:)

possibly using a temp variable for the 2*a denominator, for a
slight bit better performance.
Well it was a bad example because it does require two similar
calculations which can be done /faster/ by pre-computing shared
components. But then the flattening is about performance, and the
subject is whether deeply nested forms are in fact simpler than
flattened sequences where the algorithm itself would be drawn as a tree.
So the example (or those who stared at it too hard looking for
objections <g>) distracted us from that issue.
But isn't that flattening *exactly* what occurs in math. Let me pull
out my absolute favorite math textbook - Bartle's "The Elements
of Real Analysis", 2nd ed.
Oh, god. I tapped out after three semesters of calculus. I am in deep
trouble. :)

I opened to page 213, which is in the middle of the book.

29.1 Definition. If P is a partition of J, then a Riemann-Stieltjes sum
of f with respect to g and corresponding to P = (x_0, x_1, ..., x_n) is a
real
number S(P; f, g) of the form

n
S(P; f, g) = SIGMA f(eta_k){g(x_k) - g(x_{k-1})}
k = 1

Here we have selected numbers eta_k satisfying

x_{k-1} <= eta_k <= x_k for k = 1, 2, ..., n

There's quite a bit going on here behind the scenes which are
the same flattening you talk about. For examples: the definiton
of "partition" is given elsewhere, the notations of what f and g
mean, and the nomenclature "SIGMA"
<snip another good example>
In both cases, the equations are flattened. They aren't pure trees
nor are they absolutely flat. Instead, names are used to represent
certain ideas -- that is, flatten them.
No! Those are like subroutines; they do not flatten, they create call
trees, hiding and encapsulating the details of subcomputations.

We do precisely the same in programming, which is part of why flattening
can be avoided. When any local computation gets too long, there is
probably a subroutine to be carved out, or at least I can take 10 lines
and give it a nice readable name so I can avoid confronting too much
detail at any one time. But I don't throw away the structure of the
problem to get to simplicity.
.. You judge the Zen of Python
using the Zen of Lisp.


Hmmm, Zen constrained by the details of a computing language. Some
philosophy! :) What I see in "flat is better" is the mind imposing
preferred structure on an algorithm which has its own structure
independent of any particular observer/mind.

I am getting excellent results lately by always striving to conform my
code to the structure of the problem as it exists independently of me.
How can I know the structure independently of my knowing? I cannot, but
the problem will tell me if I screw up and maybe even suggest how I went
wrong. I make my code look like my best guess at the problem, then if I
have trouble, I try a different shape. I do not add bandaids and patches
to force my first (apparently mistaken) ideas on the problem. When the
problem stops resisting me, I know I have at least approximated its
shape. Often there is a "pieces falling into place" sensation that gives
me some confidence.

Lisp has both SETF and "all forms return a value", so it does not
interfere in the process. In rare cases where the functional paradigm is
inappropriate, I can run thru a sequence of steps to achieve some end.
Lisp stays out of the way.

Python (I gather from what I read here) /deliberately/ interferes in my
attempts to conform my code to the problem at hand, because the
designers have decreed "flat is better". Python rips a tool from my
hands without asking if, in some cases (I would say most) it might be
the right tool (where an algorithm has a tree-like structure).

I do not think that heavy-handedness can be defended by saying "Oh, but
this is not Lisp." It is just plain heavy-handed.

kenny

"Be the ball."
- Caddy Shack


Jul 18 '05 #98

"Terry Reedy" <tj*****@udel.edu> writes:
Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them as
unevaluated expressions. In other words, arguments may be
*implicitly* quoted. Since, unlike as in Python, there is no
alternate syntax to flag the alternate argument protocol, one must, as
far as I know, memorize/learn the behavior for each function. The
syntactic unification masks but does not lessen the semantic
divergence. For me, it made learning Lisp (as far as I have gotten)
more complicated, not less, especially before I 'got' what going on.
I'm sorry -- you appear to be hopelessly confused on this point. I
can't comment on the dark corners of Common Lisp, but I do know all of
those corners of Scheme. Scheme is a true call-by-value language.
There are no functions in Scheme whose arguments are not evaluated.
Indeed, neither a function definition nor an argument location has
the freedom to "not evaluate" an argument. We can reason about this
quite easily: the language provides no such syntactic annotation, and
the evaluator (as you might imagine) does not randomly make such a
choice. Therefore, it can't happen.

It is possible that you had a horribly confused, and therefore
confusing, Scheme instructor or text.
Question: Python has the simplicity of one unified assignment
statement for the binding of names, attributes, slot and slices, and
multiples thereof. Some Lisps have the complexity of different
functions for different types of targets: set, setq, putprop, etc.


Again, you're confused. SET, SETQ, etc are not primarily binding
operators but rather mutation operators. The mutation of identifiers
and the mutation of values are fundamentally different concepts.

Shriram
Jul 18 '05 #99

On Sun, 05 Oct 2003 12:27:47 GMT,
Kenny Tilton <kt*****@nyc.rr.com> wrote:
Python (I gather from what I read here) /deliberately/ interferes in my
attempts to conform my code to the problem at hand, because the
designers have decreed "flat is better". Python rips a tool from my
hands without asking if, in some cases (I would say most) it might be
the right tool (where an algorithm has a tree-like structure).


Oh, for Pete's sake... Python is perfectly capable of manipulating tree
structures, and claiming it "rips a tool from my hand" is simply silly.

The 19 rules are general principles that encapsulate various principles of
Python's design, but they're not hard-and-fast rules to be obeyed like a
legal code, and their meanings are unspecified. I have seen the "flat is
better than nested" rule cited against creating too many submodules in a
package, against nesting loops too deeply, against making code too dense.
You can project any meaning onto them you wish, much like Perlis's epigrams
or Zen koans.

--amk
Jul 18 '05 #100
