
A critique of Guido's blog on Python's lambda

Python, Lambda, and Guido van Rossum

Xah Lee, 2006-05-05

In this post, i'd like to deconstruct one of Guido's recent blog posts
about lambda in Python.

In Guido's blog post, written on 2006-02-10 at
http://www.artima.com/weblogs/viewpo...?thread=147358

the first thing is the title, “Language Design Is Not Just Solving
Puzzles”. At the outset, and in between the lines, we are told that
“I'm the supreme intellect, and I created Python”.

This seems impressive, except that the tech geekers, due to their
ignorance of sociology as well as their lack of the analytic abilities of the
mathematician, do not know that creating a language is an act that
requires few qualifications. However, creating a language that is
used by a lot of people takes considerable skill, and a big part of that
skill is salesmanship. Guido seems to have done it well and seems to
continue selling it well, where he can put up a title of belittlement
and get away with it too.

Gaudy title aside, let's look at the content of what he says. If you peruse
the 700 words, you'll find that it amounts to this: Guido does not like
the suggested lambda fix due to its multi-line nature, and says that he
doesn't think there could possibly be any proposal he'll like. The
reason? Not much! Zen is bantered about, the mathematician's impractical
ways are waved away, undefinable qualities are given, the human right brain is
mentioned for support (neuroscience!), Rube Goldberg contrivance
phraseology is thrown in, and the coolness of Google Inc is recalled for the
tech geekers (in juxtaposition with a big notice that Guido works
there).

If you are serious, doesn't this writing sound bigger than its
content? Look at the gorgeous ending: “This is also the reason why
Python will never have continuations, and even why I'm uninterested in
optimizing tail recursion. But that's for another installment.”. This
benevolent geeker is gonna give us another INSTALLMENT!

There is a computer language leader by the name of Larry Wall, who said
that “The three chief virtues of a programmer are: Laziness,
Impatience and Hubris” among quite a lot of other ingenious
outpourings. It seems to me, the more i learn about Python and its
leader, the more similarities i see.

So Guido, i understand that selling oneself is an inherent and necessary
part of being a human animal. But i think the lesser beings should be
educated enough to know that fact, so that when minions follow a
leader, they have a clear understanding of why and what.

----

Regarding the lambda in Python situation... conceivably you are right
that Python's lambda is perhaps best left as it is, crippled, or even
eliminated. However, this is what i want: I want Python literature,
and also Wikipedia, to cease and desist stating that Python supports
functional programming. (This is not necessarily bad publicity.) And, I
want the Perl literature to cease and desist saying Perl supports OOP.
But that's for another installment.

----
This post is archived at:
http://xahlee.org/UnixResource_dir/w...bda_guido.html

* * Xah
* * xa*@xahlee.org
http://xahlee.org/

May 6 '06
<br***@sweetapp.com> wrote:
...
Being able to keep state around and pass it along with functions is useful.
I agree and Python supports this. What is interesting is how
counter-intuitive many programmers find this. For example, one of my


Funny: I have taught/mentored a large number of people in Python, people
coming from all different levels along the axis of "previous knowledge
of programming in general", and closures are not among the issues where
I ever noticed a large number of people having problems.
So I try to use this sort of pattern sparingly because many programmers
don't think of closures as a way of saving state. That might be because
it is not possible to do so in most mainstream languages.
I don't normally frame it in terms of "saving" state, but rather of
"keeping some amount of state around" -- which means more or less the
same thing but may perhaps be easier to digest (just trying to see what
could explain the difference between my experience and yours).
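
(For readers who haven't met the idiom, here is a minimal sketch of a
closure "keeping some amount of state around" -- a hypothetical example,
not taken from the original post:)

def make_counter(start=0):
    # the inner function remembers 'count' between calls via the
    # enclosing scope; a one-element list stands in for 'nonlocal'
    count = [start]
    def counter():
        count[0] += 1
        return count[0]
    return counter

next_id = make_counter()
print next_id(), next_id(), next_id()   # -> 1 2 3
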
There are already some people in the Python community who think that
Python has already gone too far in supporting "complex" language
features and now imposes too steep a learning curve, i.e. you now have
to know a lot to be considered a Python expert. And there is a lot of
resistance to adding features that will raise the bar even higher.


I might conditionally underwrite this, myself, but I guess my emphasis
is different from that of the real "paladins" of this thesis (such as
Mark Shuttleworth, who gave us all an earful about this when he
delivered a Keynote at Europython 2004).

I'm all for removing _redundant_ features, but I don't think of many
things on the paladins' hitlist as such -- closures, itertools, genexps,
etc, all look just fine to me (and I have a good track record of
teaching them...). I _would_ love to push (for 3.0) further
simplifications, e.g., I do hate it that
[ x for x in container if predicate(x) ]
is an exact synonym of the more legible
list( x for x in container if predicate(x) )
and the proposed
{1, 2, 3}
is an exact synonym of
set((1, 2, 3))
just to focus on a couple of redundant syntax-sugar ideas (one in
today's Python but slated to remain in 3.0, one proposed for 3.0). It's
not really about there being anything deep or complex about this, but
each and every such redundancy _does_ "raise the bar" without any
commensurate return. Ah well.
Alex
May 7 '06 #101
<br***@sweetapp.com> wrote:
...
2. There has to be a mechanism where an organization can add
developers - even if it is only for new projects. Python advocates
Obviously.


It's good that you agree. I think that the ability to add new
productive developers to a project/team/organization is at least part
of what Alex means by "scaleability". I'm sure that he will correct me
if I am wrong.


I agree with your formulation, just not with your spelling of
"scalability";-).
[1] I'm considering introducing bugs or misdesigns that have to be
fixed
as part of training for the purposes of this discussion. Also the
Actually, doing it _deliberately_ (on "training projects" for new people
just coming onboard) might be a good training technique; what you learn
by finding and fixing bugs nicely complements what you learn by studying
"good" example code. I do not know of this technique being widely used
in real-life training, either by firms or universities, but I'd love to
learn about counterexamples.
time needed to learn to coordinate with the rest of the team.


Pair programming can help a lot with this (in any language, I believe)
if the pairing is carefully chosen and rotated for the purpose.
Alex
May 7 '06 #102
Carl Friedrich Bolz <cf****@gmx.de> wrote:
...
an extension that allows the programmer to specify how the value of
some slot (Lisp lingo for "member variable") can be computed. It
frees the programmer from having to recompute slot values since Cells
... I have not looked at Cells at all, but what you are saying here sounds
amazingly like Python's properties to me. You specify a function that
calculates the value of an attribute (Python lingo for something like a


You're right that the above-quoted snippet does sound exactly like
Python's properties, but I suspect that's partly because it's a very
concise summary. A property, as such, recomputes the value each and
every time, whether the computation is necessary or not; in other words,
it performs no automatic caching/memoizing.

A more interesting project might therefore be a custom descriptor, one
that's property-like but also deals with "caching wherever that's
possible". This adds interesting layers of complexity, some of them not
too hard (auto-detecting dependencies by introspection), others really
challenging (reliably determining what attributes have changed since
last recomputation of a property). Intuition tells me that the latter
problem is equivalent to the Halting Problem -- if somewhere I "see" a
call to self.foo.zap(), even if I can reliably determine the leafmost
type of self.foo, I'm still left with the issue of analyzing the code
for method zap to find out if it changes self.foo on this occasion, or
not -- there being no constraint on that code, this may be too hard.

The practical problem of detecting alterations may be softened by
realizing that some false positives are probably OK -- if I know that
self.foo.zap() *MAY* alter self.foo, I might make my life simpler by
assuming that it *HAS* altered it. This will cause some recomputations
of property-like descriptors' values that might theoretically have been
avoided, "ah well", not a killer issue. Perhaps a more constructive
approach would be: start by assuming the pseudoproperty always
recomputes, like a real property would; then move toward avoiding SOME
needless recomputations when you can PROVE they're needless. You'll
never avoid ALL needless recomputations, but if you avoid enough of them
to pay for the needed introspection and analysis, it may still be a win.
As to whether it's enough of a real-world win to warrant the project, I
pass -- in a research setting it would surely be a worthwhile study, in
a production setting there are other optimizations that look like
lower-hanging fruits to me. But, I'm sure the Cells people will be back
with further illustrations of the power of their approach, beyond mere
"properties with _some_ automatic-caching abilities".
Alex
May 7 '06 #103
Tomasz Zielonka <to*************@gmail.com> wrote:
Alex Martelli wrote:
Tomasz Zielonka <to*************@gmail.com> wrote:
Alex Martelli wrote:
> Having to give functions a name places no "ceiling on expressiveness",
> any more than, say, having to give _macros_ a name.

And what about having to give numbers a name?
Excellent style, in most cases; I believe most sensible coding guides
recommend it for most numbers -- cfr
<http://en.wikipedia.org/wiki/Magic_number_(programming)> , section
"magic numbers in code".


I was a bit unclear. I didn't mean constants (I agree with you on
magic numbers), but results of computations, for example


Ah, good that we agree on _some_thing;-)

(x * 2) + (y * 3)

Here (x * 2), (y * 3) and (x * 2) + (y * 3) are anonymous numbers ;-)

Would you like if you were forced to write it this way:

a = x * 2
b = y * 3
c = a + b

?

Thanks for your answers to my questions.


I do not think there would be added value in having to name every
intermediate result (as opposed to the starting "constants", about which
we agree); it just "spreads things out". Fortunately, Python imposes no
such constraints on any type -- once you've written out the starting
"constants" (be they functions, numbers, classes, whatever), which may
require naming (either language-enforced, or just by good style),
instances of each type can be treated in perfectly analogous ways (e.g.,
calling callables that operate on them and return other instances) with
no need to name the intermediate results.

The Function type, by design choice, does not support any overloaded
operators, so the analogy of your example above (if x and y were
functions) would be using named higher-order-functions (or other
callables, of course), e.g.:

add_funcs( times_num(x, 2), times_num(y, 3) )

whatever the HOFs add_funcs and times_num were doing, e.g.

def add_funcs(*fs):
    def result(*a):
        return sum(f(*a) for f in fs)
    return result

def times_num(f, k):
    def result(*a):
        return k * f(*a)
    return result

or, add polymorphism to taste, if you want to be able to use (e.g.) the
same named HOF to add a mix of functions and constants -- a side issue
that's quite separate from having or not having a name, but rather
connected with how wise it is to overload a single name for many
purposes (PEAK implements generic-functions and multimethods, and it or
something like it is scheduled for addition to Python 3.0; Python 2.*
has no built-in way to add such arbitrary overloads, and multi-dispatch
in particular, so you need to add a framework such as PEAK for that).
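
(For concreteness, a quick check that the sketches above behave like the
numeric expression earlier in the thread -- hypothetical usage, assuming
x and y are one-argument functions:)

def x(n): return n + 1
def y(n): return n * n

f = add_funcs(times_num(x, 2), times_num(y, 3))
print f(4)    # 2*(4+1) + 3*(4*4) == 58
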
Alex
May 7 '06 #104
I V <wr******@gmail.com> wrote:
...
higher level languages. There are useful programming techniques, like
monadic programming, that are infeasible without anonymous functions.
Anonymous functions really add some power to the language.
Can you give me one example that would be feasible with anonymous
functions, but is made infeasible by the need to give names to
functions? In Python, specifically, extended with whatever fake syntax
you favour for producing unnamed functions?


Monads are one of those parts of functional programming I've never really
got my head around, but as I understand them, they're a way of
transforming what looks like a sequence of imperative programming
statements that operate on a global state into a sequence of function
calls that pass the state between them.


Looks like a fair enough summary to me (but, I'm also shaky on monads,
so we might want confirmation from somebody who isn't;-).
So, what would be a statement in an imperative language is an anonymous
function that gets added to the monad, and then, when the monad is run,
these functions get executed. The point being, that you have a lot of
small functions (one for each statement) which are likely not to be used
anywhere else, so defining them as named functions would be a bit of a
pain in the arse.


It seems to me that the difference between, say, a hypothetical:

monad.add( lambda state:
    temp = zipper(state.widget, state.zrup)
    return state.alteredcopy(widget=temp)
)

and the you-can-use-it-right now alternative:

def zipperize_widget(state):
    temp = zipper(state.widget, state.zrup)
    return state.alteredcopy(widget=temp)
monad.add(zipperize_widget)

is trivial to the point of evanescence. Worst case, you name all your
functions Beverly so you don't have to think about the naming; but you
also have a chance to use meaningful names (such as, presumably,
zipperize_widget is supposed to be here) to help the reader.
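
(A toy sketch of the state-threading idea being kicked around here --
purely illustrative Python, nothing to do with real Haskell monads:)

class StateChain(object):
    # collects state -> state functions and runs them in order,
    # threading the state from one step to the next
    def __init__(self):
        self.steps = []
    def add(self, step):
        self.steps.append(step)
        return self
    def run(self, state):
        for step in self.steps:
            state = step(state)
        return state

chain = StateChain()
chain.add(lambda s: s + 1)      # each "statement" is a function of the state
chain.add(lambda s: s * 10)
print chain.run(3)              # -> 40
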

IOW, monads appear to me to behave just about like any other kind of
HOFs (for a suitably lax interpretation of that "F") regarding the issue
of named vs unnamed functions -- i.e., just about like the difference
between:

def double(f):
    return lambda *a: 2 * f(*a)

and

def double(f):
    def doubled(*a): return 2 * f(*a)
    return doubled

I have no real problem using the second form (with a name), and just
don't see it as important enough to warrant adding to the language (a
language that's designed to be *small*, and *simple*, so each addition
is to be seen as a *cost*) a whole new syntax form 'lambda'.

((The "but you really want macros" debate is a separate one, which has
been held many times [mostly on comp.lang.python] and I'd rather not
repeat at this time, focusing instead on named vs unnamed...))
Alex
May 7 '06 #105
Frank Buss <fb@frank-buss.de> wrote:
Alex Martelli wrote:
I cannot conceive of one. Wherever within a statement I could write the
expression
lambda <args>: body
I can *ALWAYS* obtain the identical effect by picking an otherwise
locally unused identifier X, writing the statement
def X(<args>): body
and using, as the expression, identifier X instead of the lambda.


This is true, but with lambda it is easier to read:

http://www.frank-buss.de/lisp/functional.html
http://www.frank-buss.de/lisp/texture.html

Would be interesting to see how this would look like in Python or some of
the other languages to which this troll thread was posted :-)


Sorry, but I just don't see what lambda is buying you here. Taking just
one simple example from the first page you quote, you have:

(defun blank ()
  "a blank picture"
  (lambda (a b c)
    (declare (ignore a b c))
    '()))

which in Python would be:

def blank():
    " a blank picture "
    return lambda a, b, c: []

while a named-function variant might be:

def blank():
    def blank_picture(a, b, c): return []
    return blank_picture

Where's the beef, really? I find the named-function variant somewhat
more readable than the lambda-based variant, but even if your
preferences are the opposite, this is really such a tiny difference that
I can't see why so many bits should get wasted debating it (perhaps
it's one of Parkinson's Laws at work...).
Alex
May 7 '06 #106
<br***@sweetapp.com> wrote:
Patrick May wrote:
al*****@yahoo.com (Alex Martelli) writes:
In my opinion (and that of several others), the best way for Python to
grow in this regard would be to _lose_ lambda altogether, since named
functions are preferable


Why? I find the ability to create unnamed functions on the fly
to be a significant benefit when coding in Common Lisp.


1. They don't add anything new to the language semantically i.e. you
can always use a named function to accomplish the same task
as an unnamed one.
2. Giving a function a name acts as documentation (and a named
function is more likely to be explicitly documented than an unnamed
one). This argument is pragmatic rather than theoretical.
3. It adds another construction to the language.


Creating *FUNCTIONS* on the fly is a very significant benefit, nobody on
the thread is disputing this, and nobody ever wanted to take that
feature away from Python -- it's the obsessive focus on the functions
needing to be *unnamed* ones, that's basically all the debate. I wonder
whether all debaters on the "unnamed is a MUST" side fully realize that
a Python def statement creates a function on the fly, just as much as
a lambda form does. Or maybe the debate is really about the distinction
between statement and expression: Python does choose to draw that
distinction, and while one could certainly argue that a language might
be better without it, the distinction is deep enough that nothing really
interesting (IMHO) is to be gleaned by the debate, except perhaps as
pointers for designers of future languages (and there are enough
programming languages that I personally see designing yet more of them
as one of the least important tasks facing the programming community;-).
Alex
May 7 '06 #107
al*****@yahoo.com (Alex Martelli) writes:
I do hate it that
[ x for x in container if predicate(x) ]
is an exact synonym of the more legible
list( x for x in container if predicate(x) )
Heh, I hate it that it's NOT an exact synonym (the listcomp leaves 'x'
polluting the namespace and clobbers any pre-existing 'x', but the
gencomp makes a new temporary scope).
and the proposed
{1, 2, 3}
is an exact synonym of
set((1, 2, 3))


There's one advantage that I can think of for the existing (and
proposed) list/dict/set literals, which is that they are literals and
can be treated as such by the parser. Remember a while back that we
had a discussion of reading expressions like
{'foo': (1,2,3),
'bar': 'file.txt'}
from configuration files without using (unsafe) eval. Aside from that
I like the idea of using constructor functions instead of special syntax.
May 7 '06 #108
Alex Martelli wrote:
Sorry, but I just don't see what lambda is buying you here. Taking just
one simple example from the first page you quote, you have:

(defun blank ()
  "a blank picture"
  (lambda (a b c)
    (declare (ignore a b c))
    '()))


You are right, for this example it is not useful. But I assume you need
something like lambda for closures, e.g. from the page
http://www.frank-buss.de/lisp/texture.html :

(defun black-white (&key function limit)
  (lambda (x y)
    (if (> (funcall function x y) limit)
        1.0
        0.0)))

This function returns a new function, which is parametrized with the
supplied arguments, can be used later as a building block for other
functions, and itself wraps input functions. I don't know Python well
enough; maybe closures are possible with local named function definitions,
too.

--
Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
May 7 '06 #109
Bill Atkins wrote:
[snip]
Here's how one of the cells examples might look in corrupted Python
(this is definitely not executable):

class FallingRock:
    def __init__(self, pos):
        define_slot( 'velocity', lambda: self.accel * self.elapsed )
        define_slot( 'pos', lambda: self.accel * (self.elapsed ** 2) / 2,
                     initial_position = cell_initial_value( 100 ) )
        self.accel = -9.8

rock = FallingRock(100)
print rock.accel, rock.velocity, rock.pos
# -9.8, 0, 100

rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# -9.8, -9.8, -9.8

rock.elapsed = 8
print rock.accel, rock.velocity, rock.pos
# -9.8, -78.4, -627.2

Make sense? The idea is to declare what a slot's value represents
(with code) and then to stop worrying about keeping different things
synchronized.

Here's another of the examples, also translated into my horrific
rendition of Python (forgive me):

class Menu:
    def __init__(self):
        define_slot( 'enabled',
            lambda: focused_object( self ).__class__ == TextEntry and
                    focused_object( self ).selection )

Now whenever the enabled slot is accessed, it will be calculated based
on what object has the focus. Again, it frees the programmer from
having to keep these different dependencies updated.

--
This is a song that took me ten years to live and two years to write.
- Bob Dylan

Oh dear, there were a few typos:

class FallingRock:
    def __init__(self, pos):
        define_slot( 'velocity', lambda: self.accel * self.elapsed )
        define_slot( 'pos', lambda: self.accel * (self.elapsed ** 2) / 2,
                     initial_value = cell_initial_value( 100 ) )
        self.accel = -9.8

rock = FallingRock(100)
print rock.accel, rock.velocity, rock.pos
# -9.8, 0, 100

rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# -9.8, -9.8, 90.2

rock.elapsed = 8
print rock.accel, rock.velocity, rock.pos
# -9.8, -78.4, -527.2


you mean something like this? (and yes, this is executable python):
class FallingRock(object):
    def __init__(self, startpos):
        self.startpos = startpos
        self.elapsed = 0
        self.accel = -9.8

    velocity = property(lambda self: self.accel * self.elapsed)
    pos = property(lambda self: self.startpos + self.accel *
                   (self.elapsed ** 2) / 2)

rock = FallingRock(100)
print rock.accel, rock.velocity, rock.pos
# -9.8, 0, 100

rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# -9.8, -9.8, 95.1

rock.elapsed = 8
print rock.accel, rock.velocity, rock.pos
# -9.8, -78.4, -213.6
Cheers,

Carl Friedrich Bolz

May 7 '06 #110
Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
al*****@yahoo.com (Alex Martelli) writes:
I do hate it that
[ x for x in container if predicate(x) ]
is an exact synonym of the more legible
list( x for x in container if predicate(x) )
Heh, I hate it that it's NOT an exact synonym (the listcomp leaves 'x'
polluting the namespace and clobbers any pre-existing 'x', but the
gencomp makes a new temporary scope).


Yeah, that's gonna be fixed in 3.0 (can't be fixed before, as it would
break backwards compatibility) -- then we'll have useless synonyms.
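
(For anyone who hasn't run into the leak under discussion, a tiny
illustration of the Python 2.x behaviour:)

x = 'outer'
squares = [x * x for x in range(3)]       # list comprehension rebinds x
print x                                   # -> 2: the loop variable leaked

x = 'outer'
squares = list(x * x for x in range(3))   # generator expression: own scope
print x                                   # -> 'outer', untouched
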

and the proposed
{1, 2, 3}
is an exact synonym of
set((1, 2, 3))


There's one advantage that I can think of for the existing (and
proposed) list/dict/set literals, which is that they are literals and
can be treated as such by the parser. Remember a while back that we
had a discussion of reading expressions like
{'foo': (1,2,3),
'bar': 'file.txt'}
from configuration files without using (unsafe) eval. Aside from that


And as I recall I showed how to make a safe-eval -- that could easily be
built into 3.0, btw (including special treatment for builtin names of
types that are safe to construct). I'd be all in favor of specialcasing
such names in the parser, too, but that's a harder sell.
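
(As a side note: later Python versions grew exactly this facility as
ast.literal_eval, which evaluates literal expressions -- strings,
numbers, tuples, lists, dicts -- without allowing calls or names. A
small illustration:)

import ast
config_text = "{'foo': (1, 2, 3), 'bar': 'file.txt'}"
config = ast.literal_eval(config_text)
print config['bar']    # -> file.txt
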
I like the idea of using constructor functions instead of special syntax.


Problem is how to make _GvR_ like it too;-)
Alex
May 7 '06 #111
Frank Buss <fb@frank-buss.de> wrote:
Alex Martelli wrote:
Sorry, but I just don't see what lambda is buying you here. Taking just
one simple example from the first page you quote, you have:

(defun blank ()
  "a blank picture"
  (lambda (a b c)
    (declare (ignore a b c))
    '()))
You are right, for this example it is not useful. But I assume you need
something like lambda for closures, e.g. from the page


Wrong and unfounded assumption.
http://www.frank-buss.de/lisp/texture.html :

(defun black-white (&key function limit)
  (lambda (x y)
    (if (> (funcall function x y) limit)
        1.0
        0.0)))

This function returns a new function, which is parametrized with the
supplied arguments, can be used later as a building block for other
functions, and itself wraps input functions. I don't know Python well
enough; maybe closures are possible with local named function definitions,
too.


They sure are, I gave many examples already all over the thread. There
are *NO* semantic advantages for named vs unnamed functions in Python.

Not sure what the &key means here, but omitting that

def black_white(function, limit):
    def result(x, y):
        if function(x, y) > limit: return 1.0
        else: return 0.0
    return result
Alex
May 7 '06 #112

Alex Martelli wrote:
Steve R. Hastings <st***@hastings.org> wrote:
...
But the key in the whole thread is simply that indentation will not
scale. Nor will Python.


This is a curious statement, given that Python is famous for scaling well.


I think "ridiculous" is a better characterization than "curious", even
if you're seriously into understatement.


When you consider that there was just a big flamewar on comp.lang.lisp
about the lack of standard mechanisms for both threading and sockets in
Common Lisp (with the lispers arguing that it wasn't needed) I find it
"curious" that someone can say Common Lisp scales well.

May 7 '06 #113
Alex Martelli wrote:
Not sure what the &key means here, but omitting that

def black_white(function, limit):
    def result(x, y):
        if function(x, y) > limit: return 1.0
        else: return 0.0
    return result


&key is something like keyword arguments in Python. And it looks like you are
right again (I've tested it in Python) and my assumption was wrong, so the
important thing is to support closures, which Python does, even with local
function definitions.

--
Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
May 7 '06 #114
al*****@yahoo.com (Alex Martelli) writes:
>> In my opinion (and that of several others), the best way for
>> Python to grow in this regard would be to _lose_ lambda
>> altogether, since named functions are preferable
>
> Why? I find the ability to create unnamed functions on the
> fly to be a significant benefit when coding in Common Lisp.


1. They don't add anything new to the language semantically
i.e. you can always used a named function to accomplish the same
task as an unnamed one.
Sure, but it won't necessarily be as expressive or as convenient.
2. Giving a function a name acts as documentation (and a named
function is more likely to be explicitly documented than an
unnamed one). This argument is pragmatic rather than
theoretical.
Using lambda in an expression communicates the fact that it will
be used only in the scope of that expression. Another benefit is that
declaration at the point of use means that all necessary context is
available without having to look elsewhere. Those are two pragmatic
benefits.
3. It adds another construction to the language.


That's a very minimal cost relative to the benefits.

You haven't made your case for named functions being preferable.

Regards,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc. | The experts in large scale distributed OO
| systems design and implementation.
pj*@spe.com | (C++, Java, Common Lisp, Jini, CORBA, UML)
May 7 '06 #115
ol*****@verizon.net writes:
Alex Martelli wrote:
Steve R. Hastings <st***@hastings.org> wrote:
...
> > But the key in the whole thread is simply that indentation will not
> > scale. Nor will Python.
>
> This is a curious statement, given that Python is famous for scaling well.


I think "ridiculous" is a better characterization than "curious", even
if you're seriously into understatement.


When you consider that there was just a big flamewar on comp.lang.lisp
about the lack of standard mechanisms for both threading and sockets in
Common Lisp (with the lispers arguing that it wasn't needed) I find it
"curious" that someone can say Common Lisp scales well.


It's not all that curious. Every Common Lisp implementation supports
sockets, and most support threads. The "flamewar" was about whether
these mechanisms should be (or could be) standardized across all
implementation. It has little to do with CL's ability to scale well.
You simply use the socket and thread API provided by your
implementation; if you need to move to another, you write a thin
compatibility layer. In Python, since there is no standard and only
one implementation that counts, you write code for that implementation
the same way you write for the socket and thread API provided by your
Lisp implementation.

I still dislike the phrase "scales well," but I don't see how
differences in socket and thread API's across implementations can be
interpreted as causing Lisp to "scale badly." Can you elaborate on
what you mean?

--
This is a song that took me ten years to live and two years to write.
- Bob Dylan
May 7 '06 #116
Bill Atkins <NO**********@rpi.edu> writes:
Here's how one of the cells examples might look in corrupted Python
(this is definitely not executable):

class FallingRock:
    def __init__(self, pos):
        define_slot( 'velocity', lambda: self.accel * self.elapsed )
        define_slot( 'pos', lambda: self.accel * (self.elapsed ** 2) / 2,
                     initial_position = cell_initial_value( 100 ) )
        self.accel = -9.8

rock = FallingRock(100)
print rock.accel, rock.velocity, rock.pos
# -9.8, 0, 100

rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# -9.8, -9.8, -9.8

rock.elapsed = 8
print rock.accel, rock.velocity, rock.pos
# -9.8, -78.4, -627.2

Make sense?
No, not at all.

Why do you pass a ``pos`` parameter to the constructor you never use? Did you
mean to write ``cell_initial_value(pos)``?

Why is elapsed never initialized? Is the dependency computation only meant to
start once elapsed is bound? But where does the value '0' for velocity come
from then? Why would it make sense to have ``pos`` initially be completely
independent of everything else but then suddenly reset to something which is
in accordance with the other parameters?

What happens if I add ``rock.pos = -1; print rock.pos ``? Will I get an error?
Will I get -1? Will I get -627.2?

To make this more concrete, here is how I might implement a falling rock:

class FallingRock(object):
    velocity = property(lambda self: self.accel * self.elapsed)
    pos = property(lambda self: 0.5 * self.accel * self.elapsed**2)
    def __init__(self, elapsed=0):
        self.elapsed = elapsed
        self.accel = -9.8

rock = FallingRock()
print rock.accel, rock.velocity, rock.pos
# => -9.8 -0.0 -0.0
rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# => -9.8 -9.8 -4.9
rock.elapsed = 9
print rock.accel, rock.velocity, rock.pos
# => -9.8 -88.2 -396.9

How would you like the behaviour to be different from that (and why)?
The idea is to declare what a slot's value represents
(with code) and then to stop worrying about keeping different things
synchronized.
That's what properties (in python) and accessors (in lisp) are for -- if you
compute the slot-values on-demand (i.e. each time a slot is accessed) then you
don't need to worry about stuff getting out of synch.

So far I haven't understood what cells (in its essence) is meant to offer over
properties/accessors apart from a straightforward efficiency hack (instead of
recomputing the slot-values on each slot-access, you recompute them only when
needed, i.e. when one of the other slots on which a slot-value depends has
changed). So what am I missing?
Here's another of the examples, also translated into my horrific
rendition of Python (forgive me):

class Menu:
def __init__(self):
define_slot( 'enabled',
lambda: focused_object( self ).__class__ == TextEntry and

OK, now you've lost me completely. How would you like this to be different in
behaviour from:

class Menu(object):
    enabled = property(lambda self: isinstance(focused_object(self), TextEntry) \
                       and focused_object(self).selection)

???
Now whenever the enabled slot is accessed, it will be calculated based
on what object has the focus. Again, it frees the programmer from
having to keep these different dependencies updated.


Again how's that different from the standard property/accessor solution as
above?

'as
May 7 '06 #117
> There are *NO* semantic advantages for named vs unnamed functions in Python.

I feel that this conversation has glanced off the point. Let me try a
new approach:

There is the Pythonic way (whatever that is), and then The Lisp Way. I
don't know what the former is, but it has something to do with
indentation, and the (meaningless to me*) phrase "It fits your mind."
The Lisp way is quite specific: Pure Compositionality. Compositionality
encompasses and defines all aspects of Lisp, from the parens to
functional style to fundamental recursion to lambda, and even the
language itself is classically composed from the bottom up, and
compositionality enables us to create new complete languages nearly
trivially.

How this concept plays into the current conversation is not subtle:
LAMBDA forms serve to directly modify the forms in which they appear.
SORT is the example that comes to mind for me. If one says: (sort ...
#'(lambda (a b) ...)) [I realize that the #' is optional, I use it here
for emphasis that there is a function being formed.] the lambda form
composes, with sort, a new type of sort -- a sort of type <whatever the
lambda function does>. Thus, the semantics of this form are localized
to the sort expression, and do not leave it -- they are, indeed,
conceptually a part of the sort expression, and to require it/them to
be moved outside and given a name breaks the conceptual
compositionality -- that is, the compositional locality of the form.

Similarly, parens and the functional fact that every form returns a
value provide compositional locality and, perhaps more importantly in
practice, compositional *mobility* -- so that, pretty much anywhere in
Lisp where you need an argument, you can pick up a form and drop it in.
[Macros often break this principle, I'll get to those in a moment.]
This is something that no other language (except some dead ones, like
APL) was able to do, and these provide incredible conceptual
flexibility -- again, I'll use the term "mobility" -- one can, in most
cases, literally move code as though it were a closed concept to
anywhere that that concept is needed.

Macros, as I have said, bear a complex relationship to this concept of
composition mobility and flexibility. The iteration macro, demonstrated
elsewhere in this thread, is an excellent example. But macros are more
subtly related to compositionality, and to the present specific
question, because, as you yourself said: All you need to do is make up
a name that isn't used....But how is one to find a name that isn't used
if one has macros? [Actually, in Lisp, even if we didn't have lambda we
could do this by code walking, but I'll leave that aside, because
Python can't do that, nor can it do macros.]

I do not hesitate to predict that Python will someday sooner than later
recognize the value of compositional flexibility and mobility, and that
it will struggle against parentheses and lambdas, but that in the end
it will become Lisp again. They all do, or die.

===
[*] BA - Biographical Annotation: Yeah, I've programmed all those
things too for years and years and years. I also have a PhD in
cognitive psychology from CMU, where I worked on how people learn
complex skills, and specifically programming. When I say that "fits
your brain" is meaningless to me, I mean that in a technical sense:
If it had any meaning, I, of all people, would know what it means;
meaning that I know that it doesn't mean anything at all.

May 7 '06 #118
["Followup-To:" header set to comp.lang.functional.]
On 2006-05-07, br***@sweetapp.com <br***@sweetapp.com> wrote:
- it fits most programmers brains i.e. it is similar enough to
languages that most programmers have experience with and the
differences are usually perceived to beneficial (exception:
people from a Java/C/C++ background often perceive dynamic
typing as a misfeature and have to struggle with it)


It is a misfeature. It's just less of a misfeature than the typing of
Java/C/C++, etc.

--
Aaron Denney
-><-
May 7 '06 #119
On Sun, May 07, 2006 at 11:57:55AM -0700, Alex Martelli wrote:
[1] I'm considering introducing bugs or misdesigns that have to be
fixed
as part of training for the purposes of this discussion. Also the


Actually, doing it _deliberately_ (on "training projects" for new people
just coming onboard) might be a good training technique; what you learn
by finding and fixing bugs nicely complements what you learn by studying
"good" example code. I do not know of this technique being widely used
in real-life training, either by firms or universities, but I'd love to
learn about counterexamples.


When I was learning C in university my professor made us fix broken programs.
He did this specifically to teach us to understand how to read compiler
warnings/errors and also how to debug software. The advantage of this in the
tutorial setting was that the TAs knew what the error was and could assist the
people in finding bugs in a controlled environment. When I later worked with
people who did not go through this training I found many of them had no clue
how to decipher the often cryptic C/C++ compiler warnings/errors (think
Borland Turbo C or MS Visual C++, GCC is pretty good in comparison) or where
to start looking for a bug (an affliction I do not possess).

-Chris
May 8 '06 #120


ol*****@verizon.net wrote:
Alex Martelli wrote:
Steve R. Hastings <st***@hastings.org> wrote:
...
But the key in the whole thread is simply that indentation will not
scale. Nor will Python.

This is a curious statement, given that Python is famous for scaling well.


I think "ridiculous" is a better characterization than "curious", even
if you're seriously into understatement.

When you consider that there was just a big flamewar on comp.lang.lisp
about the lack of standard mechanisms for both threading and sockets in
Common Lisp (with the lispers arguing that it wasn't needed) I find it
"curious" that someone can say Common Lisp scales well.


We're talking about whether the language can grow to have new
capabilities, while you are talking about libraries, and specifically
whether different implementations have the same API. They all have
sockets, just not the same API, probably because, to be honest, that is
not something that belongs in a /language/ API.

But those of us who bounce from implementation to implementation see a
standard API as saving us some conditional compilation and (effectively)
rolling our own common API out of each implementation's socket APIs, so a
few socket gurus are working on a standard now.

And yes, they will be able to do this with Common Lisp as it stands.

Try to think a little more rigorously in these discussions, Ok?

Thx, kenny

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 8 '06 #121
I V wrote:
Monads are one of those parts of functional programming I've never really
got my head around, but as I understand them, they're a way of
transforming what looks like a sequence of imperative programming
statements that operate on a global state into a sequence of function
calls that pass the state between them.
This is a description of only one particular kind of monad - a state
monad. A generalisation of your statement would be something like this:
"they're a way of writing what looks like a sequence of imperative
programming statements that, depending on the monad, can have certain
computational side-effects (like operating on a global state) in a
purely functional way". But this doesn't explain much. If you want to
know more, there are some pretty good tutorials on
http://www.haskell.org/.
So, what would be a statement in an imperative language is an anonymous
function that gets added to the monad, and then, when the monad is run,
these functions get executed.
A monad is a type, it isn't run. The thing you run can be called a
monadic action. You don't add functions to a monad (in this sense), you
build a monadic action from smaller monadic actions, gluing them with
functions - here's where anonymous functions are natural.
The point being, that you have a lot of small functions (one for each
statement) which are likely not to be used anywhere else, so defining
them as named functions would be a bit of a pain in the arse.
Exactly!
Actually, defining them as unnamed functions via lambdas would be annoying
too, although not as annoying as using named functions - what you really
want is macros, so that what looks like a statement can be interpreted as
a piece of code to be executed later.


Haskell has one such "macro" - this is the do-notation syntax. But its
translation to ordinary lambdas is very straightforward, and the choice
between using the do-notation or lambdas with >>= is a matter of
style.

Best regards
Tomasz
May 8 '06 #122


Alexander Schmolck wrote:
[trimmed groups]

Ken Tilton <ke*******@gmail.com> writes:

yes, but do not feel bad, everyone gets confused by the /analogy/ to
spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a brief
period I swore off the analogy because it was so invariably misunderstood.
Even Graham misunderstood it.

Count me in.


<g> But looking at what it says: "Think of the slots as cells in a
spreadsheet (get it?), and you've got the right idea. ", if you follow
the analogy (and know that slot means "data member" in other OO models)
you also know that Serge's Spreadsheet example would have scored a big
fat zero on the Miller Analogy Test. Serge in no way made slots in
Python classes behave like cells in a spreadsheet. He simply started
work on a Spreadsheet application, using Python classes along the way. Bzzt.

While everyone makes the mistake, it is only because few of us (me
included) read very carefully. Especially if they are more interested in
flaming than learning what someone is saying.

C'mon, people. I linked to Adobe's betting the ranch on such an idea. I
linked to Guy Steele's paper on the same idea. In which he marvelled
that it had not caught on. I could also link you to COSI over at STSCI,
presented at a Lisp Users Group Meeting in 1999 where they were jumping
up and down about the same thing. One of my users gets Cells because he
loved the KR system in Garnet. Look it up. I have more citations of
prior art. And, again, it has gone mainstream: Adobe has adopted the
paradigm.

Y'all might want to ease up on the pissing contest and learn something.
or not, I have been on Usenet before. :)


But it is such a great analogy! <sigh>
but what's the big deal about PyCells?
Here is a 22-line barebones implementation of a spreadsheet in Python;
later I create 2 cells "a" and "b", where "b" depends on "a", and evaluate all
the cells. The output is
a = negate(sin(pi/2)+one) = -2.0
b = negate(a)*10 = 20.0


Very roughly speaking, that is supposed to be the code, not the output. So you
would start with (just guessing at the Python, it has been years since I did
half a port to Python):
v1 = one
a = determined_by(negate(sin(pi/2)+v1))
b = determined_by(negate(a)*10)
print(a) -> -2.0 ;; this and the next are easy
print(b) -> 20
v1 = two ;; fun part starts here
print(b) -> 40 ;; of course a got updated, too

do you mean 30?

I've translated my interpretation of the above to this actual python code:

from math import sin, pi
v1 = cell(lambda: 1)
a = cell(lambda:-(sin(pi/2)+v1.val), dependsOn=[v1])
b = cell(lambda: -a.val*10, dependsOn=[a],
         onChange=lambda *args: printChangeBlurp(name='b', *args))
print 'v1 is', v1
print 'a is', a # -2.0 ;; this and the next are easy
print 'b is', b # 20
v1.val = 2 # ;; fun part starts here
print 'v1 now is', v1
print 'b now is', b # 30 ;; of course a got updated, too
I get the following printout:

v1 is 1
a is -2.0
b is [cell 'b' changed from <__main__.unbound object at 0xb4e2472c> to 20.0,
it was not bound]20.0
[cell 'b' changed from 20.0 to 30.0, it was bound ] v1 now is 2
b now is 30.0

Does that seem vaguely right?


<g> You have a good start. But you really have to lose the manual wiring
of dependencies, for several reasons:

-- it is a nuisance to do
-- it will be a source of bugs
-- it will be kind of impossible to do, because (in case you missed
it) the rule should be able to call any function and establish a
dependency on any other cell accessed. So when coding a change to a
function, one would have to go track down any existing rule to change
its dependsOn declaration, never mind the pain in the first place of
examining the entire call tree to see what else gets accessed.
-- it gets worse. I want you to further improve your solution by
handling rules such as this (I will just write Lisp):

(if (> a b)
    c   ;; this would be the "then" form
    d)  ;; this is the 'else'

The problem here is that the rule always creates dependencies on a and
b, but only one of c and d. So you cannot write a dependsOn anyway
(never mind all the other reasons for that being unacceptable).

The other thing we want is (really inventing syntax here):

on_change(a, new, old, old-bound?): print(list(new, old, old-bound?))

Is the above what you want (you can also dynamically assign onChange later
on, as required or have a list of procedures instead)?


Your onChange seems to be working fine. One thing we are glossing over
here is that we want to use this to extend the object system. In that
case, as I said and as no one bothered to comprehend, we want /slots/ to
behave like spreadsheet cells. Not globals. And I have found that these
onChange deals are most sensibly defined on slots, not cell by cell.

That said, if you did work something up similar for Python classes, i
have no doubt you could do that.

Again, if anyone is reading and not looking to just have a flamewar,
they will recall i have already done once a partial port of Cells to
python. (I should go look for that, eh? It might be two computer systems
back in the closet though. <g>)

Then the print statements Just Happen. I.e., it is not as if we are just hiding
computed variables behind syntax and computations get kicked off when a value
is read. Instead, an underlying engine propagates any assignment throughout
the dependency graph before the assignment returns.

Updating on write rather than recalculating on read does in itself not seem
particularly complicated.


<heh-heh> Well, there are some issues. B and C depend on A. B also
depends on C. When A changes, you have to compute C before you compute
B, or B will get computed with an obsolete value of C and be garbage.
And you may not know when A changes that there is a problem, because as
you can see from my example (let me change it to be relevant):

(if (> a d) c e)

It may be that during the prior computation a was <= d and did /not/
depend on c, but with the new value a is > d and a new code branch
will be taken, leading to c.

It was not that hard to figure all that out (and it will be easier for
you given the test case <g>) but I would not say propagation is
straightforward. There are other issues as well, including handling
assignments to cells within observers. This is actually useful
sometimes, so the problem needs solving.

My Cells hack does the above, not with global variables, but with slots (data
members?) of instances in the CL object system. I have thought about doing it
with global variables such as a and b above, but never really seen much of
need, maybe because I like OO and can always think of a class to create of
which the value should be just one attribute.

OK, so in what way does the quick 35 line hack below also completely miss your
point?


What is that trash talking? I have not seen your code before, so of
course I have never characterized it as completely missing the point.
Spare me the bullshit, OK?

Alexander, you are off to a, well, OK start on your own PyCells. You
have not made a complete mess of the low-hanging fruit, but neither have
you done a very good job. Requiring the user to declare dependencies was
weak -- I never considered anything that (dare I say it?) unscaleable.
Like GvR with Python, I knew from day one that Cells had to very simple
on the user. Even me, their developer. But do not feel too bad, the GoF
Patterns book described more prior art (I might have mentioned) and they
had explicit (and vague) subscribe/unsubscribe requirements.

As for the rest of your code, well, propagation should stop if a cell
recomputes the same value (it happens). And once you have automatic
dependency detection, well, if the rule is (max a b) and it turns out
that b is just 42 (you had cell(lambda: 1)... why not just cell(1) or
just 1), then do not record a dependency on b. (Another reason why the
user cannot code dependsOn -- it is determined at run time, not by
examination of the code.)
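
(For readers wondering how dependencies can be recorded at run time
rather than declared, here is a bare-bones sketch of the usual trick --
purely hypothetical Python, nothing like the real Cells code. While a
rule runs it sits on a stack, and every cell read during that run
records the running cell as a dependent. It punts on the ordering
problem described above, on dropping stale dependencies, and on
observers.)

class Cell(object):
    _evaluating = []      # stack of cells whose rules are currently running

    def __init__(self, rule=None, value=None):
        self.rule, self.value, self.dependents = rule, value, set()
        if rule is not None:
            self._recompute()

    def get(self):
        if Cell._evaluating:                           # a rule is reading us,
            self.dependents.add(Cell._evaluating[-1])  # so record the dependency
        return self.value

    def set(self, value):                              # for input cells
        if value != self.value:
            self.value = value
            for d in list(self.dependents):
                d._recompute()

    def _recompute(self):
        Cell._evaluating.append(self)
        try:
            new = self.rule()
        finally:
            Cell._evaluating.pop()
        if new != self.value:          # propagation stops on an unchanged value
            self.value = new
            for d in list(self.dependents):
                d._recompute()

a = Cell(value=1)
b = Cell(rule=lambda: a.get() * 10)   # dependency on a is found by running the rule
a.set(3)
print b.get()                         # -> 30
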

Now we need to talk about filters on a dependency....

kenny (expecting more pissing and less reading of the extensive on-line
literature on constraints)
--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 8 '06 #123
Frank Buss <fb@frank-buss.de> wrote:
Alex Martelli wrote:
Not sure what the &key means here, but omitting that

def black_white(function, limit):
def result(x,y):
if function(x, y) > limit: return 1.0
else: return 0.0
return result
&key is something like keyword arguments in Python. And looks like you are


Ah, thanks.
right again (I've tested it in Pyhton) and my assumption was wrong, so the
important thing is to support closures, which Python does, even with local
function definitions.


We do appear to entirely agree. In Python <= 2.4, where if is just a
statement (not an expression), you'd need some trick to get this effect
with a lambda, e.g.:

def black_white(function, limit, key=None):
    return lambda x,y: 1.0 * (function(x,y) > limit)

assuming it's important to get a float result -- the > operator per se
returns an int, so you can call float() on it, or multiply it by 1.0,
etc -- if you had two arbitrary colors, e.g.

def two_tone(function, limit, key=None, low=0.0, high=1.0):
    return lambda x,y: (low, high)[function(x,y) > limit]

which is a pretty obscure alternative. In Python >= 2.5, an if
expression has been added, but I'll leave you to judge if it's actually
an improvement (sigh)...:

def two_tone(function, limit, key=None, low=0.0, high=1.0):
    return lambda x,y: high if function(x,y) > limit else low

Personally, I'd rather use the named-function version. Anyway, they're
all semantically equivalent (sigh), and the key point is that the
semantics (building and returning functions on the fly) IS there,
whether the functions are named or unnamed, as we agree.
Alex
May 8 '06 #124
Chris Lambacher <ch***@kateandchris.net> wrote:
On Sun, May 07, 2006 at 11:57:55AM -0700, Alex Martelli wrote:
[1] I'm considering introducing bugs or misdesigns that have to be
fixed
as part of training for the purposes of this discussion. Also the


Actually, doing it _deliberately_ (on "training projects" for new people
just coming onboard) might be a good training technique; what you learn
by finding and fixing bugs nicely complements what you learn by studying
"good" example code. I do not know of this technique being widely used
in real-life training, either by firms or universities, but I'd love to
learn about counterexamples.


When I was learning C in university my professor made us fix broken programs.
He did this specifically to teach us to understand how to read compiler
warnings/errors and also how to debug software. The advantage of this in the
tutorial setting was that the TAs knew what the error was and could assist the
people in finding bugs in a controlled environment. When I later worked with
people who did not go through this training I found many of them had no clue
how to decipher the often cryptic C/C++ compiler warnings/errors (think
Borland Turbo C or MS Visual C++, GCC is pretty good in comparison) or where
to start looking for a bug (an affliction I do not possess).


Great to hear that SOME teachers use this technique. I think it would
be about just as valuable with any language (or other similar piece of
technology).
Alex
May 8 '06 #125
Alex Martelli wrote:
Worst case, you name all your functions Beverly so you don't have to
think about the naming
I didn't think about this, probably because I am accustomed to Haskell,
where you rather give functions different names (at the module top-level
you have no other choice). I just checked that it would work for nested
Beverly-lambdas (but could be quite confusing), but how about using more
then one lambda in an expression? You would have to name them
differently.
but you also have a chance to use meaningful names (such as,
presumably, zipperize_widget is supposed to be here) to help the
reader.


[OK, I am aware that you are talking solely about lambdas in Python,
but I want to talk about lambdas in general.]

Sometimes body of the function is its best description and naming what
it does would be only a burden. Consider that the same things that you
place in a loop body in python, you pass as a function to a HOF in
Haskell. Would you propose that all loops in Python have the form:

def do_something_with_x(x):
    ... do something with x ...

for x in generator:
    do_something_with_x(x)

Also, having anonymous functions doesn't take your common sense away, so
you still "have a chance".

Best regards
Tomasz
May 8 '06 #126
Tomasz Zielonka <to*************@gmail.com> wrote:
...
Also, having anonymous functions doesn't take your common sense away, so
you still "have a chance".


I've seen many people (presumably coming from Lisp or Scheme) code
Python such as:

myname = lambda ...

rather than the obvious Python way to do it:

def myname(...

((they generally do that right before they start whining that their
absurd choice doesn't let them put statements inside the "unnamed
function that they need to assign to a name")).

_THAT_ is what having many semantically overlapping (or identically
equivalent) ways to perform the same task does to people: it takes the
common sense away from enough of them that I'm statistically certain to
have to wrestle with some of them (be it as suppliers, people I'm trying
to help out on mailing lists etc, students I'm mentoring -- at least
being at Google means I don't have to fear finding such people as my
colleagues, but the memories and the scars of when I was a freelance
consultant are still fresh, and my heart goes out to the 99% of sensible
Pythonistas who don't share my good luck).

As long as Guido planned to remove lambda altogether in Python 3.0, I
could console myself with the thought that this frequent, specific
idiocy wasn't one I would have to wrestle with forever; now I know I
will have no such luck -- it's back to the dark ages. ((If I ever _DO_
find a language that *DOES* mercilessly refactor in pursuit of the ideal
"only one obvious way", I may well jump ship, since my faith in Python's
adherence to this principle which I cherish so intensely has been so
badly broken by GvR's recent decisions to keep lambdas, keep [<genexp>]
as an identical synonym for list(<genexp>), add {1,2,3} as an identical
synonym for set((1,2,3))...); though, being a greedy fellow, I'll
probably wait until all my Google options have vested;-)).
Alex
May 8 '06 #127
Patrick May <pj*@spe.com> wrote:

....an alleged reply to me, which in fact quotes (and responds to) only
to statements by Brian, without mentioning Brian...

Mr May, it seems that you're badly confused regarding Usenet's quoting
conventions. You may want to repeat your answer addressing specifically
the poster you ARE apparently answering. Nevertheless, I'll share my
opinions:
Using lambda in an expression communicates the fact that it will
be used only in the scope of that expression. Another benefit is that
declaration at the point of use means that all necessary context is
available without having to look elsewhere. Those are two pragmatic
benefits.
You still need to look a little bit upwards to the "point of use",
almost invariably, to see what's bound to which names -- so, you DO
"have to look elsewhere", nullifying this alleged benefit -- looking at
the def statement, immediately before the "point of use", is really no
pragmatic cost when you have to go further up to get the context for all
other names used (are they arguments of this function, variables from a
lexically-containing outer function, assigned somewhere...), which is
almost always. And if you think it's an important pragmatic advantage
to limit "potential scope" drastically, nothing stops you from wrapping
functions just for that purpose around your intended scope -- me, I find
that as long as functions are always kept small (as they should be for a
host of other excellent reasons anyway), the "ambiguity" of scope being
between the def and the end of the containing function is nil (literally
nil when the statement right after the def, using the named function, is
a return, as is often the case -- pragmatically equivalent to nil when
the statements following the def are >1 but sufficiently few).
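
For instance, a minimal sketch of that wrapping (made-up names; the helper's
name is visible only inside the enclosing function):

VAT_MULTIPLIER = 1.19

def handle_order(total):
    def total_with_vat(amount):
        # scoped as tightly as any lambda would be
        return amount * VAT_MULTIPLIER
    return total_with_vat(total)

print handle_order(100.0)    # 119.0, give or take float rounding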

Your "pragmatic benefits", if such they were, would also apply to the
issue of "magic numbers", which was discussed in another subthread of
this unending thread; are you therefore arguing, contrary to widespread
opinion [also concurred in by an apparently-Lisp-oriented discussant],
that it's BETTER to have magic unexplained numbers appear as numeric
constants "out of nowhere" smack in the middle of expressions, rather
than get NAMED separately and then have the names be used? If you
really believe in the importance of the "pragmatic benefits" you claim,
then to be consistent you should be arguing that...:

return total_amount * 1.19

is vastly superior to the alternative which most everybody would deem
preferable,

VAT_MULTIPLIER = 1.19
return total_amount * VAT_MULTIPLIER

because the alternative with the magic number splattered inexplicably
smack in the middle of code "communicated the fact" that it's used only
within that expression, and makes all context available without having
to look "elsewhere" (just one statement up of course, but then this
would be identically so if the "one statement up" was a def, and we were
discussing named vs unnamed functions vs "magic numbers").

3. It adds another construction to the language.


That's a very minimal cost relative to the benefits.


To my view of thinking, offering multiple semantically equivalent ways
(or, perhaps worse, "nearly equivalent but with subtle differences"
ones) to perform identical tasks is a *HUGE* conceptual cost: I like
languages that are and stay SMALL and SIMPLE. Having "only one obvious
way to do it" is just an ideal, but that's no reason to simply abrogate
it when it can so conveniently be reached (my only serious beef with
Python is that it *HAS* abdicated the pursuit of that perfect design
principle by recent decisions to keep lambda, and to keep the syntax
[<genexp>] as an identical equivalent to list(<genexp>), in the future
release 3.0, which was supposed to simplify and remove redundant stuff
accreted over the years: suddenly, due to those decisions, I don't
really look forward to Python 3.0 as I used to - though, as I've already
mentioned, being a greedy fellow I'll no doubt stick with Python until
all my Google options have vested).

You haven't made your case for named functions being preferable.


I think it's made at least as well as the case for using constant-names
rather than "magic numbers" numeric constants strewn throughout the
code, and THAT case is accepted by a wide consensus of people who care
about programming style and clarity, so I'm pretty happy with that.
Alex
May 8 '06 #128

Ken Tilton wrote:
Alexander Schmolck wrote:
[trimmed groups]

Ken Tilton <ke*******@gmail.com> writes:

yes, but do not feel bad, everyone gets confused by the /analogy/ to
spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a brief
period I swore off the analogy because it was so invariably misunderstood.
Even Graham misunderstood it.

Count me in.


<g> But looking at what it says: "Think of the slots as cells in a
spreadsheet (get it?), and you've got the right idea. ", if you follow
the analogy (and know that slot means "data member" in other OO models)
you also know that Serge's Spreadsheet example would have scored a big
fat zero on the Miller Analogy Test. Serge in no way made slots in
Python classes behave like cells in a spreadsheet. He simply started
work on a Spreadsheet application, using Python classes along the way. Bzzt.

While everyone makes the mistake, it is only because few of us (me
included) read very carefully. Especially if they are more interested in
flaming than learning what someone is saying.


I don't really mean any disrespect here, but if an analogy is not
interpreted correctly by a large group of people, the analogy is crap,
not the people. Yes, I understood it, specifically because I have spent
enough time dinking around with cell functions in a spreadsheet to
understand what you meant.

Maybe it would help to change the wording to "functions with cell
references in a spreadsheet" instead of "cells in a spreadsheet". Yes,
you lose the quippy phrasing but as it is most people use spreadsheets
as "simple database with informal ad hoc schema" and mostly ignore the
more powerful features anyways, so explicit language would probably
help the analogy. I'm guessing if you made some vague allusions to how
"sum(CellRange)" works in most spreadsheets people would get a better
idea of what is going on.

May 8 '06 #129
>If I ever _DO_ find a language that *DOES* mercilessly refactor in pursuit
of the ideal "only one obvious way", I may well jump ship, since my faith in
Python's adherence to this principle which I cherish so intensely has
been so badly broken ...


The phrase "only one obvious way..." is nearly the most absurd
marketing bullshit I have ever heard; topped only by "it fits your
brain". Why are so many clearly intelligent and apparently
self-respecting hard-core software engineers repeating this kind of
claptrap? It sounds more like a religious cult than a programming
language community. If one of my students answered the question: "Why
use X for Y?" with "X fits your brain." or "There's only one obvious
way to do Y in X." I'd laugh out loud before failing them.

May 8 '06 #130
ol*****@verizon.net wrote:
When you consider that there was just a big flamewar on comp.lang.lisp
about the lack of standard mechanisms for both threading and sockets in
Common Lisp (with the lispers arguing that it wasn't needed) I find it
"curious" that someone can say Common Lisp scales well.


In comp.lang.python there are often discussions about which is the best
web framework or what is the best gui. There seems to be some common
meme in these kinds of discussions and the lambda controversy. I'm even
ready to expand the concept even more and include documentation problems
and polymorphic typing.

So what is the big advantage of using parens then that is making people
give up documenting their code by naming functions? (See, I'm getting
into the right kind of lingo for discussing these kind of questions)

Well, there seems to be some advantage to conceptually decoupling a
function from what it is doing *now* (which can be easily named) and
what it is doing in some other situation. Naming things is only a
ballast and makes the mental model not "fit the brain" (introducing
pythonic terminology here for the lispers).

This is a lot like polymorphic functions. For example an adding function
sometimes adds integers and sometimes floats or complex variables and
it can be defined just once without specifying which type of parameters
it is going to get. I assume this to be a piece of cake for most lispers
and pythoneers, but possibly this could still confuse some static typers.
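
In Python terms that is just, say (a trivial made-up example):

def add(a, b):
    return a + b              # one definition, many argument types

print add(1, 2)               # 3
print add(1.5, 2.25)          # 3.75
print add(1 + 2j, 3)          # (4+2j)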

An anonymous function is like a polymorphic function in that it is
possible to make the "mental model" about it polymorphic, instead of
just its parameters. This enables the lispers to just "take what it does
and paste it where that needs to be done" (inventing crypto speak here).

This is a very effective way of handling operations and it would
surprise me if not 99 percent of the Python coders do things mentally
this way too and only add names and documentation at the last possible
moment (mental compile time documentation procedure).

So here we're integrating mental models concerning polymorphism into the
way we talk and think about code, and naming things explicitly always
seems to be a burden.

But now we let the other side of our brain speak for a moment, it was
always the side that translated everything we wanted to say to each
other here into mental Unicode so that we can hear what the others are
saying (further diving into the linguistic pit I am digging here).

Yes, communication is what suffers from *not* naming things, and right
after it documentation and standardization. How else are we going to
communicate our findings verbally to the non coders and the trans coders?

Also naming functions and variables can help us create appropriate
mental models that 'fix' certain things in place and keep them in the
same state, because now they are 'documented'. This promotes people
being able to work together and also it enables measuring progress, very
important aspects for old world companies who won't understand the way
things are evolving (even if they seem to have roaring success at the
moment).

Not to say that I invented something new, it was always a theme, but now
it's a meme,(he, he), the conflict between the scripture and the
mysticism. It's such a pity that everyone understands some way or
another that mysticism is the way things work but that none wants to
acknowledge it.

What am I doing here coding Python one might ask, well, the knowledge
has to be transferred to my brain first *somehow*, and until someone
finds a better way to do that or until there is so much procedural
information in my head that I can start autocoding (oh no) that seems to
be the better option.

Anton


May 8 '06 #131
ol*****@verizon.net writes:
Alex Martelli wrote:
Steve R. Hastings <st***@hastings.org> wrote:
...
> But the key in the whole thread is simply that indentation will not
> scale. Nor will Python.

This is a curious statement, given that Python is famous for scaling well.


I think "ridiculous" is a better characterization than "curious", even
if you're seriously into understatement.


When you consider that there was just a big flamewar on comp.lang.lisp
about the lack of standard mechanisms for both threading and sockets in
Common Lisp (with the lispers arguing that it wasn't needed) I find it
"curious" that someone can say Common Lisp scales well.


You really need to get better at distinguishing between reality and
usenet flamewars. While some comp.lang.lispers were bitching back and
forth about this, others of us were in Hamburg listening to Martin
Cracauer from ITA talking about "Common Lisp in a high-performance
search environment". In case you aren't aware, ITA is the company
that makes the search engine behind Orbitz.

May 8 '06 #132


Adam Jones wrote:
Ken Tilton wrote:
Alexander Schmolck wrote:
[trimmed groups]

Ken Tilton <ke*******@gmail.com> writes:

yes, but do not feel bad, everyone gets confused by the /analogy/ to
spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a brief
period I swore off the analogy because it was so invariably misunderstood.
Even Graham misunderstood it.
Count me in.
<g> But looking at what it says: "Think of the slots as cells in a
spreadsheet (get it?), and you've got the right idea. ", if you follow
the analogy (and know that slot means "data member" in other OO models)
you also know that Serge's Spreadsheet example would have scored a big
fat zero on the Miller Analogy Test. Serge in no way made slots in
Python classes behave like cells in a spreadsheet. He simply started
work on a Spreadsheet application, using Python classes along the way. Bzzt.

While everyone makes the mistake, it is only because few of us (me
included) read very carefully. Especially if they are more interested in
flaming than learning what someone is saying.

I don't really mean any disrespect here, but if an analogy is not
interpreted correctly by a large group of people, the analogy is crap,
not the people.


No, I do not think that follows. I reiterate: people (inluding me!) read
too quickly, and this analogy has a trap in it: spreadsheets are /also/
software.

The analogy is fine and the people are fine, but as you suggest there is
a human engineering problem to be acknowledged.

btw, I have a couple of links to papers on similar art and they all use
the spreadsheet metaphor. It is too good not to, but...
Yes, I understood it, specifically because I have spent
enough time dinking around with cell functions in a spreadsheet to
understand what you meant.

Maybe it would help to change the wording to "functions with cell
references in a spreadsheet" instead of "cells in a spreadsheet".


<g> We could do a study. I doubt your change would work, but, hey, that
is what studies are for.

I think probably the best thing to do with the human engineering problem
is attack the misunderstanding explicitly. "Now if you are like most
people, you think that means X. It does not." And then give an example,
and then again say what it is not.

Anyone who comes away from /that/ with the wrong idea just is not trying.

But I would not put that in the project synopsis, and that is all the
original confused poster read. Just not trying.

kenny

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 8 '06 #133
On Sun, 07 May 2006 10:36:00 -0400, Ken Tilton <ke*******@gmail.com>
wrote:
[...]

Your spreadsheet does not have slots ruled by functions, it has one slot
for a dictionary where you store names and values/formulas.

Go back to your example and arrange it so a and b are actual slots (data
members? fields?) of the spreadsheet class. You can just stuff numbers in a:

sheet1.a = 42

but b should be somehow associated with a rule when sheet1 is created.
As I said in the other post, also associate an on-change callback with
slots a and b.
I must be missing something - seems this should be easy using
__setattr__ and __getattr__. Then _literally_ there's just a
dict containing names and functions, but when you _use_ the
class it looks just like the above:
[...]

When that is done we can look at a working example and see how well
Python fared without macros and full-blown lambda.


No lambda in the non-programmer-half-hour implementation below.
You need to define a named function for each cell to use as
a callback. Except for that what are Cells supposed to do that
the implementation below doesn't do?

"""PyCells.py"""

class Cell:

def __init__(self, name, owner, callback):
self.name = name
self.callback = callback
self.owner = owner

def onchange(self, value):
self.value = value
self.callback(self, value)

class Cells:

def __init__(self):
#self.slots = {}
#Oops, don't work so well with __setattr__:
self.__dict__['slots'] = {}

def __setattr__(self, name, value):
self.slots[name].onchange(value)

def __getattr__(self, name):
return self.slots[name].value

def AddCell(self, name, callback):
self.slots[name] = Cell(name, self, callback)

***********

Sample use:

cells = Cells()

def acall(cell, value):
    cell.owner.slots['b'].value = value + 1

cells.AddCell('a',acall)

def bcall(cell, value):
    cell.owner.slots['a'].value = value - 1

cells.AddCell('b',bcall)

cells.a = 42
print cells.a, cells.b
cells.b = 24
print cells.a, cells.b

************************

David C. Ullrich
May 8 '06 #134
On Mon, 08 May 2006 08:05:38 -0500, David C. Ullrich
<ul*****@math.okstate.edu> wrote:
[...]

def acall(cell, value):
    cell.owner.slots['b'].value = value + 1


Needing to say that sort of thing every time
you define a callback isn't very nice.
New and improved version:

"""PyCells.py"""

class Cell:

def __init__(self, name, owner, callback):
self.name = name
self.callback = callback
self.owner = owner

def onchange(self, value):
self.value = value
self.callback(self, value)

def __setitem__(self, name, value):
self.owner.slots[name].value = value

class Cells:

def __init__(self):
self.__dict__['slots'] = {}

def __setattr__(self, name, value):
self.slots[name].onchange(value)

def __getattr__(self, name):
return self.slots[name].value

def AddCell(self, name, callback):
self.slots[name] = Cell(name, self, callback)

Sample:

cells = Cells()

def acall(cell, value):
    cell['b'] = value + 1

cells.AddCell('a',acall)

def bcall(cell, value):
    cell['a'] = value - 1

cells.AddCell('b',bcall)

cells.a = 42
print cells.a, cells.b
cells.b = 24
print cells.a, cells.b
#OR you could give Cell a __setattr__ so the above
#would be cell.a = value - 1. I think I like this
#version better; in applications I have in mind I
#might be iterating over lists of cell names.
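
A minimal sketch of that variant (assuming Cell's own bookkeeping attributes
get special-cased, and that AddCell would then build this class instead):

class NamedSetCell(Cell):
    # route unknown attribute sets to sibling slots, so a callback can
    # write  cell.a = value - 1  instead of  cell['a'] = value - 1
    def __setattr__(self, name, value):
        if name in ('name', 'callback', 'owner', 'value'):
            self.__dict__[name] = value            # Cell's own attributes
        else:
            self.owner.slots[name].value = value   # write through to a sibling slot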

************************

David C. Ullrich
May 8 '06 #135


David C. Ullrich wrote:
On Sun, 07 May 2006 10:36:00 -0400, Ken Tilton <ke*******@gmail.com>
wrote:

[...]

Your spreadsheet does not have slots ruled by functions, it has one slot
for a dictionary where you store names and values/formulas.

Go back to your example and arrange it so a and b are actual slots (data
members? fields?) of the spreadsheet class. You can just stuff numbers in a:

sheet1.a = 42

but b should be somehow associated with a rule when sheet1 is created.
As I said in the other post, also associate an on-change callback with
slots a and b.

I must be missing something - seems this should be easy using
__setattr__ and __getattr__. Then _literally_ there's just a
dict containing names and functions, but when you _use_ the
class it looks just like the above:


Ah, but looks like is not enough. Suppose you have a GUI class from
Tkinter. After a little more playing and fixing the huge gap described
in the next paragraph you decide, Cripes! Kenny was right! This is very
powerful. So now you want to subclass a Tkinter button and control
whether it is enabled with a rule (the huge gap, btw). But the enabled
flag of the super class is a native Python class slot. How would you
handle that with your faux object system? Had you truly extended the
Python class system you could just give the inherited slot a rule.
Speaking of which...

btw, You claimed "no lambda" but I did not see you doing a ruled value
anywhere, and that is where you want the lambda. And in case you are
thinking your callbacks do that:

No, you do not want on-change handlers propagating data to other slots,
though that is a sound albeit primitive way of improving
self-consistency of data in big apps. The productivity win with VisiCalc
was that one simply writes rules that use other cells, and the system
keeps track of what to update as any cell changes for you. You have that
exactly backwards: every slot has to know what other slots to update. Ick.
kenny

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 8 '06 #136

Alex Martelli wrote:
Ken Tilton <ke*******@gmail.com> wrote:
...
But the key in the whole thread is simply that indentation will not
scale. Nor will Python.
Absolutely. That's why firms who are interested in building *seriously*
large scale systems, like my employer (and supplier of your free mail
account), would never, EVER use Python,


So how much Python code runs when I check my gmail?
nor employ in prominent
positions such people as the language's inventor and BDFL, the author of
the most used checking tool for it, and the author of the best-selling
reference book about that language; and, for that matter, a Director of
Search Quality who, while personally a world-renowned expert of AI and
LISP, is on record as supporting Python very strongly, and publically
stating its importance to said employer.


Doesn't Google also employ such people as the inventor of Limbo
programming language, one of the inventors of Dylan, and a Smalltalk
expert?

May 8 '06 #137

Alex Martelli wrote:

Your "pragmatic benefits", if such they were, would also apply to the
issue of "magic numbers", which was discussed in another subthread of
this unending thread; are you therefore arguing, contrary to widespread
opinion [also concurred in by an apparently-Lisp-oriented discussant],
that it's BETTER to have magic unexplained numbers appear as numeric
constants "out of nowhere" smack in the middle of expressions, rather
than get NAMED separately and then have the names be used? If you
really believe in the importance of the "pragmatic benefits" you claim,
then to be consistent you should be arguing that...:

return total_amount * 1.19

is vastly superior to the alternative which most everybody would deem
preferable,

VAT_MULTIPLIER = 1.19
return total_amount * VAT_MULTIPLIER

because the alternative with the magic number splattered inexplicably
smack in the middle of code "communicated the fact" that it's used only
within that expression, and makes all context available without having
to look "elsewhere" (just one statement up of course, but then this
would be identically so if the "one statement up" was a def, and we were
discussing named vs unnamed functions vs "magic numbers").
Most languages allow `unnamed numbers'. The `VAT_MULTIPLIER' argument is a
strawman. Would you want to have to use a special syntax to name the
increment in a loop?

defnumber zero 0
defnumber one { successor (zero); }

for (int i = zero; i < limit; i += one) { ...}

If your language allows unnamed integers, unnamed strings, unnamed
characters, unnamed arrays or aggregates, unnamed floats, unnamed
expressions, unnamed statements, unnamed argument lists, etc. why
*require* a name for trivial functions?
Wouldn't all the other constructs benefit by having a required name as
well?

To my view of thinking, offering multiple semantically equivalent ways
(or, perhaps worse, "nearly equivalent but with subtle differences"
ones) to perform identical tasks is a *HUGE* conceptual cost: I like
languages that are and stay SMALL and SIMPLE.


Then why not stick with S and K combinators? There are few languages
SMALLER and
SIMPLER.
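
For the record, those two combinators fit in a couple of lines of Python
(illustration only, and yes, it needs lambda):

K = lambda x: lambda y: x                      # K combinator
S = lambda f: lambda g: lambda x: f(x)(g(x))   # S combinator
I = S(K)(K)                                    # identity, built from S and K
print I(42)                                    # 42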

May 8 '06 #138
On 2006-05-08 02:51:22 -0400, JS******@gmail.com said:
The phrase "only one obvious way..." is nearly the most absurd
marketing bullshit I have ever heard; topped only by "it fits your
brain". Why are so many clearly intelligent and apparently
self-respecting hard-core software engineers repeating this kind of
claptrap?

Really should read "only one obvious way to people with a similar
background and little creativity" or "it fits your brain if you've
mostly programmed in algol syntax languages and alternative ideas make
said brain hurt."

trimmed to c.l.python and c.l.lisp

May 8 '06 #139
[Sorry, i was just reading comp.lang.lisp, missed the following till
someone mentioned it in email. k]

Alex Martelli wrote:
Carl Friedrich Bolz <cf****@gmx.de> wrote:
...
an extension that allows the programmer to specify how the value of
some slot (Lisp lingo for "member variable") can be computed. It
frees the programmer from having to recompute slot values since Cells

...
I have not looked at Cells at all, but what you are saying here sounds
amazingly like Python's properties to me. You specify a function that
calculates the value of an attribute (Python lingo for something like a

You're right that the above-quoted snipped does sound exactly like
Python's properties, but I suspect that's partly because it's a very
concise summary. A property, as such, recomputes the value each and
every time, whether the computation is necessary or not; in other words,
it performs no automatic caching/memoizing.


Right, and the first thing we did was simply that, no memoizing. The
second thing we did was memoize without tracking dependencies, updating
everything on each pass thru the eventloop. We knew both would not
(uh-oh) scale, but we wanted to see if the approach solved the
computational problem that started the whole research programme.

Soon enough we were tracking dependencies.


A more interesting project might therefore be a custom descriptor, one
that's property-like but also deals with "caching wherever that's
possible". This adds interesting layers of complexity, some of them not
too hard (auto-detecting dependencies by introspection), others really
challenging (reliably determining what attributes have changed since
last recomputation of a property). Intuition tells me that the latter
problem is equivalent to the Halting Problem -- if somewhere I "see" a
call to self.foo.zap(), even if I can reliably determine the leafmost
type of self.foo, I'm still left with the issue of analyzing the code
for method zap to find out if it changes self.foo on this occasion, or
not -- there being no constraint on that code, this may be too hard.
"no constraint on the code" is a sufficient objection but also: if the
rule for the code is (in neutral pseudo-code)

if a is true
then return b
else return c

...you only want to depend on one of b or c at a time. (Also, one need not
do the internal dependency tracking on, say, b if it happens to hold an
immutable value; declaring that is part of the Cells API. Nice performance
win, it turns out.)

I just keep what I call a "datapulse ID", sequentially growing from
zero, in a global variable. Each ruled Cell keeps track of its memoized
value, datapulse stamp, and whether it in fact changed value in reaching
its current datapulse stamp. (I can reach the current datapulse stamp by
determining that no dependency (direct or indirect, recursively down the
dependency graph) is both more current than me /and/ in fact changed in
value getting there.[1] If not, I take on the current datapulse but
never run my rule. Or, if yes, I run my rule but might compute the same
value as last time. Either way, I can flag myself as current but
not-actually-changed.)

So we have push and pull. The whole thing gets kicked off by a setting
operation on some slot, who notifies dependents that they should
recompute. This is a cascade of further notifications if anyone notified
recomputes and in fact computes a different value. The pull comes in
while rules are running to make sure no obsolete value gets used. So the
dependency graph gets updated JIT during rule evaluations, possibly
recursively so.
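
A rough sketch of that bookkeeping in Python (all names invented; this is not
the Cells source, and dependencies are listed explicitly here only to keep the
sketch short, whereas the real system discovers them while the rule runs):

datapulse = [0]                  # global counter, bumped by every setting operation

class InputSlot:
    def __init__(self, value):
        self.value = value
        self.changed_at = 0      # datapulse of the last actual change

    def current_value(self):
        return self.value

    def set(self, value):
        datapulse[0] += 1        # a setting operation starts a new datapulse
        self.value = value
        self.changed_at = datapulse[0]

class RuledSlot:
    def __init__(self, rule, dependencies):
        self.rule = rule                 # no-arg function computing the value
        self.dependencies = dependencies
        self.value = None
        self.stamp = -1                  # datapulse at which value was last brought current
        self.changed_at = -1             # datapulse of the last actual change

    def current_value(self):
        if self.stamp != datapulse[0]:   # not yet current for this datapulse
            for d in self.dependencies:
                d.current_value()        # pull: bring dependencies current first
            if any(d.changed_at > self.stamp for d in self.dependencies):
                newvalue = self.rule()   # an input really changed: run the rule
                if newvalue != self.value:
                    self.value = newvalue
                    self.changed_at = datapulse[0]
            self.stamp = datapulse[0]    # current even if the rule never ran
        return self.value

a = InputSlot(42)
b = RuledSlot(lambda: a.current_value() + 1, [a])
print b.current_value()          # 43
a.set(10)
print b.current_value()          # 11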


The practical problem of detecting alterations may be softened by
realizing that some false positives are probably OK -- if I know that
self.foo.zap() *MAY* alter self.foo, I might make my life simpler by
assuming that it *HAS* altered it. This will cause some recomputations
of property-like descriptors' values that might theoretically have been
avoided, "ah well", not a killer issue. Perhaps a more constructive
approach would be: start by assuming the pseudoproperty always
recomputes, like a real property would; then move toward avoiding SOME
needless recomputations when you can PROVE they're needless. You'll
never avoid ALL needless recomputations, but if you avoid enough of them
to pay for the needed introspection and analysis, it may still be a win.
As to whether it's enough of a real-world win to warrant the project, I
pass -- in a research setting it would surely be a worthwhile study, in
a production setting there are other optimizations that look like
lower-hanging fruits to me. But, I'm sure the Cells people will be back
with further illustrations of the power of their approach, beyond mere
"properties with _some_ automatic-caching abilities".


Who knows, maybe something like that lies in the future of cells. It
would not be because of the need for update being undecidable, it would
be because the decision somehow would be too expensive compared to "Just
Run the Rule!"[2] It seems every new application brings up interesting
new requirements where Cells can usefully be extended with new
capabilities. But over time the code has gotten simpler and simpler, so
I think we are headed in the right direction.

kenny

[1] Aha! I see a flaw. Arises if two datapulses pass before I (a cell
<g>) get read and must determine if my cache is obsolete, and some
dependency changed in the first datapulse but not the second. I have to
enhance regression test suite and cure. One possible cure is akin to
your thinking: if I missed a generation, I have to assume it changed in
the missed generation. The alternatives are keeping a history of my
datapulses going back no further than the last true change, or having a
Cell keep a separate record of the last value used from each dependency.
The second could get expensive if I consult a lot of other cells,
while the first ... ah wait, I do not need a history, I just need one
new attribute: datapulse-of-latest-actual change. Sweet.

[2] I am making a mental note to look into that optimization.
May 8 '06 #140


Ken Tilton wrote:

I just keep what I call a "datapulse ID", sequentially growing from
zero, in a global variable. Each ruled Cell keeps track of its memoized
value, datapulse stamp, and whether it in fact changed value in reaching
its current datapulse stamp. (I can reach the current datapulse stamp by
determining no dependency (direct or indirect recursively down the
dependency graph) is both more current than me /and/ in fact changed in
value getting there.[1] If not, I take on the current datapulse but
never run my rule. Or, if yes, I run my rule but might compute the same
value as last time. Either way, I can flag myself as current but
not-actually-changed.)
.....

[1] Aha! I see a flaw. Arises if two datapulses pass before I (a cell
<g>) get read and must determine if my cache is obsolete, and some
dependency changed in the first datapulse but not the second.


No, I do not think that can happen. I was conflating two mutually
exclusive paths. If I am checking a dependency that means it would have
notified me when it in fact changed. If my rule is running i will always
get a valid value from a read. So i do not think there is a hole in
there anywhere.

In any case, I would not have needed a new last-changed-datapulse slot,
i could just change the "changed" flag to be last-changed-datapulse.

kenny

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 8 '06 #141
Ken Tilton <ke*******@gmail.com> writes:
No, you do not want on-change handlers propagating data to other
slots, though that is a sound albeit primitive way of improving
self-consistency of data in big apps. The productivity win with
VisiCalc was that one simply writes rules that use other cells, and
the system keeps track of what to update as any cell changes for
you. You have that exactly backwards: every slot has to know what
other slots to update. Ick.


No no, that's fine and the way it should be: when you change a slot,
it should know who to update. And that's also the way it works in
Cells. The trick is that Cells takes care of that part for you: all
the *programmer* has to care about is what values a slot depends on --
Cells takes care of inverting that for you, which is important because
that's a job that a computer is much better at than a human.
May 8 '06 #142
In article <1h*****************************@yahoo.com>,
Alex Martelli <al*****@yahoo.com> wrote:
May 8 '06 #143
al***@mac.com (Alex Martelli) writes:
...an alleged reply to me, which in fact quotes (and responds to)
only statements by Brian, without mentioning Brian...

Mr May, it seems that you're badly confused regarding Usenet's
quoting conventions.
It seems that someone pisses in your cornflakes nearly every
morning.

For the record, I was attempting to respond to your post which I
only saw quoted in another message. Please excuse any accidental
misquoting.
Using lambda in an expression communicates the fact that it
will be used only in the scope of that expression. Another benefit
is that declaration at the point of use means that all necessary
context is available without having to look elsewhere. Those are
two pragmatic benefits.


You still need to look a little bit upwards to the "point of use",
almost invariably, to see what's bound to which names -- so, you DO
"have to look elsewhere", nullifying this alleged benefit -- looking at
the def statement, immediately before the "point of use", is really no
pragmatic cost when you have to go further up to get the context for all
other names used (are they arguments of this function, variables from a
lexically-containing outer function, assigned somewhere...), which is
almost always.


It appears that you write much longer functions than I generally
do. Requiring that all functions be named adds even more to the
clutter.
And if you think it's an important pragmatic advantage to limit
"potential scope" drastically, nothing stops you from wrapping
functions just for that purpose around your intended scope
Or, I could just use a language that supports unnamed functions.
Your "pragmatic benefits", if such they were, would also apply to the
issue of "magic numbers",


That claim is, frankly, silly. A function is far more
understandable without a name than a value like 1.19 in isolation.
The situations aren't remotely comparable.
>> 3. It adds another construction to the language.


That's a very minimal cost relative to the benefits.


To my view of thinking, offering multiple semantically equivalent
ways (or, perhaps worse, "nearly equivalent but with subtle
differences" ones) to perform identical tasks is a *HUGE* conceptual
cost: I like languages that are and stay SMALL and SIMPLE.


Like Scheme?

Regards,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc. | The experts in large scale distributed OO
| systems design and implementation.
pj*@spe.com | (C++, Java, Common Lisp, Jini, CORBA, UML)
May 8 '06 #144


Thomas F. Burdick wrote:
Ken Tilton <ke*******@gmail.com> writes:

No, you do not want on-change handlers propagating data to other
slots, though that is a sound albeit primitive way of improving
self-consistency of data in big apps. The productivity win with
VisiCalc was that one simply writes rules that use other cells, and
the system keeps track of what to update as any cell changes for
you. You have that exactly backwards: every slot has to know what
other slots to update. Ick.

No no, that's fine and the way it should be: when you change a slot,
it should know who to update. And that's also the way it works in
Cells. The trick is that Cells takes care of that part for you: all
the *programmer* has to care about is what values a slot depends on --
Cells takes care of inverting that for you, which is important because
that's a job that a computer is much better at than a human.


Well, as long as we are being precise, an important distinction is being
obscured here: I was objecting to a slot "knowing" who to update thanks
to having been hardcoded to update certain other slots. When you say
"Cells takes care of that", it is important to note that it does so
dynamically at runtime based on actual usage of one slot by the rule for
another slot.

kt

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 8 '06 #145
On 08 May 2006 12:53:09 -0700, tf*@conquest.OCF.Berkeley.EDU (Thomas
F. Burdick) wrote:
Ken Tilton <ke*******@gmail.com> writes:
No, you do not want on-change handlers propagating data to other
slots, though that is a sound albeit primitive way of improving
self-consistency of data in big apps. The productivity win with
VisiCalc was that one simply writes rules that use other cells, and
the system keeps track of what to update as any cell changes for
you. You have that exactly backwards: every slot has to know what
other slots to update. Ick.
No no, that's fine and the way it should be: when you change a slot,
it should know who to update. And that's also the way it works in
Cells. The trick is that Cells takes care of that part for you:


I'm glad you said that - this may be what he meant, but it seems
more plausible than what he actually said.
all
the *programmer* has to care about is what values a slot depends on --
Cells takes care of inverting that for you, which is important because
that's a job that a computer is much better at than a human.


Fine. I suppose that is better; if b is going to return a + 1
the fact that this is what b returns should belong to b, not
to a. So a has an update list including b, so when a's value
is set a tells b it needs to update itself.

If we're allowed to pass (at some point, to some constructor
or other) something like (b, a + 1, [a]), which sets up a
cell b that shall return a + 1, and where the [a] is used
in the constructor to tell a to add b to a's update list
then this seems like no big deal.

And doing that doesn't seem so bad - now when the programmer
is writing b he has to decide that it should return a + 1
and also explicitly state that b shall depend on a; this
is all nice and localized, it's still _b_ telling _a_ to
add b to a's update list, and the programmer only has
to figure out what _b_ depends on when he's writing _b_.
Doesn't seem so bad.

But of course it would be better to be able to pass just
something morally equivalent to (b, a + 1) to whatever
constructor and have the system figure out automatically
that since b returns a + 1 it has to add a to b's update
list. There must be some simple trick to accomplish that
(using Python, without parsing code). (I begin to see the
point to the comment about how the callbacks should fire
when things are constructed.) Exactly what the trick is I
don't see immediately.

In Cells do we just pass a rule using other cells to
determine this cell's value, or do we also include
an explicit list of cells that this cell depends on?

************************

David C. Ullrich
May 8 '06 #146

Steve R. Hastings wrote:
On Fri, 05 May 2006 21:16:50 -0400, Ken Tilton wrote:
The upshot of
what he wrote is that it would be really hard to make semantically
meaningful indentation work with lambda.
Pretty much correct. The complete thought was that it would be painful
all out of proportion to the benefit.

See, you don't need multi-line lambda, because you can do this:
def make_adder(x):
    def adder_func(y):
        sum = x + y
        return sum
    return adder_func


Now imagine you had to do this with every object.

def add_five(x):
    # return x + 5   <-- anonymous integer literal, not allowed!!!
    five = 5         # define it first
    return x + five

Think about the ramifications of every object having to have a name in
some environment, so that at the leaves of all expressions, only names
appear, and literals can only be used in definitions of names.

Also, what happens in the caller who invokes make_adder? Something like
this:

adder = make_adder(42)

Or perhaps even something like this

make_adder(2)(3) --> 5

Look, here the function has no name. Why is that allowed? If anonymous
functions are undesirable, shouldn't there be a requirement that the
result of make_adder has to be bound to a name, and then the name must
be used?
Note that make_adder() doesn't use lambda, and yet it makes a custom
function with more than one line. Indented, even.
That function is not exactly custom. What is custom are the environment
bindings that it captures. The code body comes from the program itself.

What about actually creating the source code of a function at run-time
and compiling it?

(let ((source-code (list 'lambda (list 'x 'y) ...)))
  (compile nil source-code))

Here, we are applying the compiler (available at run-time) to syntax
which represents a function. The compiler analyzes the syntax and
compiles the function for us, giving us an object that can be called.

Without that syntax which can represent a function, what do you pass to
the compiler?

If we didn't have lambda in Lisp, we could still take advantage of the
fact that the compiler can also take an interpreted function object and
compile that, rather than source code. So we could put together an
expression which looks like this:

(flet ((some-name (x y) ...)) #'some-name)

We could EVAL this expression, which would give us a function object,
which can then be passed to COMPILE. So we have to involve the
evaluator in addition to the compiler, and it only works because the
compiler is flexible enough to accept function objects in addition to
source code.
No; lambda is a bit more convenient. But this doesn't seem like a very
big issue worth a flame war. If GvR says multi-line lambda would make
the lexer more complicated and he doesn't think it's worth all the effort,
I don't see any need to argue about it.
I.e. GvR is the supreme authority. If GvR rationalizes something as
being good for himself, that's good enough for me and everyone else.
I won't say more, since Alex Martelli already pointed out that Google is
doing big things with Python and it seems to scale well for them.


That's pretty amazing for something that doesn't even have a native
compiler and that has big mutexes in its interpreter core.

Look at "docs.python.org" in section 8.1 en titled "Thread State and
the Global Interpreter Lock":

"The Python interpreter is not fully thread safe. In order to support
multi-threaded Python programs, there's a global lock that must be held
by the current thread before it can safely access Python objects.
Without the lock, even the simplest operations could cause problems in
a multi-threaded program: for example, when two threads simultaneously
increment the reference count of the same object, the reference count
could end up being incremented only once instead of twice. Therefore,
the rule exists that only the thread that has acquired the global
interpreter lock may operate on Python objects or call Python/C API
functions. In order to support multi-threaded Python programs, the
interpreter regularly releases and reacquires the lock -- by default,
every 100 bytecode instructions (this can be changed with
sys.setcheckinterval())."

That doesn't mean you can't develop scalable solutions to all kinds of
problems using Python. But it does mean that the scalability of the
overall solution comes from architectural details that are not related
to Python itself. Like, say, having lots of machines linked by a fast
network, working on problems that decompose along those lines quite
nicely.

May 8 '06 #147


David C. Ullrich wrote:
On 08 May 2006 12:53:09 -0700, tf*@conquest.OCF.Berkeley.EDU (Thomas
F. Burdick) wrote:

Ken Tilton <ke*******@gmail.com> writes:

No, you do not want on-change handlers propagating data to other
slots, though that is a sound albeit primitive way of improving
self-consistency of data in big apps. The productivity win with
VisiCalc was that one simply writes rules that use other cells, and
the system keeps track of what to update as any cell changes for
you. You have that exactly backwards: every slot has to know what
other slots to update. Ick.
No no, that's fine and the way it should be: when you change a slot,
it should know who to update. And that's also the way it works in
Cells. The trick is that Cells takes care of that part for you:

I'm glad you said that - this may be what he meant, but it seems
more plausible than what he actually said.


There may be some confusion here because there are two places for code
being discussed at the same time, and two sense of propagation.

the two places for code are (1) the rule attached to A which is
responsible for computing a value for A and (2) a callback for A to be
invoked whenever A changes. Why the difference?

In Cells, A is a slot such as 'background-color'. Whenever that changes,
we have to do something more. On Mac OS9 it was "InvalidateRect" of the
widget. In Cells-Tk, it is:
(Tcl_interp "mywidget configure -background <new color>")

In my OpenGL GUI, it is to rebuild the display-list for the widget.

That is the same no matter what rule some instance has for the slot
background-color, and different instances will have different rules.

As for propagating, yes, Cells propagates automatically. More below on
that. What I saw in the example offered was a hardcoded on-change
callback that was doing /user/ propagation from A to B (and B to A! ...
doesn't that loop, btw? Anyway...)

all
the *programmer* has to care about is what values a slot depends on --
Cells takes care of inverting that for you, which is important because
that's a job that a computer is much better at than a human.

Fine. I suppose that is better; if b is going to return a + 1
the fact that this is what b returns should belong to b, not
to a. So a has an update list including b, so when a's value
is set a tells b it needs to update itself.

If we're allowed to pass (at some point, to some constructor
or other) something like (b, a + 1, [a]), which sets up a
cell b that shall return a + 1, and where the [a] is used
in the constructor to tell a to add b to a's update list
then this seems like no big deal.

And doing that doesn't seem so bad - now when the programmer
is writing b he has to decide that it should return a + 1
and also explicitly state that b shall depend on a; this
is all nice and localized, it's still _b_ telling _a_ to
add b to a's update list, and the programmer only has
to figure out what _b_ depends on when he's writing _b_.
Doesn't seem so bad.

But of course it would be better to be able to pass just
something morally equivalent to (b, a + 1) to whatever
constructor and have the system figure out automatically
that since b returns a + 1 it has to add a to b's update
list. There must be some simple trick to accomplish that
(using Python, without parsing code).


Right, you do not want to parse code. It really would not work as
powerfully as Cells, which notices any dynamic access to another cell
while a rule is running. So my rule can call a function on "self" (the
instance that owns the slot being calculated), and since self can have
pointers to other instances, the algorithm can navigate high and low
calling other functions before finally reading another ruled slot. You
want to track those.
Exactly what the trick is I
don't see immediately.
To compute a value for a slot that happens to have a rule associated
with it, have a little cell datastructure that implements all this and
associate the cell with the slot and store a pointer to the rule in the
cell. Then have a global variable called *dependent* and locally:

currentdependent = *dependent*
*dependent* = cell
oldvalue = cell.value
newvalue = call cell.rule, passing it the self instance
*dependent* = currentdependent

if newvalue not = oldvalue
    call on-change on the slot name, self, newvalue and oldvalue
    (the on-change needs to dispatch on as many arguments as
    the language allows. Lisp does it on them all)

In the reader on a slot (in your getattr) you need code that notices if
the value being read is mediated by a ruled cell, and if the global
*dependent* is non empty. If so, both cells get a record of the other
(for varying demands of the implementation).

In Cells do we just pass a rule using other cells to
determine this cell's value, or do we also include
an explicit list of cells that this cell depends on?


Again, the former. Just write the rule; the above scheme dynamically
figures out the dependencies. Note then that dependencies vary over time
because of different branches a rule might take.
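
Here is a tiny runnable rendering of that scheme (Cell, users, run_rule and
the _running global are invented names; a sketch of the idea, not the Cells
API):

_running = [None]                # the ruled cell whose rule is currently evaluating

class Cell:
    def __init__(self, value=None, rule=None):
        self.value = value
        self.rule = rule         # no-argument function that reads other cells
        self.users = []          # cells whose rules were seen reading this cell

    def get(self):
        reader = _running[0]
        if reader is not None and reader not in self.users:
            self.users.append(reader)     # dependency noticed dynamically, at read time
        return self.value

    def run_rule(self):
        previous, _running[0] = _running[0], self
        try:
            newvalue = self.rule()        # any get() during this call records us
        finally:
            _running[0] = previous
        if newvalue != self.value:
            self.value = newvalue
            for user in list(self.users):
                user.run_rule()           # propagate only on actual change

    def set(self, value):                 # for input cells that have no rule
        if value != self.value:
            self.value = value
            for user in list(self.users):
                user.run_rule()

a = Cell(value=42)
b = Cell(rule=lambda: a.get() + 1)        # no explicit dependency list anywhere
b.run_rule()
print a.get(), b.get()                    # 42 43
a.set(10)
print a.get(), b.get()                    # 10 11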

I want to reassure the community that neither this nor the spreadsheet
analogy <g> is just my crazy idea. In 1992:

http://www.cs.utk.edu/~bvz/active-va...readsheet.html

"It is becoming increasingly evident that imperative languages are
unsuitable for supporting the complicated flow-of-control that arises in
interactive applications. This paper describes a declarative paradigm
for specifying interactive applications that is based on the spreadsheet
model of programing. This model includes multi-way constraints and
action procedures that can be triggered when constraints change the
values of variables."

Cells do not do multi-way constraints, btw. Nor partial constraints. Too
hard to program, because the system gets non-deterministic. That kinda
killed (well, left to a small niche) the whole research programme. I
have citations on that as well. :)

kenny

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 8 '06 #148
Joe Marshall <ev********@gmail.com> wrote:
...
If your language allows unnamed integers, unnamed strings, unnamed
characters, unnamed arrays or aggregates, unnamed floats, unnamed
expressions, unnamed statements, unnamed argument lists, etc. why
*require* a name for trivial functions?
I think it's reasonable to make a name a part of functions, classes and
modules because they may often be involved in tracebacks (in case of
uncaught errors): to me, it makes sense to let error-diagnosing
tracebacks display packages, modules, classes and functions/methods
involved in the chain of calls leading to the point of error _by name_.
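
For instance, a deliberately broken toy function (made up, of course) shows
up in the traceback by name:

def add_vat(amount):
    return amount * rate      # 'rate' is deliberately undefined

add_vat(100)

# roughly:
#   Traceback (most recent call last):
#     File "example.py", line 4, in <module>
#       add_vat(100)
#     File "example.py", line 2, in add_vat
#       return amount * rate
#   NameError: global name 'rate' is not defined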

I think it's reasonable to make a name a part of types for a different
reason: new types are rarely meant to be used "just once"; but also, if
during debugging any object is displayed, it's nice to be able to show,
as part of the display, "this object is of type X and ...", with X shown
as a name rather than as a complete (thus lengthy) description. (any
decent interactive shell/debugger will let you drill down into the
details as and when you need to, of course, but a well-chosen name can
be often sufficient during such interactive exploration/debugging
sessions, and therefore save time and effort).

This doesn't stop a programmer from using a meaningless name, of course,
but it does nudge things in the right direction.
Wouldn't all the other constructs benefit by having a required name as
well?


I believe this is a delicate style call, but I agree with your
implication that a language should at least _allow_ any object to have a
name (even when such objects are more often constructed on the fly, they
could still usefully borrow the first [or, maybe, the latest] name
they're bound to, if any). If I was designing a language from scratch,
I'd probably have as the first few fields of any object _at least_...:
a cell pointing to the type object,
a utility cell for GC (reference count or generation-count +
markflag)
a cell pointing to the name object,
...rest of the object's value/state to follow...

Indeed, "given an object, how do I get its NAME" (for inspection and
debugging purposes) is the most frequently asked question on
comp.lang.python, and I've grown a bit tired of answering "you can't, an
object in general intrinsically ``has no name'', it might have many or
none at all, blah blah" -- yeah, this is technically true (in today's
Python), but there's no real reason why it should stay that way forever
(IMHO). If we at least ALLOWED named objects everywhere, this would
further promote the use of names as against mysterious "magic numbers",
since the programmer would KNOW that after
VAT_MULTIPLIER = 1.19
then displaying in a debugger or other interactive session that
PARTICULAR instance of the value 1.19 would show the name string
'VAT_MULTIPLIER' as well (or no doubt a more structured name constructed
on the fly, identifying package and module-within-package too).
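
A trivial illustration of the current state of affairs:

VAT_MULTIPLIER = 1.19

def add_vat(amount):
    return amount * VAT_MULTIPLIER

print add_vat.__name__                                   # 'add_vat'
print getattr(VAT_MULTIPLIER, '__name__', '<no name>')   # '<no name>'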

As to what good practices should be more or less mandated by the
language, and what other good practices instead should be just gently
nudged towards, that's an interesting design question in each case; to
me, a cornerstone for answering it is generally _language simplicity_.

When mandating a certain good practice DETRACTS from language
simplicity, make it a matter of convention instead; when so mandating
ENHANCES language simplicity (by not needing the addition of some other
construct, otherwise unneeded in the language), go for the mandate.

Mandating names for _everything_ would complicate the language by
forcing it to provide builtin names for a lot of elementary building
blocks: so for most types of objects it's best to "gently nudge". For
functions, classes, modules, and packages, I think the naming is
important enough (as explained above) to warrant a syntax including the
name; better, therefore, not to complicate the language by providing
another different syntax in each case just to allow the name to be
omitted -- why encourage a practice that's best discouraged, at the price
of language simplicity? This DOES imply that some (functions, modules,
etc) that are fundamental to the language (and needed to build others)
should be provided with a name, but then one tends to do that anyway:
what language *DOESN'T* provide (perhaps in some suitable "trigonometry"
module) elementary functions named (e.g.) sin, cos, tan, ..., to let the
user build richer ones on top of those?
Alex
May 10 '06 #149

Joe Marshall wrote:
Alex Martelli wrote:
Most languages allow `unnamed numbers'. The `VAT_MULTIPLIER' argument
is a
strawman. Would you want to have to use a special syntax to name the
increment
in loop?

defnumber zero 0
defnumber one { successor (zero); }

for (int i = zero; i < limit; i += one) { ...}

If your language allows unnamed integers, unnamed strings, unnamed
characters, unnamed arrays or aggregates, unnamed floats, unnamed
expressions, unnamed statements, unnamed argument lists, etc. why
*require* a name for trivial functions?
Wouldn't all the other constructs benefit by having a required name as
well?


Is this a Slippery Slope fallacious argument?
(http://c2.com/cgi/wiki?SlipperySlope)

"if python required you to name every function then soon it will
require you to name every number, every string, every immediate result,
etc. And we know that is bad. Therefore requiring you to name your
function is bad!!!! So Python is bad!!!!"
How about:

If Common Lisp lets you use unnamed functions, then soon everyone will
start not naming their functions. Then soon they will start not naming
their variables, not naming their magic numbers, not naming any of their
classes, not naming any functions, and then all Common Lisp programs will
become one big mess. And we know that is bad. So allowing unnamed
functions is bad!!!! So Common Lisp is bad!!!!!

May 10 '06 #150
