A critique of Guido's blog on Python's lambda

Python, Lambda, and Guido van Rossum

Xah Lee, 2006-05-05

In this post, I'd like to deconstruct one of Guido's recent blog posts
about lambda in Python.

In Guido's blog post, written on 2006-02-10 at
http://www.artima.com/weblogs/viewpo...?thread=147358

the first thing is the title: “Language Design Is Not Just Solving
Puzzles”. At the outset, and in between the lines, we are told that
“I'm the supreme intellect, and I created Python”.

This seems impressive, except that the tech geekers, due to their
ignorance of sociology as well as their lack of a mathematician's
analytic abilities, do not know that creating a language is an act that
requires few qualifications. However, creating a language that is
used by a lot of people takes considerable skill, and a big part of that
skill is salesmanship. Guido seems to have done it well and seems to
continue selling it well, such that he can put up a title of belittlement
and get away with it too.

Gaudy title aside, let's look at the content of what he says. If you peruse
the 700 words, you'll find that they amount to this: Guido does not like
the suggested lambda fix due to its multi-line nature, and he says that he
doesn't think there could possibly be any proposal he'll like. The
reason? Not much! Zen is bandied about, the mathematician's impractical
ways are waved aside, undefinable qualities are given, the human right
brain is mentioned for support (neuroscience!), Rube Goldberg contrivance
phraseology is thrown in, and the coolness of Google Inc is held up for
the tech geekers (in juxtaposition with a big notice that Guido works
there).

If you are serious, doesn't this writing sound bigger than its
content? Look at the gorgeous ending: “This is also the reason why
Python will never have continuations, and even why I'm uninterested in
optimizing tail recursion. But that's for another installment.”. This
benevolent geeker is going to give us another INSTALLMENT!

There is a computer language leader by the name of Larry Wall, who said
that “The three chief virtues of a programmer are: Laziness,
Impatience and Hubris”, among quite a lot of other ingenious
outpourings. It seems to me that the more I learn about Python and its
leader, the more similarities I see.

So, Guido, I understand that selling oneself is an inherent and necessary
part of being a human animal. But I think the lesser beings should be
educated enough to know that fact, so that when minions follow a
leader, they have a clear understanding of why and of what.

----

Regarding the lambda-in-Python situation... conceivably you are right
that Python's lambda is perhaps best left as it is, crippled, or even
eliminated. However, this is what I want: I want Python literature,
and also Wikipedia, to cease and desist stating that Python supports
functional programming. (This is not necessarily bad publicity.) And I
want the Perl literature to cease and desist saying that Perl supports
OOP. But that's for another installment.

----
This post is archived at:
http://xahlee.org/UnixResource_dir/w...bda_guido.html

* * Xah
* * xa*@xahlee.org
http://xahlee.org/

May 6 '06


Boris Borcic wrote:
Bill Atkins wrote:

It's interesting how much people who don't have macros like to put
them down and treat them as some arcane art that is too "*insane*"ly
powerful to be used well.

They're actually very straightforward and can often (shock of shocks!)
make your code more readable, without your efficiency taking a hit.

Not even efficiency of debugging? A real problem with macros is that
run-time tracebacks etc. list macro output and not your macro'ed source
code. And that becomes an acute problem if you leave code for somebody
else to update. Or did Lisp IDEs make progress on that front?


AllegroCL now shows macros in the stack frame. A relatively recent
feature, and their IDE really stands out above the rest.

kenny

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 10 '06 #201
[Sorry, I missed this one originally.]
David C. Ullrich wrote:
On Tue, 09 May 2006 05:35:47 -0500, David C. Ullrich
<ul*****@math.okstate.edu> wrote:

On Mon, 08 May 2006 18:46:57 -0400, Ken Tilton <ke*******@gmail.com>
wrote:
[...]

If you, um, look at the code you see that "cells.a = 42" triggers
cells.__setattr__, which fires a's callback; the callback then
reaches inside and sets the value of b _without_ going through
__setattr__, hence without triggering b's callback.

In Cells you can't have A depend on B and also B depend on A?
That seems like an unfortunate restriction - I'd want to be
able to have Celsius and Fahrenheit, so that setting either
one sets the other.

Set Kelvin, and make Celsius and Fahrenheit functions of that. I.e., there
is only one datapoint, the temperature. No conflict unless one creates one.
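
In today's Python, that single-datapoint design might look roughly like
this (a sketch only, not Cells; the Temperature class and its property
names are invented here):

class Temperature(object):
    # One datapoint (kelvin); celsius and fahrenheit are derived from it,
    # so there is nothing to keep mutually consistent.
    def __init__(self, kelvin=273.15):
        self.kelvin = kelvin

    @property
    def celsius(self):
        return self.kelvin - 273.15

    @celsius.setter
    def celsius(self, c):
        self.kelvin = c + 273.15

    @property
    def fahrenheit(self):
        return self.celsius * 9.0 / 5.0 + 32

    @fahrenheit.setter
    def fahrenheit(self, f):
        self.celsius = (f - 32) * 5.0 / 9.0

t = Temperature()
t.fahrenheit = 212
print(t.celsius)    # => 100.0 (give or take float rounding)
print(t.kelvin)     # => 373.15

Setting either derived slot just rewrites the one underlying datapoint,
which is the "no conflict" being pointed at.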


Realized later that I hadn't thought this through.

I'd been assuming that of course we should be allowed to
have A and B depend on each other. Hence if a change in
A propagates to a change in B that change in B has to
be a non-propagating change - thought I was just so
clever seeing a way to do that.
I think it could be arranged, if one were willing to tolerate a little
fuzziness: no, there would be no strictly correct snapshot at which
point everyone had their "right value". Instead, A changes so B
recomputes, B changes so A recomputes... our model has now come to life,
we just have to poll for OS events or socket data, and A and B never get
to a point where they are self-consistent, because one or the other
always needs to be recalculated.

I sometimes wonder if the physical universe is like that, explaining why
gravity slows time: it is not the gravity, it is the mass and we are
seeing system degradation as the matrix gets bogged down recomputing all
that matter.

[Cue Xah]


But duh, if that's how things are then we can't have
transitive dependencies working out right; surely we
want to be able to have B depend on A and then C
depend on B...

(And also if A and B are allowed to depend on each
other then the programmer has to ensure that the
two rules are inverses of each other, which seems
like a bad constraint in general, something non-trivial
that the programmer has to get right.)
Right, when I considered multi-way dependencies I realized I would have
to figure out some new syntax to declare in one place the rules for two
slots, and that would be weird because in Cells it is the instance that
gets a rule at make-instance time, so i would really have to have some
new make-instance-pair capability. Talk about a slippery slope. IMO, the
big constraints research program kicked off by Steele's thesis withered
into a niche technology because they sniffed at the "trivial"
spreadsheet model of linear dataflow and tried to do partial and
multi-way dependencies. I call it "a bridge too far", and in my
experience of Cells (ten years of pretty intense use), guess what?, all
we need as developers is one-way, linear, fully-specified dependencies.

So fine, no loops. If anything, if we know that
there are no loops in the dependencies that simplifies
the rest of the programming, no need for the sort of
finagling described in the first paragraph above.
Actually, I do allow an on-change callback ("observer" in Cells
parlance) to kick off a toplevel, imperative state change to the model.
Two cells that do that to each other will run until one decides not to do
so. I solve some GUI situations (the classic being a scrollbar thumb and
the text offset, which each at different times control the other) by
having them simply set the other in an observer. On the second
iteration, B is setting A to the value A already has, so propagation
stops (a longstanding Cells feature).
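
A toy Python sketch of that stop-when-unchanged behaviour (not Cells
code; Slot, partner, and convert are invented names):

class Slot(object):
    # Each slot pushes its value to a partner in an "observer", and
    # propagation stops when the partner already holds that value.
    def __init__(self, name):
        self.name = name
        self.value = None
        self.partner = None
        self.convert = lambda v: v
    def set(self, value):
        if value == self.value:        # second iteration: already there, stop
            return
        self.value = value
        print("%s = %r" % (self.name, value))
        if self.partner is not None:   # the observer imperatively sets the other
            self.partner.set(self.convert(value))

thumb = Slot("scrollbar-thumb")    # a fraction 0.0 .. 1.0
offset = Slot("text-offset")       # a character offset
thumb.partner, thumb.convert = offset, lambda frac: int(frac * 1000)
offset.partner, offset.convert = thumb, lambda off: off / 1000.0

thumb.set(0.25)   # thumb -> offset -> thumb, where it stops since 0.25 == 0.25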

These feel like GOTOs, by the way, and are definitely to be avoided
because they break the declarative paradigm of Cells in which I can
always look at one (anonymous!) rule and see without question from where
any value it might hold comes. (And observers define where they take
effect outside the model, but those I have to track down by slot name
using OO browsing tools.)

But this raises a question:

Q: How do we ensure there are no loops in the dependencies?
Elsewhere I suggested the code was:

(let ((*dependent* this-cell))
  (funcall (rule this-cell) (object this-cell)))

It is actually:

(let ((*dependents* (list* this-cell *dependents*)))
  (funcall (rule this-cell) (object this-cell)))

So /before/ that I can say:

(assert (not (find this-cell *dependents*)))

Do we actually run the whole graph through some algorithm
to verify there are no loops?

The simplest solution seems like adding the cells one
at a time, and only allowing a cell to depend on
previously added cells. It's clear that that would
prevent loops, but it's not clear to me whether or
not that disallows some non-looping graphs.
As you can see, the looping is detected only when there is an actual
circularity, defined as a computation requiring its own computation as
an input.

btw, a rule /does/ have access to the prior value it computed, if any,
so the cell can be value-reflective even though the rules cannot be
reentrant.
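
The same idea in Python, very roughly (the Cell class and _dependents
list here are invented for illustration; real Cells does much more):

_dependents = []   # plays the role of the *dependents* special variable

class Cell(object):
    def __init__(self, rule):
        self.rule = rule              # a one-argument callable

    def value(self):
        # A circularity is detected only when a computation actually
        # requires its own computation as an input.
        assert self not in _dependents, "circular dependency"
        _dependents.append(self)
        try:
            return self.rule(self)
        finally:
            _dependents.pop()         # revert, like leaving the LET's scope

a = Cell(lambda c: 1)
b = Cell(lambda c: a.value() + 1)
print(b.value())                      # => 2

loop = Cell(lambda c: loop.value())   # a rule that needs itself
# loop.value() would trip the assert above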
A math question the answer to which is not immediately
clear to me (possibly trivial, the question just
occurred to me this second):

Say G is a (finite) directed graph with no loops. Is it always
possible to order the vertices in such a way that
every edge goes from a vertex to a _previous_ vertex?


I am just a simple application programmer, so I just wait till Cells
breaks and then I fix that. :)

kenny

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 10 '06 #202
"Michele Simionato" <mi***************@gmail.com> writes:
Ken Tilton wrote:
I was not thinking about the thread issue (of which I know little). The
big deal for Cells is the dynamic bit:

(let ((*dependent* me))
  (funcall (rule me) me))

Then if a rule forces another cell to recalculate itself, *dependent*
gets rebound and (the fun part) reverts back to the original dependent
as soon as the scope of the let is exited.


Python 2.5 has a "with" statement (yes, the name is Lispish on purpose)
that could be used to implement this. See
http://www.python.org/dev/peps/pep-0343


You are mistaken. In particular, VAR doesn't have dynamic scope.
/Jon

--
'j' - a n t h o n y at romeo/charley/november com
May 10 '06 #203
Randal L. Schwartz wrote:
>> "Alex" == Alex Martelli <al***@mac.com> writes:


Alex> The difference, if any, is that gurus of Java, C++ and Python get to
Alex> practice and/or keep developing their respectively favorite languages
Alex> (since those three are the "blessed" general purpose languages for
Alex> Google - I say "general purpose" to avoid listing javascript for
Alex> within-browser interactivity, SQL for databases, XML for data
Alex> interchange, HTML for web output, &c, &c), while the gurus of Lisp,
Alex> Limbo, Dylan and Smalltalk don't (Rob Pike, for example, is one of the
Alex> architects of sawzall -- I already pointed to the whitepaper on that
Alex> special-purpose language, and he co-authored that paper, too).

That's crazy. Some of the key developers of Smalltalk continue to work
on the Squeak project (Alan Kay, Dan Ingalls, and I'm leaving someone
out, I know it...). So please remove Smalltalk from that list.


I thought it was clear that Alex was talking about "smalltalk gurus who
work for Google."

-Jonathan

May 10 '06 #204


Chris F Clark wrote:
David C Ullrich asked:
Q: How do we ensure there are no loops in the dependencies?

Do we actually run the whole graph through some algorithm
to verify there are no loops?

The question you are asking is: is the dependency graph a "directed
acyclic graph" (commonly called a DAG)? One algorithm to determine
whether it is is called "topological sort". That algorithm tells you
where there are cycles in your graph,


Yep. But with Cells the dependency graph is just a shifting record of
who asked who, shifting because all of a sudden some outlier data will
enter the system and a rule will branch to code for the first time, and
suddenly "depend on" some new other cell (new as in never before used
by this cell). This is not subject to static analysis because, in fact,
lexically everyone can get to everything else, what with closures,
first-class functions, runtime branching we cannot predict... fuggedaboutit.

So we cannot say, OK, here is "the graph" of our application model. All
we can do is let her rip and cross our fingers. :)

kenny
May 10 '06 #205
jayessay wrote:
"Michele Simionato" <mi***************@gmail.com> writes:
Ken Tilton wrote:
I was not thinking about the thread issue (of which I know little). The
big deal for Cells is the dynamic bit:

(let ((*dependent* me))
  (funcall (rule me) me))

Then if a rule forces another cell to recalculate itself, *dependent*
gets rebound and (the fun part) reverts back to the original dependent
as soon as the scope of the let is exited.


Python 2.5 has a "with" statement (yes, the name is Lispish on purpose)
that could be used to implement this. See
http://www.python.org/dev/peps/pep-0343


You are mistaken. In particular, VAR doesn't have dynamic scope.


I said "it could be used to implement this", and since in a previous
post on mine in this
same thread I have shown how to implement thread local variables in
Python I figured
out people would be able to do the exercise for themselves.

Michele Simionato

May 10 '06 #206


Ketil Malde wrote:

Sometimes the best documentation is the code itself. Sometimes the
best name for a function is the code itself.


Absolutely. When I take over someone else's code I begin by deleting all
the comments. Then I read the code. If a variable or function name makes
no sense (once I have figured out what they /really/ do) I do a global
change. Pretty soon the system is "documented". And I usually find a
couple of bugs as the renaming produces things like:

count = count + weight

I think one good argument for anonymous functions is a hefty Cells
application, with literally hundreds of rules. The context is set by the
instance and slot name, and as you say, the rule speaks for itself:

(make-instance 'frame-widget
  :bounds (c? (apply 'rect-union (all-bounds (subwidgets self)))))

Why do I have to give that a name? And if the algorithm gets hairier,
well, why is the reader looking at my code? If the reader is debugging
or intending to modify the rule, they damn well better be looking at the
code, not the name and not the comments. (Never a problem with my code. <g>)

kenny

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 10 '06 #207
Ken Tilton wrote:


Boris Borcic wrote:
Ken Tilton wrote:
"Now if you are like most people, you think that means X. It does not."

As far as natural language and understanding are concerned, "to mean"
means conformity to what most people understand, Humpty Dumpties
notwithstanding.


Nonsense.


:)
You are confusing that quality of natural language with most
people's quality of being sloppy readers, or in your case, a sloppy
thinker. Misapplying an analogy is not a question of usage -- when I
said spreadsheet and they thought of spreadsheets, so far so good,
right?
No, as Adam Jones pointed out. Like Bush speaking of "crUSAde" after 9/11
-- it's just sloppiness and laziness.

I do it, too, all the time. :)


Right.
May 10 '06 #208

Alex Martelli wrote:
Joe Marshall <ev********@gmail.com> wrote:
...
The problem is that a `name' is a mapping from a symbolic identifier to
an object and that this mapping must either be global (with the
attendant name collision issues) or within a context (with the
attendant question of `in which context').
Why is that a problem? Even for so-called "global" names, Python
supports a structured, hierarchical namespace, so there can never be any
collision between the "globals" of distinct modules (including modules
which happen to have the same name but live in distinct packages or
subpackages) -- I did mention that names could usefully be displayed in
some structured form such as apackage.somemodule.thefunction but perhaps
I was too tangential about it;-).


Can you refer to inner functions from the global context? Suppose I
have this Python code:

def make_adder(x):
    def adder_func(y):
        sum = x + y
        return sum
    return adder_func

Can I refer to the inner adder_func in any meaningful way?


Matthias Felleisen once suggested that *every* internal function should
be named. I just said `continuations'. He immediately amended his
statement with `except those'.


If I used continuations (I assume you mean in the call/cc sense rather
than some in which I'm not familiar?) I might feel the same way, or not,
but I don't (alas), so I can't really argue the point either way for
lack of real-world experience.


I meant continuations as in the receiver function in
continuation-passing-style. If you have a function that has to act
differently in response to certain conditions, and you want to
parameterize the behavior, then one possibility is to pass one or more
thunks to the function in addition to the normal arguments. The
function acts by selecting and invoking one of the thunks. A classic
example is table lookup. It is often the case you wish to proceed
differently depending upon whether a key exists in a table or not.
There are several ways to provide this functionality. One is to have a
separate `key-exists?' predicate. Another is to have a special return
value for `key not found'. Another is to throw an exception when a key
is not found. There are obvious advantages and drawbacks to all of
these methods. By using continuation-passing-style, we can
parameterize how the table lookup proceeds once it determines whether
or not the key is found. We have the lookup procedure take two thunks
in addition to the key. If the key is found, the first thunk is
invoked on the associated value. If the key is not found, the second
thunk is invoked. We can subsume all the previous behaviors:

(define (key-exists? key table)
  (lookup key table
    (lambda (value) #t)   ;; if found, ignore value, return true
    (lambda () #f)))      ;; if not found, return false.

(define (option1 key table)
  (lookup key table
    (lambda (value) value)
    (lambda () 'key-not-found)))

(define (option2 key table)
  (lookup key table
    (lambda (value) value)
    (lambda () (raise 'key-not-found-exception))))

(define (option3 key table default-value)
  (lookup key table
    (lambda (value) value)
    (lambda () default-value)))

The unnamed functions act in this regard much like a `local label'. We
wrap two chunks of code in a lambda and the lookup function `jumps' to
the appropriate chunk. (If the compiler knows about thunks, the
generated assembly code really will have local labels and jump
instructions. It can be quite efficient.)

This may look odd and cumbersome, but with a little practice the
lambdas fade into the background and it becomes easy to read.
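
For comparison, the same lookup-with-two-receivers idea maps fairly
directly onto Python callbacks; a sketch (this lookup is hypothetical,
not from the thread, and just wraps a plain dict):

def lookup(key, table, if_found, if_not_found):
    # Call if_found with the value when the key exists, else if_not_found.
    if key in table:
        return if_found(table[key])
    return if_not_found()

def key_exists(key, table):
    return lookup(key, table, lambda value: True, lambda: False)

def option3(key, table, default_value):
    return lookup(key, table, lambda value: value, lambda: default_value)

ages = {'alice': 31}
print(key_exists('alice', ages))     # => True
print(option3('bob', ages, 0))       # => 0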

My point with Matthias, however, was that defining all these
continuations (the thunks) as named internal functions was not only
cumbersome, but it obscured the control flow. Notice:

(define (named-option3 key table default-value)
  (define (if-found value)
    value)
  (define (if-not-found)
    default-value)
  (lookup key table if-found if-not-found))

When we enter the function, we skip down to the bottom (past the
internal definitions) to run lookup, which transfers control to a
function defined earlier in the code.

There are many reasons to avoid this style in Python, so this probably
won't win you over, but my point is that there are times where
anonymous functions have an advantage over the named alternative and
that disallowing anonymous functions can be as cumbersome as
disallowing anonymous integers.

May 10 '06 #209
"Michele Simionato" <mi***************@gmail.com> writes:
jayessay wrote:
"Michele Simionato" <mi***************@gmail.com> writes:
Ken Tilton wrote:
> I was not thinking about the thread issue (of which I know little). The
> big deal for Cells is the dynamic bit:
>
> (let ((*dependent* me))
> (funcall (rule me) me))
>
> Then if a rule forces another cell to recalculate itself, *dependent*
> gets rebound and (the fun part) reverts back to the original dependent
> as soon as the scope of the let is exited.

Python 2.5 has a "with" statement (yes, the name is Lispish on purpose)
that could be used to implement this. See
http://www.python.org/dev/peps/pep-0343


You are mistaken. In particular, VAR doesn't have dynamic scope.


I said "it could be used to implement this", and since in a previous
post on mine in this
same thread I have shown how to implement thread local variables in
Python I figured
out people would be able to do the exercise for themselves.


I was saying that you are mistaken in that pep-0343 could be used to
implement dynamically scoped variables. That stands.
/Jon

--
'j' - a n t h o n y at romeo/charley/november com
May 10 '06 #210
Ken Tilton <ke*******@gmail.com> writes:

Set Kelvin, and make Celsius and Fahrneheit functions of that.


Or Rankine:-)

--
Robert Uhl <http://public.xdi.org/=ruhl>
Brought to you by 'Ouchies', the sharp, prickly toy you bathe with...
May 10 '06 #211
Cameron Laird wrote:
In article <1h*****************************@yahoo.com>,
Alex Martelli <al*****@yahoo.com> wrote:

.... .
Of course, the choice of Python does mean that, when we really truly
need a "domain specific little language", we have to implement it as a
language in its own right, rather than piggybacking it on top of a
general-purpose language as Lisp would no doubt afford; see
<http://labs.google.com/papers/sawzall.html> for such a DSLL developed
at Google. However, I think this tradeoff is worthwhile, and, in
particular, does not impede scaling....


....I'm confused, Alex: I sure
think *I* have been writing DSLs as specializations of Python,
and NOT as "a language in its own right"....


I think Alex is suggesting that if they used, for example, a version of
scheme with a good optimizing compiler they could implement sawzall-like
convenience with almost the same performance, including startup, etc.,
whereas even a highly optimized python-based approach would at least
have a comparatively large startup penalty. For an environment like
Google where they scrape thru their logs of various sorts doing lots of
trivial scans you can probably save a lot of money and time on lots of
machines by optimizing such scrapes (but keep your bactine handy). And
as the sawzall paper pointed out, even static type checks can prevent a
lot of wasted machine bandwidth by avoiding dumb errors.

But the real question for someone like Rob Pike is why use scheme when
you can invent another little language instead, I suspect :).

-- Aaron Watters

===

Stop procrastinating soon.

May 10 '06 #212
Kenny replied to me saying:
Yep. But with Cells the dependency graph is just a shifting record of
who asked who, shifting because all of a sudden some outlier data will
enter the system and a rule will branch to code for the first time,
and suddenly "depend on" on some new other cell (new as in never
before used by this cell). This is not subject to static analysis
because, in fact, lexically everyone can get to everything else, what
with closures, first-class functions, runtime branching we cannot
predict... fuggedaboutit.

So we cannot say, OK, here is "the graph" of our application
model. All we can do is let her rip and cross our fingers. :)


Yes, if you have Turing completeness in your dependency graph, the
problem is unsolvable. However, it's like the static v. dynamic
typing debate, you can pick how much you want to allow your graph to
be dynamic versus how much "safety" you want. In particular, I
suspect that in many applications, one can compute the set of
potentially problematic dependencies (and that set will be empty).
It's just a matter of structuring and annotating them correctly. Just
like one can create type systems that work for ML and Haskell. Of
course, if you treat your cell references like C pointers, then you
get what you deserve.

Note that you can even run the analysis dynamically, recomputing
whether the graph is cycle-free as each dependency changes. Most
updates have local effect. Moreover, if you have used topological
sort to compute an ordering as well as proving cycle-free-ness, an
edge is only potentially problematic when it goes from a later
vertex in the order to an earlier one. I wouldn't be surprised to
find efficient algorithms for calculating and updating a topological
sort already in the literature.
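
For concreteness, a generic Python sketch of that cycle check (Kahn's
topological sort; the function and argument names are made up here, and
this is not Cells code):

from collections import deque

def dependency_order(cells, depends_on):
    # depends_on maps each cell to the cells it reads.  Returns an order
    # in which every cell comes after everything it depends on, or None
    # if there is a cycle.
    remaining = {c: set(depends_on.get(c, ())) for c in cells}
    users = {c: [] for c in cells}
    for c, deps in remaining.items():
        for d in deps:
            users[d].append(c)
    ready = deque(c for c, deps in remaining.items() if not deps)
    order = []
    while ready:
        c = ready.popleft()
        order.append(c)
        for u in users[c]:
            remaining[u].discard(c)
            if not remaining[u]:
                ready.append(u)
    return order if len(order) == len(cells) else None

print(dependency_order("abc", {"b": "a", "c": "b"}))   # => ['a', 'b', 'c']
print(dependency_order("ab", {"a": "b", "b": "a"}))    # => None (a cycle)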

It is worth noting that in typical chip circuitry there are
constructions, generally called "busses" where the flow of information
is sometimes "in" via an edge and sometimes "out" via the same edge
and we can model them in a cycle-free manner.

If you want to throw up your hands and say the problem is intractable
in general, you can. However, in my opinion one doesn't have to give
up quite that easily.

-Chris
May 10 '06 #213
Ken Tilton <ke*******@gmail.com> writes:
sross wrote:
I do wonder what would happen to Cells if I ever want to support
multiple threads. Or in a parallel processing environment.

AFAIK it should be fine.
In LW, SBCL and ACL all bindings of dynamic variables are thread-local.


Ah, I was guilty of making an unspoken segue: the problem is not with
the *dependent* special variable, but with the sequentially growing
numeric *datapulse-id* ("the ID") that tells a cell if it needs to
recompute its value. The ID is not dynamically bound. If threads T1
and T2 each execute a toplevel, imperative assignment, two threads
will start propagating change up the same dependency
graph... <shudder>

Might need to specify a "main" thread that gets to play with Cells and
restrict other threads to intense computations but no Cells?


Hmmm. I am wondering if a Cells Manager class could be the home for
all Cells. Each thread could then have its own Cells Manager...

Actually, I got along quite a while without an ID, I just propagated
to dependents and ran rules. This led sometimes to a rule running
twice for one change and transiently taking on a garbage value, when
the dependency graph of a Cell had two paths back to some changed
Cell.

Well, Cells have always been reengineered in the face of actual use
cases, because I am not really smart enough to work these things out
in the abstract. Or too lazy or something. Probably all three.


Nah. It's me asking again and again those silly questions about
real Cells usage in some real life apps ;-)

Frank
May 10 '06 #214
Alex Martelli wrote:
Tomasz Zielonka <to*************@gmail.com> wrote:
...
higher level languages. There are useful programming techniques, like
monadic programming, that are infeasible without anonymous functions.
Anonymous functions really add some power to the language.
Can you give me one example that would be feasible with anonymous
functions, but is made infeasible by the need to give names to
functions?


Perhaps you were speaking more about Python, and I was speaking more
generally. There are useful programming techniques that require using
many functions, preferably anonymous ones, but these techniques probably
won't fit Python very well anyway.

In Haskell when I write IO intensive programs, when I use monadic
parsing libraries, etc. I use many lambdas, sometimes disguised as a
do-notation bind syntax (do x <- a; ...). I wouldn't want to name
all those functions.

Here is a random page with Haskell code I found on haskell.org:
http://haskell.org/haskellwiki/Sudoku
See how many uses of lambdas there are:
\... ->
and do-bindings, which are basically syntactic sugar for lambdas
and monadic bind operations:
do
x <- something ...
Also, there are many anonymous functions created by using higher
order functions or by partial application.

I want to make it clear that monadic programming in Haskell is not only
something that we *have* to do in order to perform IO in a purely
functional program, but it's often something we *can* and *want* to do,
because it's powerful, convenient, etc.

Haskell programs without much monadic programming also contain many
anonymous functions.

Of course I wouldn't use monadic programming too often in Python, mostly
because I wouldn't have to (eg. for IO), but also because it would be
inconvenient and difficult in Python.
In Python, specifically, extended with whatever fake syntax
you favour for producing unnamed functions?

I cannot conceive of one. Wherever within a statement I could write the
expression
lambda <args>: body
I can *ALWAYS* obtain the identical effect by picking an otherwise
locally unused identifier X, writing the statement
def X(<args>): body
and using, as the expression, identifier X instead of the lambda.


I know that. But the more such functions you use, the more cumbersome it
gets. Note also that I could use your reasoning to show that support
for expressions in the language is not necessary.
On the other hand, what do you get by allowing ( as an identifier?


Nothing useful -- the parallel is exact.


What we are discussing here is language expressivity and power,
something very subtle and hard to measure. I think your logical
reasoning proves nothing here. After all, what is really needed
in a programming language? Certainly not indentation sensitivity :-)
0s and 1s should be enough.
Significant whitespace is a good thing, but the way it is designed in
Python it has some costs. Can't you simply acknowledge that?


I would have no problem "acknowledging" problems if I agreed that any
exist, but I do not agree that any exist. Please put your coding where
your mouth is, and show me ONE example that would be feasible in a
Python enriched by unlimited unnamed functions but is not feasible just
because Python requires naming such "unlimited" functions.


You got me, monadic programming is infeasible in Python even with
lambdas ;-)

Best regards
Tomasz
May 10 '06 #215
Tomasz Zielonka wrote:
(x * 2) + (y * 3)

Here (x * 2), (y * 3) and (x * 2) + 3 are anonymous numbers ;-)

^^^^^^^^^^^

Of course it should be (x * 2) + (y * 3).

Best regards
Tomasz
May 10 '06 #216
Joe Marshall <ev********@gmail.com> wrote:
Alex Martelli wrote:
Joe Marshall <ev********@gmail.com> wrote:
...
The problem is that a `name' is a mapping from a symbolic identifier to
an object and that this mapping must either be global (with the
attendant name collision issues) or within a context (with the
attendant question of `in which context').
Why is that a problem? Even for so-called "global" names, Python
supports a structured, hierarchical namespace, so there can never be any
collision between the "globals" of distinct modules (including modules
which happen to have the same name but live in distinct packages or
subpackages) -- I did mention that names could usefully be displayed in
some structured form such as apackage.somemodule.thefunction but perhaps
I was too tangential about it;-).


Can you refer to inner functions from the global context? Suppose I
have this Python code:

def make_adder(x):
    def adder_func(y):
        sum = x + y
        return sum
    return adder_func

Can I refer to the inner adder_func in any meaningful way?


You can refer to one instance/closure (which make_adder returns), of
course -- you can't refer to the def statement itself (but that's a
statement, ready to create a function/closure each time it executes, not
a function, thus, not an object) except through introspection. Maybe I
don't understand what you mean by this question...
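
To make that concrete, this is the kind of thing "refer to one
instance/closure" and introspection can mean (a small illustration, not
anything from the original post; the attribute spellings are the Python
2 ones, later renamed __name__ and __closure__):

def make_adder(x):
    def adder_func(y):
        return x + y
    return adder_func

add3 = make_adder(3)          # one closure instance, capturing x = 3
print(add3(4))                # => 7
print(add3.func_name)         # => 'adder_func' (the def's name survives)
print(add3.func_closure)      # => a 1-tuple holding the cell that captures x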

If I used continuations (I assume you mean in the call/cc sense rather
than some in which I'm not familiar?) I might feel the same way, or not,
but I don't (alas), so I can't really argue the point either way for
lack of real-world experience.


I meant continuations as in the receiver function in
continuation-passing-style. If you have a function that has to act
differently in response to certain conditions, and you want to
parameterize the behavior, then one possibility is to pass one or more
thunks to the function in addition to the normal arguments. The


Ah, OK, I would refer to this as "callbacks", since no
call-with-continuation is involved, just ordinary function calls; your
use case, while pretty alien to Python's typical style, isn't all that
different from other uses of callbacks which _are_ very popular in
Python (cfr the key= argument to the sort methods of list for a typical
example). I would guess that callbacks of all kinds (with absolutely
trivial functions) is the one use case which swayed Guido to keep lambda
(strictly limited to just one expression -- anything more is presumably
worth naming), as well as to add an if/else ternary-operator. I still
disagree deeply, as you guessed I would -- if I had to work with a
framework using callbacks in your style, I'd name my callbacks, and I
wish Python's functools module provided for the elementary cases, such
as:

def constant(k):
    def ignore_args(*a): return k
    return ignore_args

def identity(v): return v

and so on -- I find, for example, that to translate your
(define (option3 key table default-value)
  (lookup key table
    (lambda (value) value)
    (lambda () default-value)))


I prefer to use:

def option3(key, table, default_value):
    return lookup(key, table, identity, constant(default_value))

as being more readable than:

def option3(key, table, default_value):
    return lookup(key, table, lambda v: v, lambda: default_value)

After all, if I have in >1 place in my code the construct "lambda v: v"
(and if I'm using a framework that requires a lot of function passing
I'm likely to be!), the "Don't Repeat Yourself" (DRY) principle suggests
expressing the construct *ONCE*, naming it, and using the name.

By providing unnamed functions, the language aids and abets violations
of DRY, while having the library provide named elementary functions (in
the already-existing appropriate module) DRY is reinforced and strongly
supported, which, IMHO, is a very good thing.
Alex
May 11 '06 #218


Alex Martelli wrote:
Stefan Nobis <sn****@gmx.de> wrote:

al***@mac.com (Alex Martelli) writes:

if anonymous functions are available, they're used in even more
cases where naming would help


Yes, you're right. But don't stop here. What about expressions? Many
people write very complex expressions that are hard to understand. A
good language should forbid this abuse and not allow expressions
with more than 2 or maybe 3 operators!

That would _complicate_ the language (by adding a rule). I repeat what
I've already stated repeatedly: a good criterion for deciding which good
practices a language should enforce and which ones it should just
facilitate is _language simplicity_. If the enforcement is done by
adding rules or constructs it's probably not worth it; if the
"enforcements" is done by NOT adding extra constructs it's a double win
(keep the language simpler AND push good practices).


Gosh, that looks like fancy footwork. But maybe I misunderstand, so I
will just ask you to clarify.

In the case of (all syntax imaginary and not meant to be Python):

if whatever = 42
    dothis
    do that
    do something else
else
    go ahead
    make my day

You do not have a problem with unnamed series of statements. But in the
case of:

treeTravers( myTree, lambda (node):
    if xxx(node)
        print "wow"
        return 1
    else
        print "yawn"
        return 0)

....no, no good, you want a named yawnOrWow function? And though they
look similar, the justification above was that IF-ELSE was lucky enough
to get multiline branches In the Beginning, so banning it now would be
"adding a rule", whereas lambda did not get multiline In the Beginning,
so allowing it would mean "adding a construct". So by positing "adding a
rule or construct" as always bad (even if they enforce a good practice
such as naming an IF branch they are bad since one is /adding/ to the
language), the inconsistency becomes a consistency in that keeping IF
powerful and denying lambda the same power each avoids a change?

In other words, we are no longer discussing whether unnamed multi-line
statements are a problem. The question is, would adding them to lambda
mean a change?

Oh, yeah, it would. :)

hth, kenny
--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 11 '06 #219
Alex Martelli wrote:
M Jared Finder <ja***@hpalace.com> wrote:
...
Your reasoning, taken to the extreme, implies that an assembly language,
by virtue of having the fewest constructs, is the best designed language


Except that the major premise is faulty! Try e.g.
<http://docs.sun.com/app/docs/doc/817-5477/6mkuavhrf#hic> and count the
number of distinct instructions -- general purpose, floating point,
SIMD, MMX, SSE, SSE2, OS support... there's *hundreds*, each with its
own rules as to what operand(s) are allowed plus variants such as (e.g.)
cmovbe{w,l,q} for "conditional move if below or equal" for word, long,
quadword (no byte variant) -- but e.g cmpxchg{b,w,l,q} DOES have a byte
variant too, while setbe for "set if below or equal" ONLY has a byte
variant, etc, etc -- endless memorization;-).

When you set up your strawman arguments, try to have at least ONE of the
premises appear sensible, will you?-)

I never argued against keeping languages at a high level, of course
(that's why your so utterly unfounded argument would be a "strawman"
even if it WAS better founded;-).
prone, code. I think the advantages of anonymous functions:

...
e) making the language simpler to implement


Adding one construct (e.g., in Python, having both def and lambda with
vast semantic overlap, rather than just one) cannot "make the language
simpler to implement" -- no doubt this kind of "reasoning" (?) is what
ended up making the instruction-set architecture of the dominant
families of CPUs so bizarre, intricate, and abstruse!-)


It sure can. First, let's cover the cost. I'll be measuring everything
in terms of lines of code, with the assumption that the code has been
kept readable.

Here's an implementation of lambda (anonymous functions) in Lisp based
on flet (lexically scoped functions):

(defmacro lambda (args &rest body)
  (let ((name (gensym)))
    `(flet ((,name ,args ,@body)) (function ,name))))

That's three lines of code to implement. An almost trivial amount.

Now by using anonymous functions, you can implement many other language
level features more simply. Looping can be made into a regular function
call. Branching can be made into a regular function call. Defining
virtual functions can be made into a regular function call. Anything
that deals with code blocks can be made into a regular function call.

By removing the special syntax and semantics from these language level
features and making them just plain old function calls, you can reuse the
same evaluator, optimizer, code parser, introspector, and other code
analyzing parts of your language for these (no longer) special
constructs. That's a HUGE savings, well over 100 lines of code.

Net simplification, at least 97 lines of code. For a concrete example
of this in action, see Smalltalk.

-- MJF
May 11 '06 #220
Chris Uppal wrote:
E.g. can you add three-way comparisons (less-than, same-as, greater-than) to,
say, Python with corresponding three-way conditional control structures to
supplement "if" etc? Are they on a semantic and syntactic par with the
existing ones? In Smalltalk that is trivial (too trivial to be particularly
interesting, even), and I presume the same must be true of Lisp (though I
suspect you might be forced to use macros).


As an illustration, here's the definition and usage of such a numeric-if
in Lisp.

Using raw lambdas, it's ugly, but doable:

(defun fnumeric-if (value lt-body eq-body gt-body)
  (cond ((< value 0) (funcall lt-body))
        ((= value 0) (funcall eq-body))
        ((> value 0) (funcall gt-body))))

(fnumeric-if (- a b)
  (lambda () (print "a < b"))
  (lambda () (print "a = b"))
  (lambda () (print "a > b")))

A macro helps clean that up and make it look prettier:

(defmacro numeric-if (value lt-body eq-body gt-body)
  `(fnumeric-if ,value
     (lambda () ,lt-body)
     (lambda () ,eq-body)
     (lambda () ,gt-body)))

(numeric-if (- a b)
  (print "a < b")
  (print "a = b")
  (print "a > b"))

-- MJF
May 11 '06 #221
jayessay wrote:
I was saying that you are mistaken in that pep-0343 could be used to
implement dynamically scoped variables. That stands.


Proof by counter example:

from __future__ import with_statement
import threading

special = threading.local()

def getvar(name):
    return getattr(special, name)

def setvar(name, value):
    return setattr(special, name, value)

class dynamically_scoped(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value
    def __context__(self):
        return self
    def __enter__(self):
        self.orig_value = getvar(self.name)
        setvar(self.name, self.value)
    def __exit__(self, Exc, msg, tb):
        setvar(self.name, self.orig_value)

if __name__ == '__main__': # test
    setvar("*x*", 1)
    print getvar("*x*") # => 1
    with dynamically_scoped("*x*", 2):
        print getvar("*x*") # => 2
    print getvar("*x*") # => 1

If you are not happy with this implementation, please clarify.

Michele Simionato

May 11 '06 #222

"Chris Uppal" wrote:
Petr Prikryl wrote:
for element in aCollection:
    if element > 0:
        return True
return False
[I'm not sure whether this is supposed to be an example of some specific
language (Python ?) or just a generic illustration. I'll take it as the
latter, since it makes my point easier to express. I'll also exaggerate,

just a little...]
Sorry, I do not know Smalltalk, but this was meant as the transcription
of your...
| E.g. consider the Smalltalk code (assumed to be the body of a method):
|
| aCollection
| do: [:each |
| each > 0 ifTrue: [^ true]].
| ^ false.

into Python
But now, in order to hack around the absence of a sensible and useful
feature -- /only/ in order to do so -- you have added two horrible new
complications to your language. You have introduced a special syntax to
express conditionals, and (worse!) a special syntax to express looping.
Not only does that add a huge burden of complexity to the syntax, and
semantics, of the language (and, to a lesser extent, its implementation),
but it also throws out any semblance of uniformity.
I guess that it is not me who is confused here. The subject clearly
says that the thread is related to Python and to the lambda supported
by Python. It was only crossposted to other groups and I did
not want to remove them -- other people may want to read
the thread in the other newsgroups.

So, I did not introduce any horrible syntax, nor a looping
construct that would look strange to people used to
classical procedural languages. The lambda syntax
in Python is the thing that could be viewed as a complication,
not the "for" loop or the "if" construction.

If you take any English-speaking human (even a non-programmer),
I could bet that the Python transcription will be more understandable
than your Smalltalk example.
E.g. in Java there's an unresolved, and irresolvable, tension between whether a failing operation should return an error condition or throw an exception [...].

It is more a design problem than a language problem. And it is also
an implementation problem (i.e. what is the price of exceptions
in comparison with the other code). In Python, exceptions
are used intensively.
E.g. can you add three-way comparisons (less-than, same-as, greater-than) to,
say, Python with corresponding three-way conditional control structures to
supplement "if" etc? Are they on a semantic and syntactic par with the
existing ones? In Smalltalk that is trivial (too trivial to be particularly
interesting, even), and I presume the same must be true of Lisp (though I
suspect you might be forced to use macros).
Such a built-in function already exists in Python. But you could add it
by hand if it did not:

def cmp(x, y):
    if x < y:
        return -1
    if x == y:
        return 0
    return 1

and the "if" supplement (the "switch" or "case" command) could be
replaced easily in Python using the hash-table (dictionary) structure.
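
For instance, a dictionary of thunks can stand in for a case/switch
statement (a small sketch; the example names are made up):

def describe(colour):
    cases = {
        'red':   lambda: "stop",
        'amber': lambda: "slow down",
        'green': lambda: "go",
    }
    # Fall back to a default thunk when the key is not present.
    return cases.get(colour, lambda: "unknown colour")()

print(describe('red'))     # => stop
print(describe('blue'))    # => unknown colour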
I should say that if your example /is/ in fact Python, then I believe that
language allows fairly deep hooks into the execution mechanism, so that at
least the "for" bit can be mediated by the collection itself -- which is better than nothing, but nowhere near what I would call "good".


It is a dual point of view. Should the collection be passive data, or not?
I believe that the "pure" object-oriented view (there are no functions,
only object methods) is not very practical and does not reflect
well the big part of the reality that is simulated by programs.
Python and C++, for example, allow mixing functions and objects.

You should try Python. The "for" construct iterates through
a sequence or through all the values of a generator, thus making
the for loop much more generic than, for example, in C or other
languages.

Every language forms a way of thinking. Every language has its strong
and weak points. Every language has its followers and haters.
Not every language is practical enough to fly around the Earth
in a space ship.

pepr

(Sorry for my broken English.)
May 11 '06 #223
Ken Tilton <ke*******@gmail.com> writes:
David C. Ullrich wrote:
But duh, if that's how things are then we can't have transitive
dependencies working out right; surely we
want to be able to have B depend on A and then C
depend on B...
(And also if A and B are allowed to depend on each
other then the programmer has to ensure that the
two rules are inverses of each other, which seems
like a bad constraint in general, something non-trivial
that the programmer has to get right.)


Right, when I considered multi-way dependencies I realized I would
have to figure out some new syntax to declare in one place the rules
for two slots, and that would be weird because in Cells it is the
instance that gets a rule at make-instance time, so i would really
have to have some new make-instance-pair capability. Talk about a
slippery slope. IMO, the big constraints research program kicked off
by Steele's thesis withered into a niche technology because they
sniffed at the "trivial" spreadsheet model of linear dataflow and
tried to do partial and multi-way dependencies. I call it "a bridge
too far", and in my experience of Cells (ten years of pretty intense
use), guess what?, all we need as developers is one-way, linear,
fully-specified dependencies.


It may also be that the bridge too far was in trying to do big,
multi-way constraints in a general-purpose manner. Cells provides you
with the basics, and you can build a special-purpose multi-way system
on top of it, much like you can use it as a toolkit for doing KR.
May 11 '06 #224
"Michele Simionato" <mi***************@gmail.com> writes:
jayessay wrote:
I was saying that you are mistaken in that pep-0343 could be used to
implement dynamically scoped variables. That stands.


Proof by counter example:

from __future__ import with_statement
import threading

special = threading.local()

def getvar(name):
    return getattr(special, name)

def setvar(name, value):
    return setattr(special, name, value)

class dynamically_scoped(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value
    def __context__(self):
        return self
    def __enter__(self):
        self.orig_value = getvar(self.name)
        setvar(self.name, self.value)
    def __exit__(self, Exc, msg, tb):
        setvar(self.name, self.orig_value)

if __name__ == '__main__': # test
    setvar("*x*", 1)
    print getvar("*x*") # => 1
    with dynamically_scoped("*x*", 2):
        print getvar("*x*") # => 2
    print getvar("*x*") # => 1

If you are not happy with this implementation, please clarify.


I can't get this to work at all - syntax errors (presumably you must
have 2.5? I only have 2.4). But anyway:

This has not so much to do with WITH as with relying on a special "global"
object which you must reference specially, which keeps track (more or
less) of its attribute values, and which you use as "faked up" variables.
Actually you probably need to hack this a bit more to even get that, as
it doesn't appear to stack the values beyond a single level.
/Jon

--
'j' - a n t h o n y at romeo/charley/november com
May 11 '06 #225
jayessay wrote:
"Michele Simionato" <mi***************@gmail.com> writes:
jayessay wrote:
I was saying that you are mistaken in that pep-0343 could be used to
implement dynamically scoped variables. That stands.

Proof by counter example:

from __future__ import with_statement
import threading

special = threading.local()

def getvar(name):
    return getattr(special, name)

def setvar(name, value):
    return setattr(special, name, value)

class dynamically_scoped(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value
    def __context__(self):
        return self
    def __enter__(self):
        self.orig_value = getvar(self.name)
        setvar(self.name, self.value)
    def __exit__(self, Exc, msg, tb):
        setvar(self.name, self.orig_value)

if __name__ == '__main__': # test
    setvar("*x*", 1)
    print getvar("*x*") # => 1
    with dynamically_scoped("*x*", 2):
        print getvar("*x*") # => 2
    print getvar("*x*") # => 1

If you are not happy with this implementation, please clarify.


I can't get this to work at all - syntax errors (presumably you must
have 2.5? I only have 2.4). But anyway:

This has not so much to do with WITH as with relying on a special "global"
object which you must reference specially, which keeps track (more or
less) of its attribute values, and which you use as "faked up" variables.
Actually you probably need to hack this a bit more to even get that, as
it doesn't appear to stack the values beyond a single level.


Actually there's no problem there. Hint: dynamically_scoped is a class that the
with statement will instantiate before (any) entry. OTOH, as it is written, I am
not convinced it will work in a multithreaded setting: isn't it the case that
all threads that import e.g. dynamically_scoped/getvar/setvar will act
without sync on the /single/ special object of the /single/ thread that
initialized the module?

But I'm not sure, it's been ages since I used Python threading.


/Jon

May 11 '06 #226
jayessay wrote:
"Michele Simionato" <mi***************@gmail.com> writes:
I can't get this to work at all - syntax errors (presumably you must
have 2.5?, I only have 2.4).
You can download Python 2.5 from www.python.org, but the important bit,
i.e. the use of threading.local to get thread-local variables, is
already there in Python 2.4.
'with' just gives you a nicer Lisp-like syntax.
This has not so much to do with WITH as relying on a special "global"
object which you must reference specially, which keeps track (more or
less) of its attribute values, which you use as "faked up" variables.
Actually you probably need to hack this a bit more to even get that as
it doesn't appear to stack the values beyond a single level.


Yes, but it would not be difficult: I would just instantiate
threading.local inside the __init__ method of the dynamically_scoped
class, so each 'with' block would have its own variables (and I should
change getvar and setvar a bit).

I was interested in a proof of concept, to show that Python can emulate
Lisp special variables with no big effort.

Michele Simionato

May 12 '06 #227
"Michele Simionato" <mi***************@gmail.com> writes:
I was interested in a proof of concept, to show that Python can
emulate Lisp special variables with no big effort.


OK, but the sort of "proof of concept" given here is something you can
hack up in pretty much anything. So, I wouldn't call it especially
convincing in its effect and capability.
/Jon

--
'j' - a n t h o n y at romeo/charley/november com
May 12 '06 #228
jayessay <no****@foo.com> writes:
"Michele Simionato" <mi***************@gmail.com> writes:
I was interested in a proof of concept, to show that Python can
emulate Lisp special variables with no big effort.


OK, but the sort of "proof of concept" given here is something you can
hack up in pretty much anything.


Care to provide e.g. a java equivalent?

'as
May 12 '06 #229


Michele Simionato wrote:
jayessay wrote:
I was saying that you are mistaken in that pep-0343 could be used to
implement dynamically scoped variables. That stands.

Proof by counter example:

from __future__ import with_statement
import threading

special = threading.local()

def getvar(name):
    return getattr(special, name)

def setvar(name, value):
    return setattr(special, name, value)

class dynamically_scoped(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value
    def __context__(self):
        return self
    def __enter__(self):
        self.orig_value = getvar(self.name)
        setvar(self.name, self.value)
    def __exit__(self, Exc, msg, tb):
        setvar(self.name, self.orig_value)

if __name__ == '__main__': # test
    setvar("*x*", 1)
    print getvar("*x*") # => 1
    with dynamically_scoped("*x*", 2):
        print getvar("*x*") # => 2
    print getvar("*x*") # => 1

If you are not happy with this implementation, please clarify.


Can you make it look a little more as if it were part of the language,
or at least conceal the wiring better? I am especially bothered by the
double-quotes and having to use setvar and getvar.

In Common Lisp we would have:

(defvar *x*)      ;; makes it special
(setf *x* 1)
(print *x*)       ;; -> 1
(let ((*x* 2))
  (print *x*))    ;; -> 2
(print *x*)       ;; -> 1

kenny

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 12 '06 #230


Alexander Schmolck wrote:
jayessay <no****@foo.com> writes:

"Michele Simionato" <mi***************@gmail.com> writes:

I was interested in a proof of concept, to show that Python can
emulate Lisp special variables with no big effort.


OK, but the sort of "proof of concept" given here is something you can
hack up in pretty much anything.

Care to provide e.g. a java equivalent?


I think the point is that, with the variable actually being just a
string and with dedicated new explicit functions required as
"accessors", well, you could hack that up in any language with
dictionaries. It is the beginnings of an interpreter, not Python itself
even feigning special behavior.

perhaps the way to go is to take the Common Lisp:

(DEFVAR *x*)

*x* = special_var(v=42) ;; I made this syntax up

that could make for cleaner code:

*x*.v = 1

print *x*.v -> 1

(Can we hide the .v?) But there is still the problem of knowing when to
revert a value to its prior binding when the scope of some WITH block is
left.

Of course that is what indentation is for in Python, so... is that
extensible by application code? Or would this require Python internals work?
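
One hedged sketch of that made-up syntax (asterisks are not legal in Python
names, so *x* becomes a plain x, and every name below is invented for
illustration): the .v can be hidden behind a property, and the "when to
revert" question is answered by the __exit__ of a with block.

from __future__ import with_statement
import threading

class special_var(object):
    # rough stand-in for the made-up "*x* = special_var(v=42)" above
    def __init__(self, v):
        self._initial = v
        self._local = threading.local()
    def _stack(self):
        if not hasattr(self._local, 'stack'):
            self._local.stack = [self._initial]   # each thread starts from the initial value
        return self._local.stack
    def _get_v(self):
        return self._stack()[-1]
    def _set_v(self, value):
        self._stack()[-1] = value
    v = property(_get_v, _set_v)
    def let(self, value):
        # "with x.let(2): ..." plays the role of (let ((*x* 2)) ...)
        return _rebinding(self, value)

class _rebinding(object):
    def __init__(self, var, value):
        self.var, self.value = var, value
    def __enter__(self):
        self.var._stack().append(self.value)      # shadow the current value
    def __exit__(self, etype, evalue, tb):
        self.var._stack().pop()                   # revert when the block is left

x = special_var(v=1)
print(x.v)         # -> 1
with x.let(2):
    print(x.v)     # -> 2
print(x.v)         # -> 1

Nothing here touches Python internals; the scoping is carried entirely by the
with block and its indentation.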

kenny
--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 12 '06 #231
Ken Tilton <ke*******@gmail.com> writes:
In Common Lisp we would have:

(defvar *x*) ;; makes it special
(setf *x* 1)
(print *x*) ;;-> 1
(let ((*x* 2))
(print *x*)) ;; -> 2
(print *x*) ;; -> 1


You seem to think that conflating special variable binding and lexical
variable binding is a feature and not a bug. What's your rationale?

'as
May 12 '06 #232
Alexander Schmolck <a.********@gmail.com> writes:
Ken Tilton <ke*******@gmail.com> writes:
In Common Lisp we would have:

(defvar *x*) ;; makes it special
(setf *x* 1)
(print *x*) ;;-> 1
(let ((*x* 2))
(print *x*)) ;; -> 2
(print *x*) ;; -> 1


You seem to think that conflating special variable binding and lexical
variable binding is a feature and not a bug. What's your rationale?


A bug is a non-conformance to spec. Kenny's statement was specifically
about Common Lisp, which has a spec. Now, what was your rationale for
it _being_ a bug?

--
Duane Rettig du***@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
May 12 '06 #233
Ken Tilton <ke*******@gmail.com> writes:
Alexander Schmolck wrote:
jayessay <no****@foo.com> writes:
"Michele Simionato" <mi***************@gmail.com> writes:
I was interested in a proof of concept, to show that Python can
emulate Lisp special variables with no big effort.

OK, but the sort of "proof of concept" given here is something you can
hack up in pretty much anything.

Care to provide e.g. a java equivalent?

I think the point is that, with the variable actually being just a string and
with dedicated new explicit functions required as "accessors", well, you could
hack that up in any language with dictionaries.


Great -- so can I see some code? Can't be that difficult, it takes about 10-15
lines in python (and less in scheme).
It is the beginnings of an interpreter, not Python itself even feigning
special behavior.
perhaps the way to go is to take the Common Lisp:

(DEFVAR *x*)

*x* = special_var(v=42) ;; I made this syntax up

that could make for cleaner code:

*x*.v = 1

print *x*.v -> 1

(Can we hide the .v?)
I'd presumably write special variable access as something like:

with specials('x','y','z'):
special.x = 3 + 4
special.y = special.x + 10
...

I haven't tested this because I haven't got the python 2.5 alpha and won't go
through the trouble of installing it for this usenet discussion, but I'm
pretty sure this would work fine (I'm sure someone else can post an
implementation or prove me wrong). I also can't see how one could sensibly
claim that this doesn't qualify as an implementation of dynamically scoped
variables. Doesn't look any worse to me than

(let (x y z)
(declare (special x y z))
...)

-- in fact it looks better.
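
Since an implementation is invited: here is one plausible reading of the
specials construct sketched above (a guess at the intent, not Alexander's
code; the save/restore details and the _MISSING marker are my own):

from __future__ import with_statement
import threading

special = threading.local()        # special.x, special.y, ... live per thread

class specials(object):
    _MISSING = object()            # marker: "had no value before the block"
    def __init__(self, *names):
        self.names = names
    def __enter__(self):
        # remember the current per-thread values so __exit__ can restore them
        self.saved = [(n, getattr(special, n, self._MISSING)) for n in self.names]
    def __exit__(self, etype, evalue, tb):
        for name, old in self.saved:
            if old is self._MISSING:
                if hasattr(special, name):
                    delattr(special, name)     # was unbound before, make it unbound again
            else:
                setattr(special, name, old)    # restore the shadowed value

with specials('x', 'y'):
    special.x = 3 + 4
    special.y = special.x + 10
    print(special.y)               # -> 17; both names revert on exit
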
But there is still the problem of knowing when to revert a value to its
prior binding when the scope of some WITH block is left.
Can you explain what you mean by this statement? I'm not quite sure, but I've
got the impression you're possibly confused. Have you had a look at
<http://docs.python.org/dev/whatsnew/pep-343.html> or some other explanation
of the with statement?
Of course that is what indentation is for in Python, so... is that extensible
by application code?
The meaning of indentation? No.
Or would this require Python internals work?


May 12 '06 #234
Alexander Schmolck <a.********@gmail.com> writes:
(defvar *x*) ;; makes it special
(setf *x* 1)
(print *x*) ;;-> 1
(let ((*x* 2))
(print *x*)) ;; -> 2
(print *x*) ;; -> 1


You seem to think that conflating special variable binding and lexical
variable binding is a feature and not a bug. What's your rationale?


I thought special variables meant dynamic binding, i.e.

(defvar *x* 1)
(defun f ()
  (print *x*)     ;; -> 2
  (let ((*x* 3))
    (g)))
(defun g ()
  (print *x*))    ;; -> 3

That was normal behavior in most Lisps before Scheme popularized
lexical binding. IMO it was mostly an implementation convenience hack
since it was implemented with a very efficient shallow binding cell.
That Common Lisp adapted Scheme's lexical bindings was considered a
big sign of CL's couthness. So I'm a little confused about what Ken
Tilton is getting at.
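
For comparison, the same across-a-call behaviour can be shown with the Python
emulation quoted in #230 above (a sketch that assumes that snippet has already
been run):

def g():
    print(getvar("*x*"))            # looked up dynamically, not lexically

def f():
    print(getvar("*x*"))            # -> 1, the binding in force at the call site
    with dynamically_scoped("*x*", 3):
        g()                         # -> 3, g sees f's rebinding

setvar("*x*", 1)
f()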
May 12 '06 #235
Duane Rettig <du***@franz.com> writes:
Alexander Schmolck <a.********@gmail.com> writes:
Ken Tilton <ke*******@gmail.com> writes:
In Common Lisp we would have:

(defvar *x*) ;; makes it special
(setf *x* 1)
(print *x*) ;;-> 1
(let ((*x* 2))
(print *x*)) ;; -> 2
(print *x*) ;; -> 1
You seem to think that conflating special variable binding and lexical
variable binding is a feature and not a bug. What's your rationale?


A bug is a non-conformance to spec.


There is a world beyond specs, you know. If copies of allegro CL accidentally
sent out death-threats to the US president on a weekly basis, because someone
at franz accidentally or purposefully left in some pranky debugging code, the
fact that this behaviour would likely neither violate the ansi spec nor any
other specs that ACL officially purports to adhere to wouldn't make it any
less of a bug (or help to pacify your customers).
Kenny's statement was specifically about Common Lisp
No, Kenny's statement was about contrasting the way something is done in python
and the way something is done in common lisp (with the implication that the
latter is preferable). Of course the way something is done in common lisp is
almost tautologically in closer agreement with the ansi common lisp spec than
the way it is done in python, so agreement with the clhs is not a useful
criterion when talking about design features and misfeatures when contrasting
languages.

I thought it would have been pretty obvious that I was talking about language
design features and language design misfeatures (Indeed the infamously post
hoc "It's a feature, not a bug" I was obviously alluding to doesn't make
much sense in a world where everything is tightly specified, because in it
nothing is post-hoc).
, which has a spec.
Bah -- so does fortran. But scheme also has operational semantics.
Now, what was your rationale for it _being_ a bug?


I just don't think the way special variable binding (or variable binding in
general[1]) is handled in common lisp is particularly well designed or
elegant.

Special variables and lexical variables have different semantics and using
convention and abusing[2] the declaration mechanism to differentiate between
special and lexical variables doesn't strike me as a great idea.

I can certainly think of problems that can occur because of it (E.g. ignoring
or messing up a special declaration somewhere; setf on a non-declared variable
anyone? There are also inconsistent conventions for naming (local) special
variables within the community (I've seen %x%, *x* and x)).

Thus I don't see having to use syntactically different binding and assignment
forms for special and lexical variables as inherently inferior.

But I might be wrong -- which is why I was asking for the rationale of Kenny's
preference. I'd be even more interested in what you think (seriously; should
you consider it a design feature (for reasons other than backwards
compatibility constraints), I'm pretty sure you would also give a justification
that would merit consideration).

'as

Footnotes:
[1] The space of what I see as orthogonal features (parallel vs. serial
binding, single vs. multiple values and destructuring vs non-destructuring
etc.) is sliced in what appear to me pretty arbitrary, non-orthogonal and
annoying (esp. superfluous typing and indentation) ways in CL.

[2] Generally declarations don't change the meaning of an otherwise
well-defined program. The special declaration does. It's also a potential
source of errors as the declaration forces you to repeat yourself and to
pay attention to two places rather than one.

May 12 '06 #236


Alexander Schmolck wrote:
Ken Tilton <ke*******@gmail.com> writes:

In Common Lisp we would have:

(defvar *x*) ;; makes it special
(setf *x* 1)
(print *x*) ;;-> 1
(let ((*x* 2))
(print *x*)) ;; -> 2
(print *x*) ;; -> 1

You seem to think that conflating special variable binding and lexical
variable binding is a feature and not a bug. What's your rationale?


Transparency. That is where power comes from. I did the same things with
Cells. Reading a slot with the usual Lisp reader method transparently
creates a dependency on the variable. To change a variable and have it
propagate throughout the datamodel, Just Change It.

Exposed wiring means more work and agonizing refactoring.
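
For the Python-only readers, the idea being described can be caricatured in a
few lines. This is emphatically not Cells, just a toy illustration of "reading
a slot transparently records the dependency" (all names invented, one level of
propagation only):

_current = []                        # stack of cells currently being recomputed

class Cell(object):
    def __init__(self, rule=None, value=None):
        self.rule, self.value = rule, value
        self.users = set()           # cells that read this one while computing
        if rule is not None:
            self.recompute()
    def get(self):
        if _current:                 # reading registers the dependency, transparently
            self.users.add(_current[-1])
        return self.value
    def set(self, value):
        self.value = value
        for user in list(self.users):
            user.recompute()         # "Just Change It": users are refreshed automatically
    def recompute(self):
        _current.append(self)
        try:
            self.value = self.rule()
        finally:
            _current.pop()

b = Cell(value=1)
a = Cell(rule=lambda: b.get() + 10)
print(a.get())    # -> 11
b.set(5)
print(a.get())    # -> 15; a discovered its dependency on b just by reading it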

kenny

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 12 '06 #237
Alexander Schmolck <a.********@gmail.com> writes:
Ken Tilton <ke*******@gmail.com> writes:
In Common Lisp we would have:

(defvar *x*) ;; makes it special
(setf *x* 1)
(print *x*) ;;-> 1
(let ((*x* 2))
(print *x*)) ;; -> 2
(print *x*) ;; -> 1


You seem to think that conflating special variable binding and lexical
variable binding is a feature and not a bug. What's your rationale?


And the particularly ugly, crappy, half baked python emulation is what?
A feature? Right.
/Jon

--
'j' - a n t h o n y at romeo/charley/november com
May 12 '06 #238
Ken Tilton <ke*******@gmail.com> writes:
Alexander Schmolck wrote:
jayessay <no****@foo.com> writes:
"Michele Simionato" <mi***************@gmail.com> writes:
I was interested in a proof of concept, to show that Python can
emulate Lisp special variables with no big effort.

OK, but the sort of "proof of concept" given here is something you can
hack up in pretty much anything.

Care to provide e.g. a java equivalent?


I think the point is that, with the variable actually being just a
string and with dedicated new explicit functions required as
"accessors", well, you could hack that up in any language with
dictionaries. It is the beginnings of an interpreter, not Python
itself even feigning special behavior.


Exactly. Of course this is going to be totally lost on the intended
audience...
/Jon

--
'j' - a n t h o n y at romeo/charley/november com
May 12 '06 #239
Alexander Schmolck <a.********@gmail.com> writes:
Ken Tilton <ke*******@gmail.com> writes:
Alexander Schmolck wrote:
jayessay <no****@foo.com> writes:

>"Michele Simionato" <mi***************@gmail.com> writes:
>
>
>>I was interested in a proof of concept, to show that Python can
>>emulate Lisp special variables with no big effort.
>
>OK, but the sort of "proof of concept" given here is something you can
> hack up in pretty much anything.

Care to provide e.g. a java equivalent?

I think the point is that, with the variable actually being just a string and
with dedicated new explicit functions required as "accessors", well, you could
hack that up in any language with dictionaries.


Great -- so can I see some code? Can't be that difficult, it takes about 10-15
lines in python (and less in scheme).


Do you actually need the code to understand this relatively simple concept???

/Jon
--
'j' - a n t h o n y at romeo/charley/november com
May 12 '06 #240
Alexander Schmolck <a.********@gmail.com> writes:
Duane Rettig <du***@franz.com> writes:
Alexander Schmolck <a.********@gmail.com> writes:
> Ken Tilton <ke*******@gmail.com> writes:
>
>> In Common Lisp we would have:
>>
>> (defvar *x*) ;; makes it special
>> (setf *x* 1)
>> (print *x*) ;;-> 1
>> (let ((*x* 2))
>> (print *x*)) ;; -> 2
>> (print *x*) ;; -> 1
>
> You seem to think that conflating special variable binding and lexical
> variable binding is a feature and not a bug. What's your rationale?
A bug is a non-conformance to spec.


There is a world beyond specs, you know. If copies of allegro CL accidently
sent out death-threats to the US president on a weekly basis, because someone
at franz accidently or purposefully left in some pranky debugging code the
fact that this behaviour would likely neither violate the ansi spec nor any
other specs that ACL officially purports to adhere to wouldn't make it any
less of a bug (or help to pacify your customers).


It wouldn't be a bug in Allegro CL, because it would never happen in an Allegro CL
that hasn't been enhanced with some kind of program. And although that program
itself could have a bug whereby such a threat were accidental, I would tend not
to call it accidental, I would tend to call it explicit, and thus not a bug but
an intended consequence of such explicit programming.

My reason for responding to you in the first place was due to your poor use
of the often misused term "bug". You could have used many other words or
phrases to describe the situation, and I would have left any of those alone.

For example:
Kenny's statement was specifically about Common Lisp


No Kenny's statement was about contrasting the way something is done in python
and the way something is done in common lisp (with the implication that the
latter is preferable). Of course the way something is done in common lisp is
almost tautologically in closer agreement with the ansi common lisp spec than
the way it is done in python, so agreement with the clhs is not a useful
criterion when talking about design features and misfeatures when contrasting
languages.

I thought it would have been pretty obvious that I was talking about language
design features and language design misfeatures (Indeed the infamously post
hoc, "It's a feature, not a bug" I was obviously alluding too doesn't make
much sense in a world were everything is tightly specified, because in it
nothing is post-hoc).


Whether it is preferable is a matter of opinion, and whether Kenny meant it
to imply preferability (I suspect so) or not has nothing to do with whether
it is a bug. Instead, you should call it a "design misfeature", which would
set the stage for a more cogent argumentation on the point, rather than on the
hyperbole. By the way, if you do call it a design misfeature, I would be
arguing against you, but that is another conversation.
, which has a spec.


Bah -- so does fortran. But scheme also has operational semantics.
Now, what was your rationale for it _being_ a bug?


I just don't think the way special variable binding (or variable binding in
general[1]) is handled in common lisp is particularly well designed or
elegant.


Then call it "misdesigned" or "inelegant".
Special variables and lexical variables have different semantics and using
convention and abusing[2] the declaration mechanism to differentiate between
special and lexical variables doesn't strike me as a great idea.
Then call it a "bad idea".
I can certainly think of problems that can occur because of it (E.g. ignoring
or messing up a special declaration somewhere; setf on a non-declared variable
anyone? There are also inconsistent conventions for naming (local) special
variables within the community (I've seen %x%, *x* and x)).
Then call it "not fully standardized or normative".
Thus I don't see having to use syntactically different binding and assignment
forms for special and lexical variables as inherently inferior.
Then call it "inherently inferior".
But I might be wrong -- which is why was asking for the rationale of Kenny's
preference.
But you _didn't_ ask him what rationale he had for his _preference_, you
asked him his rationale for considering it not a _bug_.
I'd be even more interested in what you think (seriously; should
you consider it a design feature (for reasons other than backwards
compatiblity constraints), I'm pretty sure you would also give a justification
that would merrit consideration).


Well, OK, let's change the conversation away from "bug"-ness and toward any of
the other negatives we discussed above. I actually doubt that I can provide
a justification in a small space without first understanding who you are
and from what background you are coming, so let me turn it around and ask
you instead to knock down a straw-man:

You seem to be saying that pure lexical transparency is always preferable
to statefulness (e.g. context). Can we make that leap? If not, set me
straight. If so, tell me: how do we programmatically model those situations
in life which are inherently contextual in nature, where you might get
a small piece of information and must make sense of it by drawing on
information that is _not_ given in that information, but is (globally,
if you will) "just known" by you? How about conversations in English?
And, by the way, how do you really know I'm writing to you in English, and
not some coded language that means something entirely different?

--
Duane Rettig du***@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
May 12 '06 #241
Duane Rettig <du***@franz.com> writes:
My reason for responding to you in the first place was due to your poor use
of the often misused term "bug". You could have used many other words or
phrases to describe the situation, and I would have left any of those alone.
I'm happy to accept your terminology of bug (not conforming to a certain
specification) for the remainder of this discussion so that we can stop
quibbling over words.

[...]
I'd be even more interested in what you think (seriously; should
you consider it a design feature (for reasons other than backwards
compatiblity constraints), I'm pretty sure you would also give a justification
that would merrit consideration).


Well, OK, let's change the conversation away from "bug"-ness and toward any of
the other negatives we discussed above. I actually doubt that I can provide
a justification in a small space without first understanding who you are
and from what background you are coming, so let me turn it around and ask
you instead to knock down a straw-man:

You seem to be saying that pure lexical transparency is always preferable
to statefulness (e.g. context).


No.
Can we make that leap? If not, set me straight.
I think that in most contexts lexical transparency is desirable so that
deviations from lexical transparency ought to be well motivated. I also
believe that a construct that is usually used to establish a lexically
transparent binding shouldn't be equally used for dynamic bindings so that it
isn't syntactically obvious what's going on. I've already provided some
reasons why CL's design wrt. binding special and lexical variables seems bad
to me. I don't think these reasons were terribly forceful but as I'm not aware
of any strong motivation why the current behaviour would be useful I'm
currently considering it a minor wart.

To make things more concrete: What would be the downside of, instead of doing
something like:

(let ((*x* ...)) [(declare (special *x*))] ...) ; where [X] denotes maybe X

doing any of the below:

a) using a different construct e.g. (fluid-let ((*x* ...)) ...) for binding
special variables
b) having to use *...* (or some other syntax) for special variables
c) using (let ((q (special *x*) (,a ,b ,@c)) (values 1 2 '(3 4 5 6)))
(list q ((lambda () (incf *x*))) a b c)) ; => (1 3 3 4 (5 6))

(It's getting late, but hopefully this makes some vague sense)
If so, tell me: how do we programmatically model those situations in life
which are inherently contextual in nature, where you might get a small piece
of information and must make sense of it by drawing on information that is
_not_ given in that information, but is (globally, if you will) "just known"
by you? How about conversations in English? And, by the way, how do you
really know I'm writing to you in English, and not some coded language that
means something entirely different?


We can skip that part.

'as
May 12 '06 #242
jayessay <no****@foo.com> writes:
Great -- so can I see some code? Can't be that difficult, it takes about 10-15
lines in python (and less in scheme).


Do you actually need the code to understand this relatively simple concept???


Yes. I'd be genuinely curious to see how an implementation in Java, Pascal, C,
(or any other language that has little more than dictionaries) compares to
python and CL.

In my limited understanding I have trouble seeing how you'd do without either
unwind-protect/try-finally or reliable finalizers for starters.
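
Spelled out without the with statement, the unwind-protect-ish core is indeed
just a try/finally (a sketch; getvar/setvar as quoted earlier in the thread,
do_something is a placeholder):

old = getvar("*x*")
setvar("*x*", 2)
try:
    do_something()           # body that runs under the new binding
finally:
    setvar("*x*", old)       # restored even if the body raises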

'as
May 12 '06 #243
Ken Tilton <ke*******@gmail.com> writes:
Alexander Schmolck wrote:
Ken Tilton <ke*******@gmail.com> writes:
In Common Lisp we would have:

(defvar *x*) ;; makes it special
(setf *x* 1)
(print *x*) ;;-> 1
(let ((*x* 2))
(print *x*)) ;; -> 2
(print *x*) ;; -> 1

You seem to think that conflating special variable binding and lexical

variable binding is a feature and not a bug. What's your rationale?


Transparency.


That's circular. You might be right, but you failed to provide a rationale
and not just a restatement.
That is where power comes from. I did the same things with Cells. Reading a
slot with the usual Lisp reader method transparently creates a dependency on
the variable.
Let me see if I understand it right -- if an instance of class A has a ruled
slot a that reads an instance of class B's slot b then it is noted somewhere
that A's a depends on b?
To change a variable and have it propagate throughout the datamodel, Just
Change It.
Exposed wiring means more work and agonizing refactoring.


Well, you claim that in that instance python suffers from exposed wiring and I
claim that CL suffers from a (minor) booby trap. You can't typically safely
ignore whether a variable is special as a mere wiring detail or your code
won't work reliably (just as you can't typically safely ignore whether
something is rigged or not even if booby-trapness is pretty transparent) --
it's as simple as that (actually it's a bit worse because the bug can be hard
to detect as lexical and special variables will result in the same behaviour
in many contexts).

So in the case of booby traps and special variables, I generally prefer some
exposed wiring (or strong visual clues) to transparency.

I'd like to see a demonstration that using the same binding syntax for special
and lexical variables buys you something apart from bugs.

'as
May 12 '06 #244


Alexander Schmolck wrote:
Duane Rettig <du***@franz.com> writes:

Alexander Schmolck <a.********@gmail.com> writes:

Ken Tilton <ke*******@gmail.com> writes:
In Common Lisp we would have:

(defvar *x*) ;; makes it special
(setf *x* 1)
(print *x*) ;;-> 1
(let ((*x* 2))
(print *x*)) ;; -> 2
(print *x*) ;; -> 1

You seem to think that conflating special variable binding and lexical
variable binding is a feature and not a bug. What's your rationale?

I will expand on my earlier "transparency" rationale with a further
rationale for transparency: I do not need no stinkin' rationale. A
special variable is still a variable. They should be set, read, and
bound (say, by "let") the same way as any other variable.

You need a rationale. It sounds as if you want some noisy syntax to
advertise the specialness. I do not think the Python community will
appreciate you messing up their pretty code.

You are right about one thing: specialness needs advertising. You know
what we do in Lisp? We obediently name special variables with bracketing
*s, like *this*. Too simple?

A bug is a non-conformance to spec.

There is a world beyond specs, you know. If copies of allegro CL accidently
sent out death-threats to the US president on a weekly basis, because someone
at franz accidently or purposefully left in some pranky debugging code the
fact that this behaviour would likely neither violate the ansi spec nor any
other specs that ACL officially purports to adhere to wouldn't make it any
less of a bug (or help to pacify your customers).

Kenny's statement was specifically about Common Lisp

No Kenny's statement was about contrasting the way something is done in python
and the way something is done in common lisp (with the implication that the
latter is preferable).


Close, but no. The question I was weighing in on was "has Michele
replicated special variables?". My implication was, "Not yet -- can you
match the transparency?", and it was an honest question, I do not know.
Again, transparency is a qualitative difference.

I liked your solution better, btw, because it does minimize the noise.
For fun, you should call the class ** instead of special, so we end up
with: **.b = 42

We'll understand. :)
Of course the way something is done in common lisp is
almost tautologically in closer agreement with the ansi common lisp spec than
the way it is done in python, so agreement with the clhs is not a useful
criterion when talking about design features and misfeatures when contrasting
languages.
Again, no, it is not the spec, it is the highly-valued Python quality of
clean code. Also, the consistency of treating variables as variables,
regardless of some special/dynamic quality.

Some background. Lisp is a big language, and I am self-taught and do not
like to read; I grew up in Lisp in isolation. Not many Lispers in the
exercise yard. Discovered special variables only when we hired an old
hand who gently corrected a howler:

(let* ((old-x *x*))
  (setf *x* 42)
  ....
  (setf *x* old-x))

I still laugh at that. Anyway, as soon as I learned that, I was able to
make Cells syntax infinitely more transparent. And guess what? It also
made dependency identification automatic instead of cooperative, and
when I rebuilt a huge Cells-based app I discovered two or three cases
where I had neglected to publish a dependency.

It's a mystery, but somehow simpler syntax... oh, wait, this is
c.l.python, I am preaching to the choir.

I just don't think the way special variable binding (or variable binding in
general[1]) is handled in common lisp is particularly well designed or
elegant.
See above. There is nothing like a concrete experience of implementing a
hairy library like Cells /without/ leveraging specials and then
converting to specials. Talk about an Aha! experience. I mean, bugs ran
screaming from their nests simply because of the implementation change--
we call that A Message From God that the design has taken a correct turn.

Special variables and lexical variables have different semantics and using
convention and abusing[2] the declaration mechanism to differentiate between
special and lexical variables doesn't strike me as a great idea.
I know what you mean, but I like reading tea leaves, and I find it
fascinating that *this* somehow eliminates all ambiguity. Background:
don't know where I might find it, but I once saw a thread demonstrating
the astonishing confusion one could create with a special variable such
as a plain X (no *s). Absolutely mind-bogglingly confusing. Go back and
rename the special version *x*, and use *x* where you want to rebind it.
Result? Utterly lucid code. Scary, right?
I can certainly think of problems that can occur because of it (E.g. ignoring
or messing up a special declaration somewhere; setf on a non-declared variable
anyone?
Sh*t, you don't respond to compiler warnings? Don't blame CL for your
problems. :)
There are also inconsistent conventions for naming (local) special
variables within the community (I've seen %x%, *x* and x)).
OK, you are in flamewar mode, now you are just making things up.

Thus I don't see having to use syntactically different binding and assignment
forms for special and lexical variables as inherently inferior.


DUDE! They are both variables! Why the hell /should/ the syntax be
different? "Oh, these are /cross-training/ sneakers. I'll wear them on
my hands." Hunh?

:)
kenny
May 13 '06 #245


Paul Rubin wrote:
Alexander Schmolck <a.********@gmail.com> writes:
(defvar *x*) ;; makes it special
(setf *x* 1)
(print *x*) ;;-> 1
(let ((*x* 2))
(print *x*)) ;; -> 2
(print *x*) ;; -> 1


You seem to think that conflating special variable binding and lexical
variable binding is a feature and not a bug. What's your rationale?

I thought special variables meant dynamic binding, i.e.

(defvar *x* 1)
(defun f ()
(print *x*) ;; -> 2
(let ((*x* 3))
(g)))
(defun g ()
(print *x*)) ;; - > 3

That was normal behavior in most Lisps before Scheme popularlized
lexical binding. IMO it was mostly an implementation convenience hack
since it was implemented with a very efficient shallow binding cell.
That Common Lisp adapted Scheme's lexical bindings was considered a
big sign of CL's couthness. So I'm a little confused about what Ken
Tilton is getting at.


Paul, there is no conflict between your example and mine, but I can see
why you think mine does not demonstrate dynamic binding: I did not
demonstrate the binding applying across a function call.

What might be even more entertaining would be a nested dynamic binding
with the same function called at different levels and before and after
each binding.

I just had the sense that this chat was between folks who fully grokked
special vars. Sorry if I threw you a curve.

kenny
May 13 '06 #246
Everything else responded to separately, but...
I'd like to see a demonstration that using the same binding syntax for special
and lexical variables buys you something apart from bugs.


Buys me something? Why do I have to sell simplicity, transparency, and
clean syntax on c.l.python?

kenny

--
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.
May 13 '06 #247
Alexander Schmolck <a.********@gmail.com> writes:
I think that in most contexts lexical transparency is desirable so that
deviations from lexical transparency ought to be well motivated. I also
believe that a construct that is usually used to establish a lexically
transparent binding shouldn't be equally used for dynamic bindings so that it
isn't syntactically obvious what's going on. I've already provided some
reasons why CL's design wrt. binding special and lexical variables seems bad
to me. I don't think these reasons were terribly forceful but as I'm not aware
of any strong motivation why the current behaviour would be useful I'm
currently considering it a minor wart.

To make things more concrete: What would be the downside of, instead of doing
something like:

(let ((*x* ...)) [(declare (special *x*))] ...) ; where [X] denotes maybe X
Let's start with this. You seem to be saying that the above construct is inferior
to the alternatives you are about to suggest. Why? And since you are adding
an optional form, let's break it down into its separate situations:

1. (let ((*x* ...)) (declare (special *x*)) ...)

Here there is no question about the specialness of *x*; it is textually
obvious what the binding is - that it is not a lexical binding but a special
binding.

2. (let ((*x* ...)) ...)

[where there is no special declaration for *x* within the form]

Here, the issue is that it is not obvious that *x* is special (in this case,
it would have to already be a dynamic variable (what we internally call
"globally special"), because a special declaration within a lexical context
does not affect inner bindings). Perhaps this form is the one you are really
having trouble with.
doing any of the below:

a) using a different construct e.g. (fluid-let ((*x* ...)) ...) for binding
special variables
Unless you also _remove_ the #2 case above, this seems no different than writing
a macro for the #1 case, above.
b) having to use *...* (or some other syntax) for special variables
In fact, the spec does suggest precisely this (see
http://www.franz.com/support/documen...r/defparam.htm,
in the Notes section), and to the extent that programmers obey the suggestion,
the textual prompting is present in the name.
c) using (let ((q (special *x*) (,a ,b ,@c)) (values 1 2 '(3 4 5 6)))
(list q ((lambda () (incf *x*))) a b c)) ; => (1 3 3 4 (5 6))

(It's getting late, but hopefully this makes some vague sense)


Well, sort of; this seems simply like a sometimes-fluid-let, whose syntax could
easily be established by a macro (where destructurings whose form is (special X)
could be specially [sic] treated).

Now if in the above example you would have trouble with (a) and/or (c)
based on the absence of a "lexical" declaration (i.e. one that would undo
the effect of a globally special declaration), thus guaranteeing that a
fluid-let or a "sometimes-fluid-let" would work, you should know that while
I was working on the Environments Access module I theorized and demonstrated
that such a declaration could be easily done within a conforming Common Lisp.
I leave you with that demonstration here (though it really is for
demonstration purposes only; I don't necessarily propose that CL should add
a lexical declaration to the language):

[This only works on Allegro CL 8.0]:

CL-USER(1): (defvar pie pi)
PIE
CL-USER(2): (compile (defun circ (rad) (* pie rad rad)))
CIRC
NIL
NIL
CL-USER(3): (circ 10)
314.1592653589793d0
CL-USER(4): (compile (defun foo (x) (let ((pie 22/7)) (circ x))))
FOO
NIL
NIL
CL-USER(5): (foo 10)
2200/7
CL-USER(6): (float *)
314.2857
CL-USER(7): (sys:define-declaration sys::lexical (&rest vars)
              nil
              :variable
              (lambda (declaration env)
                (declare (ignore env))
                (let* ((spec '(lexical t))
                       (res (mapcar #'(lambda (x) (cons x spec))
                                    (cdr declaration))))
                  (values :variable res))))
SYSTEM::LEXICAL
CL-USER(8): (compile (defun foo (x) (let ((pie 22/7)) (declare (sys::lexical pie)) (circ x))))
; While compiling FOO:
Warning: Variable PIE is never used.
FOO
T
NIL
CL-USER(9): (foo 10)
314.1592653589793d0
CL-USER(10):

--
Duane Rettig du***@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
May 13 '06 #248
Ken Tilton <ke*******@gmail.com> writes:
I think the point is that, with the variable actually being just
a string and with dedicated new explicit functions required as
"accessors", well, you could hack that up in any language with
dictionaries. It is the beginnings of an interpreter, not Python
itself even feigning special behavior.


If the semantics and the global structure of the code are right, and it is
only the local concrete syntax that you don't like, then the complaint is at
most as justified as complaints against Lisp parentheses.

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/
May 13 '06 #249
Alexander Schmolck <a.********@gmail.com> writes:
I'd like to see a demonstration that using the same binding syntax
for special and lexical variables buys you something apart from bugs.


There are 3 fundamental operations related to plain mutable variables:

A1. Making a new mutable variable with an initial value.
A2. Getting the current value.
A3. Setting the new value.

and 4 operations related to dynamically scoped variables:

B1. Making a new dynamic variable with an initial value.
B2. Getting the current value.
B3. Setting the new value.
B4. Local rebinding with a new initial value.

If you don't ever use B4, dynamic variables behave exactly like plain
variables. For this reason I see no point in distinguishing A2 from B2,
or A3 from B3. Dynamic variables are a pure extension of plain variables
by providing an additional operation.
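
In terms of the Python emulation discussed upthread (the
getvar/setvar/dynamically_scoped helpers quoted in #230; a rough mapping, not
part of the original post), B1-B4 come out as:

setvar("*x*", 1)                     # B1: create a dynamic variable with an initial value
print(getvar("*x*"))                 # B2: get the current value
setvar("*x*", 2)                     # B3: set a new value
with dynamically_scoped("*x*", 3):   # B4: local rebinding with a new initial value
    print(getvar("*x*"))             # -> 3 inside the block; 2 again afterwards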

Distinguishing the syntax of A1 and B1 is natural: somehow it must be
indicated what kind of variable is created.

Mutability is orthogonal to dynamic scoping. It makes sense to have a
variable which is like a plain variable but without A3, and a variable
which is like a dynamic variable but without B3, although it doesn't
provide anything new, only allows one to express more constraints with a
potential for optimization. I won't consider them here.

Common Lisp does something weird: it uses the same syntax for A1 and B4,
where the meaning is distinguished by a special declaration. Here is
its syntax:

Directly named plain variables:
A1. (let ((name value)) body) and other forms
A2. name
A3. (setq name value), (setf name value)

First-class dynamic variables:
B1. (gensym)
B2. (symbol-value variable)
B3. (set variable value), (setf (symbol-value variable) value)
B4. (progv `(variable) `(value) body)

Directly named dynamic variables:
B1. (defvar name value), (defparameter name value)
B2. name
B3. (setq name value), (setf name value)
B4. (let ((name value)) body) and other forms

Dynamic variables in Lisp come in two flavors: first-class variables
and directly named variables. Directly named variables are always
global. You can convert a direct name to a first-class variable by
(quote name).

Plain variables have only the directly named flavor and they are
always local. You can emulate the first-class flavor by wrapping a
variable in a pair of closures or a closure with dual getting/setting
interface (needs a helper macro in order to be convenient). You can
emulate a global plain variable by wrapping a dynamic variable in a
symbol macro, ignoring its potential for local rebinding. You can
emulate creation of a new first-class variable by using a dynamic
variable and ignoring its potential for local rebinding, but this
can't be used to refer to an existing directly named plain variable.

In order to create a plain variable, you must be sure that its name is
not already used by a dynamic variable in the same scope.

So any essential functionality is possible to obtain, but the syntax
is very irregular.

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/
May 13 '06 #250
