I think everyone who has used Python will agree that its syntax is
the best thing going for it. It is very readable and easy
for everyone to learn. Unfortunately, Python does not have very good
macro capabilities. I'd like to know whether it would
be possible to add a powerful macro system to Python while
keeping its amazing syntax, and whether it would be possible to
add Pythonic syntax to Lisp or Scheme while keeping all
of the functionality and convenience. If the answer is yes,
would many Python programmers switch to Lisp or Scheme if
they were offered indentation-based syntax?
Jul 18 '05
Pascal Costanza wrote: Rayiner Hashem wrote:
From that point of view, "car" and "cdr" are as good as anything!
Well, if you're going to call the thing a 'cons' you might as well go all the way and use 'car' and 'cdr' as operators. A little flavor is nice, although I think that "4th" would be easier to read than "cadddr"...
...but cadddr might not be "fourth". It might be some leaf in a tree. Or something completely different. "fourth" doesn't always make sense.
(And just for the sake of completeness, Common Lisp does have FOURTH and also (NTH 3 ...).)
And it maxes out at ten: http://www.lispworks.com/reference/H...rstc.htm#tenth
Doesn't seem right for a language that goes to eleven.*
:)
kenny
* That's one more, isn't it?
-- http://tilton-technology.com
What?! You are a newbie and you haven't answered my: http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
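The CADDDR/FOURTH/(NTH 3 ...) equivalence discussed above can be sketched in Python, with cons cells modeled as nested 2-tuples. This is an illustrative sketch, not code from the thread:

```python
# Cons cells modeled as 2-tuples; car/cdr are the two projections.
def cons(a, d):
    return (a, d)

def car(c):
    return c[0]

def cdr(c):
    return c[1]

def cadddr(c):
    # (car (cdr (cdr (cdr c)))) -- the fourth element of a proper list
    return car(cdr(cdr(cdr(c))))

def nth(n, c):
    # (NTH n c): skip n cells, then take the car
    while n > 0:
        c = cdr(c)
        n -= 1
    return car(c)

lst = cons(1, cons(2, cons(3, cons(4, None))))
print(cadddr(lst))   # 4
print(nth(3, lst))   # 4
```

On a proper list the three spellings agree; on a tree built from the same cells, only car/cdr names stay honest, which is Costanza's point.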
"Greg Ewing (using news.cis.dfn.de)" <g2********@sneakemail.com> wrote in
message news:bm************@ID-169208.news.uni-berlin.de... Dave Benjamin wrote: In that case, why do we eschew code blocks, yet have no problem with the implicit invocation of an iterator, I don't think code blocks per se are regarded as a bad thing. The problem is that so far nobody has come up with an entirely satisfactory way of fitting them into the Python syntax as expressions.
I know. I played around with the idea a bit after it came up a couple
of weeks ago, and identified a number of issues.
1. One code block, or a code block for any parameter?
This isn't as simple as it seems. Ruby does one code block
that is an implicit parameter to any method call, but in
Smalltalk any method parameter can be a code block.
2. How do you invoke a code block? Does it look just like
a function? I presume so. If you do one code block per
method call, though, it gets a bit sticky. Again, Ruby
uses a special keyword ('yield') to invoke such a code
block, while if code blocks were simply anon functions,
then it's a non-issue.
3. Expression or statement syntax? Ruby avoids the
problem by making its single code block a special
construct that immediately follows the method
call parameter list, and it doesn't have the chasm
between expression and statement syntax that's built
into Python.
4. Do we want it to be smoothly substitutable for
lambda? I presume so, simply based on the principle
of minimum surprise. Then that forces multiple
code blocks in a method, which in turn reduces
a lot of other issues.
5. Is ugliness really an issue? One of the major
discussion points (read: flame war issues) any time
expanding expression syntax comes up is that
expressions that are too long become unreadable
very rapidly.
So what I come up with at this point is twofold:
1. We need to be able to insert a code block in
any parameter, and
2. Code blocks need to have statement syntax.
So let's say I want to use a code block instead of
a lambda or a named function in a map:
foobar = map(def (x, y, z):
        astatement
        anotherstatement
    list1, list2, list3)
This doesn't actually look anywhere near as bad
as I thought it might. The indentation, though, is a
bit peculiar. The first point is that the statements
in the code block are indented with respect to the
enclosing statement, NOT with respect to the first
word ('def') that starts the code block.
The second point is that the continuation of the
embedding expression has to dedent to close the
code block without closing the embedding statement,
and this has to be visually identifiable.
A third item is that I don't really care if we use 'def'
or not. Borrowing the vertical bar from Ruby, the map
example becomes:
foobar = map(| x, y, z |
        astatement
        anotherstatement
    list1, list2, list3)
I kind of like this better, except for one really unfortunate
issue: it's going to play havoc with code coloring algorithms
for a while.
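Since both forms above are hypothetical syntax, here is what the map examples would desugar to in actual Python: define the function first, then pass it by name. The list values and the statement bodies are placeholders invented for illustration:

```python
# Stand-in data for the thread's list1/list2/list3.
list1, list2, list3 = [1, 2], [10, 20], [100, 200]

# The inline code block becomes an ordinary named function.
def _block(x, y, z):
    total = x + y + z   # stand-in for "astatement"
    return total        # stand-in for "anotherstatement"

foobar = list(map(_block, list1, list2, list3))
print(foobar)  # [111, 222]
```

The cost is exactly what the proposal tries to avoid: the block's body is lifted away from its point of use and given a throwaway name.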
John Roth -- Greg Ewing, Computer Science Dept, University of Canterbury, Christchurch, New Zealand http://www.cosc.canterbury.ac.nz/~greg
On Mon, 13 Oct 2003 03:19:11 +0000, Raffael Cavallaro wrote: Most, but not all. From <http://okmij.org/ftp/papers/Macros-talk.pdf>
"One sometimes hears that higher-order functions (and related non-strictness) make macros unnecessary. For example, In Haskell, 'if' is a regular function.
It's not. It could easily be a regular function which would look like
'if condition branch1 branch2' and behave exactly the same (arguments
would often have to be parenthesized), but it's a keyword with the syntax
'if condition then branch1 else branch2' (condition and branches don't
have to be parenthesized because of 'then' and 'else' delimiters).
OTOH && and || are regular functions.
So you're willing here to trade code size for readability. The pro-macro camp (myself included) find that macros make source code easier to read and write than the equivalent HOF solution. We're willing to trade that ease of use for a little compiled code size, especially when this means you can write your code in what amounts to a domain specific language.
Note that Lisp and Scheme have a quite unpleasant anonymous function
syntax, which induces a stronger tension to macros than in e.g. Ruby or
Haskell.
In Haskell one often passes around monadic actions instead of anonymous
nullary functions, so it's not only the lambda syntax. Putting such action
in a function argument doesn't make it run. Laziness also reduces the
number of anonymous functions. Partial application doesn't require lambda,
binary operators can be partially applied on either argument. The 'do'
notation and list comprehensions are another case where other languages
would use anonymous functions. Yes, they are built in the language rather
than library features - but with all these things only few anonymous
functions remain and thus they are not so scary.
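The partial-application point has a direct Python counterpart: `functools.partial` fixes some arguments of a function, avoiding an explicit lambda much as Haskell's operator sections do. A small sketch with invented names:

```python
from functools import partial

def power(base, exponent):
    return base ** exponent

# Fix the second argument (like a right section in Haskell):
square = partial(power, exponent=2)
print(square(5))   # 25

# Fix the first argument (like a left section):
print(list(map(partial(power, 2), [1, 2, 3])))  # [2, 4, 8]
```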
I happen to be in the other camp. Macros indeed make it easier to embed a
domain-specific language, OTOH they require the rest of the syntax to be
more regular than pretty (so they can examine code) and they make the
language and its implementations complicated. Just a tradeoff...
--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/
On Mon, 13 Oct 2003 15:28:57 +1300, "Greg Ewing (using
news.cis.dfn.de)" <g2********@sneakemail.com> wrote: Andrew Dalke wrote: It has sometimes been said that Lisp should use first and rest instead of car and cdr
I used to think something like that would be more logical, too. Until one day it occurred to me that building lists is only one possible, albeit common, use for cons cells. A cons cell is actually a completely general-purpose two-element data structure, and as such its accessors should have names that don't come with any preconceived semantic connotations.
From that point of view, "car" and "cdr" are as good as anything!
"left" and "right" - referring to 'subtrees'?
--
Steve Horne
steve at ninereeds dot fsnet dot co dot uk
Stephen Horne wrote: On Mon, 13 Oct 2003 15:28:57 +1300, "Greg Ewing (using news.cis.dfn.de)" <g2********@sneakemail.com> wrote:
Andrew Dalke wrote:
It has sometimes been said that Lisp should use first and rest instead of car and cdr
I used to think something like that would be more logical, too. Until one day it occurred to me that building lists is only one possible, albeit common, use for cons cells. A cons cell is actually a completely general-purpose two-element data structure, and as such its accessors should have names that don't come with any preconceived semantic connotations.
From that point of view, "car" and "cdr" are as good as anything!
"left" and "right" - referring to 'subtrees'?
Sure, why not?
(defun left (tree)
  (car tree))

(defun right (tree)
  (cdr tree))
;-)
Note: Why break anyone else's code just because you prefer a different
vocabulary?
(Yes, this is different from the Python mindset. What I have learnt from
this thread is that the languages might seem similar on the technical
level, but the "social" goals of the languages are vastly different.)
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
[The original followup was to comp.lang.python. But since Alex mostly
discusses Lisp features, and we probably both don't subscribe to each
other's group, I follow up to both of them]
Alex Martelli writes: As in, no lisper will ever admit that a currently existing feature is considered a misfeature?-)
Paul Graham is possibly the best known such lisper. You may check the
documents about the Arc dialect at his site.
[Pascal Costanza] What makes you think that macros have farther reaching effects in this regard than functions? If I call a method and pass it a function object, I also don't know what the method will do with it.
Of course not -- but it *cannot possibly* do what Gat's example of macros, WITH-MAINTAINED-CONDITION, is _claimed_ to do... "reason" about the condition it's meant to maintain (in his example a constraint on a variable named temperature), about the code over which it is to be maintained (three functions, or macros, that start, run, and stop the reactor), presumably infer from that code a model of how a reactor _works_, and rewrite the control code accordingly to ensure the condition _is_ in fact being maintained. A callable passed as a parameter is _atomic_ -- you call it zero or more times with arguments, and/or you store it somewhere for later calling, *THAT'S IT*. This is _trivially simple_ to document and reason about, compared to something that has the potential to dissect and alter the code it's passed to generate completely new one, most particularly when there are also implicit models of the physical world being inferred and reasoned about. Given that I've seen nobody say, for days!,
The word "reason" looks a bit too AI-sh: macros do much more mundane
things. If I correctly understand Erann Gat's example in the nuclear
reactor context, things would work like this.
Some domain primitives--e.g. for controlling temperature,
starting/stopping the reactor, etc.--would be written by, or with the
help of, nuclear reactor experts. These primitives, typically
implemented as ordinary functions/classes, would embody a model of how
the reactor works. At this point, there's nothing different with what
would be done with other languages.
Now suppose you have a code module in which you have to "maintain" a
certain condition in the reactor. By "maintain" I mean arrange a
possibly long sequence of calls to domain primitives in such a way
that the condition is maintained (e.g. call the function that starts
the reactor with appropriate arguments, call functions for getting
temperature sensor readings with other arguments, check the
temperature readings and take appropriate decisions based on the
values, etc.). I guess this is also what would be done with other
languages--and Lisp.
A WITH-MAINTAINED-CONDITION macro would just provide syntactic sugar
for that possibly long statement/expression sequence for "maintaining"
the condition. That's it. It would typically accept parameters
describing the condition, and would generate the right sequence of
domain primitives with appropriate parameters.
WITH-MAINTAINED-CONDITION wouldn't have its own nuclear reactor model,
or other physical model. It would merely generate code templates,
mostly calls to ordinary functions, that the programmer would write
anyway (or put into a higher level function). Documenting such a macro
would be as easy as documenting the individual functions and/or an
equivalent function with internal calls to domain primitives.
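Paolo's description of the expansion can be sketched in Python: the "macro" would merely generate a loop arranging calls to domain primitives that the programmer would otherwise write by hand. Every name below (the sensor, limit check, and adjustment) is invented for illustration and is not from the thread:

```python
# A hypothetical sketch of the code WITH-MAINTAINED-CONDITION might
# expand into: run the body steps, checking and restoring the
# condition between steps. No reactor model lives in this code.
def maintain_condition(read_sensor, within_limits, adjust, body_steps):
    for step in body_steps:
        step()                      # one step of the enclosed body
        reading = read_sensor()
        if not within_limits(reading):
            adjust(reading)         # restore the maintained condition

# Toy usage with stand-in primitives:
state = {"temp": 95}
maintain_condition(
    read_sensor=lambda: state["temp"],
    within_limits=lambda t: t < 100,
    adjust=lambda t: state.__setitem__("temp", 90),
    body_steps=[lambda: state.__setitem__("temp", state["temp"] + 10)] * 3,
)
print(state["temp"])  # 90
```

Documenting the macro then amounts to documenting this generated shape plus the individual primitives, which is Paolo's claim.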
Erann: is my understanding correct?
Alex: how would this way of using macros be dangerous?
Paolo
--
Paolo Amoroso <am*****@mclink.it>
On Sat, 11 Oct 2003 10:37:33 -0500, David C. Ullrich
<ul*****@math.okstate.edu> wrote: It's certainly true that mathematicians do not _write_ proofs in formal languages. But all the proofs that I'm aware of _could_ be formalized quite easily. Are you aware of any counterexamples to this? Things that mathematicians accept as correct proofs which are not clearly formalizable in, say, ZFC?
I am not claiming that it is a counterexample, but I've always had
some difficulty imagining how the usual proof of Euler's
theorem about the number of corners, sides and faces of a polyhedron
(is that the correct terminology, BTW?) could be formalized. Also, however that
could be done, I have an uneasy feeling about how complex it
would be compared to the conceptual simplicity of the proof itself.
Just a thought,
Michele
-- Comments should say _why_ something is being done.
Oh? My comments always say what _really_ should have happened. :)
- Tore Aursand on comp.lang.perl.misc
Pascal Costanza <co******@web.de> writes: Stephen Horne wrote: On Mon, 13 Oct 2003 15:28:57 +1300, "Greg Ewing (using news.cis.dfn.de)" <g2********@sneakemail.com> wrote:
Andrew Dalke wrote: From that point of view, "car" and "cdr" are as good as anything! "left" and "right" - referring to 'subtrees'?
Sure, why not?
(defun left (tree) (car tree))
(defun right (tree) (cdr tree))
;-)
Wrong:
(defun left (tree) (car (car tree)))
(defun right (tree) (cdr (car tree)))
(defun label (tree) (cdr tree))
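Bourguignon's labeled-tree layout can be rendered in Python with the same tuple model of cons cells: a node is `((left, right), label)`, so the three accessors are just different projections of the same pairs. A sketch, not from the thread:

```python
# Node layout: ((left, right), label) -- i.e. (cons (cons left right) label)
def node(left_child, right_child, lbl):
    return ((left_child, right_child), lbl)

def left(tree):
    return tree[0][0]   # (car (car tree))

def right(tree):
    return tree[0][1]   # (cdr (car tree))

def label(tree):
    return tree[1]      # (cdr tree)

t = node("L", "R", "root")
print(left(t), right(t), label(t))  # L R root
```

This is exactly the "cadddr might be a leaf in a tree" point: the same cells carry whatever structure the accessors impose.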
Note: Why break anyone else's code just because you prefer a different vocabulary?
(Yes, this is different from the Python mindset. What I have learnt from this thread is that the languages might seem similar on the technical level, but the "social" goals of the languages are vastly different.)
--
__Pascal_Bourguignon__ http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
"John Roth" <ne********@jhrothjr.com> wrote in message
news:vo************@news.supernews.com...

foobar = map(| x, y, z |
        astatement
        anotherstatement
    list1, list2, list3)
I kind of like this better, except for one really unfortunate
Hi.
There was a short discussion along these lines back in June where I
mentioned the idea of using thunks, or something like that http://groups.google.ca/groups?hl=en...bellglobal.com
The idea goes something like this:
Let's take imap as an example
def imap(function, *iterables):
    iterables = map(iter, iterables)
    while True:
        args = [i.next() for i in iterables]
        if function is None:
            yield tuple(args)
        else:
            yield function(*args)
Here, you would use imap as follows:
mapped = imap(lambda x: x*x, sequence)
My idea would be to define imap as follows:
def imap(&function, *iterables):
    iterables = map(iter, iterables)
    while True:
        args = [i.next() for i in iterables]
        if function is None:
            yield tuple(args)
        else:
            yield function(*args)
and the use could be more like this:
mapped = imap(sequence) with x:
    print x # or whatever
    return x*x
with x: ... creates a thunk, or anonymous function, which will be fed as an
argument to the imap function in place of the &function parameter.
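The `with x:` form above is hypothetical, but the call it would desugar to already works in today's Python with an explicitly named function. A sketch (updated to modern spelling, e.g. `next(i)` rather than `i.next()`):

```python
# The imap from the post, with an explicit function argument and
# proper termination when the shortest iterable is exhausted.
def imap(function, *iterables):
    iterables = list(map(iter, iterables))
    while True:
        try:
            args = [next(i) for i in iterables]
        except StopIteration:
            return
        if function is None:
            yield tuple(args)
        else:
            yield function(*args)

def thunk(x):          # the body the proposed "with x:" would capture
    return x * x

mapped = list(imap(thunk, [1, 2, 3]))
print(mapped)  # [1, 4, 9]
```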
I also had some hazy notion of a two-way feed, where a thunk associated
with an iterator function could be fed arguments and executed on
each iteration. Something like this:
y = 0
itertools.count(10) do with x:
    y += x
    print y
Anyway. I was thinking that the foobar example would look cleaner if the
block did not have to be included directly as an argument to the function
call, but could instead be associated with the function call, tacked onto
the end, like so
foobar = map(list1, list2, list3) with x, y, z:
    astatement
    anotherstatement
or maybe
foobar = map(list1, list2, list3) { (x, y, z):
    astatement
    anotherstatement
}
or, if we want to be explicit:
foobar = map(&thunk, list1, list2, list3) with x, y, z:
    astatement
    anotherstatement
so that we know where the thunk is being fed to as an argument. But, this
would probably limit the number of blocks you could pass to a function.
Anyway, as I've said, these are just some fuzzy little notions I've had in
passing. I'm not advocating their inclusion in the language or anything like
that. I just thought I'd mention them in case they're of some use, even if
they're just something to point to and say, "we definitely don't want that".
Sean
Alex Martelli <al*****@yahoo.com> writes:
<typical unthinking stuff>
Your unrelenting tenacity to remain ignorant far exceeds any
inclination on my part to educate.
/Jon
"Sean Ross" <sr***@connectmail.carleton.ca> wrote in message
news:lq*******************@news20.bellglobal.com... "John Roth" <ne********@jhrothjr.com> wrote in message news:vo************@news.supernews.com...
Anyway. I was thinking that the foobar example would look cleaner if the block did not have to be included directly as an argument to the function call, but could instead be associated with the function call, tacked onto the end, like so
foobar = map(list1, list2, list3) with x, y, z:
    astatement
    anotherstatement
or maybe
foobar = map(list1, list2, list3) { (x, y, z):
    astatement
    anotherstatement
}
or, if we want to be explicit:
foobar = map(&thunk, list1, list2, list3) with x, y, z:
    astatement
    anotherstatement
so that we know where the thunk is being fed to as an argument. But, this would probably limit the number of blocks you could pass to a function.
That's the basic problem with the Rubyesque syntaxes: it limits you to
one block per function, and it makes it an implicit parameter. I don't
know that that's bad per se - people who like Ruby don't seem to feel
it's a huge limitation. However, it simply doesn't slide into Python well.
That's why I used map() as my example: it's a function that almost has to
take another function to do anything useful, and that function is a specific
parameter.
John Roth
Sean
Alex Martelli <al*****@yahoo.com> writes: At the moment the only thing I am willing to denounce as idiotic are your clueless rants.
Excellent! I interpret the old saying "you can judge a man by the quality of his enemies" differently than most do: I'm _overjoyed_ that my enemies are the scum of the earth, and you, sir [to use the word loosely], look as if you're fully qualified to join that self-selected company.
Whatever.
/Jon
On Mon, 13 Oct 2003 14:08:17 +0200, Pascal Costanza <co******@web.de>
wrote: Stephen Horne wrote: On Mon, 13 Oct 2003 15:28:57 +1300, "Greg Ewing (using news.cis.dfn.de)" <g2********@sneakemail.com> wrote:
From that point of view, "car" and "cdr" are as good as anything!
"left" and "right" - referring to 'subtrees'?
Note: Why break anyone else's code just because you prefer a different vocabulary?
I wasn't really suggesting a change to lisp - just asking if they
might be more appropriate names.
Actually, I have been having a nagging doubt about this.
I had a couple of phases when I learned some basic lisp, years ago. A
bit at college in the early nineties, and IIRC a bit when I was still
at school in the mid eighties. This was well before common lisp, I
believe.
Anyway, I'd swear 'car' and 'cdr' were considered backward
compatibility words, with the up-to-date words (of the time) being
'head' and 'tail'.
Maybe these are/were common site library conventions that never made
it into any standard?
This would make some sense. After all, 'head' and 'tail' actually
imply some things that are not always true. Those 'cons' thingies may
be trees rather than lists, and even if they are lists they could be
backwards (most of the items under the 'car' side with only one item
on the 'cdr' side) which is certainly not what I'd expect from 'head'
and 'tail'.
--
Steve Horne
steve at ninereeds dot fsnet dot co dot uk
Alexander Schmolck <a.********@gmx.net> writes: Peter Seibel <pe***@javamonkey.com> writes:
If for some reason you believe that macros will have a different effect--perhaps decreasing simplicity, clarity, and directness then I'm not surprised you disapprove of them. But I'm not sure why you'd think they have that effect. Well, maybe he's seen things like IF*, MVB, RECEIVE, AIF, (or as far as simplicity is concerned LOOP)...?
I'm not saying that macros always have ill-effects, but the actual examples above demonstrate that they *are* clearly used to by people to create idiosyncratic versions of standard functionality. Do you really think clarity, interoperability or expressiveness is served if person A writes MULTIPLE-VALUE-BIND, person B MVB and person C RECEIVE?
Yes. But that's no different with macros than if someone decided that
they like BEGIN and END better than FIRST and REST (or CAR/CDR) and so wrote:
(defun begin (list) (first list))
(defun end (list) (rest list))
As almost everyone who has stuck up for Lisp-style macros has
said--they are just another way of creating abstractions and thus, by
necessity, allow for the possibility of people creating bad
abstractions. But if I come up with examples of bad functional
abstractions or poorly designed classes, are you going to abandon
functions and classes? Probably not. It really is the same thing.

(deftest foo-tests ()
  (check
    (= (foo 1 2 3) 42)
    (= (foo 4 5 6) 99)))
Note that this is all about the problem domain, namely testing.
I think the example isn't a bad one in principle; in practice, however, I guess you could handle this superiorly in python.
Well, I admire your faith in Python. ;-)
I develop my testing code like this:
# like python's unittest.TestCase, only that it doesn't "disarm"
# exceptions
TestCase = awmstest.PermeableTestCase
#TestCase = unittest.TestCase
class BarTest(TestCase):
    ...
    def test_foos(self):
        assert foo(1,2,3) == 42
        assert foo(4,5,6) == 99
Now if you run this in emacs/ipython with '@pdb on' a failure will raise an Exception, the debugger is entered and emacs automatically will jump to the right source file and line of code (I am not mistaken in thinking that you can't achieve this using emacs/CL, right?)
No, you're mistaken. In my test framework, test results are signaled
with "conditions" which are the Common Lisp version of exceptions. Run
in interactive mode, I will be dropped into the debugger at the point
the test case fails where I can use all the facilities of the debugger
to figure out what went wrong including jumping to the code in
question, examining stack frames, and then if I think I've figured out
the problem, I can redefine a function or two and retry the test case
and proceed with the rest of my test run with the fixed code.
(Obviously, after such a run you'd want to re-run the earlier tests to
make sure you hadn't regressed. If I really wanted, I could keep track
of the tests that had been run prior to such a change and offer to
rerun them automatically.)
and I can interactively inspect the stackframes and objects that were involved in the failure.
Yup. Me too. Can you retry the test case and proceed with the rest of
your tests?
I find this *very* handy (much handier than having the wrong result printed out, because in many cases I'm dealing with objects such as large arrays which are not easily visualized).
Once the code and test code work I can easily switch to mere reporting behavior (as described by Andrew Dalke) by uncommenting unittest.TestCase back in.
Yup. That's really handy. I agree.
So, in all sincere curiosity, why did you assume that this couldn't be
done in Lisp? I really am interested, as I'm writing a book about
Common Lisp and part of the challenge is dealing with people's
existing ideas about the language. Feel free to email me directly if
you consider that too far offtopic for c.l.python.
-Peter
--
Peter Seibel pe***@javamonkey.com
Lisp is the red pill. -- John Fraser, comp.lang.lisp
On Monday 13 October 2003 10:22 am, Jon S. Anthony wrote: Alex Martelli <al*****@yahoo.com> writes:
<typical unthinking stuff>
Your unrelenting tenacity to remain ignorant far exceeds any inclination on my part to educate.
/Jon
+1 QOTW, under the comedy section.
-Dave
Alexander Schmolck <a.********@gmx.net> writes: "Andrew Dalke" <ad****@mindspring.com> writes: The smartest people I know aren't programmers. What does that say? I think this is vital point. CL's inaccessibility is painted as a feature of CL by many c.l.l denizens (keeps the unwashed masses out),
I have never seen this in c.l.l. - most seem to feel the inaccessibility
("ew the parens") is a necessary evil.
but IMO the CL community stunts and starves itself intellectually big time because CL is (I strongly suspect) an *extremely* unattractive language for smart people (unless they happen to be computer geeks).
Well, Hofstadter seems pretty smart to me, I don't think he's a computer geek, and
he's pretty fascinated by Lisp. See G.E.B. and Metamagical Themas.

pr***********@comcast.net writes: Suppose I cut just one arm of a conditional. When I paste, it is unclear whether I intend for the code after the paste to be part of that arm, part of the else, or simply part of the same block.
Sorry, I have difficulties understanding what exactly you mean again. Would
you mind cutting and pasting something like the THEN/ELSE in the examples
below (say, just marking the cut region with {}s and where you'd like to
paste with @)?
(if CONDITION
THEN
ELSE)
if CONDITION:
THEN
else:
ELSE

The fact that the information is replicated, and that there is nothing but programmer discipline keeping it consistent, is a source of errors.

Sure there is. Your editor and immediate visual feedback (no need to remember to reindent after making the semantic changes).
`immediate visual feedback' = programmer discipline Laxness at this point is a source of errors.
You got it backwards.
Not forgetting to press 'M-C-\' = programmer discipline.
Laxness at this point is a source of errors.
And indeed, people *do* have to be educated not to be lax when editing lisp -
newbies frequently get told in c.l.l or c.l.s that they should have reindented
their code because then they would have seen that they got their parens mixed
up.
OTOH, if you make an edit in python the result of this edit is immediately
obvious -- no mismatch between what you think it means and what your computer
thinks it means and thus no (extra) programmer discipline required.
Of course you need *some* basic level of discipline to not screw up your
source code when making edits -- but for all I can see at the moment (and know
from personal experience) it is *less* than what's required when you edit lisp
(I have provided a suggested way to edit this particular example in emacs for
python in my previous post -- you haven't provided an analogous editing
operation for lisp with an explanation why it would be less error-prone).

>> Yet the visual representation is not only identical between all of these, it
>> cannot even be displayed.
> I don't understand what you mean. Could you maybe give a concrete example of
> the information that can't be displayed?
Sure. Here are five parens ))))) How much whitespace is there here:
10 spaces (which BTW I counted in emacs in just the same way that I'd count a similar number of parens) -- but what has counting random trailing whitespace got to do with anything?
It is simply an illustration that there is no obvious glyph associated with whitespace, and you wanted a concrete example of something that can't be displayed.
No, I didn't want just *any* example of something that can't be displayed; I
wanted an example of something that can't be displayed and is *pertinent* to
our discussion (based on the Quinean assumption that you wouldn't have brought
up "things that can't be displayed" if they were completely besides the
point).
me: > People can't "read" '))))))))'.
[more dialog snipped]

I cannot read Abelson and Sussman's minds, but neither of them is ignorant of the vast variety of computer languages in the world. Nonetheless, given the opportunity to choose any of them for exposition, they have chosen lisp. Sussman went so far as to introduce lisp syntax into his book on classical mechanics.
Well, the version of SICM *I've* seen predominantly seems to use (infixy) math
notation, so maybe Sussman is a little less confident in the perspicuousness
of his brainchild than you (also cf. Iverson)?
Apparently he felt that not only *could* people read ')))))))', but that it was often *clearer* than the traditional notation.
Uhm, maybe we've got a different interpretation of 'read'?
If by 'read' you mean 'could hypothetically decipher', then yeah, sure with
enough effort and allowable margin of error, people can indeed 'read'
')))))))' and know that it amounts to 7 parens, and with even higher effort
and error margins they should even be able to figure out what each ')'
corresponds to.
I'm equally confident that you'd be in principle capable of 'deciphering' a
printout of my message in rot-13, modulo some errors.
I nonetheless suspect I might hear complaints from you along the lines of
"couldn't read that" (if you had some reason to expect that its contents would
be of value to you in the first place).
I'm also pretty sure that if I gave you a version with each line accompanied by its
rot-13 equivalent (and told you so) you'd just try to read the alphabetical
lines and ignore the rot-13 as noise (even if I told you that the rot-13 is
really the canonical version and the interspersed lines are just there for
visual convenience).
Now it's pretty much exactly the same for lisp code and trailing parens -- any
sane person in a normal setting will just try to ignore them as best as she
can and go by indentation instead -- despite the fact that doing so risks
misinterpreting the code, because the correspondence between parens and
indentation is unenforced and exists purely by convention (and lispers even
tend to have slightly different conventions, e.g. IF in CL/scheme) and C-M-\.
Reading *to me* means extracting the significant information from some written
representation and the ability to read is to do so with a reasonable amount of
effort. So by my definition, if a certain aspect of a written representation is
systematically and universally ignored by readers (at their peril) then surely
this aspect is unlikely to get points of maximum readability and one might
even conclude that people can't read so-and-so?
I don't personally think (properly formated) lisp reads that badly at all
(compared to say C++ or java) and you sure got the word-separators right. But
to claim that using lisp-style parens are in better conformance with the
dictum above than python-style indentation frankly strikes me as a bit silly
(whatever other strengths and weaknesses these respective syntaxes might
have).
Obviously the indentation. But I'd notice the mismatch.
(Hmm, you or emacs?)
If I gave you a piece of python code jotted down on paper that (as these hypothetical examples usually are) for some reason was of vital importance but I accidentally misplaced the indentation -- how would you know?
Excellent point. But -- wait! Were it Lisp, how would I know that you didn't
intend e.g.
(if (bar watz) foo)
instead of
(if (bar) watz foo)
?
Like in so many fields of human endeavor, XML provides THE solution:
<if><bar/>watz foo</if>
So maybe we should both switch to waterlang, eh?
Moral: I really think your (stereotypical) argument that the possibility of
inconsistency between "user interpretation" and "machine interpretation" of a
certain syntax is a feature (because it introduces redundancy that can be
used for error detection) requires a bit more work.
'as
p.s:
[oh, just to demonstrate that large groups of trailing parens actually do
occur and that, as has been mentioned, even perl has its uses]:
/usr/share/emacs/> perl -ne '$count++ if m/\){7}/; END{print "$count\n";}' **/*el
2008
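For comparison, the same count can be done in Python. The sketch below runs on an in-memory sample rather than the emacs lisp source tree, but the regular expression is the same:

```python
import re

# Count lines containing a run of at least seven closing parens,
# mirroring the Perl one-liner's m/\){7}/ test.
sample = [
    "(f (g (h (i (j (k (l x" + ")" * 7,   # seven trailing parens
    "no parens here",
    ")" * 9,                               # a longer run also matches
]
count = sum(1 for line in sample if re.search(r"\){7}", line))
print(count)  # 2
```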
Alex Martelli <al*****@yahoo.com> writes: Can you give an example for the presumably dangerous things macros supposedly can do that you have in mind?
I have given this repeatedly: they can (and in fact have) tempt programmers using a language which offers macros (various versions of lisp) to, e.g., "embed a domain specific language" right into the general purpose language. I.e., exactly the use which is claimed to be the ADVANTAGE of macros. I have seen computer scientists with modest grasp of integrated circuit design embed half-baked hardware-description languages (_at least_ one different incompatible such sublanguage per lab) right into the general-purpose language, and tout it at conferences as the holy grail
But this type of domain-specific language is not the advantage that
people mean. (After all, this type of task is rare.) They don't mean
a language for end-users, they mean a language for the programmers
themselves.
Any large software system builds up a domain-specific vocabulary;
e.g., a reactor-control system would have a function called
SHUTDOWN-REACTOR. In other languages, this vocabulary is usually
limited to constants, variables and functions, sometimes extending to
iterators (represented as a collection of the above); whereas, in
Lisp, it can include essentially any kind of language construct,
including ones that don't exist in the base language. E.g., a
reactor-control system can have a WITH-MAINTAINED-CONDITION context
(for the lack of a better word).
Whether or not the expansion of WITH-MAINTAINED-CONDITION is
particularly complex, the ability to indicate the scope of the
construct by enclosing a code block in a "body" is one of the most
useful aspects of this style of program construction. (This can be
done with HO functions in a fairly nice way, I know, assuming the
implementation of the construct does not need to examine or modify the
body code.)
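The closest Python idiom to such a body-wrapping construct is probably a context manager; the sketch below is purely illustrative (WITH-MAINTAINED-CONDITION and its semantics are hypothetical here), and, as noted above, the body stays opaque to it:

```python
from contextlib import contextmanager

# Illustrative sketch only: a guessed-at Python analogue of the
# hypothetical WITH-MAINTAINED-CONDITION construct, written as a
# context manager. It can wrap the body with checks, but unlike a
# macro it cannot examine or transform the body's code.
@contextmanager
def maintained_condition(check, establish):
    if not check():
        establish()          # establish the condition before the body runs
    try:
        yield
    finally:
        # verify the condition still holds after the body
        assert check(), "condition was not maintained"

state = {"valve_open": False}
with maintained_condition(lambda: state["valve_open"],
                          lambda: state.update(valve_open=True)):
    pass  # body runs with the condition guaranteed to hold
```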
Another very common language feature in these domain-specific
languages is a definer macro. When a number of similar entities need
to be described, a Lisp programmer would usually write a
DEFINE-<entity> macro which would generate all the boilerplate code
for initialization, registration, serialization, and whatever else
might be needed. This is also the way high-level interfaces to many
Lisp packages work: e.g., to use a Lisp GUI package you would
typically write something like
(define-window my-window (bordered-window)
  :title "My Application"
  :initial-width (/ (screen-width) 2)
  ...
  :panes (sub-window ...)
)
and that would be 80% of the functionality. (Then you'd have to write
methods for the last 20%, which would be the hard bit.) Matthew
Danish' DEFINSTRUCTION macro in another subthread is a good example as
well.
--
Pekka P. Pirinen
Controlling complexity is the essence of computer programming.
- Kernighan
"Sean Ross" <sr***@connectmail.carleton.ca> wrote in message
news:lq*******************@news20.bellglobal.com.. . My idea would be to define imap as follows:
def imap(&function, *iterables):
    iterables = map(iter, iterables)
    while True:
        args = [i.next() for i in iterables]
        if function is None:
            yield tuple(args)
        else:
            yield function(*args)
and the use could be more like this:
mapped = imap(sequence) with x:
    print x # or whatever
    return x*x
with x: ... creates a thunk, or anonymous function, which will be
fed as an argument to the imap function in place of the &function parameter.
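For comparison, the same imap can be written as a runnable sketch in current Python, with the thunk passed as an explicit function instead of the proposed &function/'with' syntax:

```python
# Runnable sketch of the proposed imap, minus the hypothetical
# &function / 'with'-block syntax: the thunk is an explicit function.
def imap(function, *iterables):
    iterators = [iter(it) for it in iterables]
    while True:
        try:
            args = [next(it) for it in iterators]
        except StopIteration:
            return                   # shortest iterable exhausted
        if function is None:
            yield tuple(args)
        else:
            yield function(*args)

mapped = list(imap(lambda x: x * x, [1, 2, 3]))
```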
I find it counter-intuitive both that 'with xx' tacked on to the end
of an assignment statement should act +- like an lambda and even more
that the result should be fed back and up as an invisible arg in the
expression. For someone not 'accustomed' to this by Ruby, it seems
rather weird. I don't see any personal advantage over short lambdas
('func' would have been better) and defs, using 'f' as a default
fname.
.... or, if we want to be explicit:
foobar = map(&thunk, list1, list2, list3) with x, y, z:
    astatement
    anotherstatement
One could intuitively expect the invisible arg calculated lexically
after the call to be the last rather than first.
so that we know where the thunk is being fed to as an argument. But,
this would probably limit the number of blocks you could pass to a
function. Anyway, as I've said, these are just some fuzzy little notions I've
had in passing. I'm not advocating their inclusion in the language or
anything like that. I just thought I'd mention them in case they're of some use,
even if they're just something to point to and say, "we definitely don't
want that".
That is currently my opinion ;-)
Terry J. Reedy gr***@cs.uwa.edu.au wrote in message news:<bl**********@enyo.uwa.edu.au>... In comp.lang.functional Erann Gat <my************************@jpl.nasa.gov> wrote: :> I can't see why a LISP programmer would even want to write a macro. : That's because you are approaching this with a fundamentally flawed : assumption. Macros are mainly not used to make the syntax prettier : (though they can be used for that). They are mainly used to add features : to the language that cannot be added as functions.
Really? Turing-completeness and all that... I presume you mean "cannot
``Turing completeness and all that'' is an argument invoked by the
clueless.
Turing completeness doesn't say anything about how long something
takes to compute, or how easy it is to express some interesting
computation.
In the worst case, for you to get the behavior in some program I wrote
in language A in your less powerful (but Turing complete!) language B,
you might have to write an A interpreter or compiler, and then just
run my original program written in A! The problem is that you did not
actually find a way to *express* the A program in language B, only a
way to make its behavior unfold.
Moreover, you may have other problems, like difficulties in
communicating between the embedded A program and the surrounding B
code! These difficulties could be alleviated if you could write an A
*compiler* in B, a compiler which is integrated into your B compiler,
so that a mixture of A and B code is processed as one unit!
This is precisely what Lisp macros allow us to do: write compilers for
embedded languages, which operate together, all in the same pass.
So for instance an utterance in the embedded language can refer
directly to a local variable defined in a lexically surrounding
construct of the host language.
: DO-FILE-LINES and WITH-COLLECTOR are macros, and they can't be implemented : any other way because they take variable names and code as arguments.
What does it mean to take a variable-name as an argument? How is that different to taking a pointer? What does it mean to take "code" as an argument? Is that different to taking a function as an argument?
Sheesh. A function is an object containing a program, and an
environment that establishes the meaning of entities like variables
for that program.
Code, in this context, means source code: a raw data structure
representing syntax.
Code can be analyzed, subject to transformations, and interpreted to
have arbitrary semantics.
A function can merely be invoked with arguments.
A compiler or interpreter for a functional language like Haskell still
has to deal with the representation of the program at some point: it
has to parse the source characters, recognize the syntax and translate
it into some meaning.
Lisp macros are part of the toolset that allow this translation itself
to be programmable. Thus you are not stuck with a fixed phrase
structure grammar with fixed semantics.
Nearly every programming language has macros, it's just that most of
them have a hard-coded set of ``factory defined'' macros in the form
of a fixed set of production rules with rigidly defined semantics.
What is a macro? It's a recognizer for syntax that implements some
kind of syntax-directed translation. I would argue that while (expr)
statement in the C language is a macro: it's a pattern that matches
a parse subtree that is tagged with the ``while'' token and translates
it into looping code, whose semantics call for the repeated testing of
the guarding expression, followed by execution of the statement if
that expression is true.
Matthias Blume <fi**@my.address.elsewhere> wrote in message news:<m1************@tti5.uchicago.edu>... Well, no, not really. You can define new syntactic forms in terms of old ones, and the evaluation rules end up being determined by those of the old ones. Again, with HOFs you can always get the same effect -- at the expense of an extra lambda here and there in your source code.
A macro can control optimization: whether or not something is achieved
by that extra lambda, or by some open coding.
In the worst cases, the HOF solution would require the user to
completely obfuscate the code with explicitly-coded lambdas. The code
would be unmaintainable.
Secondly, it would be unoptimizeable. The result of evaluating a
lambda expression is an opaque function object. It can only be called.
Consider the task of embedding one programming language into another
in a seamless way. I want to be able to write utterances in one
programming language in the middle of another. At the same time, I
want seamless integration between them right down to the lexical
level. For example, the embedded language should be able to refer to
an outer variable defined in the host language.
HOF's are okay if the embedded language is just some simple construct
that controls the evaluation of coarse-grained chunks of the host
language. It's not too much of an inconvenience to turn a few
coarse-grained chunks into lambdas.
But what if the parameters to the macro are not at all chunks of the
source language but completely new syntax? What if that syntax
contains only microscopic utterances of the host language, such as the
mentions of the names of variables bound in surrounding host language?
You can't put a lambda around the big construct, because it's not even
written in the host language! So what do you do? You can use an escape
hatch to code all the individual little references as host-language
lambdas, and pepper these into the embedded language utterance. For
variables that are both read and written, you need a reader and writer
lambda. Now you have a tossed salad. And what's worse, you have poor
optimization. The compiler for the embedded language has to work with
these lambdas which it cannot crack open. It can't just spit out code
that is integrated into the host language compile, where references
can be resolved directly. This can only be accomplished with functions if you're willing to write a set of functions that defer evaluation, by, say parsing input, massaging it appropriately, and then passing it to the compiler. At that point, however, you've just written your own macro system, and invoked Greenspun's 10th Law.
This is false. Writing your own macro expander is not necessary for getting the effect. The only thing that macros give you in this regard is the ability to hide the lambda-suspensions.
That's like saying that a higher level language gives you the ability
to hide machine instructions. But there is no single unique
instruction sequence that corresponds to the higher level utterance.
Macros not only hide lambdas, but they hide the implementation choice
whether or not lambdas are used, and how! It may be possible to
compile the program in different ways, with different choices.
Moreover, there might be so many lambda closures involved that writing
them by hand may destroy the clarity of expression and maintainability
of the code.
To some people this is more of a disadvantage than an advantage because, when not done in a very carefully controlled manner, it ends up obscuring the logic of the code. (Yes, yes, yes, now someone will jump in and tell me that it can make code less obscure by "canning" certain common idioms. True, but only when not overdone.)
Functions can obscure in the same ways as macros. You have no choice.
Large programs are written by delegating details elsewhere so that a
concise expression can be obtained.
You can no more readily understand some terse code that consists
mostly of calls to unfamiliar functions than you can understand some
terse code written in an embedded language built on unfamiliar macros.
All languages ultimately depend on macros, even those functional
languages that don't have user-defined macros. They still have a whole
bunch of syntax. You can't define a higher order function if you don't
have a compiler which recognizes the higher-order-function-defining
syntax, and that syntax is nothing more than a macro that is built
into the compiler which captures the idioms of programming with higher
order functions!
All higher level languages are based on syntax which captures idioms,
and this is nothing more than macro processing. gr***@cs.uwa.edu.au wrote in message news:<bl**********@enyo.uwa.edu.au>... In comp.lang.functional Erann Gat <my************************@jpl.nasa.gov> wrote: : For example, imagine you want to be able to traverse a binary tree and do : an operation on all of its leaves. In Lisp you can write a macro that : lets you write: : (doleaves (leaf tree) ...) : You can't do that in Python (or any other langauge).
My Lisp isn't good enough to answer this question from your code, but isn't that equivalent to the Haskell snippet: (I'm sure someone here is handy in both languages)
doleaves f (Leaf x) = Leaf (f x)
doleaves f (Branch l r) = Branch (doleaves f l) (doleaves f r)
You appear to be using macros here to define some entities. What if we
took away the syntax which lets you write the above combination of
symbols to achieve the associated meaning? By what means would you
give meaning to the = symbol or the syntax (Leaf x)?
Or give me a plausible argument to support the assertion that the =
operator is not a macro. If it's not a macro, then what is it, and how
can I make my own thing that resembles it? ka*@ashi.footprints.net (Kaz Kylheku) writes: Lisp macros are part of the toolset that allow this translation itself to be programmable. Thus you are not stuck with a fixed phrase structure grammar with fixed semantics.
Nearly every programming language has macros, it's just that most of them have a hard-coded set of ``factory defined'' macros in the form of a fixed set of production rules with rigidly defined semantics.
Exactly so. But the average human mind clings viciously to rigid schema
of all kinds in reflexive defence against the terrible uncertainties of
freedom.
To get someone with this neurological ailment to give up their preferred
codification for another is very difficult. To get them to see beyond
the limits of particular hardcoded schema altogether is practically
impossible.
This observation applies uniformly to programming and religion, but is
not limited to them.
--
On Mon, 13 Oct 2003 13:51:22 -0700, Kaz Kylheku wrote: Secondly, it would be unoptimizeable. The result of evaluating a lambda expression is an opaque function object. It can only be called.
This is not true. When the compiler sees the application of a lambda,
it can inline it and perform further optimizations, fusing together
its arguments, its body and its context.
--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/
In article <pa****************************@knm.org.pl>,
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> wrote: Note that Lisp and Scheme have a quite unpleasant anonymous function syntax, which induces a stronger tension to macros than in e.g. Ruby or Haskell.
Actually, I think that any anonymous function syntax is undesirable. I
think code is inherently more readable when functions are named,
preferably in a descriptive fashion.
I think it is the mark of functional cleverness that people's code is
filled with anonymous functions. These show you how the code is doing
what it does, not what it is doing.
Macros, and named functions, focus on what, not how. HOFs and anonymous
functions focus on how, not what. How is an implementation detail. What
is a public interface, and a building block of domain specific languages.
On Mon, 13 Oct 2003 16:19:58 +0200, Michele Dondi
<bi******@tiscalinet.it> wrote: On Sat, 11 Oct 2003 10:37:33 -0500, David C. Ullrich <ul*****@math.okstate.edu> wrote:
It's certainly true that mathematicians do not _write_ proofs in formal languages. But all the proofs that I'm aware of _could_ be formalized quite easily. Are you aware of any counterexamples to this? Things that mathematicians accept as correct proofs which are not clearly formalizable in, say, ZFC? I am not claiming that it is a counterexample, but I've always met with some difficulties imagining how the usual proof of Euler's theorem about the number of corners, sides and faces of a polihedron (correct terminology, BTW?) could be formalized. Also, however that could be done, I feel an unsatisfactory feeling about how complex it would be if compared to the conceptual simplicity of the proof itself.
Well it certainly _can_ be formalized. (Have you any experience
with _axiomatic_ Euclidean geometry? Not as in Euclid - no pictures,
nothing that depends on knowing what lines and points really are,
everything follows strictly logically from explicitly stated axioms.
Well, I have no experience with such a thing either, but I know
it exists.)
Whether the formal version would be totally incomprehensible
depends to a large extent on how sophisticated the formal
system being used is - surely if one wrote out a statement
of Euler's theorem in the language of set theory, with no
predicates except "is an element of" it would be totally
incomprehensible. Otoh in a better formal system, for
example allowing definitions, it could be just as comprehensible
as an English version. (Not that I see that this question has
any relevance to the existence of alleged philosophical
inconsistencies that haven't been specified yet...)
Just a thought, Michele
************************
David C. Ullrich
dewatf wrote: 'virus' (slime, poison, venom) is a 2nd declension neuter noun and technically does have a plural 'viri'.
... and also in latin 'viri' is the nominative for 'men' which you do want to use a lot.
So did Roman feminists use the slogan "All men are slime"?
--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand http://www.cosc.canterbury.ac.nz/~greg
Pascal Costanza wrote: Many programming languages require you to build a model upfront, on paper or at least in your head, and then write it down as source code. This is especially one of the downsides of OOP - you need to build a class hierarchy very early on without actually knowing if it is going to work in the long run.
I don't think that's a downside of OOP itself, but of statically
typed OO languages that make it awkward and tedious to rearrange
your class hierarchy once you've started on it.
Python's dynamic typing and generally low-syntactic-overhead
OO makes it quite amenable to exploratory OO programming, in my
experience.
--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand http://www.cosc.canterbury.ac.nz/~greg
Bengt Richter wrote: The thing is, the current tokenizer doesn't know def from foo, just that they're names. So either indenting has to be generated all the time, and the job of ignoring it passed on upwards, or the single keyword 'def' could be recognized by the parser in a bracketed context, and it would generate a synthetic indent token in front of the def name token as wide as if all spaces preceded the def, and then continue doing indent/dedent generation like for a normal def, until the def suite closed, at which point it would resume ordinary expression processing (if it was within brackets -- otherwise is would just be a discarded expression evaluated in statement context, and in/de/dent processing would be on anyway. (This is speculative until really getting into it ;-)
I think there is a way of handling indentation that would make
changes like this easier to implement, but it would require a
complete re-design of the tokenizing and parsing system.
The basic idea would be to get rid of the indent/dedent tokens
altogether, and have the tokenizer keep track of the indent
level of the line containing the current token, as a separate
state variable.
Then parsing a suite would go something like
    starting_level = current_indent_level
    expect(':')
    expect(NEWLINE)
    while current_indent_level > starting_level:
        parse_statement()
The tokenizer would keep track of the current_indent_level
all the time, even inside brackets, but the parser would
choose whether to take notice of it or not, depending on
what it was doing. So switching back into indent-based
parsing in the middle of a bracketed expression wouldn't
be a problem.
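A minimal sketch of that scheme (class and function names invented for illustration): the tokenizer merely records the indent level of the current line, and the suite parser compares levels itself:

```python
# Illustrative sketch only: the tokenizer tracks the indent level as
# state instead of emitting INDENT/DEDENT tokens; the parser decides
# when that level matters.
class Tokenizer:
    def __init__(self, source):
        self.lines = source.splitlines()
        self.index = -1
        self.current_indent_level = 0

    def next_line(self):
        self.index += 1
        line = self.lines[self.index]
        self.current_indent_level = len(line) - len(line.lstrip(" "))
        return line.strip()

def parse_suite(tok):
    # collect statements indented deeper than the suite header
    starting_level = tok.current_indent_level
    body = []
    while tok.index + 1 < len(tok.lines):
        stmt = tok.next_line()
        if tok.current_indent_level <= starting_level:
            tok.index -= 1       # not part of this suite; push it back
            break
        body.append(stmt)
    return body

tok = Tokenizer("if x:\n    a = 1\n    b = 2\nc = 3")
tok.next_line()                  # consume the 'if x:' header line
suite = parse_suite(tok)
```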
--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand http://www.cosc.canterbury.ac.nz/~greg
On Wed, Oct 08, 2003 at 03:59:19PM -0400, David Mertz wrote: |Come on. Haskell has a nice type system. Python is an application of |Greenspun's Tenth Rule of programming. Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell does them much better, for example... and so does Python.
Wow. The language with the limited lambda form, whose Creator regrets
including in the language, is ... better ... at HOFs?
You must be smoking something really good.
--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Matthew Danish <md*****@andrew.cmu.edu> wrote previously:
|On Wed, Oct 08, 2003 at 03:59:19PM -0400, David Mertz wrote:
|> |Come on. Haskell has a nice type system. Python is an application of
|> |Greenspun's Tenth Rule of programming.
|> Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell
|> does them much better, for example... and so does Python.
|Wow. The language with the limited lambda form, whose Creator regrets
|including in the language, is ... better ... at HOFs?
|You must be smoking something really good.
I guess a much better saying than Greenspun's would be something like:
"Those who know only Lisp are doomed to repeat it (whenever they look at
another language)." It does a better job of getting at the actual
dynamic.
People who know something about languages that are NOT Lisp know that
there is EXACTLY ZERO relation between lambda forms and HOFs. Well, OK,
I guess you couldn't have playful Y combinators if every function has a
name... but there's little loss there.
In point of fact, Python could completely eliminate the operator
'lambda', and remain exactly as useful for HOFs. Some Pythonistas seem
to want this, and it might well happen in Python3000. It makes no
difference... the alpha and omega of HOFs is that functions are first
class objects that can be passed and returned. Whether they happen to
have names is utterly irrelevant, anonymity is nothing special.
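For instance, these two calls do exactly the same thing, because map only needs a function object and never cares whether it has a name:

```python
# The HOF (map) is indifferent to whether its argument is anonymous.
def square(x):
    return x * x

with_lambda = list(map(lambda x: x * x, [1, 2, 3]))
with_name = list(map(square, [1, 2, 3]))
```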
Haskell could probably get rid of anonymous functions even more easily.
It won't, there's no sentiment for that among Haskell programmers. But
there is no conceptual problem in Haskell with replacing every lambda
(more nicely spelled "\" in that language) with a 'let' or 'where'.
Yours, David...
--
mertz@ _/_/_/_/ THIS MESSAGE WAS BROUGHT TO YOU BY: \_\_\_\_ n o
gnosis _/_/ Postmodern Enterprises \_\_
..cx _/_/ \_\_ d o
_/_/_/ IN A WORLD W/O WALLS, THERE WOULD BE NO GATES \_\_\_ z e
Alex Martelli <al*****@yahoo.com> writes:
[...] Without macros, when you see you want to design a special-purpose language you are motivated to put it OUTSIDE your primary language, and design it WITH its intended users, FOR its intended purposes, which may well have nothing at all to do with programming. You parse it with a parser (trivial these days, trivial a quarter of a century ago), and off you go.
....and off I go. A parser for our new DSL syntax is one thing, but
now I'll need a compiler as well, a symbolic debugger understanding
the new syntax would be nice, and perhaps an interactive environment
(interpreter) would be helpful. If we get time (ha!), let's create
tools to edit the new syntax.
Seems like a lot of work.
I think I'll stay within Lisp and build the language up to the problem
just as Paul Graham describes in On Lisp[1]. That way I get all of
the above for free and in much less time.
Footnotes:
[1] http://www.paulgraham.com/onlisp.html
--
Now, my ENTIRE LIFE is flashing before my EYES as I park my DODGE
DART in your EXXON service area for a COMPLETE LUBRICATION!!
Alexander Schmolck <a.********@gmx.net> writes: pr***********@comcast.net writes: Suppose I cut just one arm of a conditional. When I paste, it is unclear whether I intend for the code after the paste to be part of that arm, part of the else, or simply part of the same block. Sorry, I have difficulties understanding what exactly you mean again.
Let me back up here. I originally said: Every line in a block doesn't encode just its depth relative to the immediately surrounding context, but its absolute depth relative to the global context.
To which you replied:
I really don't understand why this is a problem, since its trivial to transform python's 'globally context' dependent indentation block structure markup into into C/Pascal-style delimiter pair block structure markup.
Significantly, AFAICT you can easily do this unambiguously and *locally*, for example your editor can trivially perform this operation on cutting a piece of python code and its inverse on pasting (so that you only cut-and-paste the 'local' indentation).
Consider this python code (lines numbered for exposition):
1 def dump(st):
2     mode, ino, dev, nlink, uid, gid, size, atime, mtime, ctime = st
3     print "- size:", size, "bytes"
4     print "- owner:", uid, gid
5     print "- created:", time.ctime(ctime)
6     print "- last accessed:", time.ctime(atime)
7     print "- last modified:", time.ctime(mtime)
8     print "- mode:", oct(mode)
9     print "- inode/dev:", ino, dev
10
11 def index(directory):
12     # like os.listdir, but traverses directory trees
13     stack = [directory]
14     files = []
15     while stack:
16         directory = stack.pop()
17         for file in os.listdir(directory):
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
22     return files
This code is to provide verisimilitude, not to actually run. I wish
to show that local information is insufficient for cutting and pasting
under some circumstances.
If we were to cut lines 18 and 19 and to insert them between lines
4 and 5, we'd have this result:
3     print "- size:", size, "bytes"
4     print "- owner:", uid, gid
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
5     print "- created:", time.ctime(ctime)
6     print "- last accessed:", time.ctime(atime)
Where we can clearly see that the pasted code is at the wrong
indentation level. It is also clear that in this case, the
editor could easily have determined the correct indentation.
But let us consider cutting lines 6 and 7 and putting them
between lines 21 and 22. We get this:
15     while stack:
16         directory = stack.pop()
17         for file in os.listdir(directory):
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
6     print "- last accessed:", time.ctime(atime)
7     print "- last modified:", time.ctime(mtime)
22     return files
But it is unclear whether the intent was to be outside the while,
or outside the for, or part of the if. All of these are valid:
15     while stack:
16         directory = stack.pop()
17         for file in os.listdir(directory):
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
6     print "- last accessed:", time.ctime(atime)
7     print "- last modified:", time.ctime(mtime)
22     return files
15     while stack:
16         directory = stack.pop()
17         for file in os.listdir(directory):
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
6         print "- last accessed:", time.ctime(atime)
7         print "- last modified:", time.ctime(mtime)
22     return files
15     while stack:
16         directory = stack.pop()
17         for file in os.listdir(directory):
18             fullname = os.path.join(directory, file)
19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
6                 print "- last accessed:", time.ctime(atime)
7                 print "- last modified:", time.ctime(mtime)
22     return files
Now consider this `pseudo-equivalent' parenthesized code:
1 (def dump (st)
2   (destructuring-bind (mode ino dev nlink uid gid size atime mtime ctime) st
3     (print "- size:" size "bytes")
4     (print "- owner:" uid gid)
5     (print "- created:" (time.ctime ctime))
6     (print "- last accessed:" (time.ctime atime))
7     (print "- last modified:" (time.ctime mtime))
8     (print "- mode:" (oct mode))
9     (print "- inode/dev:" ino dev)))
10
11 (def index (directory)
12   ;; like os.listdir, but traverses directory trees
13   (let ((stack directory)
14         (files '()))
15     (while stack
16       (setq directory (stack-pop))
17       (dolist (file (os-listdir directory))
18         (let ((fullname (os-path-join directory file)))
19           (push fullname files)
20           (if (and (os-path-isdir fullname) (not (os-path-islink fullname)))
21               (push fullname stack)))))
22     files))
If we cut lines 6 and 7 with the intent of inserting them
in the vicinity of line 21, we have several options (as in python),
but rather than insert them incorrectly and then fix them, we have
the option of inserting them into the correct place to begin with.
In the line `(push fullname stack)))))', there are several close
parens that indicate the closing of the WHILE, DOLIST, LET, and IF,
assuming we wanted to include the lines in the DOLIST, but not
in the LET or IF, we'd insert here:
                                         V
21               (push fullname stack))) ))
The resulting code is ugly:
11 (def index (directory)
12   ;; like os.listdir, but traverses directory trees
13   (let ((stack directory)
14         (files '()))
15     (while stack
16       (setq directory (stack-pop))
17       (dolist (file (os-listdir directory))
18         (let ((fullname (os-path-join directory file)))
19           (push fullname files)
20           (if (and (os-path-isdir fullname) (not (os-path-islink fullname)))
21               (push fullname stack)))
6         (print "- last accessed:" (time.ctime atime))
7         (print "- last modified:" (time.ctime mtime))))
22     files))
But it is correct.
(Incidentally inserting at that point is easy: you move the cursor over
the parens until the matching one at the beginning of the DOLIST begins
to blink. At this point, you know that you are at the same syntactic level
as the dolist.) >> The fact that the information is replicated, and that there is nothing >> but programmer discipline keeping it consistent is a source of errors.
Let me expand on this point. The lines I cut are very similar to each
other, and very different from the lines where I placed them. But
suppose they were not, and I had ended up with this:
19             files.append(fullname)
20             if os.path.isdir(fullname) and not os.path.islink(fullname):
21                 stack.append(fullname)
6     print "- last accessed:", time.ctime(atime)
7     print "- last modified:", time.ctime(mtime)
22     print "- copacetic"
23     return files
Now you can see that lines 6 and 7 ought to be re-indented, but line 22 should
not. It would be rather easy to either accidentally group line seven with
line 22, or conversely line 22 with line 7.
> Sure there is. Your editor and immediate visual feedback (no need to remember > to reindent after making the semantic changes).
`immediate visual feedback' = programmer discipline. Laxness at this point is a source of errors.
You got it backwards. Not forgetting to press 'M-C-\' = programmer discipline. Laxness at this point is a source of errors.
Forgetting to indent properly in a lisp program does not yield
erroneous code.
And indeed, people *do* have to be educated not to be lax when editing lisp - newbies frequently get told in c.l.l or c.l.s that they should have reindented their code because then they would have seen that they got their parens mixed up.
This is correct. But what is recommended here is to use a simple tool to
enhance readability and do a trivial syntactic check.
OTOH, if you make an edit in python the result of this edit is immediately obvious -- no mismatch between what you think it means and what your computer thinks it means and thus no (extra) programmer discipline required.
Would that this were the case. Lisp code that is poorly indented will still
run. Python code that is poorly indented will not. I have seen people write
lisp code like this:
(defun factorial (x)
(if (> x 0)
x
(*
(factorial (- x 1))
x
)))
I still tell them to re-indent it. A beginner writing python in this manner
would be unable to make the code run.
Of course you need *some* basic level of discipline to not screw up your source code when making edits -- but for all I can see at the moment (and know from personal experience) it is *less* than what's required when you edit lisp (I have provided a suggested way to edit this particular example in emacs for python in my previous post -- you haven't provided an analogous editing operation for lisp with an explanation why it would be less error-prone)).
Ok. For any sort of semantic error (one in which a statement is
associated with an incorrect group) one could make in python, there is
an analogous one in lisp, and vice versa. This is simply because both
have unambiguous parse trees.
However, there is a class of *syntactic* error that is possible in
python, but is not possible in lisp (or C or any language with
balanced delimiters). Moreover, this class of error is common,
frequently encountered during editing, and it cannot be detected
mechanically.
Consider this thought experiment: pick a character (like parenthesis
for example) go to a random line in a lisp file and insert four of them.
Is the result syntactically correct? No. Could a naive user find them?
Trivially. Could I program Emacs to find them? Sure.
Now go to a random line in a python file and insert four spaces. Is
the result syntactically correct? Likely. Could a naive user find
them? Unlikely. Could you write a program to find them? No.
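To make the thought experiment concrete, here is a small sketch (in modern Python; the `sign` function is a hypothetical example, not from the thread) showing that an inserted run of four spaces can leave a file fully parseable while silently changing its behavior:

```python
import ast

original = (
    "def sign(x):\n"
    "    if x > 0:\n"
    "        return 'positive'\n"
    "    return 'nonpositive'\n"
)
# the same text with four spaces inserted before the final return
edited = original.replace("\n    return 'nonpositive'",
                          "\n        return 'nonpositive'")

ast.parse(original)  # parses without error
ast.parse(edited)    # also parses without error -- nothing to catch

ns1, ns2 = {}, {}
exec(original, ns1)
exec(edited, ns2)
print(ns1["sign"](-1))  # 'nonpositive'
print(ns2["sign"](-1))  # None: the moved return is now unreachable
```

No parser, human or mechanical, can flag the edited version as wrong; only the changed runtime behavior reveals it.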
Delete four adjacent parens in a Lisp file. Will it still compile? No.
Will it even be parsable? No.
Delete four adjacent spaces in a Python file. Will it still compile?
Likely.
>> >> Yet the visual representation is not only identical between all of these, it
>> >> cannot even be displayed.
>> > I don't understand what you mean. Could you maybe give a concrete example of
>> > the information that can't be displayed?
>> Sure. Here are five parens ))))) How much whitespace is there here:
> > 10 spaces (which BTW I counted in emacs in just the same way that I'd count a
> > similar number of parens) -- but what has counting random trailing whitespace
> > got to do with anything?
It is simply an illustration that there is no obvious glyph associated with whitespace, and you wanted a concrete example of something that can't be displayed.
No, I didn't want just *any* example of something that can't be displayed; I wanted an example of something that can't be displayed and is *pertinent* to our discussion (based on the Quinean assumption that you wouldn't have brought up "things that can't be displayed" if they were completely besides the point).
I thought that whitespace was significant to Python.
My computer does not display whitespace. I understand that most
computers do not. There are few fonts that have glyphs at the space
character.
Since having the correct amount of whitespace is *vital* to the
correct operation of a Python program, it seems that the task of
maintaining it is made that much more difficult because it is only
conspicuous by its absence.
me:
>> > People can't "read" '))))))))'.
[more dialog snipped]
I cannot read Abelson and Sussman's minds, but neither of them is ignorant of the vast variety of computer languages in the world. Nonetheless, given the opportunity to choose any of them for exposition, they have chosen lisp. Sussman went so far as to introduce lisp syntax into his book on classical mechanics.
Well the version of SICM *I've* seen predominantly seems to use (infixy) math notation, so maybe Sussman is a little less confident in the perspicuousness of his brainchild than you (also cf. Iverson)?
Perhaps you are looking at the wrong book. The full title is
`Structure and Interpretation of Classical Mechanics' by Gerald Jay
Sussman and Jack Wisdom with Meinhard E. Mayer, and it is published by
MIT Press. Every computational example in the book, and there are
many, is written in Scheme.
Sussman is careful to separate the equations of classical mechanics
from the *implementation* of those equations in the computer, the
former are written using a functional mathematical notation similar to
that used by Spivak, the latter in Scheme. The two appendixes give
the details. Sussman, however, notes ``For very complicated
expressions the prefix notation of Scheme is often better''
I don't personally think (properly formatted) lisp reads that badly at all (compared to say C++ or java) and you sure got the word-separators right. But to claim that using lisp-style parens are in better conformance with the dictum above than python-style indentation frankly strikes me as a bit silly (whatever other strengths and weaknesses these respective syntaxes might have).
And where did I claim that? You originally stated:
Still, I'm sure you're familiar with the following quote (with which I most heartily agree):
"[P]rograms must be written for people to read, and only incidentally for machines to execute."
People can't "read" '))))))))'.
Quoting Sussman and Abelson as a prelude to stating that parenthesis are
unreadable is hardly going to be convincing to anyone.
Obviously the indentation. But I'd notice the mismatch.
(Hmm, you or emacs?)
Does it matter? If I gave you a piece of python code jotted down on paper that (as these hypothetical examples usually are) for some reason was of vital importance but I accidentally misplaced the indentation -- how would you know?
Excellent point. But -- wait! Were it Lisp, how would I know that you didn't intend e.g.
(if (bar watz) foo)
instead of
(if (bar) watz foo)
You are presupposing *two* errors of two different kinds here: the
accidental inclusion of an extra parenthesis after bar *and* the
accidental omission of a parenthesis after watz.
The kind of error I am talking about with Python code is a single
error of either omission or inclusion.
Moral: I really think your (stereotypical) argument that the possibility of inconsistency between "user interpretation" and "machine interpretation" of a certain syntax is a feature (because it introduces redundancy that can be used for error detection) requires a bit more work.
I could hardly care less.
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes: Note that Lisp and Scheme have a quite unpleasant anonymous function syntax, which induces a stronger tension to macros than in e.g. Ruby or Haskell.
Good grief!
Unpleasant are the inner classes needed to emulate anonymous functions
in Java.
Raffael Cavallaro <ra**************@junk.mail.me.not.mac.com> writes: In article <pa****************************@knm.org.pl>, Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> wrote:
Note that Lisp and Scheme have a quite unpleasant anonymous function syntax, which induces a stronger tension to macros than in e.g. Ruby or Haskell. Actually, I think that any anonymous function syntax is undesirable. I think code is inherently more readable when functions are named, preferably in a descriptive fashion.
So it'd be even *more* readable if every subexpression were named as
well. Just write your code in A-normal form.
I think it is the mark of functional cleverness that people's code is filled with anonymous functions. These show you how the code is doing what it does, not what it is doing.
I disagree. This:
(map 'list (lambda (x) (+ x offset)) some-list)
is clearer than this:
(flet ((add-offset (x) (+ x offset)))
(map 'list #'add-offset some-list))
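For comparison, a rough Python analogue of the two Lisp versions above (the values of `offset` and `some_list` are hypothetical, chosen only for illustration):

```python
offset = 10
some_list = [1, 2, 3]

# inline anonymous function, like the LAMBDA version
with_lambda = list(map(lambda x: x + offset, some_list))

# locally named helper, like the FLET version
def add_offset(x):
    return x + offset

with_name = list(map(add_offset, some_list))

print(with_lambda)  # [11, 12, 13]
print(with_name)    # [11, 12, 13]
```

Both compute the same list; the disagreement is only over which reads better at the point of use.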
Stephen Horne wrote: This would make some sense. After all, 'head' and 'tail' actually imply some things that are not always true. Those 'cons' thingies may be trees rather than lists, and even if they are lists they could be backwards (most of the items under the 'car' side with only one item on the 'cdr' side) which is certainly not what I'd expect from 'head' and 'tail'.
I think that's the essential point here. The advantage of the names car
and cdr is that they _don't_ mean anything specific. I wouldn't mind if
they were called jrl and jol, or rgk and rsk, etc. pp.
This is similar to how array elements are accessed. In expressions like
a[5], the number 5 doesn't mean anything specific either.
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
On Tue, 14 Oct 2003 01:26:54 -0400, David Mertz wrote: Matthew Danish <md*****@andrew.cmu.edu> wrote previously: |On Wed, Oct 08, 2003 at 03:59:19PM -0400, David Mertz wrote: |> |Come on. Haskell has a nice type system. Python is an application of |> |Greespun's Tenth Rule of programming. |> Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell |> does them much better, for example... and so does Python.
|Wow. The language with the limited lambda form, whose Creator regrets |including in the language, is ... better ... at HOFs? |You must be smoking something really good.
I guess a much better saying than Greenspun's would be something like: "Those who know only Lisp are doomed to repeat it (whenever they look at another language)." It does a better job of getting at the actual dynamic.
Is there anyone who knows only Lisp?
Those who know Lisp repeat it for a reason -- and it isn't because
it's all they know! [Besides, Greenspun's 10th isn't about _Lispers_
reinventing Lisp; it's about _everybody else_ reinventing Lisp]
In point of fact, Python could completely eliminate the operator 'lambda', and remain exactly as useful for HOFs. Some Pythonistas seem to want this, and it might well happen in Python3000. It makes no difference... the alpha and omega of HOFs is that functions are first class objects that can be passed and returned. Whether they happen to have names is utterly irrelevant, anonymity is nothing special.
True enough. Naming things is a pain though. Imagine if you couldn't
use numbers without naming them: e.g., if instead of 2 + 3 you had to
do something like
two = 2
three = 3
two + three
Bleargh! It "makes no difference" in much the same way that using
assembler instead of Python "makes no difference" -- you can do the
same thing either way, but one way is enormously more painful.
[Mind you, Python's lambda is next to useless anyway]
--
Cogito ergo I'm right and you're wrong. -- Blair Houghton
(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))
(I know, I swore off cross-posting on this topic, but the claims
made here were too outrageous for me to ignore)
<pr***********@comcast.net> I wish to show that local information is insufficient for cutting and pasting under some circumstances.
Absolutely correct, but rarely the source of errors. When it does
occur it is almost always immediately after the cut&paste and so
the context is fresh in the mind of the who did the c&p. The chance
for it to appear in wild code is minute. I can't recall ever coming
across it.
Consider this thought experiment: pick a character (like parenthesis for example) go to a random line in a lisp file and insert four of them. Is the result syntactically correct? No.
You're surely exaggerating here. Here's your factorial example,
which has not been cleaned up.
(defun factorial (x)
(if (> x 0)
x
(*
(factorial (- x 1))
x
)))
I'll insert four "a" characters in the function name
(defun aaaafactorial (x)
(if (> x 0)
x
(*
(factorial (- x 1))
x
)))
Is that not syntactically correct? For that matter, what if
I insert four spaces, like
(defun factorial (x)
(if (> x 0)
x
(*
(fact orial (- x 1))
x
)))
or insert four quotes
(defun factorial (x)
(if (> x 0)
x
''''(*
(factorial (- x 1))
x
)))
However, there is a class of *syntactic* error that is possible in python, but is not possible in lisp
Here's a class of error possible in Lisp and extremely hard to get
in Python -- omitting a space between an operator and an operand
causes an error
Valid Lisp: (- x 1)
Invalid Lisp: (-x 1) ;; well, not invalid but semantically different
Valid Python: x - 1
Valid Python: x-1
Yes, I know, it's because '-x' isn't an operator. Tell it to the
beginning programmer who would have these problems in Python.
Here's another class of error you can get in Lisp but is hard to
get in Python (except in a string). Randomly insert a '
Valid Lisp: (- x 1)
Valid Lisp: '(- x 1) ;;; but semantically very different
Will the same beginning user you talk about for Python be
able to identify a single randomly inserted quote in Lisp?
Now go to a random line in a python file and insert four spaces. Is the result syntactically correct? Likely. Could a naive user find them? Unlikely. Could you write a program to find them? No.
I tried writing code to test this automatically but ran into problems
because inserting four spaces at the start of a line may be syntactically
correct and may also not affect the semantics. For example
def spam(x = some_value,
y = some_other_value):
Adding 4 characters to the start of the 2nd line doesn't change the
meaning of the line.
There definitely were silent errors, like changing returns of
the sort
if x > 0:
    return "positive"
return "nonpositive"
into
if x > 0:
    return "positive"
    return "nonpositive"
NB: This should be caught in any sort of unit test. The real test
is if it's hard to detect by a programmer; I'm not the one to
answer that test since I'm too used to Python. I suspect it is
generally easy to find, especially when the code is actually run,
but that it isn't always automatic.
I also figured given the quote counter example it wasn't
worthwhile categorizing everything by hand.
Delete four adjacent spaces in a Python file. Will it still compile? Likely.
Here's test code. Note that I give each file 30 chances to succeed
after deleting 4 spaces, and it *never* did so. That surprised me as
I thought some of the hits would be in continuation blocks. There
may be a bug in my test code so I submit it here for review.
=================
import string, random, os

def remove_random_4_spaces(s):
    x = s.split("    ")
    if len(x) <= 1:
        # needed for re.py which doesn't have 4 spaces
        return "]]" # force a syntax error
    i = random.randrange(1, len(x))
    x[i-1:i+1] = [x[i-1] + x[i]]
    return "    ".join(x)

def check_for_errors(filename):
    s = open(filename).read()
    for i in range(30):
        t = remove_random_4_spaces(s)
        try:
            exec t in {}
            print "Success with", filename, "on try", i
            return 0
        except SyntaxError:
            pass
    return 1

def main():
    dirname = os.path.dirname(string.__file__)
    filenames = [name for name in os.listdir(dirname)
                 if name.endswith(".py")]
    count = 0
    errcount = 0
    problems = []
    for filename in filenames:
        print filename,
        err = check_for_errors(os.path.join(dirname, filename))
        if err:
            errcount += 1
            print "properly failed"
        else:
            print "still passed!"
            problems.append(filename)
        count += 1
    print errcount, "correctly failed of", count
    print "had problems with", problems

if __name__ == "__main__":
    main()
=================
anydbm.py properly failed
asynchat.py properly failed
asyncore.py properly failed
atexit.py properly failed
audiodev.py properly failed
base64.py properly failed
...
__future__.py properly failed
__phello__.foo.py properly failed
185 correctly failed of 185
had problems with []
Andrew da***@dalkescientific.com
In article <87************@plato.moon.paoloamoroso.it>, Paolo Amoroso
<am*****@mclink.it> wrote:
[A theory of with-maintained-condition] Erann: is my understanding correct?
Yep. There are many plausible ways to implement
with-maintained-condition, and that's one of them.
E.
Raffael Cavallaro <ra**************@junk.mail.me.not.mac.com> writes: I think it is the mark of functional cleverness that people's code is filled with anonymous functions. These show you how the code is doing what it does, not what it is doing.
Uh, I often use a lambda because I think it improves readability.
I could write
(,) x . length . fst
but think
\(a,_) -> (x, length a)
is clearer, because it is *less* functionally clever. Of course, it
could be written
let pair_x_and_length_of_first (a,_) = (x, length a)
in pair_x_and_length_of_first
but I don't think it improves things, and in fact reduces lucidity and
maintainability. Naming is like comments, when the code is clear
enough, it should be minimized. IMHO.
-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
"Daniel P. M. Silva" <ds****@ccs.neu.edu> writes: Please let me know if you hear of anyone implementing lisp/scheme in Python :)
Nah, I really don't want to hear about yet another personal Lisp
implementation. (Besides, somewhere, Alex Martelli implements a cons
cell as an example extension module, and that's half of Lisp done
already :-)
However, please _do_ tell me if you hear of anyone implementing Python
in Lisp[*].
Having Python as a front-end to Lisp[*] (as it is now a front-end to
C, C++ and Java) would be very interesting indeed.
[*] Common Lisp please.
"Paul Foley" <se*@below.invalid> wrote in message
news:m2************@mycroft.actrix.gen.nz... True enough. Naming things is a pain though. Imagine if you
couldn't use numbers without naming them: e.g., if instead of 2 + 3 you had
to do something like
two = 2
three = 3
two + three
For float constants in a language (such as Fortran) with multiple
float types (of different precisions), naming as part of a declaration
of precision is (or at least has been) a standard practice in some
circles. It makes it easy to change the precision used throughout a
program.
[Mind you, Python's lambda is next to useless anyway]
It is quite useful for its designed purpose, which is to abbreviate
and place inline short one-use function definitions of the following
pattern: def _(*params): return <expression using params>.
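In concrete terms, that pattern looks something like this (the sort key is a hypothetical one-use function, shown in modern Python):

```python
pairs = [(1, "b"), (3, "a"), (2, "c")]

# lambda as an inline, single-expression, one-use function
by_letter = sorted(pairs, key=lambda p: p[1])

# the spelled-out equivalent of the same pattern
def _key(p):
    return p[1]

assert by_letter == sorted(pairs, key=_key)
print(by_letter)  # [(3, 'a'), (1, 'b'), (2, 'c')]
```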
However, making the keyword 'lambda' instead of something like 'func'
was a mistake for at least two reasons:
1) it confuses those with no knowledge of lambda calculus and for whom
it is a strange, arbitrary term, possibly conjuring images of little
sheep, rather than being familiar and mnemonic;
2) it raises unrealistic expectations in those who know of 'lambda' as an
anonymous version of 'defun' (or whatever), leading them to make
statements such as the above.
Terry J. Reedy
Jacek Generowicz <ja**************@cern.ch> writes: However, please _do_ tell me if you hear of anyone implementing Python in Lisp[*].
Having Python as a front-end to Lisp[*] (as it is now a front-end to C, C++ and Java) would be very interesting indeed.
That would not be simple to do, because of weird Python semantics.
But doing a Python dialect (a little bit different from CPython) would
be worthwhile.
In article <8y**********@comcast.net>, pr***********@comcast.net wrote: (flet ((add-offset (x) (+ x offset))) (map 'list #'add-offset some-list))
But flet is just lambda in drag. I mean real, named functions, with
defun. Then the code becomes:
(add-offset the-list)
instead of either of the versions you gave. The implementation details of
add-offset are elsewhere, to be consulted only when needed. They don't
interrupt the flow of the code, or the reader's understanding of what it
does. If you need that optimization, you can always throw in (declaim
(inline add-offset)) before add-offset's definition(s).
I guess I'm arguing that the low level implementation details should not
be inlined by the programmer, but by the compiler. To my eye, anonymous
functions look like programmer inlining.
In article <ty*************@pcepsft001.cern.ch>,
Jacek Generowicz <ja**************@cern.ch> wrote: However, please _do_ tell me if you hear of anyone implementing Python in Lisp[*].
Having Python as a front-end to Lisp[*] (as it is now a front-end to C, C++ and Java) would be very interesting indeed.
It is now also a front-end to Objective C, via the PyObjC project.
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
<pr***********@comcast.net> wrote in message
Thank you for the clarification. The message for me is this: a
Python-aware editor should have an option to keep a pasted-in snippet
selected so that the indentation can be immediately adjusted by the
normal selected-block indent/dedent methods without having to
reselect.
TJR
<pr***********@comcast.net> wrote in message
news:8y**********@comcast.net... I disagree. This:
(map 'list (lambda (x) (+ x offset)) some-list)
is clearer than this:
(flet ((add-offset (x) (+ x offset))) (map 'list #'add-offset some-list))
I agree for this example (and for the Python equivalent). But if the
function definition is several lines, then I prefer to have it defined
first so that the map remains a mind-bite-sized chunk.
TJR
Raffael Cavallaro wrote: [...] Actually, I think that any anonymous function syntax is undesirable. I think code is inerently more readable when functions are named, preferably in a descriptive fashion. [...]
Before the invention of higher-level languages like Fortran, the
programmer was burdened with the task of naming every intermediate value
in the calculation of an expression. A programmer accustomed to the
functional style finds the need in non-FP languages to name every
function analogously awkward.
-thant
--
America goes not abroad in search of monsters to destroy. She is
the well-wisher of the freedom and independence of all. She is
the champion and vindicator only of her own. -- John Quincy Adams
Raffael Cavallaro <ra**************@junk.mail.me.not.mac.com> writes: In article <8y**********@comcast.net>, pr***********@comcast.net wrote:
(flet ((add-offset (x) (+ x offset))) (map 'list #'add-offset some-list))
But flet is just lambda in drag. I mean real, named functions, with defun. Then the code becomes:
(add-offset the-list)
I'm assuming that offset is lexically bound somewhere outside
the let expression so that I have to capture it with an in-place
FLET or LAMBDA. If we had an external add-offset then I'd have
to do something like
(add-offset offset some-list)
The problem with this is that you've essentially outlawed MAP.
(map 'list (make-adder offset) some-list)
The problem with this is that MAKE-ADDER is no less a lambda in drag,
and it isn't even local.
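For readers following along in Python, a sketch of the MAKE-ADDER idea (names hypothetical): the factory returns a closure over offset, which is why it is no less a lambda in drag.

```python
def make_adder(offset):
    # returns a closure capturing offset -- a named lambda, in effect
    def add(x):
        return x + offset
    return add

print(list(map(make_adder(10), [1, 2, 3])))  # [11, 12, 13]
```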
In article <bm**********@terabinaries.xmission.com>, Thant Tessman wrote: Before the invention of higher-level languages like Fortran, the programmer was burdened with the task of naming every intermediate value in the calculation of an expression. A programmer accustomed to the functional style finds the need in non-FP languages to name every function analogously awkward.
Well put, Thant. Thank you.
--
..:[ dave benjamin (ramenboy) -:- www.ramenfest.com -:- www.3dex.com ]:.
: d r i n k i n g l i f e o u t o f t h e c o n t a i n e r :