I think everyone who has used Python will agree that its syntax is
the best thing going for it. It is very readable and easy
for everyone to learn. But Python, unfortunately, does not have
very good macro capabilities. I'd like to know whether it might
be possible to add a powerful macro system to Python while
keeping its amazing syntax, and whether it would be possible to
add Pythonic syntax to Lisp or Scheme while keeping all
of the functionality and convenience. If the answer is yes,
would many Python programmers switch to Lisp or Scheme if
they were offered indentation-based syntax?
Jul 18 '05
Dave Benjamin wrote: Alex Martelli wrote: The only annoyance here is that there is no good 'literal' form for a code block (Python's lambda is too puny to count as such), so you do have to *name* the 'thefunc' argument (with a 'def' statement -- Python firmly separates statements from expressions). Here's my non-PEP for such a feature:
return { |x, y|
    print x
    print y
}
Which would be the equivalent of:
def anonymous_function(x, y):
    print x
    print y
return anonymous_function
Oh, and what should:
return {
}
MEAN? An empty dictionary, like today, or the equivalent of
return lambda: None
i.e. an empty argument-less function?
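As an aside, the reason lambda can't express the block above (and is "too puny to count" as a code-block literal) is that its body is restricted to a single expression; statements are syntactically barred. A minimal sketch, in modern Python 3 syntax:

```python
# A lambda body must be one expression, so it covers one-liners only:
add = lambda x, y: x + y
print(add(2, 3))  # 5

# Anything involving statements needs a named def:
# broken = lambda x, y: (x = y)   # SyntaxError: assignment is a statement
def print_both(x, y):
    print(x)
    print(y)
```
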
This is just reason #1 why this syntax is not satisfactory (I
guess it could be forced by mandating || to mean "takes no
args" -- deviating from Ruby in that sub-issue, though). The
second point is the use of punctuation in a way that no other
Python syntactic context allows -- it really feels alien.
Further, something that is more often than not desired by
people who desire code blocks is that they *don't* start a
new scope. Ruby fudges it with an ad-hoc rule -- use of
variables already existing outside is "as if" you were in
the same scope, use of new variables isn't (creates a new
variable on each re-entry into the block via yield, right?).
Clearly we can't use that fudge in Python. So, this is a
semantic problem to solve for whatever syntax. Can we find
any approach that solves ALL use cases? I don't know, but my
personal inclination would be to try saying that such a block
NEVER defines a new lexical scope, just like list comprehensions
don't -- i.e. in this sense such blocks would NOT be at all
equivalent to the functions produced by a def statement (lots
of implementation work, of course) -- all variables that might
look like locals of such a block would instead be considered
locals of the "enclosing" scope (which isn't enclosing, in a
sense, as there's no other new scope to enclose...;-).
SOME explicit termination is no doubt necessary if we want to
allow returning e.g. a tuple of two or more such functions --
which is why we can't try taking another (and less repellent
to Pythonic syntax) leaf from Ruby's book (using do instead
of { -- requires making do a keyword -- and leaving out the
end that Ruby always requires, of course):
return do(x, y):
print x
print y
there would be no way to write something more after this
block but within the same expression if the only termination
was dedenting; perhaps:
return ( do(x, y):
print x
print y
), ( do(x, y):
print y
print x
)
i.e. with parentheses around the do-expression (if you need
to put anything more after it) might help here.
Then, merge map, filter, and reduce into the list type, so we can play
Why? So you can't e.g. reduce an arbitrary iterator (e.g., generator),
tuple, array.array, ..., any more? We'd be better off without them, IMHO.
I see no advantage, over e.g. itertools, in associating these syntactically
to sequences (any kind or all kinds) or even iterators.
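The point about reduce applying to arbitrary iterables, not just lists, can be sketched in modern Python (where reduce has since moved into functools):

```python
from functools import reduce

# reduce consumes any iterable: a generator, a tuple, an array.array...
squares = (i * i for i in range(5))          # a generator, not a list
total = reduce(lambda a, b: a + b, squares)  # 0 + 1 + 4 + 9 + 16
print(total)  # 30
```

Tying reduce to the list type as a method would rule out exactly this kind of use.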
Alex
Andreas Rossberg wrote: Raymond Wiker wrote: I'm not terribly familiar with the details of Lisp macros but since recursion can easily lead to non-termination you certainly need tight restrictions on recursion among macros in order to ensure termination of macro substitution, don't you? Or at least some ad-hoc depth limitation. Same as with function calls, you mean?
In functional languages you at least have no limitation whatsoever on the depth of tail calls. Is the same true for macros?
any macro which cannot be implemented as a single quasiquoted form is likely
to be implemented by calling a function which computes the expansion. the only
difference between a macro function and any "normal" defined function is that
the former is not necessarily any symbol's function value. an auxiliary
function will be a function like any other function: anonymous, defined,
available in some given lexical context only. whatever. there are no intrinsic
restrictions on the computation which it performs. it need only admit to the
reality, that the environment is that of the compiler. eg, definitions which
are being compiled in the given unit "exist" if so specified only.
i am curious whether the availability of tail call elimination can have any
effect on the space performance of a function which is, in general, being
called to compute expressions for inclusion in a larger form. my intuition
says it would not matter. Apart from that, can one have higher-order macros? With mutual recursion between a macro and its argument?
what would that mean? a macro-proper's argument is generally an s-expression,
and the macro function proper is not bound to a symbol and not necessarily
directly funcallable, but i suppose one could come up with use cases for
mutual recursion among the auxiliary functions.
the generated expressions, on the other hand, often exhibit mutual references.
in this regard, one might want, for example to look at j.schmidt's meta
implementation.[1] perhaps, in some sense, the mutual references which it
generates could be considered "higher-order", but that doesn't feel right.
there's also the issue, that there is nothing which prevents a macro function
from interpreting some aspects of the argument expression as instructions for
operations to be performed at compile-time. eg. constant folding. depending on
how the macro might establish constancy, i'm not sure what "order" that is.
That is, can you write a fixpoint operator on macros?
why one would ever think of doing that is beyond me, but given the standard y
operator definition [0],
? (DEFUN Y (F)
( (LAMBDA (G) #'(LAMBDA (H) (FUNCALL (FUNCALL F (FUNCALL G G)) H)))
#'(LAMBDA (G) #'(LAMBDA (H) (FUNCALL (FUNCALL F (FUNCALL G G))
H)))))
Y
should one feel compelled to do so, one might resort to something like
? (defmacro print* (&rest forms)
`(progn ,@(funcall (y #'(lambda (fn)
#'(lambda (forms)
(unless (null forms)
(cons `(print ,(first forms))
(funcall fn (rest forms)))))))
forms)))
PRINT*
? (macroexpand '(print* (list 1 2) "asdf" 'q))
(PROGN (PRINT (LIST 1 2)) (PRINT "asdf") (PRINT 'Q))
T
? (print* (list 1 2) "asdf" 'q)
(1 2)
"asdf"
Q
Q
? I'm not saying that any of this would be overly useful. Just trying to refute Dirk's rather general statement about macros subsuming HOF's.
hmm... i never thought of it that way.
[0] http://www.nhplace.com/kent/Papers/T...al-Issues.html
[1] http://www.cliki.net/Meta
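The Y operator defined earlier in this post transcribes almost mechanically into Python, which underlines that this is ordinary HOF territory rather than anything macro-specific (a sketch; Python's default recursion limit applies):

```python
# Y combinator, same shape as the DEFUN Y above: the two inner lambdas
# play the roles of the G lambdas, *args plays the role of H's argument.
Y = lambda f: (lambda g: lambda *args: f(g(g))(*args))(
              lambda g: lambda *args: f(g(g))(*args))

# Tie the knot for an anonymous recursive function:
fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```
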
Doug Tolton wrote: David Mertz wrote:
There's something pathological in my posting untested code. One more try:
def categorize_jointly(preds, it):
    results = [[] for _ in preds]
    for x in it:
        results[all(preds)(x)].append(x)
    return results
|Come on. Haskell has a nice type system. Python is an application of |Greenspun's Tenth Rule of programming.
Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell does them much better, for example... and so does Python. What is your basis for that statement? I personally like the way Lisp does it much better, and I program in both Lisp and Python. With Python it's not immediately apparent if you are passing in a simple variable or a HOF. Whereas in lisp with #' it's immediately obvious that you are receiving or sending a HOF that will potentially alter how the call operates.
IMO, that syntax is far cleaner.
I think it's about a single namespace (Scheme, Python, Haskell, ...) vs
CLisp's dual namespaces. People get used pretty fast to having every
object (whether callable or not) "first-class" -- e.g. sendable as an
argument without any need for stropping or the like. To you, HOFs may
feel like special cases needing special syntax that toots horns and
rings bells; to people used to passing functions as arguments as a way
of living, that's as syntactically obtrusive as, say, O'CAML's mandate
that you use +. and not plain + when summing floats rather than ints
(it's been a couple years since I last studied O'CAML's, so for all I
know they may have changed that now, but, it IS in the book;-).
No doubt they could make a case that float arithmetic has potentially
weird and surprising characteristics and it's a great idea to make it
"immediately obvious" that's it in use -- and/or the case that this
allows stronger type inference and checking than SML's or Haskell's
use of plain + here allows. Rationalization is among the main uses
for the human brain, after all -- whatever feature one likes because
of habit, one can make SOME case or other for;-).
Alex
"Andrew Dalke" <ad****@mindspring.com> writes: That to me is a solid case of post hoc ergo proper. The words "1st" and "rst" are equally as short and easier to memorize. And if terseness were very important, then what about using "." for car and ">" for cdr? No, the reason is that that's the way it started and it will stay that way because of network effects -- is that a solid engineering reason? Well, it depends, but my guess is that he wouldn't weight strongly the impact of social behaviours as part of good engineering. I do.
Right, network effect. And attachment to historic heritage. C has B
and "Hello World!". COBOL has real bugs pinned in log books. Lisp has
the 704's CAR and CDR.
--
__Pascal_Bourguignon__ http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
"Andrew Dalke" <ad****@mindspring.com> writes: Or is there a requirement that it be constrained to display systems which can only show ASCII? (Just like a good Lisp editor almost requires the ability to reposition a cursor to blink on matching open parens. Granted, that technology is a few decades old now while Unicode isn't, but why restrict a new language to the display systems of the past instead of the present?)
Because the present is composed of the past. You have to be
compatible, otherwise you could not debug a Deep Space 1 probe
160 million km away, (and this one was only two or three years old).
Indeed. It looks easier to understand to my untrained eye. I disagree that "+" shouldn't work on strings because that operation isn't commutative -- commutativity isn't a feature of + it's a feature of + on a certain type of set.
Mathematicians indeed overload operators without taking into account
their precise properties. But mathematicians are naturally
intelligent. Computers and our programs are not. So it's easier if
you classify operators by their properties; if you map the semantics to the
syntax, this allows you to apply transformations to your programs based
on the syntax without having to recover the meaning.
--
__Pascal_Bourguignon__ http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
[Followup-To ignored because I don't read comp.lang.python]
On Thu, 09 Oct 2003 16:13:54 GMT, Alex Martelli <al***@aleax.it> wrote: I think it's about a single namespace (Scheme, Python, Haskell, ...) vs CLisp's dual namespaces. People get used pretty fast to having every object (whether callable or not) "first-class" -- e.g. sendable as an argument without any need for stropping or the like. To you, HOFs may feel like special cases needing special syntax that toots horns and rings bells; to people used to passing functions as arguments as a way of living, that's as syntactically obtrusive as, say, O'CAML's mandate that you use +. and not plain + when summing floats rather than ints
In Common Lisp (not "CLisp", that's an implementation) functions /are/
first-class and sendable as an argument "without any need for
stropping or the like." What exactly are you talking about?
Edi.
Alex Martelli wrote: Doug Tolton wrote:
... Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell does them much better, for example... and so does Python. What is your basis for that statement? I personally like the way Lisp does it much better, and I program in both Lisp and Python. With Python it's not immediately apparent if you are passing in a simple variable or a HOF. Whereas in lisp with #' it's immediately obvious that you are receiving or sending a HOF that will potentially alter how the call operates.
IMO, that syntax is far cleaner.
I think it's about a single namespace (Scheme, Python, Haskell, ...) vs CLisp's dual namespaces. People get used pretty fast to having every object (whether callable or not) "first-class" -- e.g. sendable as an argument without any need for stropping or the like. To you, HOFs may feel like special cases needing special syntax that toots horns and rings bells; to people used to passing functions as arguments as a way of living, that's as syntactically obtrusive as, say, O'CAML's mandate that you use +. and not plain + when summing floats rather than ints (it's been a couple years since I last studied O'CAML's, so for all I know they may have changed that now, but, it IS in the book;-).
it can't really be the #' which is so troubling.
? (defmacro define (name parameters &rest body)
`(set (defun ,name ,parameters ,@body)
(function ,name)))
DEFINE
? (define lof (a b) (cons a b))
#<Compiled-function LOF #x78467E6>
? (mapcar lof '(1 2 3) '(a s d))
((1 . A) (2 . S) (3 . D))
?
what is the real issue?
....
Pascal Costanza wrote: Kenny Tilton wrote:
Speaking of non-pros:
"Lisp is easy to learn
Lisp's syntax is simple, compact and spare. Only a handful of “rules” are needed. This is why Lisp is sometimes taught as the first programming language in university-level computer science courses. For the composer it means that useful work can begin almost immediately, before the composer understands much about the underlying mechanics of Lisp or the art of programming in general. In Lisp one learns by doing and experimenting, just as in music composition. "
From: http://pinhead.music.uiuc.edu/~hkt/nm/02/lisp.html
No studies, tho.
Here they are: http://home.adelphi.edu/sbloch/class/hs/testimonials/
Oh, please:
"My point is... before I started teaching Scheme, weak students would
get overwhelmed by it all and would start a downward spiral. With
Scheme, if they just keep plugging along, weak students will have a
strong finish. And that's a great feeling for both of us!"
That kind of anecdotal crap is meaningless. We need statistics!
Preferably with lots of decimal places so we know they are accurate.
:)
-- http://tilton-technology.com
What?! You are a newbie and you haven't answered my: http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
Doug Tolton wrote:
... don't know me or my background. Alex has stated on many occasions that he has not worked with Macros, but that he is relying on second hand information.
I never used Common Lisp in production: in the period of my life when I
was hired (by Texas Instruments) specifically for my knowledge of "Lisp",
that meant Scheme and a host of other dialects (mostly but not entirely now
forgotten). I did use things that "passed for" macros in those dialects:
I had no choice, since each TI lab or faction within the lab was using a
different divergent mutant thing, all named "lisp" (save a few were named
"scheme" -- hmmm, I do believe that some were using Prolog, too, but I
did not happen to use it in TI), with some of the divergence hinging on
locally developed sets of macros (and some on different vendors/versions).
For all I know, CLisp's macros are SO head and shoulders above any of a
quarter century ago that any vaguely remembered technical problem from
back then may be of purely historical interest. I do believe that the
divergence problem has more to do with human nature and sociology, and
that putting in a language features that encourage groups and subgroups
of users to diverge that language cannot be compensated by technical
enhancements -- it _will_, in my opinion, cause co-workers in any middle-
or large-sized organization to risk ending up standing on each others'
feet, rather than on each others' shoulders. (Remedies must of course
be sociological and lato sensu political first and foremost, but the way
the language & tools are designed CAN help or hinder).
So, I'm nowhere near an _expert_ -- over 20 years' hiatus ensures I
just can't be. But neither is it totally 2nd hand information, and if
I gave the mistaken impression of never having used macros in a
production setting I must have expressed myself badly. I do know I
jumped on the occasion of moving to IBM Research, and the fact that
this would mean going back to APL instead of "lisp" (in the above
vague sense) did matter somewhat in my glee, even though I still
primarily thought of myself as a hardware person then (the programming
was needed to try out algorithms, simulate possible hardware
implementations thereof, etc -- it was never an end in itself).
I don't claim to be a guru on Lisp, however I believe I understand it far better than Alex does. If the people who actually know and use Common Lisp think I am mis-speaking and mis-representing Lisp, please let me know and I will be quiet.
Given that I've heard "everything and its opposite" (within two constant
parameters only: S-expressions are an unalloyed good -- macros are good,
some say unconditionally, others admit they can be prone to abuse) from
posters on this thread from "people who actually know and use" Lisp, I
don't know how you could "mis-speak and mis-represent" as long as you
stick to the two tenets of party doctrine;-).
Like I said, I'm not an expert at Lisp, but I think I understand the spirit and semantics of Lisp far better than Alex, and from what I've
If by Lisp you mean Common Lisp and exclude Scheme, I'm sure you do; if
Scheme is to be included, then I'm not sure (but it's quite possible,
nevertheless) -- at least the "spirit" of the small core and widespread
HOFs w/single-namespace seem to be things I understand more (but the
"spirit" of why it's so wonderful to have extensible syntax isn't:-).
Alex
In article <84**************************@posting.google.com >, Hannu Kankaanp?? wrote: Dave Benjamin <da**@3dex.com> wrote in message news:<u%************@news1.central.cox.net>... For instance, I always thought this was a cooler alternative to the try/finally block to ensure that a file gets closed (I'll try not to mess up this time... ;) :
open('input.txt', { |f|
    do_something_with(f)
    do_something_else_with(f) })
But being a function, it'd have the nasty property of a separate scope (yes, that can be nasty sometimes). I'd perhaps want to do
open('input.txt', { |f| data = f.read() })
But alas, 'data' would be local to the anonymous function and not usable outside.
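The scope problem Hannu describes is easy to demonstrate with a plain def standing in for the hypothetical block syntax (a sketch; in modern Python, nonlocal is the escape hatch):

```python
def read_with_callback():
    data = None
    def block(f):
        data = f.upper()   # rebinds a *local* of block, not the outer data
    block("contents")
    return data

print(read_with_callback())  # None -- the assignment never escaped the block
```
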
Well, that's the same problem that lambda's got. I don't really have a
solution for that, other than the usual advice: "Use a namespace". =)
Dave
--
..:[ dave benjamin (ramenboy) -:- www.ramenfest.com -:- www.3dex.com ]:.
: d r i n k i n g l i f e o u t o f t h e c o n t a i n e r :
Pascal Costanza wrote: Pick the one Common Lisp implementation that provides the stuff you need. If no Common Lisp implementation provides all the stuff you need, write your own libraries or pick a different language. It's as simple as that.
Coming from a C/C++ background, I'm surprised by this attitude. Is
portability of code across different language implementations not a priority
for LISP programmers?
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
In article <mx**********************@news1.tin.it>, Alex Martelli wrote: Dave Benjamin wrote (answering Mike Rovner): ... "Explicit is better than implicit" In that case, why do we eschew code blocks, yet have no problem with the implicit invocation of an iterator, as in:
for line in file('input.txt'): do_something_with(line)
I don't see that there's anything "implicit" in the concept that a special operation works as indicated by its syntax. I.e., I do not find this construct any more "implicit" in the first line than in its second one, which is the juxtaposition of a name and a pair of parentheses to indicate calling-with-arguments -- and alternatives...
What's implicit to me is that the use of an iterator is never specified.
For instance, we could (and I'm *not* suggesting this) do this:
iterator = file('input.txt')
while iterator.has_next():
line = iterator.next()
do_something_with(line)
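For comparison, Python's actual protocol has no has_next(); a for loop is roughly sugar for the following (sketched over a list, in modern Python, so the example is self-contained):

```python
lines = ["first\n", "second\n"]
seen = []

it = iter(lines)              # the for statement obtains an iterator...
while True:
    try:
        line = next(it)       # ...and calls next() repeatedly...
    except StopIteration:     # ...until StopIteration ends the loop
        break
    seen.append(line)

print(seen)  # ['first\n', 'second\n']
```
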
This has nothing to do with "eschewing code blocks", btw; code blocks are not "eschewed" -- they are simply syntactically allowed, as "suites", only in specific positions. If Python's syntax defined other forms of suites, e.g. hypothetically:
with <object>: <suite>
meaning to call the object (or some given method[s] in it, whatever) with the suite as its argument, it would be just as explicit as, e.g.:
for <name> in <object>: <suite>
or
<object>(<object>)
This would be an interesting alternative, but it's back to being a special
case, like Ruby has. I think it'd be more flexible as a literal that returns
a callable. This is not to say that I dislike that behavior; in fact, I find it *beneficial* that the manner of looping is *implicit* because you can substitute a generator for a sequence without changing the usage. But
You could do so even if you HAD to say iter(<object>) instead of just <object> after every "for <name> in" -- it wouldn't be any more "explicit", just more verbose (redundant, boiler-platey). So I do not agree with your motivation for liking "for x in y:" either;-).
Well, let's just say I've been on the Java side of the fence for a little
while, and it has redefined my personal definition of explicit. One of the
reasons Python code is so much smaller than Java code is that a lot of
things are implicit that are required to be explicit in Java. I see this as
a good thing. there's little readability difference, IMHO, between that and:
file('input.txt').each_line({ |line| do_something_with(line) })
Not huge, but the abundance of ({ | &c here hurts a little bit.
Well, we all __pick__ our __poisons__... Plus, the first example is only obvious because I called my iteration variable "line", and because this behavior is already widely known. What if I wrote:
for byte in file('input.dat'): do_something_with(byte)
That would be a bit misleading, no? But the mistake isn't obvious. OTOH, in the more explicit (in this case) Ruby language, it would look silly:
open('input.txt').each_line { |byte|
    # huh? why a byte? we said each_line!
}
Here, you're arguing for redundance, not for explicitness: you are claiming that IF you had to say the same thing more than once, redundantly, then mistakes might be more easily caught. I.e., the analogy is with:
file('foo.txt').write('wot?')
where the error is not at all obvious (until runtime when you get an exception): file(name) returns an object *open for reading only* -- so if you could not call file directly but rather than do say, e.g.:
file.open_for_reading_only('foo.txt').write('wot?')
Nah, I'm not arguing for redundancy at all. I'm saying that there is some
voodoo going on here. When the *constructor* for a file object behaves like
a generator that loops over newline-delimited lines of a text file, doesn't
that seem like it's been specialized for a particular domain in an unobvious
way? Why lines? Why not bytes, words, unicode characters? I mean, it's more
convenient for people that do a lot of text processing, but I don't see
anything specific to text or lines in the phrase "file('foo.txt')". That's
all I'm saying. I think this is important to point out, because the implicit/explicit rule comes up all the time, yet Python is implicit about lots of things! To name a few:
- for loops and iterators
Already addressed above: nothing implicit there.
Likewise, and I still disagree... =) - types of variables
There are none, so how could such a nonexisting thing be EITHER implicit OR explicit? Variables don't HAVE types -- OBJECTS do.
The very fact that variables do not have types, and following that, that
variables do not have manifest types, is an example of implicit being
chosen over explicit. I know your argument, and I understand that Python
variables are Post-It sticky notes and all of that, but please, just try to
look at it from a non-Python-centric perspective. Other languages (like C++,
which I hear you are vaguely familiar with ;) require you to be explicit
about what type of thing you're defining and sending where. Python does not.
This is one of its strengths, because it allows for ad-hoc interfaces and
polymorphism without a lot of boilerplate.
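A small sketch of that point: nothing at the call site declares a type, yet any object with the right method works (hypothetical classes, purely for illustration):

```python
class Duck:
    def speak(self):
        return "quack"

class Robot:
    def speak(self):
        return "beep"

def chorus(things):
    # No declared interface: anything with a .speak() method qualifies.
    return [t.speak() for t in things]

print(chorus([Duck(), Robot()]))  # ['quack', 'beep']
```
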
Etc, etc -- can't spend another 1000 lines to explain why your "lots of things" do not indicate violations of "explicit is better than implicit".
They're not *violations*. Correct me if I'm wrong, but the Zen of Python is
not the LAW! It's a poem! It's very beautiful, very concise, inspiring, and
thoughtful, but it's not the 10 commandments! I just get very tired of every
idea getting shot down because of some rule from Tim Peters. I really don't
think he intended for it to be used to prove the validity of ideas.
The implicit/explicit thing is one of the most abused, in my opinion,
because it can quite frankly be used to shut down any attempt at creating
abstraction. In fact, for that reason alone, I'm tempted to say "Implicit is
better than explicit". Say what you want, not how you want it. Be abstract,
not concrete. If all you're saying is that naming something is better than not naming something because explicit is better than implicit, I'd have to ask why:
Sometimes it is (to avoid perilous nesting), sometimes it isn't (to avoid wanton naming). I generally don't mind naming things, but it IS surely possible to overdo it -- without going to the extreme below, just imagine a language where ONLY named argument passing, and no use of positional arguments, was allowed (instead of naming arguments being optional, as it is today in Python).
I don't have to imagine. It's called Smalltalk, and also (to some extent)
Tcl/Tk. Even Tkinter seems to be named-argument only. It's not that bad.
I still like positional parameters, though.
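Modern Python actually offers a per-function middle ground here: keyword-only parameters, which force named-argument passing without mandating it everywhere (a sketch with hypothetical names):

```python
def move(*, dx, dy):
    # Parameters after the bare * must be passed by name.
    return (dx, dy)

print(move(dx=1, dy=2))   # (1, 2)
# move(1, 2)              # TypeError: takes 0 positional arguments
```
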
If a Pythonic syntax can't be found to solve ALL use cases you've raised, then the "balance" may be considered not nice enough to compensate for the obvious problem -- a serious case of MTOWTDI.
That's another argument for another day. ;) Python prioritizes things differently than other languages. It's not an APL. "Readability counts"
This is nothing like APL... if anything, it's like Smalltalk, a language designed to be readable by children!
Cite pls? I knew that Logo and ABC had been specifically designed with children in mind, but didn't know that of Smalltalk. http://ei.cs.vt.edu/~history/GASCH.KAY.HTML http://www.cosc.canterbury.ac.nz/~wo...malltalk1.html http://www.cs.washington.edu/homes/dugan/history.html I realize that APL sacrificed readability for expressiveness to an uncomfortable extreme, but I really think you're comparing apples and oranges here. List comprehensions are closer to APL than code blocks.
As an ex-user of APL (and APL2) from way back when, I think you're both talking through your respective hats: neither list comprehensions (particularly in the Python variation on a Haskell theme, with keywords rather than punctuation) nor code blocks resemble APL in the least.
Well, it was a rough analogy, and I've never done any APL myself, but here's
my justification, FWIW:
- APL provides syntactical constructs for high-level array processing
- List comprehensions do this also
- Code blocks have nothing inherently to do with array processing
But I agree that neither resemble APL as I've seen. I guess it's like saying
a carrot is more like APL than a rutabaga.
Dave
--
..:[ dave benjamin (ramenboy) -:- www.ramenfest.com -:- www.3dex.com ]:.
: d r i n k i n g l i f e o u t o f t h e c o n t a i n e r :
Rainer Deyke wrote: Pascal Costanza wrote: Pick the one Common Lisp implementation that provides the stuff you need. If no Common Lisp implementation provides all the stuff you need, write your own libraries or pick a different language. It's as simple as that.
Coming from a C/C++ background, I'm surprised by this attitude. Is portability of code across different language implementations not a priority for LISP programmers?
there are some things which the standard does not cover.
....
Alex Martelli wrote: ... I do believe that the divergence problem has more to do with human nature and sociology, and that putting in a language features that encourage groups and subgroups of users to diverge that language ....
Can someone write a nifty Python hack to figure out how many times
Lispniks have tried to get Alex to explain how macros are any different
than high-order functions or new classes when it comes to The Divergence
Problem? I love that we have given it a name, by the way.
One popular macro is WITH-OUTPUT-TO-FILE. My budding RoboCup starter kit
had a vital WITH-STD-ATTEMPT macro. Oh god, no! I need to see the ANSI
Lisp commands for these things so I can really understand them. Better
yet...
why not the disassembly? preferably without meaningful symbols from the
HLL source. I think we are finally getting somewhere with TDP. Those
high order classes, functions, and macros keep me from seeing what is
really going on. Now if I could only see the microcode....
:)
kenny
-- http://tilton-technology.com
What?! You are a newbie and you haven't answered my: http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
Andreas Rossberg <ro******@ps.uni-sb.de> writes: Dirk Thierbach wrote:you can use macros to do everything one could use HOFs for (if you really want). I should have added: As long as it should execute at compile time, of course.
Really? What about arbitrary recursion?
I don't see the problem. Maybe you have an example? I am sure the Lisp'ers here can come up with a macro solution for it.
I'm not terribly familiar with the details of Lisp macros but since recursion can easily lead to non-termination you certainly need tight restrictions on recursion among macros in order to ensure termination of macro substitution, don't you? Or at least some ad-hoc depth limitation.
I'm not terribly familiar with the details of Python's iteration constructs
but since iteration can easily lead to non-termination you certainly need tight
restrictions on ...
In some cases, recursive macros and functions are easier to get right
(avoid infinite recursion) than their iterative counterparts.
Careless coders will always find a way to code themselves into
infinite loops. The easy way to avoid infinite recursion is:
(cond
((===> base case <===) ...)
((===> another base case? <===) ...)
((...) recursive call)
((...) recursive call)
...
(t recursive call))
Most of the time, it's easy to ensure that all recursive calls "move
up" the cond tree. Times when you can't do that (or not easily), you
should be writing iterative code, or you're just doing something
inherently difficult.
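The cond shape above transcribes to an if-chain where every recursive branch shrinks the input toward a base case (a Python sketch):

```python
def sum_list(xs):
    if not xs:                       # base case: empty list
        return 0
    return xs[0] + sum_list(xs[1:])  # recursive call on a strictly smaller list

print(sum_list([1, 2, 3, 4]))  # 10
```

Each call "moves up" toward the base case, so termination is easy to check by inspection.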
"Computer games don't affect kids; I mean if Pac Man affected us as kids, we would all be running around in darkened rooms, munching magic pills, and listening to repetitive electronic music." - Kristian Wilson, Nintendo Inc.
(That's a great sig!)
--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Alex Martelli wrote: Doug Tolton wrote: ...
don't know me or my background. Alex has stated on many occasions that he has not worked with Macros, but that he is relying on second hand information.
I never used Common Lisp in production: in the period of my life when I was hired (by Texas Instruments) specifically for my knowledge of "Lisp", that meant Scheme and a host of other dialects (mostly but not entirely now forgotten). I did use things that "passed for" macros in those dialects: I had no choice, since each TI lab or faction within the lab was using a different divergent mutant thing, all named "lisp" (save a few were named "scheme" -- hmmm, I do believe that some were using Prolog, too, but I did not happen to use it in TI), with some of the divergence hinging on locally developed sets of macros (and some on different vendors/versions).
For all I know, CLisp's macros are SO head and shoulders above any of a quarter century ago that any vaguely remembered technical problem from back then may be of purely historical interest. I do believe that the divergence problem has more to do with human nature and sociology, and that putting in a language features that encourage groups and subgroups of users to diverge that language cannot be compensated by technical enhancements -- it _will_, in my opinion, cause co-workers in any middle- or large-sized organization to risk ending up standing on each others' feet, rather than on each others' shoulders. (Remedies must of course be sociological and lato sensu political first and foremost, but the way the language & tools are designed CAN help or hinder).
I can understand and respect honest differences of opinions. I too
believe that causes of divergence are largely sociological. I differ
though in thinking that features which allow divergence will necessarily
result in divergence.
I have this personal theory (used in the non-strict sense here) that
given enough time any homogenous group will split into at least two
competing factions. This "theory" of mine had its roots in a nice
dinner at Medieval Times in California. We had arrived for dinner and
we were waiting to be seated, everyone was milling around in a sort of
shop/museum area. We had been given "crowns" for dinner, but no one
paid much attention to them. We were one large group of people, bound
by nothing and separated by nothing. Then one of the staff took a
microphone and began giving us instructions. She told us the color of
our hats indicated the color of the Knight we would be rooting for, and
that we would be sitting only with people of similar colored crowns.
Immediately the group (without instructions from the hostess) began
separating into similarly colored groups. Then they began calling the
groups by color to be seated. When they called our group, and we were
ascending the staircase, I looked over my shoulder at the remaining
groups. I was utterly shocked to see apparent hatred of and revulsion
toward our group on people's faces. To me this was a game, but to some
people in the crowd, having a different colored crown was a serious
cause for enmity.
I have long looked back on that incident, and I have since compared it
to many situations I have observed. Over time it seems to me that human
beings are incapable of remaining one single cohesive group; rather,
they will always separate into several competing factions. Or at
the very least groups will splinter off the main group and form their
own group.
So it doesn't surprise me when groups splinter and diverge if they are
not strictly controlled from an organizational or sociological point of
view.
However in the opensource world I expect splinters to happen frequently,
simply because there is little to no organizational control. Even
Python hasn't been immune to this phenomenon with both Jython and
Stackless emerging.
Some people want power and expressiveness. Some people want control and
uniformity. Others still will sacrifice high-level constructs for raw
pedal to the metal speed, while others wouldn't dream of this sacrifice.
What I'm getting at is that I can understand why people don't like
Macros. As David Mertz said, some people are just wired in dramatically
different ways. So, I'm nowhere near an _expert_ -- over 20 years' hiatus ensures I just can't be. But neither is it totally 2nd hand information, and if I gave the mistaken impression of never having used macros in a production setting I must have expressed myself badly. I do know I jumped on the occasion of moving to IBM Research, and the fact that this would mean going back to APL instead of "lisp" (in the above vague sense) did matter somewhat in my glee, even though I still primarily thought of myself as a hardware person then (the programming was needed to try out algorithms, simulate possible hardware implementations thereof, etc -- it was never an end in itself).
Thank you for that clarification. I must have been mis-interpreting
something, because I did think you had never used them.
I don't claim to be a guru on Lisp, however I believe I understand it far better than Alex does. If the people who actually know and use Common Lisp think I am mis-speaking and mis-representing Lisp, please let me know and I will be quiet.
Given that I've heard "everything and its opposite" (within two constant parameters only: S-expressions are an unalloyed good -- macros are good, some say unconditionally, others admit they can be prone to abuse) from posters on this thread from "people who actually know and use" Lisp, I don't know how you could "mis-speak and mis-represent" as long as you stick to the two tenets of party doctrine;-).
For me it isn't about party doctrine. :-p My mindset very closely
matches Paul Graham's. I can understand why other people have a
different mindset, and from what you've said I can even understand why
you don't like Macros, I just have a different viewpoint.
What gets me is when people (and I do this sometimes as well) express an
opinion as fact, assuming that all rational people will agree with them.
So, for what it's worth, for the times I have expressed my opinion as
the one true way of thinking, I'm sorry.
Like I said, I'm not an expert at Lisp, but I think I understand the spirit and semantics of Lisp far better than Alex, and from what I've
If by Lisp you mean Common Lisp and exclude Scheme, I'm sure you do; if Scheme is to be included, then I'm not sure (but it's quite possible, nevertheless) -- at least the "spirit" of the small core and widespread HOFs w/single-namespace seem to be things I understand more (but the "spirit" of why it's so wonderful to have extensible syntax isn't:-).
Honestly, I've only used Scheme for trivial things. My preference has
been more towards Common Lisp, primarily because I need it for building
real systems, rather than doing language research.
I'm sure there many things that you know that I don't. From what I
understand you've been at this a bit longer than I have. I've only been
doing serious programming for a little over ten years now. In that
time I have been involved with some very large projects on a very large
scale. I think I understand the concepts of abstraction and code reuse
pretty well, and how to build large systems that integrate the efforts
of numerous people.
I personally just don't believe macros are "evil" per se. I believe
they like any other tool can be used effectively, or misused
effectively. However most of my problems don't come from people who
misuse advanced features of the language, rather they come from people
who don't understand basic concepts of optimization and code reuse.
In any event, I think Lisp and Python are both great languages. I use
Python every day at work, and I work on learning more about Lisp (and a
top secret pet project ;) ) every day. I very much respect your
knowledge Alex, because I do believe you have some good insights, and I
do enjoy discussing issues that we disagree on (when we aren't being
"bristly" ;) ) because you have many times helped me to understand my
own point of view better. So even though we don't always agree, I still
appreciate your opinions.
--
Doug Tolton
(format t "~a@~a~a.~a" "dtolton" "ya" "hoo" "com")
Kenny Tilton <kt*****@nyc.rr.com> writes: Pascal Costanza wrote: No studies, tho. Here they are: http://home.adelphi.edu/sbloch/class/hs/testimonials/
Oh, please:
"My point is... before I started teaching Scheme, weak students would get overwhelmed by it all and would start a downward spiral. With Scheme, if they just keep plugging along, weak students will have a strong finish. And that's a great feeling for both of us!"
That kind of anecdotal crap is meaningless. We need statistics! Preferably with lots of decimal places so we know they are accurate.
:)
Why the smiley? Many hours of discussions could be spared if there
were real, scientific, solid studies on the benefit of certain
language features or languages in certain domains or for certain types
of programmers. It would help successful languages become
accepted in slow/big/dumb organizations. It would point language
designers in the right directions. Project leaders could replace
trial-and-error by more efficient search techniques. (Assuming for a
second, programmers or managers would make rational decisions when
choosing a programming language and having available trustworthy
data.)
I imagine such studies are quite hard to do properly, but having them
would be useful.
[snip]
Something like this seems more logical to me:
for line in file('input.txt').lines:
do_something_with(line)
for byte in file('input.txt').bytes:
do_something_with(byte)
Is it possible?
Mike
Kenny Tilton <kt*****@nyc.rr.com> writes: One popular macro is WITH-OUTPUT-TO-FILE. My budding RoboCup starter kit was a vital WITH-STD-ATTEMPT macro. Oh god, no! I need to see the ANSI Lisp commands for these things so I can really understand them. Better yet...
why not the disassembly?
Fortunately for insane, paranoid programmers like Kenny who don't read
docstrings, and refuse to believe that others' libraries might work
correctly, Lisp is quite accommodating. You want to see what a macro
call expands to? Hey, I think we have tools for just that problem.
Disassembly? Now I could be mistaken, but I remember it being far
easier than in most languages ... oh yeah, DISASSEMBLE. Wow, I didn't
have to dig through a whole object file, or anything!
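Python's rough analogue, for what it's worth, is the stdlib `dis` module, which is about as painless:

```python
import dis

def add(a, b):
    return a + b

# Prints the bytecode for 'add' -- no object files to dig through.
dis.dis(add)
```

`dis.Bytecode(add)` gives the same information programmatically, instruction by instruction.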
In article <bm**********@bob.news.rcn.net>, Vis Mike wrote: [snip]
Something like this seems more logical to me:
for line in file('input.txt').lines: do_something_with(line)
for byte in file('input.txt').bytes: do_something_with(byte)
I like that. =)
Is it possible?
Depends on your definition of "possible".
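For instance, a rough sketch with a hypothetical wrapper class (names invented; not a proposal for the builtin `file` type):

```python
class IterFile:
    """Hypothetical wrapper giving file('x').lines / .bytes style views."""
    def __init__(self, path):
        self.path = path

    @property
    def lines(self):
        # lazily yield one line at a time
        with open(self.path) as f:
            for line in f:
                yield line

    @property
    def bytes(self):
        # lazily yield one byte at a time
        with open(self.path, 'rb') as f:
            while True:
                b = f.read(1)
                if not b:
                    break
                yield b

# for line in IterFile('input.txt').lines:
#     do_something_with(line)
```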
Dave
--
..:[ dave benjamin (ramenboy) -:- www.ramenfest.com -:- www.3dex.com ]:.
: d r i n k i n g l i f e o u t o f t h e c o n t a i n e r :
Pascal Costanza: So what's the result of ("one" - "two") then? ;)
It's undefined on strings -- a type error. Having + doesn't
mean that - must exist.
(A more interesting question would be to ask what
the result of "throne" - "one" is. But again, a type error.)
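Python draws the same line: `+` on strings doesn't imply `-`:

```python
# '+' is defined on strings (concatenation)...
assert "thr" + "one" == "throne"

# ...but '-' is not; it raises a TypeError rather than guessing a meaning.
try:
    "throne" - "one"
except TypeError as e:
    print("unsupported:", e)
```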
Pascal Costanza: It's a myth that bytes are restricted to 8 bits. See http://www.wikipedia.org/wiki/Byte
(I can't connect to the URL but I know what you're talking
about.)
Sure. But I'm just barely old enough to have programmed on
a CDC Cyber. When I see the word 'byte' I assume it means
8 bits unless told otherwise. When I buy memory, I don't ask
the sales staff "so is this an 8 bit byte or a 60 bit byte?" (And
yes, I know about the lawsuit against disk drive manufacturers
and their strange definition of "gigabyte", but even then, they
still use an 8 bit byte.)
Me: Is there consensus on the Unicode API?
Pascal Costanza: No, not yet. ANSI CL was finalized in 1994.
Sure. That's why I asked about consensus. De facto rather
than de jure. SAX for XML processing is a de facto standard
but portable across different implementations and even
portable across different languages (that is, there's a very
mechanical and obvious way to convert from one language
to another.)
Again, not part of ANSI CL. Don't judge a standardized language with the measures of a single-vendor language - that's a different subject.
I understand your point of view. OTOH, it's like when I used to
work with C. It was standardized, but required that I download
a slew of packages in order to do things. E.g., I had to evaluate
several different regexp packages before I found one which was
appropriate. I know there are good reasons for a standard to
leave out useful packages, but I know there are good reasons for
an implementation to include a large number of useful packages.
Is there a free Lisp/Scheme implementation I can experiment with
which include in the distribution (without downloading extra
packages; a "moby" distribution in xemacs speak):
- unicode
- xml processing (to some structure which I can use XPath on)
- HTTP-1.1 (client and server)
- URI processing, including support for opening and reading from
http:, file:, and https:
- regular expressions on both 8-bit bytes and unicode
- XML-RPC
- calling "external" applications (like system and popen do for C)
- POP3 and mailbox processing
As far as I can tell, there isn't. I'll need to mix and match packages
from a variety of sources. It isn't like these are highly unusual
requirements; I use them pretty frequently in Python. For examples:
- connect to my POP server, delete messages with .exe attachments
on the assumption that it's spam
- Use DAS (see http://www.biodas.org/) to get genomic sequence
annotations. Requires HTTP and XML processing (mostly
Latin-1 encoding)
- Make an XML-RPC server which takes as input molecular
structure information (eg, "CCO" is ethanol) and calls several
existing command-line packages to compute properties about
the compound. Parse the output with regular expressions and
combine and return all the results.
- Make a client which uses that server.
(Okay, looks like https isn't needed for these, but the rest are.)
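The first of those tasks is mostly stdlib in Python; a sketch assuming a plain POP3 server (helper names invented, error handling omitted):

```python
import poplib
import re

# crude heuristic: a header line advertising a .exe attachment
EXE_RE = re.compile(r'filename="?[^"\r\n]+\.exe"?', re.IGNORECASE)

def looks_like_exe_spam(header_lines):
    """True if any header line advertises a .exe attachment."""
    return any(EXE_RE.search(line) for line in header_lines)

def delete_exe_spam(host, user, password):
    # hypothetical driver: fetch headers with TOP, delete matches
    conn = poplib.POP3(host)
    conn.user(user)
    conn.pass_(password)
    count = len(conn.list()[1])
    for msgno in range(1, count + 1):
        headers = [l.decode('latin-1') for l in conn.top(msgno, 0)[1]]
        if looks_like_exe_spam(headers):
            conn.dele(msgno)
    conn.quit()
```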
(Apart from that, Jython also doesn't provide everything that Python provides, right?)
No, but there is a good overlap. I believe all of the above are
supported on both implementations.
Pick the one Common Lisp implementation that provides the stuff you need. If no Common Lisp implementation provides all the stuff you need, write your own libraries or pick a different language. It's as simple as that.
Which Common Lisp *distribution* provides the above? I
don't doubt that an implementation + some add-in packages do
all that, but each new package means one extra lump on the
barrier to entry and one extra worry when I want others to
use the code I've written.
I use Python in part because of the "batteries included"
philosophy. There will always be 3rd party packages (PIL for
images, ReportLab for PDF generation, PyDaylight or OEChem
for chemical informatics), but there's a lot which comes in
the standard distributions.
You can ask these things in comp.lang.lisp or in one of the various mailing lists. Common Lispniks are generally very helpful.
Understood. I did manage to get the biolisp code working, btw.
Andrew da***@dalkescientific.com
Kenny Tilton wrote: Alex Martelli wrote: ... I do believe that the divergence problem has more to do with human nature and sociology, and that putting in a language features that encourage groups and subgroups of users to diverge that language .... Can someone write a nifty Python hack to figure out how many times Lispniks have tried to get Alex to explain how macros are any different than high-order functions or new classes when it comes to The Divergence Problem? I love that we have given it a name, by the way.
The very 'feature' that was touted by Erann Gat as macros' killer advantage
in the WITH-CONDITION-MAINTAINED example he posted is the crucial
difference: functions (HO or not) and classes only group some existing code
and data; macros can generate new code based on examining, and presumably to
some level *understanding*, a LOT of very deep things about the code
arguments they're given. If all you do with your macros is what you could
do with HOF's, it's silly to have macros in addition to HOF's -- just
MTOWTDItis encouraging multiple different approaches to solve any given
problem -- this, of course, in turn breeds divergence when compared to a
situation in which just one approach is encouraged. If you do use the
potential implied in that example from Gat, to do things that functions and
classes just couldn't _begin_ to, it's worse -- then you're really
designing your own private divergent language (which most posters from
the Lisp camp appear to assert is an unalloyed good, although admittedly
far from all). This is far from the first time I'm explaining this, btw.
Oh, and if you're one of those who disapprove of Gat's example feel free
to say so, but until I hear a substantial majority denouncing it as idiotic
(and I haven't seen anywhere near this intensity of disapproval for it from
your camp) I'm quite justified in taking it as THE canonical example of a
macro doing something that is clearly outside the purview of normal tools
such as functions and classes. As I recall there was a lot of that going
on in TI labs, too -- instead of writing and using compilers for hardware
description languages, circuit simulators, etc, based on appropriate and
specialized languages processed with the help of general-purpose ones,
the specialized languages (divergent and half-baked) were embedded in
programs coded in the general-purpose languages (Lisp variants, including
Scheme; that was in 1980) using macros that were supposed to do
everything but serve you coffee while you were waiting -- of course when
the snippets you passed (to represent hardware operation) were correct
from the GP language viewpoint but outside the limited parts thereof that
the macros could in fact process significantly down to circuit design &c,
the error messages you got (if you were lucky enough to get error
messages rather than just weird behavior) were QUITE interesting.
One popular macro is WITH-OUTPUT-TO-FILE. My budding RoboCup starter kit was a vital WITH-STD-ATTEMPT macro. Oh god, no! I need to see the ANSI
Do they do things a HOF or class could do? If so why bother using such
an over-powered tool as macros instead of HOFs or classes? If not, how do
they specially process and understand code snippets they're passed?
Alex
james anderson: i realize that this thread is hopelessly amorphous, but this post did introduce some concrete issues which bear concrete responses...
Thank you for the commentary.
i got only as far as the realization that, in order to be of any use,
unicode data management has to support the eventual primitive string operations.
which introduces the problem that, in many cases, these primitive operations eventually devolve to the respective os api. which, if one compares apple
and unix apis are anything but uniform. it is simply not possible to provide
them with the same data and do anything worthwhile. if it is possible to give
some concrete pointers to how other languages provide for this i would be
grateful.
Python does it by ignoring the respective os APIs, if I understand
your meaning and Python's implementation correctly. Here's some
more information about Unicode in Python http://www.python.org/peps/pep-0100.html http://www.python.org/peps/pep-0261.html http://www.python.org/peps/pep-0277.html http://www.python.org/doc/current/ref/strings.html http://www.python.org/doc/current/li...icodedata.html http://www.python.org/doc/current/li...le-codecs.html
and i have no idea what people do with surrogate pairs.
See PEP 261 listed above for commentary, and you may want
to email the author of that PEP, Paul Prescod. I am definitely
not the one to ask.
yes, there are several available common-lisp implementations for http
clients and servers. they offer significant trade-offs in api complexity, functionality, resource requirements and performance.
And there are several available Python implementations for the same;
Twisted's being the most notable. But the two main distributions (and
variants like Stackless) include a common API for it, which makes
it easy to start, and for most cases is sufficient.
I fully understand that it isn't part of the standard, but it would be
useful if there was a consensus that "packages X, Y, and Z will
always be included in our distributions."
if one needs to _port_ it to a new lisp, yes. perhaps you skipped over the list of lisps to which it has been ported. if you look at the #+/- conditionalization, you may observe that the differences are not
significant.
You are correct, and I did skip that list.
Andrew da***@dalkescientific.com
Matthias wrote: Kenny Tilton <kt*****@nyc.rr.com> writes:
Pascal Costanza wrote:
No studies, tho.
Here they are: http://home.adelphi.edu/sbloch/class/hs/testimonials/ Oh, please:
"My point is... before I started teaching Scheme, weak students would get overwhelmed by it all and would start a downward spiral. With Scheme, if they just keep plugging along, weak students will have a strong finish. And that's a great feeling for both of us!"
That kind of anecdotal crap is meaningless. We need statistics! Preferably with lots of decimal places so we know they are accurate.
:)
Why the smiley?
Sorry, I was still laughing to myself about that study with the lines of
code count (and measuring the power of a language by the number of
machine instructions per line or whatever that was).
...Many hours of discussions could be spared if there were real, scientific, solid studies on the benefit of certain language features or languages...
Studies schmudies. Everyone knows 10% of the people do 90% of the code
(well it might be 5-95). Go ask them. I think they are all saying (some)
Lisp and/or Python right now.
in certain domains or for certain types of programmers.
There's that relativism thing again. I think a good programming language
will be good for everyone, not some. What many people do not know is
that Lisp (macros aside!) is just a normal computer language with a
kazillion things done better, like generic functions and special
variables to name just two. Norvig himself talked about this, pardon my
laziness in not scaring up that well-known URL: Python is getting to be a
lot like Lisp, though again macros forced him into some hand-waving.
.. It would help successful languages become accepted in slow/big/dumb organizations.
Why you starry-eyed dreamer, you! Yes, here comes the PHB now waving his
copy of Software Engineering Quarterly.
It would point language designers in the right directions. Project leaders could replace trial-and-error by more efficient search techniques. (Assuming for a second, programmers or managers would make rational decisions when choosing a programming language and having available trustworthy data.)
Careful, any more of that and the MIB will come get you and send you
back to the planet you came from. I imagine such studies are quite hard to do properly, but having them would be useful.
OK, I am smiling again at the first half of that sentence. But there is
hope. My Cells package naturally exposes the interdependency of program
state, something Brooks (correctly) identified as a huge problem in
software engineering, hence his (mistaken) conviction there could be no
silver bullet.
Now Cells can be (and to various degrees have been) ported to C++,
Java, and Python. If those ports were done as fully as possible, such
that they passed the regression tests used on the Lisp reference
implementation, we could then measure productivity, because (I am
guessing) the internal state dependencies will serve quite nicely as a
measure of "how much" program got written by a team, one which could be
used to compare intelligently the productivity on different projects in
different languages. (You can't have the same team do the same project,
and you can't use two different teams, for obvious reasons.)
kenny
-- http://tilton-technology.com
What?! You are a newbie and you haven't answered my: http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
Pascal Bourguignon: Because the present is composed of the past. You have to be compatible, otherwise you could not debug a Deep Space 1 probe 160 million km away, (and this one was only two or three years old).
Huh? I'm talking purely in the interface. Use ASCII '[' and ']' in the
Lisp code and display it locally as something with more "directionality".
I'm not suggesting the unicode character be used in the Lisp code.
Take advantage of advances in font display to overcome limitations
in ASCII.
Mathematicians indeed overload operators without taking into account their precise properties. But mathematicians are naturally intelligent. Computers and our programs are not. So it's easier if you classify operators per properties; if you map the semantics to the syntax, this allows you to apply transformations to your programs based on the syntax without having to recover the meaning.
Ahhh, so make the language easier for computers to understand and
harder for intelligent users to use? ;)
Andrew da***@dalkescientific.com
Thomas F. Burdick wrote: Kenny Tilton <kt*****@nyc.rr.com> writes:
One popular macro is WITH-OUTPUT-TO-FILE. My budding RoboCup starter kit was a vital WITH-STD-ATTEMPT macro. Oh god, no! I need to see the ANSI Lisp commands for these things so I can really understand them. Better yet...
why not the disassembly?
Fortunately for insane, paranoid programmers like Kenny who don't read docstrings, and refuse to believe that others' libraries might work correctly, Lisp is quite accommodating. You want to see what a macro call expands to? Hey, I think we have tools for just that problem. Disassembly? Now I could be mistaken, but I remember it being far easier than in most languages ... oh yeah, DISASSEMBLE. Wow, I didn't have to dig through a whole object file, or anything!
<rofl> yer right! Lisp is sick. I am going to go disassemble DOTIMES
right now, find out once and for all what those plotters over at Franz
are up to.
kenny
ps. Arnold is your governor, Arnold is your governor, nyeah, nyeah, nya,
nyeah, nyeah.
Andreas Rossberg <ro******@ps.uni-sb.de> writes: Apart from that, can you have higher-order macros? With mutual recursion between a macro and its argument? That is, can you write a fixpoint operator on macros?
Yes, you can. This occurs when you call MACROEXPAND as part of
computing the expansion of the macro. my************************@jpl.nasa.gov (Erann Gat) writes: In article <bm**********@bob.news.rcn.net>, "Vis Mike" <visionary25@_nospam_hotmail.com> wrote:
Ahh, but overloading only works at compile time:
That's irrelevant. When it happens doesn't change the fact that this proves it (multiple dispatch with non-congruent arglists) is possible. Nothing prevents you from using the same algorithm at run time.
In fact, on a project I'm working on I have to handle overloaded
functions. I used the MOP to adjust how method combination worked
and now the generic functions first perform an arity-based dispatch.
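I don't have the MOP code at hand, but the arity-dispatch idea itself can be sketched in Python (names invented; the method-combination machinery has no direct Python analogue):

```python
class Overloaded:
    """Dispatch first on the number of arguments, like overloading."""
    def __init__(self):
        self.impls = {}

    def register(self, fn):
        # __code__.co_argcount gives the arity of a plain function
        self.impls[fn.__code__.co_argcount] = fn
        return fn

    def __call__(self, *args):
        fn = self.impls.get(len(args))
        if fn is None:
            raise TypeError("no overload for %d args" % len(args))
        return fn(*args)

area = Overloaded()

@area.register
def _(r):
    return 3.14159 * r * r       # circle, one argument

@area.register
def _(w, h):
    return w * h                 # rectangle, two arguments
```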
Pascal Costanza: I guess this reflects his experiences when he has learned Lisp in the beginning of the 80's (AFAIK).
But the talk wasn't given in the early 80s. It was given in the early 00s.
And it implies that Lisp is still strange that way. OTOH, you aren't him
so it's not fair of me to ask you all to defend his statements.
Yes, scripting languages have caught up in this regard. (However, note that Common Lisp also includes a full compiler at runtime.)
However, that's an implementation difference -- the only thing
users should see is that the code runs faster. It doesn't otherwise
provide new functionality. The phrase "they had hard-headed engineering reasons for making the syntax look so strange." reminds me of the statement "better first rate salespeople and second rate engineers than second rate salespeople and first rate engineers" (and better first rate both). That's saying *nothing* about the languages; it's saying that his viewpoint seems to exclude the idea that there are hard-headed non-engineering reasons for doing things."
No, that's not a logical conclusion.
I used those wishy-washy words "reminds me" and "seems". ;)
These abbreviations seem strange to a Lisp outsider, but they are very convenient, and they are easy to read once you have gotten used to them. You don't actually "count" the elements in your head every time you see these operators, but they rather become patterns that you recognize in one go.
I did know about cddr, etc. from Hofstadter's essays in Scientific
American some 15 years ago.
I don't know how this could be done with 1st, rst or hd, tl respectively.
Okay, I gave alternatives of "." and ">" instead of "car" and "cdr":
"." for "here" and ">" for "the rest; over there". These are equally
composable.
. == car
> == cdr
cadr == >.
caddr == >>.
cddr == >>
Pick your choice. "There is not only one way to do it." (tm)
Perl beat you to it -- "TMTOWTDI" (There's more than one way to
do it.). ;)
Python's reply "There should be one-- and preferably only one --
obvious way to do it."
The learning curve is steeper, but in the long run you become much more productive.
Which brings us back to the start of this thread. :)
Andrew da***@dalkescientific.com
Alex Martelli wrote: Kenny Tilton wrote:
Alex Martelli wrote:
... I do believe that the divergence problem has more to do with human nature and sociology, and that putting in a language features that encourage groups and subgroups of users to diverge that language .... Can someone write a nifty Python hack to figure out how many times Lispniks have tried to get Alex to explain how macros are any different than high-order functions or new classes when it comes to The Divergence Problem? I love that we have given it a name, by the way.
The very 'feature' that was touted by Erann Gat as macros' killer advantage in the WITH-CONDITION-MAINTAINED example he posted is the crucial difference: functions (HO or not) and classes only group some existing code and data; macros can generate new code based on examining, and presumably to some level *understanding*, a LOT of very deep things about the code arguments they're given.
Stop, you're scaring me. You mean to say there are macros out there whose
output/behavior I cannot predict? And I am using them in a context where
I need to know what the behavior will be? What is wrong with me? And
what sort of non-deterministic macros are these, that go out and make
their own conclusions about what I meant in some way not documented?
I think the objection to macros has at this point been painted into a
very small corner.
...If all you do with your macros is what you could do with HOF's, it's silly to have macros in addition to HOF's
There is one c.l.l. denizen/guru who agrees with you. I believe his
position is "everything can be done with lambda". And indeed, many a
groovy WITHOUT-CELL-DEPENDENCY expands straight into:
(call-with-cell-dependency (lambda () ,yadayadayada))
But code with WITHOUT-CELL-DEPENDENCY looks prettier (I hope we can
agree that that matters, esp. if you are a Pythonista).
-- just MTOWTDItis encouraging multiple different approaches to solve any given problem -- this, of course, in turn breeds divergence when compared to a situation in which just one approach is encouraged. If you do use the potential implied in that example from Gat, to do things that functions and classes just couldn't _begin_ to, it's worse -- then you're really designing your own private divergent language (which most posters from the Lisp camp appear to assert is an unalloyed good, although admittedly far from all). This is far from the first time I'm explaining this, btw.
Oh. OK, now that you mention it I have been skimming lately. One popular macro is WITH-OUTPUT-TO-FILE. My budding RoboCup starter kit was a vital WITH-STD-ATTEMPT macro. Oh god, no! I need to see the ANSI
Do they do things a HOF or class could do?
Yes.
... If so why bother using such an over-powered tool as macros instead of HOFs or classes?
Hang on, we just agreed that in this case the only added value is
prettier code. Nothing over-powered going on. (And that is, in this
case, why I bother.)
Alex Martelli wrote: Doug Tolton wrote:
David Mertz wrote:
There's something pathological in my posting untested code. One more try:
def categorize_jointly(preds, it):
    # 'all' here is the predicate-combining HOF from earlier in the
    # thread, not the builtin
    results = [[] for _ in preds]
    for x in it:
        results[all(preds)(x)].append(x)
    return results
|Come on. Haskell has a nice type system. Python is an application of |Greenspun's Tenth Rule of programming.
Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell does them much better, for example... and so does Python.
What is your basis for that statement? I personally like the way Lisp does it much better, and I program in both Lisp and Python. With Python it's not immediately apparent if you are passing in a simple variable or a HOF. Whereas in lisp with #' it's immediately obvious that you are receiving or sending a HOF that will potentially alter how the call operates.
IMO, that syntax is far cleaner.
I think it's about a single namespace (Scheme, Python, Haskell, ...) vs CLisp's dual namespaces. People get used pretty fast to having every object (whether callable or not) "first-class" -- e.g. sendable as an argument without any need for stropping or the like. To you, HOFs may feel like special cases needing special syntax that toots horns and rings bells; to people used to passing functions as arguments as a way of living, that's as syntactically obtrusive as, say, O'CAML's mandate that you use +. and not plain + when summing floats rather than ints (it's been a couple years since I last studied O'CAML's, so for all I know they may have changed that now, but, it IS in the book;-).
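The single-namespace point can be sketched in Python, where a function is passed like any other value, with no marker such as Lisp's #' (names here are illustrative only):

```python
# A function is just an object bound to a name; passing it needs no
# special syntax or stropping.
def apply_twice(f, x):
    return f(f(x))

def increment(n):
    return n + 1

print(apply_twice(increment, 3))  # prints: 5
```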
Yeah, I'm not a big fan of +. and /. etc. Every operation on floating
point values has to be explicitly specified. You can't even do 2 +.
3.0. Instead you have to do (float_of_int 2) +. 3.0
I'm not a huge fan of their syntax either. I don't think they've
removed it, because it's in the first chapter of the tutorial on their site.
I also use HOFs daily, and I don't generally run into problems
specifying that a variable is in fact a function. I like the sharp
quote though because it makes it clear that something a little out of
the ordinary is occurring. Honestly, though, either way is fine with me.
I can see the arguments both ways.
btw CLisp is an implementation of Common Lisp. If you want to shorten
it, it's usually less ambiguous if you use CL.
I'm not going to comment on dual versus single namespaces, because
how Lisp operates there isn't an area I'm very familiar with.
--
Doug Tolton
(format t "~a@~a~a.~a" "dtolton" "ya" "hoo" "com")
Alex Martelli <al***@aleax.it> writes: Kenny Tilton wrote:
Alex Martelli wrote: ... I do believe that the divergence problem has more to do with human nature and sociology, and that putting in a language features that encourage groups and subgroups of users to diverge that language .... Can someone write a nifty Python hack to figure out how many times Lispniks have tried to get Alex to explain how macros are any different than high-order functions or new classes when it comes to The Divergence Problem? I love that we have given it a name, by the way.
The very 'feature' that was touted by Erann Gat as macros' killer advantage in the WITH-CONDITION-MAINTAINED example he posted is the crucial difference: functions (HO or not) and classes only group some existing code and data; macros can generate new code based on examining, and presumably to some level *understanding*, a LOT of very deep things about the code arguments they're given.
Yes! Macros can generate code, and compiler-macros can transform
perfectly ordinary, easy-to-read code into efficient code. Anyone who
has worked in a domain where efficiency matters has run into the
problem most languages have: abstraction *or* efficiency. Custom,
domain-specific transforms are something you can't always expect the
compiler to do. With Lisp, you're not at the mercy of your vendor; if
you know damn well that some readable code A can be transformed into
equivalent, but efficient code B, you can cause it to happen!
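Python has no compiler macros, but the underlying idea (rewriting readable code A into an equivalent, cheaper form B before compilation) can be sketched with the standard ast module; the x ** 2 to x * x rewrite below is a toy stand-in, not anything from the thread:

```python
import ast
import copy

class SquareToMultiply(ast.NodeTransformer):
    """Toy stand-in for a compiler macro: rewrite x ** 2 as x * x."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if (isinstance(node.op, ast.Pow)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 2):
            # replace the power with a multiplication of the base by itself
            return ast.BinOp(left=node.left, op=ast.Mult(),
                             right=copy.deepcopy(node.left))
        return node

tree = ast.parse("def sq(x):\n    return x ** 2")
tree = ast.fix_missing_locations(SquareToMultiply().visit(tree))
namespace = {}
exec(compile(tree, "<transformed>", "exec"), namespace)
print(namespace["sq"](7))  # prints: 49
```

A real compiler macro does this at compile time and transparently; this sketch only shows that the readable form and the efficient form can be mechanically related.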
If all you do with your macros is what you could do with HOF's,
But you can do more with macros, so there's no point in looking at the
conclusion to this sentence.
Oh, and if you're one of those who disapprove of Gat's example feel free to say so, but until I hear a substantial majority denouncing it as idiotic (and I haven't seen anywhere near this intensity of disapproval for it from your camp)
Of course not, it's a lovely example of one use of macros.
I'm quite justified in taking it as THE canonical example of a macro doing something that is clearly outside the purview of normal tools such as functions and classes.
No, you're not justified at all.
--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Björn Lindberg: Apart from the usual problems with micro benchmarks, there are a few things to consider regarding the LOC counts on that site:
I wasn't the one who pointed out those micro benchmarks. Kenny
Tilton pushed the idea that more concise code is better and that Lisp
gives the most concise code, and that Perl is much more compact
than Python. He suggested I look at some comparisons, so I followed
his suggestion and found that 1) the Lisp code there was not more
succinct than Python and 2) neither was the Perl code.
* Declarations. Common Lisp gives the programmer the ability to optimize a program by adding declarations to it.
While OCaml, which has the smallest size, does type inferencing....
since the micro benchmarks in the shootout are focused on speed of execution, and they are so small, all of them contains a lot of declarations, which will increase LOC.
Ahh, good point.
* In many languages, any program can be written on a single line. This goes for Lisp, but also for C and other languages.
Absolutely correct. Both Alex Martelli and I tried to dissuade
Kenny Tilton that LOC was the best measure of succinctness and
appropriateness, and he objected.
* I don't think the LOC saving qualities of Lisp is made justice in micro benchmarks. The reason Lisp code is so much shorter than the equivalent code in other languages is because of the abstractive powers of Lisp, which means that the difference will be more visible the larger the program is.
Agreed. I pointed out elsewhere that there has been no systematic
study to show that Lisp code is indeed "so much shorter than the
equivalent code in other languages" where "other languages" include
Python, Perl, or Ruby.
The closest is http://www.ipd.uka.de/~prechelt/Biblio/
where the example program, which was non-trivial in size,
took about 100 LOC in Tcl/Rexx/Python/Perl and about 250 LOC
in Java/C/C++.
There was a repeat of that test at http://www.flownet.com/gat/papers/lisp-java.pdf
which showed that the average Lisp size was 119 LOC and
277 for Java. I eyeballed the numbers, and I'm not sure if
the counts included comments or not, so call it the same.
In any case, it implies you need to get to some serious-sized
programs (1,000 LOC? 10,000 LOC? A million?) before
the advantages of Lisp appear to be significant.
That's not to say that they aren't more obvious in some subdomain.
For that matter, if I want to do hardware I/O on some
memory mapped ports, it's pretty obvious that C is a good
contender for that domain.
So we're left with depending on gut feelings, guided by
(non-rigorous) observation. Eg, observations that Python
does scale to large projects, observations that the people
who will use my code (computational scientists, not
programmers) find Python easier than Lisp and Tcl-style
commands easier than Python :(.
Andrew da***@dalkescientific.com
Rainer Deyke wrote: Is portability of code across different language implementations not a
priority for LISP programmers?
james anderson: there are some things which the standard does not cover.
"The standard" as I understand it, is some document written a decade
ago. Can't a few of you all get together and say "hey, the world's
advanced. Let's agree upon a few new libraries and APIs."?
Andrew da***@dalkescientific.com
Me: Note that I did not at all make reference to macros. Your statements to date suggest that your answer to the first is "no."
Doug Tolton: That's not exactly my position, rather my position is that just about anything can and will be abused in some way shape or fashion. It's a simple fact of working in teams. However I would rather err on the side of abstractability and re-usability than on the side of forced
restrictions.
You are correct. I misremembered "Tolton" as "Tilton" and confused
you with someone else. *blush*
My answer, btw, is that the macro preprocessor in C is something
which is useful and too easily prone to misuse. Eg, my original
C book was "C for native speakers of Pascal" and included in
the first section a set of macros like
#define BEGIN {
#define END }
It's not possible to get rid of cpp for C because the language
is too weak, but it is something which takes hard experience to
learn when not to use.
As for a language feature which should never be used. Alex Martelli
gave an example of changing the default definition for == between
floats, which broke other packages, and my favorite is "OPTION
BASE 1" in BASIC or its equivalent in Perl and other languages.
That is, on a per-program (or even per-module) basis, redefine
the 0 point offset for an array.
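A minimal Python sketch of why this kind of per-module rebasing is dangerous; OneBased is a hypothetical container, not a real feature of any of the languages named above:

```python
# A module-local 1-based sequence silently disagrees with every
# 0-based caller elsewhere in the same program.
class OneBased:
    def __init__(self, items):
        self._items = list(items)

    def __getitem__(self, i):
        if i < 1:
            raise IndexError("this container counts from 1")
        return self._items[i - 1]

xs = OneBased(["a", "b", "c"])
print(xs[1])  # prints: a
# ...but any caller written against the usual base, e.g. xs[0],
# raises IndexError instead of returning the first element.
```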
Andrew da***@dalkescientific.com
Jon S. Anthony: This thing has been debunked for years. No one with a clue takes it seriously. Even the author(s) indicate that much of it is based on subjective guesses.
Do you have a reference? And a pointer to something better?
The only one better I know is http://www.ipd.uka.de/~prechelt/Biblio/jccpprtTR.pdf
which on page 23, Fig. 17, has a mapping from median
work time actual to work time w.r.t the language list.
Of the 6 languages there are two major misorderings.
First, C actually ended up easier to use than Java or C++.
(which is strange since you would think C++ would be
at least as good as C), and second, Tcl actually ends up
much better.
2 of 6 is better than random, so Jones' work can't be
complete bunkum.
Andrew da***@dalkescientific.com
On Thu, 9 Oct 2003, Andrew Dalke wrote: Rainer Deyke wrote: Is portability of code across different language implementations not a priority for LISP programmers?
james anderson: there are some things which the standard does not cover.
"The standard" as I understand it, is some document written a decade ago. Can't a few of you all get together and say "hey, the world's advanced. Let's agree upon a few new libraries and APIs."?
Isn't that what repositories like perl's CPAN are for?
Though it would be nice if everyone agreed on a single, simple
foreign-function interface...
- Daniel
Alex Martelli <al***@aleax.it> writes: Kenny Tilton wrote:
Alex Martelli wrote: ... I do believe that the divergence problem has more to do with human nature and sociology, and that putting in a language features that encourage groups and subgroups of users to diverge that language .... Can someone write a nifty Python hack to figure out how many times Lispniks have tried to get Alex to explain how macros are any different than high-order functions or new classes when it comes to The Divergence Problem? I love that we have given it a name, by the way.
The very 'feature' that was touted by Erann Gat as macros' killer advantage in the WITH-CONDITION-MAINTAINED example he posted is the crucial difference: functions (HO or not) and classes only group some existing code and data; macros can generate new code based on examining, and presumably to some level *understanding*, a LOT of very deep things about the code arguments they're given.
Are you really this dense? For crying out loud - functions can
generate code as well and just as easily as macros. In fact macros
are just functions anyway so it really should go without saying.
If all you do with your macros is what you could do with HOF's, it's silly to have macros in addition to HOF's -- just
No it isn't, because they the mode of _expression_ may be better with
on in context A and better with the other in context B.
MTOWTDItis encouraging multiple different approaches to solve any given problem -- this, of course, in turn breeds divergence when compared to a
Actually it breeds better solutions to problems. If you don't
understand this, you will never understand much of anything about good
problem solving.
Oh, and if you're one of those who disapprove of Gat's example feel free to say so, but until I hear a substantial majority denouncing it as idiotic
At the moment the only thing I am willing to denounce as idiotic are
your clueless rants.
/Jon
In article <yS**********************@news2.tin.it>, Alex Martelli
<al***@aleax.it> wrote: then you're really designing your own private divergent language (which most posters from the Lisp camp appear to assert is an unalloyed good, although admittedly far from all).
FWIW, IMO "designing your own private divergent language" is not an
unalloyed good, but having the *ability* to design your own private
divergent language quickly and easily is. Also, the "private divergent
languages" that most people design using Lisp macros are supersets of
Lisp, so that limits the extent to which divergence happens in practice.
Your position against macros and in favor of HOFs strikes me as being very
similar to those who want to ban model rocketry on the grounds that it is
too dangerous. Yes, macros (and model rockets) can be dangerous. If
you're not careful you can put your eye out with them. But if you are
careful you can do very cool things with them and -- almost as important
-- learn a lot in the process. (And what you learn from Lisp and model
rocketry are deep truths about the world, not a bunch of random kabuki
juju like what you fill your brain with when you learn C++.) This
mindset, that anything that is potentially dangerous ought to be avoided
because it is potentially dangerous, is IMHO a perverse impediment to
progress. There is no reward without risk.
Life is short. It's not hard to produce a reasonable estimate of the
total number of keystrokes that you will be able to execute in a lifetime,
and it's a pretty small number in the grand and glorious scheme of
things. If you have no ambitions beyond writing
yet-another-standard-web-app then macros are not for you. But if your
goals run grander than that then the extra leverage that you get from
things like macros becomes very precious indeed. Once your ambitions pass
a certain point the only option open to you is to teach your computer to
write code for you, because you don't have time to do it yourself. For
example, there is no reason it should take multiple work years to write an
operating system. There is no fundamental reason why one could not build
a computational infrastructure that would allow a single person to write
an operating system from scratch in a matter of days, maybe even hours or
minutes. But such a system is going to have to have a fairly deep
understanding of the relationship of code to hardware. You may want to
write things like:
(define-hardware-type ethernet-controller ...)
or
(define-hardware-standards-hierarchy
(networking
(ethernet
(standard-ethernet
(NE2000 ....))
(fast-ethernet ...)
(gigabit-ethernet ...))
(fddi ...)
(fibre-channel ...)
...)
(mass-storage
(hard-drive
(ide ...)
(scsi ...))
or
(is-a ne2000 standard-ethernet-card)
or
(define-register-layout ...)
God only knows. Only one thing is certain: with macros and readtables you
will be limited only by your imagination. With anything less you will be
limited by something else.
E.
"Vis Mike" <visionary25@_nospam_hotmail.com> wrote previously:
|Something like this seems more logical to me:
|for line in file('input.txt').lines:
| do_something_with(line)
|for byte in file('input.txt').bytes:
| do_something_with(byte)
Well, it's spelled slightly differently in Python:
for line in file('input.txt').readlines():
    do_something_with(line)
for byte in file('input.txt').read():
    do_something_with(byte)
Of course, both of those slurp in the whole thing at once. Lazy lines
are 'fp.xreadlines()', but there is no standard lazy bytes.
A method 'fp.xread()' might be useful, actually. And taking a good idea
from Dave Benjamin up-thread, so might 'fp.xreadwords()'. Of course, if
you were happy to write your own class 'File' that provided the extra
iterations, you'd only need to capitalize one letter to get these extra
options.
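A minimal sketch of such a 'File' class, in modern spelling (open() plus generators); 'xread' and 'xreadwords' are the hypothetical method names from the post:

```python
import tempfile

class File:
    def __init__(self, name):
        self.name = name

    def xread(self):
        """Lazily yield one character at a time."""
        with open(self.name) as fp:
            while True:
                ch = fp.read(1)
                if not ch:
                    return
                yield ch

    def xreadwords(self):
        """Lazily yield whitespace-separated words, buffering one line."""
        with open(self.name) as fp:
            for line in fp:
                yield from line.split()

# demonstrate on a throwaway file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("spam and eggs\n")
    path = f.name

print(list(File(path).xreadwords()))  # prints: ['spam', 'and', 'eggs']
```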
Yours, Lulu...
--
mertz@ _/_/_/_/ THIS MESSAGE WAS BROUGHT TO YOU BY: \_\_\_\_ n o
gnosis _/_/ Postmodern Enterprises \_\_
..cx _/_/ \_\_ d o
_/_/_/ IN A WORLD W/O WALLS, THERE WOULD BE NO GATES \_\_\_ z e
Thomas F. Burdick: With Lisp, you're not at the mercy of your vendor; if you know damn well that some readable code A can be transformed into equivalent, but efficient code B, you can cause it to happen!
Not at the mercy of your vendor unless you want to use something
which isn't in the standard, like unicode (esp "wide" unicode, >16bit),
regular expressions (esp. regexps of unicode), sockets, or ffi?
But that's just flaming -- ignore me. ;)
Andrew da***@dalkescientific.com
"Andrew Dalke" <ad****@mindspring.com> writes: Pascal Costanza: So what's the result of ("one" - "two") then? ;) It's undefined on strings -- a type error. Having + doesn't mean that - must exist.
No, but it makes the semantics odd for an operation named by "+". Of
course it may not be obvious what the semantics should be, but then
the semantics of "hi" + "there" isn't obvious either.
(A more interesting question would be to ask what the result of "throne" - "one" is. But again, a type error.)
Why? This seems like a likely candidate for a string -.
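One plausible semantics for such a string "-", assuming it removes the first occurrence of the subtrahend and signals a type error otherwise; this is purely hypothetical, Python defines no such operation:

```python
def str_sub(a, b):
    """Remove the first occurrence of b from a, else raise TypeError."""
    head, sep, tail = a.partition(b)
    if not sep:
        raise TypeError("cannot subtract %r from %r" % (b, a))
    return head + tail

print(str_sub("throne", "one"))  # prints: thr
```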
I understand your point of view. OTOH, it's like when I used to work with C. It was standardized, but required that I download a slew of packages in order to do things.
That's why there are _implementations_. It's odd that the obvious
distinction between "compiler" (or interpreter or whatever) and
"language" is so hard to grasp.
appropriate. I know there are good reasons for a standard to leave out useful packages, but I know there are good reasons for an implementation to include a large number of useful packages.
Wow, this actually sounds right.
Is there a free Lisp/Scheme implementation I can experiment with which includes in the distribution (without downloading extra packages; a "moby" distribution in xemacs speak):
- unicode
- xml processing (to some structure which I can use XPath on)
- HTTP-1.1 (client and server)
- URI processing, including support for opening and reading from http:, file:, and https:
- regular expressions on both 8-bit bytes and unicode
- XML-RPC
- calling "external" applications (like system and popen do for C)
- POP3 and mailbox processing
Yes. Allegro CL (ACL) for one.
As far as I can tell, there isn't. I'll need to mix and match packages
You obviously can't "tell" too well. (Apart from that, Jython also doesn't provide everything that Python provides, right?)
No, but there is a good overlap. I believe all of the above are supported on both implementations.
Interaction with Java (to access its libraries and whatnot) is also
in ACL.
Which Common Lisp *distribution* provides the above? I
One is pointed out above.
/Jon
"Andrew Dalke" <ad****@mindspring.com> writes: Ahhh, so make the language easier for computers to understand and harder for intelligent users to use? ;)
Spoken like a true Python supporter...
/Jon
Doug Tolton: I have this personal theory (used in the non-strict sense here) that given enough time any homogenous group will split into at least two competing factions.
Reminds me of Olaf Stapledon's "Last and First Men". His
civilizations often had two roughly equal but opposing components.
Also reminds me of learning about the blue eyed/brown eyed
experiment in my sociology class in high school. As it turns out,
I was the only blue-eyed person in the class of 25 or so. :)
Over time it seems to me that human beings are incapable of remaining as one single cohesive group, rather that they will always separate into several competing factions. Or at the very least groups will splinter off the main group and form their own group.
Not necessarily "competing", except in a very general sense. Is
Australian English in competition with Canadian English?
However in the opensource world I expect splinters to happen frequently, simply because there is little to no organizational control. Even Python hasn't been immune to this phenomenon with both Jython and Stackless emerging.
As well as PyPy and (more esoterically) Vyper.
Excepting the last, all have had the goal of supporting the C Python
standard library where reasonably possible. When not possible
(as the case with Jython and various C extensions), then supporting
the native Java libraries.
"bristly" ;)
Ohh! Good word! I had forgotten about it.
Andrew da***@dalkescientific.com
"Andrew Dalke" <ad****@mindspring.com> writes: I don't know how this could be done with 1st, rst or hd, tl respectively. Okay, I gave alternatives of "." and ">" instead of "car" and "cdr" "." for "here" and ">" for "the rest; over there". These are equally composable.
.    == car
>    == cdr
>.   == cadr
>>.  == caddr
>>   == cddr
These "look" worse than the version you're railing against and are
bombs waiting to go off, since they have long-standing prior meanings
not in any way associated with this type of operation. OTOH, if you
really wanted them, you could define them.
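The "you could define them" point is easy to sketch in Python, with hd/tl as hypothetical stand-ins for car/cdr:

```python
def hd(xs):
    return xs[0]      # car: the first element

def tl(xs):
    return xs[1:]     # cdr: everything after the first

def cadr(xs):         # ">."  : first of the rest
    return hd(tl(xs))

def caddr(xs):        # ">>." : first of the rest of the rest
    return hd(tl(tl(xs)))

print(cadr([1, 2, 3]), caddr([1, 2, 3]))  # prints: 2 3
```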
Python's reply "There should be one-- and preferably only one -- obvious way to do it."
This then is probably the best reason to _not_ use Python for anything
other than the trivial. It has long been known in problem solving
(not just computation) that multiple ways of attacking a problem, and
shifting among those ways, tends to yield better solutions. The learning curve is steeper, but in the long run you become much more productive.
Which brings us back to the start of this thread. :)
If your problems are trivial, I suppose the presumed lower startup
costs of Python may mark it as a good solution medium.
/Jon
Andrew Dalke wrote: Me:
Note that I did not at all make reference to macros. Your statements to date suggest that your answer to the first is "no."
Doug Tolton:
That's not exactly my position, rather my position is that just about anything can and will be abused in some way shape or fashion. It's a simple fact of working in teams. However I would rather err on the side of abstractability and re-usability than on the side of forced
restrictions.
You are correct. I misremembered "Tolton" as "Tilton" and confused you with someone else. *blush*
Heh, yeah I've noticed that a couple of times. Poor Kenny keeps getting
blamed for things I've said. D'oh!
My answer, btw, is that the macro preprocessor in C is something which is useful and too easily prone to misuse. Eg, my original C book was "C for native speakers of Pascal" and included in the first section a set of macros like
#define BEGIN {
#define END }
I agree the C macro system is constantly abused. Then again, I haven't
ever been a really big fan of the C macro system, primarily because even
if it's used correctly it has always struck me as an ugly hack. I don't
think that's because it's overly expressive and powerful, though; rather,
I think it's because of its limitations and foreign-feeling syntax. It's not possible to get rid of cpp for C because the language is too weak, but it is something which takes hard experience to learn when not to use.
As for a language feature which should never be used. Alex Martelli gave an example of changing the default definition for == between floats, which broke other packages, and my favorite is "OPTION BASE 1" in BASIC or its equivalent in Perl and other langauges. That is, on a per-program (or even per-module) basis, redefine the 0 point offset for an array.
Again, I can see setting corporate wide policies that specify if you
change the OPTION BASE, we are going to take you out behind the shed and
beat you silly. I don't think the existence of OPTION BASE is a
problem, personally I think it's when someone decides they want to
change the OPTION BASE to 0 while everyone else is still using 1. That
doesn't necessarily imply that OPTION BASE is by itself an evil construct.
--
Doug Tolton
(format t "~a@~a~a.~a" "dtolton" "ya" "hoo" "com")
Andrew Dalke wrote: Doug Tolton:
I have this personal theory (used in the non-strict sense here) that given enough time any homogenous group will split into at least two competing factions.
Reminds me of Olaf Stapledon's "Last and First Men". His civilizations often had two roughly equal but opposing components.
I haven't read it; I may have to check it out.
Also reminds me of learning about the blue eyed/brown eyed experiment in my sociology class in high school. As it turns out, I was the only blue-eyed person in the class of 25 or so. :)
I'm not familiar with this experiment. What is it about, and what are
the results?
Over time it seems to me that human beings are incapable of remaining as one single cohesive group, rather that they will always separate into several competing factions. Or at the very least groups will splinter off the main group and form their own group.
Not necessarily "competing", except in a very general sense. Is Australian English in competition with Canadian English?
I guess it comes more into play when there is some limited resource put
into play (eg Darwin), such as winning a prize, making money, number of
people using your system. I agree not everything is a direct
competition, but I bet if you started comparing Australian English to
Canadian English with both types of speakers, eventually serious
disagreement about some minute point would break out.
However in the opensource world I expect splinters to happen frequently, simply because there is little to no organizational control. Even Python hasn't been immune to this phenomenon with both Jython and Stackless emerging.
As well as PyPy and (more esoterically) Vyper.
Excepting the last, all have had the goal of supporting the C Python standard library where reasonably possible. When not possible (as the case with Jython and various C extensions), then supporting the native Java libraries.
I'm not saying they aren't good choices, or that they can even decide to
work together, rather that over time groups tend to diverge. Look at
Unix/Linux/FreeBsd as an example. I'm sure there are times when
divergent groups die out and re-enter the main branch as well.
"bristly" ;)
Ohh! Good word! I had forgotten about it.
I have to give the credit to David Mertz on that one. He used it in
correspondence with me, and I liked it a lot too.
--
Doug Tolton
(format t "~a@~a~a.~a" "dtolton" "ya" "hoo" "com")
On 09 Oct 2003 16:25:22 -0400, j-*******@rcn.com (Jon S. Anthony) wrote: "Andrew Dalke" <ad****@mindspring.com> writes:
Is there a free Lisp/Scheme implementation I can experiment with which includes in the distribution (without downloading extra packages; a "moby" distribution in xemacs speak):
- unicode
- xml processing (to some structure which I can use XPath on)
- HTTP-1.1 (client and server)
- URI processing, including support for opening and reading from http:, file:, and https:
- regular expressions on both 8-bit bytes and unicode
- XML-RPC
- calling "external" applications (like system and popen do for C)
- POP3 and mailbox processing
Yes. Allegro CL (ACL) for one.
As far as I can tell, there isn't. I'll need to mix and match packages
You obviously can't "tell" too well.
It is true that AllegroCL has all these features and it probably is
the only CL implementation that includes all of them out of the box
but it is not true that it is free (which was one of the things
Mr. Dalke asked for). At least it wasn't true the last time I talked
to the Franz guys some days ago. If that has changed in the last week
please let me know... :)
You might be able to get most of these features with "free" CL
implementations but not all at once I think. (AFAIK CLISP is currently
the only "free" CL which supports Unicode but it is lacking in some
other areas.)
As far as "mix and match" of packages is concerned: Use Debian
(testing) or Gentoo. I've been told it's just a matter of some
invocations of 'apt-get install' or 'emerge' to get the CL packages
you want. At least it shouldn't be harder than, say, getting stuff
from CPAN. What? You don't use Debian or Gentoo? Hey, you said you
wanted "free" stuff - you get what you pay for.
No, seriously. It'd definitely be better (for us Lispers) if we had
more freely available libraries plus a standardized installation
procedure à la CPAN. Currently we don't have that - there are
obviously far more people working on Perl or Python libraries.
So, here are your choices:
1. Buy a commercial Lisp. I've done that and I think it was a good
decision.
2. Try to improve the situation of the free CL implementations by
writing libraries or helping with the infrastructure. That's how
this "Open Source" thingy is supposed to work. I'm also doing this.
3. Run around complaining that you can't use Lisp because a certain
combination of features is not available for free. We have far too
many of these guys on c.l.l.
4. Just don't use it. That's fine with me.
It currently looks like the number of people choosing #2 is
increasing. Looks promising. You are invited to take part - it's a
great language and a nice little community... :)
Edi.
PS: You might also want to look at
<http://web.metacircles.com/cirCLe+CD>.
Kenny Tilton: I wouldn't take the Greenspun crack too seriously. That's about applications recreating Lisp, not languages copying Lisp features.
Are you stating that all references of Greenspun's 10th rule,
when applied to Python, are meant in jest? Many of the times
I've seen it used has come from a sense of arrogance; justified
or not. The similar statement which bristles me the most is at the
top of biolisp.org, especially given how paltry the existing public
libraries are for doing bioinformatics in Lisp -- even compared
to Ruby.
It's just a reaction to Python (a perfectly nice little scripting language) trying to morph into a language with the sophistication of Lisp.
Python isn't doing that. It lives in a perfectly good niche wherein
Lisp is not the most appropriate language. At least not until there's a
macro which works like
(#python '
for i in range(100):
print "Spam, ",
print
)
As for non-professional programmers, the next question is whether a good language for them will ever be anything more than a language for them. Perhaps Python should just stay with the subset of capabilities that made it a huge success--it might not be able to scale to new sophistication without destroying the base simplicity.
But there's no reason to stay with only one language. For those things
which are more appropriate in a different language (eg, adding a new
minimizer to an existing FORTRAN library, or writing an interface
to an existing C library, or doing predicate based logic in Prolog),
then use that other language.
I'm perfectly satisfied with claiming that Python is a great language
for scientific programming and that it is a less than perfect language
for doing aspect oriented programming.
Another question is whether Lisp would really be such a bad program for them.
I am also perfectly satisfied with claiming that Lisp is not the best
language for the people I work with.
You presume that only Lisp gurus can learn Lisp because of the syntax.
Not at all. What I said is that Lisp gurus are self-selected to be
the ones who don't find the syntax to be a problem. You incorrectly
assumed the converse to be true.
But methinks a number of folks using Emacs Elisp and Autocad's embedded Lisp are non-professionals.
Methinks there are a great many more people using the VBA
interface to AutoCAD than its Lisp interface. In fact, my friends
(ex-Autodesk) told me that's the case.
And let's not forget Symbolic Composer (music composition) or Mirai (?) the 3D modelling/animation tool, both of which are authored at the highest level with Lisp.
As compared to csound (music synthesis) which is written in C?
Or Gnumeric with Python embedded?
Do you want me to truck out a similar list of programs with
Python embedded? And that would prove ... what exactly?
Logo (a language aimed at children, including very young children) cuts both ways: it's a Lisp, but it dumped a lot of the parens, but then again it relies more on recursion.
Okay, and then there's Alice, from www.alice.org , which
"addresses the specific needs of the subpopulation of middle
school girls" and aims to "provide the best possible first exposure
to programming for students ranging from middle schoolers
to college students."
What does it mean to be "a Lisp"? Is Python considered "a Lisp"
for some definitions of Lisp? If Greenspun's Tenth Rule has any merit
whatsoever, then Python must surely be an "implementation of half
of Common Lisp."
You (Alex?) also worry about groups of programmers and whether what is good for the gurus will be good for the lesser lights.
If you ever hear me call anyone who is not an expert programmer
a "lesser light" then I give you -- or anyone else here -- permission
to smack me cross-side the head. The people I work for, who use
the software I write, are PhD-level chemists and biologists, who
are developing new drugs, who helped sequence the human genome,
and some of whom may in a decade or two receive the Nobel prize
for their efforts. These are not "lesser lights."
I never, ever, EVER made that claim and you are sticking words
in my mouth. You consistently and incorrectly restate others'
claims as an obviously wrong-headed viewpoint; all that does
is highlight your own false assumptions and arrogance.
Andrew da***@dalkescientific.com
In article <yS**********************@news2.tin.it>, Alex Martelli
<al***@aleax.it> wrote: If you do use the potential implied in that example from Gat, to do things that functions and classes just couldn't _begin_ to, it's worse -- then you're really designing your own private divergent language (which most posters from the Lisp camp appear to assert is an unalloyed good, although admittedly far from all).
[This may be a duplicate posting -- I wrote a response to this earlier but
it seems to have vanished into the cosmic void.]
FWIW, I do not believe that "designing your own private divergent language
.... is an unalloyed good." I do, however, believe that having the
*ability* to quickly and easily design your own private language is a Good
Thing. In practice, most "private languages" built using macros are
supersets of Lisp, so this limits the extent to which divergence matters
in practice.
(My original post went on with a rant about how risk aversion was an
impediment to progress, but I think I'll take the disappearance of my
original post as a Sign From God and just leave it at that this time
around.)
E.