Bytes IT Community

Python syntax in Lisp and Scheme

I think everyone who has used Python will agree that its syntax is
the best thing going for it. It is very readable and easy
for everyone to learn. But Python does not have very good
macro capabilities, unfortunately. I'd like to know if it may
be possible to add a powerful macro system to Python, while
keeping its amazing syntax, and if it could be possible to
add Pythonistic syntax to Lisp or Scheme, while keeping all
of the functionality and convenience. If the answer is yes,
would many Python programmers switch to Lisp or Scheme if
they were offered indentation-based syntax?
Jul 18 '05
699 Replies



"Steve VanDevender" <st****@hexadecimal.uoregon.edu> wrote in message
news:87************@localhost.efn.org...
"Terry Reedy" <tj*****@udel.edu> writes:
Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them as
unevaluated expressions. In other words, arguments may be
*implicitly* quoted. Since, unlike in Python, there is no
alternate syntax to flag the alternate argument protocol, one must,
as far as I know, memorize/learn the behavior for each function. The
syntactic unification masks but does not lessen the semantic
divergence. For me, it made learning Lisp (as far as I have gotten)
more complicated, not less, especially before I 'got' what was going
on.
What you're talking about are called "special forms"
Perhaps by you and by Schemers generally, and perhaps even by modern
Common Lispers (I have no idea) but not by Winston and Horn in LISP
(1st edition, now into 3rd): "Appendix 2: Basic Lisp Functions ... A
FSUBR takes a variable number of arguments which may not be
evaluated.", which goes on to list AND, COND, DEFUN, PROG, etcetera,
along with normal SUBR and LSUBR (variable arg number) arg-evaluating
functions.
and are definitely not functions,
That is *just* what I thought, though I mentally used the word
'pseudofunction'.
and are used when it is semantically necessary to leave
something in an argument position unevaluated (such as in 'cond' or
'if', Lisp 'defun' or 'setq', or Scheme 'define' or 'set!').
Once I understood this, I noticed that the special forms mostly
correspond to Python statements or special operators, which have the
same necessity.
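That necessity is easy to demonstrate in Python itself: an ordinary function call evaluates every argument before the call, so an if-as-a-function cannot skip the untaken branch. A minimal sketch (my_if is a made-up name, and it uses the modern conditional expression for brevity):

```python
def my_if(cond, then, els):
    # An ordinary function: both branch arguments have already been
    # evaluated by the time this body runs.
    return then if cond else els

assert my_if(True, "yes", "no") == "yes"

# A real 'if' statement never evaluates the untaken branch, but the
# function version evaluates both arguments first:
try:
    my_if(True, "safe", 1 / 0)   # 1/0 runs before my_if is even entered
    skipped_branch = True
except ZeroDivisionError:
    skipped_branch = False
assert skipped_branch is False
```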

.... Lisp-family languages have traditionally held to the notion that Lisp programs should be easily representable using the list data structure, making it easy to manipulate programs as data.


This is definitely a plus. One of my current interests is in
meta-algorithms that convert between recursive and iterative forms of
expressing 'repetition with variation' (and not just tail recursion).
Better understanding Lisp has helped my thinking about this.

Terry J. Reedy
Jul 18 '05 #101


"Shriram Krishnamurthi" <sk@cs.brown.edu> wrote in message
news:w7*************@cs.brown.edu...
"Terry Reedy" <tj*****@udel.edu> writes:
Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them as unevaluated expressions.
I'm sorry -- you appear to be hopelessly confused on this point.
Actually, I think I have just achieved clarity: the one S-expression
syntax is used for at least two different evaluation protocols -- normal
functions and special forms, which Lispers have also called FSUBR and
FEXPR *functions*. See the post by Steve VanDevender and my response
thereto.
There are no functions in Scheme whose arguments are not evaluated.
That depends on who defines 'function'. As you quoted, I said Lisp
(in general) and not Scheme specifically. I repeat my previous note:
"If anything I write below about Lisp does not apply to Scheme
specifically, my apologies in advance."
It is possible that you had a horribly confused, and therefore
confusing, Scheme instructor or text.


I will let you debate this with LISP authors Winston and Horn. I also
read the original SICP (several years ago, and have forgotten some
details) and plan to look at the current version sometime.

Terry J. Reedy
Jul 18 '05 #102

"Terry Reedy" <tj*****@udel.edu> writes:
"Shriram Krishnamurthi" <sk@cs.brown.edu> wrote in message
news:w7*************@cs.brown.edu...
"Terry Reedy" <tj*****@udel.edu> writes:
Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them as unevaluated expressions.
I'm sorry -- you appear to be hopelessly confused on this point.


Actually, I think I have just achieved clarity: the one S-expression
syntax is used for at least two different evaluation protocols -- normal
functions and special forms, which Lispers have also called FSUBR and
FEXPR *functions*. See the post by Steve VanDevender and my response
thereto.


There used to be FEXPR and FSUBRs in MacLisp, but Common Lisp never
had them. They had flags that indicated that their arguments were not
to be evaluated, but were otherwise `normal' functions.

The problem with FEXPRs is when you pass them around as first-class
values. Then it is impossible to know if any particular fragment of
code is going to be evaluated (in fact, it can dynamically change).
Needless to say, this presents problems to the compiler.

It generally became recognized that macros were a better solution.

So FSUBRs, which were primitives that did not evaluate their arguments
have been superseded by `special forms', which are syntactic constructs.

FEXPRs, which were user procedures that did not evaluate their
arguments have been superseded by macros.

Macros and special forms are generally not considered `functions'
because they are not first-class objects.
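The "not first-class" point has a direct Python analogue: functions are ordinary values, but statements (Python's rough counterpart of special forms) are not values at all. A quick illustration:

```python
import ast

# Functions are first-class objects: they can be bound and passed around.
f = len
assert f("abc") == 3

# A statement such as 'if' is not a value; binding one is rejected
# by the parser before the program ever runs:
try:
    ast.parse("f = if")
    parsed = True
except SyntaxError:
    parsed = False
assert parsed is False
```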
I will let you debate this with LISP authors Winston and Horn. I also
read the original SICP (several years ago, and have forgotten some
details) and plan to look at the current version sometime.


The original Winston and Horn book came out prior to SICP, which
itself came out prior to the creation of Common Lisp.
Jul 18 '05 #103



A.M. Kuchling wrote:
On Sun, 05 Oct 2003 12:27:47 GMT,
Kenny Tilton <kt*****@nyc.rr.com> wrote:
Python (I gather from what I read here) /deliberately/ interferes in my
attempts to conform my code to the problem at hand, because the
designers have decreed "flat is better". Python rips a tool from my
hands without asking if, in some cases (I would say most) it might be
the right tool (where an algorithm has a tree-like structure).

Oh, for Pete's sake... Python is perfectly capable of manipulating tree
structures, and claiming it "rips a tool from my hand" is simply silly.


Well then I am glad I did not say it! :)

I am talking about coding up an algorithm, not manipulating a tree of
data. An example is:

(my-function                       ;; takes three parameters, which follow
  (this-function x yz)             ;; p1
  (case x (:left 1) (:right -1))   ;; p2
  (if (some-other-function 'z)
      42
      'norwegian-blue))            ;; p3

where my-function gets passed the first two computations plus either 42
or 'norwegian-blue, ie, the value returned by the IF form.

Looks simple to me. But IIUC (I may not!) in Python IF is a statement,
so that would not work too well. I need an artificial extra statement to
satisfy an artificial rule.
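For what it's worth, later Python (2.5, well after this thread) added a conditional expression that makes the call writable without the extra statement. A sketch with made-up stand-ins for the functions in the Lisp example:

```python
# Hypothetical stand-ins for the functions in the Lisp example:
def this_function(x, yz):
    return x + yz

def some_other_function(z):
    return z == 'z'

def my_function(p1, p2, p3):          # takes three parameters
    return (p1, p2, p3)

x, yz = 'left', '-handed'
result = my_function(
    this_function(x, yz),                                   # p1
    {'left': 1, 'right': -1}[x],                            # p2, stands in for CASE
    42 if some_other_function('z') else 'norwegian-blue')   # p3, IF as an expression

assert result == ('left-handed', 1, 42)
```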

kenny

Jul 18 '05 #104

On 03 Oct 2003 18:51:38 +0100, Alexander Schmolck <a.********@gmx.net>
wrote:
I'd be interested to hear your reasons. *If* you take the sharp distinction
that python draws between statements and expressions as a given, then python's
syntax, in particular the choice to use indentation for block structure, seems
to me to be the best choice among what's currently on offer (i.e. I'd claim
that python's syntax is objectively much better than that of the C and Pascal
descendants


I'll claim C's syntax is objectively better - it has a clean
definition whereas Python's hasn't. Python isn't even consistent - it
uses whitespace some of the time and delimiters some of the time; if
it stuck to the decision to use whitespace it might be a bit less
repellent. Also Python's syntax has a whole category of pitfalls that
C's lacks.

In truth, though, my reason for being unwilling to use Python for
anything other than throwaway scripting isn't based on objective
criteria, it's based on a visceral revulsion - it just _feels_ wrong.
If Python feels right to you, then you should by all means use it.

--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
Jul 18 '05 #105

Kenny Tilton
Not?:

(defun quadratic (a b c)
  (let ((rad (sqrt (- (* b b) (* 4 a c)))))
    (mapcar (lambda (plus-or-minus)
              (/ (funcall plus-or-minus (- b) rad) (+ a a)))
            '(+ -))))

:)
Indeed no, I don't suspect most people write it this way.

Also, as Robin Becker pointed out in this branch, the above
equation fails numerically for several cases: when intermediate
terms are too large for floats, or when b*b is very
much greater than 4*a*c.
Well it was a bad example because it does require two similar
calculations which can be done /faster/ by pre-computing shared
components.
It was a bad example because it can have two return values.
But then the flattening is about performance
That's not relevant. That's why I said "possibly."
and the subject is whether deeply nested forms are in fact
simpler than flattened sequences where the algorithm
itself would be drawn as a tree.
You said your enlightenment came from analogy to
math, where writing everything as one expression is
better than writing things in different parts. I objected,
saying that math also uses ways to remove deep
hierarchies, including ways to flatten those equations.

It is not changing the subject. It is a rebuttal to one of
your justifications and an attempt at doing a comparison
with other fields, to bring in my view that there is a
balance between flat and deep, and the choice of
where the different tradeoffs are made is the essence
of the Zen of Python ... or of any other field of endeavour.
No! Those are like subroutines; they do not flatten, they create call
trees, hiding and encapsulating the details of subcomputations.
But didn't I do that when I said "root = sqrt(b*b-4*a*c)"?
Why isn't that "like [a] subroutine"?
When any local computation gets too long, there is
probably a subroutine to be carved out, or at least I can take 10 lines
and give it a nice readable name so I can avoid confronting too much
detail at any one time.
There's also common subexpressions like my 'root' which
aren't appropriate to compute as a function; that is, it's only
used twice and writing discriminant(a, b, c) is too much hiding
given what it does. But it is one where it may make sense to
use a temporary variable because it makes the expression more
readable .. at least by those who prefer reading flatter equations
than you do. Again, it's that balance thing.

I must agree that the examples weren't the best I could do.
The problem is that books have built up the nomenclature
during the preceding pages, so don't do the "let X be ..."
that you would see elsewhere. The people who do that
the most that I've seen are the fluid dynamics modellers,
so let me quote from a not atypical example I dug up

http://citeseer.nj.nec.com/cache/pap...nos.rutgers.ed
uzSz~knightzSzpaperszSzaiaa96-0040.pdf/computation-of-d-asymmetric.pdf

===============================
Nomenclature

M_infinity Mach Number
M_t Turbulent Mach Number
Re_delta Reynolds Number, Based on
Incoming Boundary Layer Thickness
delta_infinity Incoming Boundary Layer Thickness
delta^* Boundary Layer Displacement Thickness
...
The Reynolds-averaged equations for conservation of mass,
momentum and energy are

  del_t rho_bar + del_k (rho_bar u~_k) = 0

  del_t (rho_bar u~_i) + del_k (rho_bar u~_i u~_k) =
      - del_i p_bar + del_k (- avg(rho u''_i u''_k) + tau_bar_ik)

...
where del_t = del/del t, del_k = del/del x_k and the
Einstein summation convention is employed. The suffix _bar
denotes a conventional Reynolds average, while the ~
denotes the Favre mass average; avg(.) is the Reynolds
average of the bracketed product. A double superscript ''
represents fluctuations with respect to the Favre average,
while a single superscript ' stands for fluctuations with
respect to the Reynolds average.

In the above equations, rho_bar is the mean density, u~_i
is the mass-averaged velocity, p_bar is the mean pressure
and e_bar is the mass-averaged total energy per unit mass.
The following relations are employed to evaluate
p_bar and e_bar:

  p_bar = rho_bar R T~

  e_bar = c_v T~ + (1/2) u~_i u~_i + k

where k is the mass-averaged turbulence kinetic energy

  rho_bar k = (1/2) avg(rho u''_i u''_i)

===============================

All those terms, definitions, and relationships are
needed to understand the equations. Some of these are
external variables, others are simple substitutions
(like p_bar, which is rho_bar R T~) and some require
multiple substitutions (like e_bar, which uses k, which
is based on another relation).

This could all be written as one set of expressions,
without any of the substituted variables p_bar and e_bar,
but it wouldn't be done, because that would make the
structure of the equations less obvious and overly verbose.
But I don't throw away the structure of the
problem to get to simplicity.
Who said anything about throwing it away? Again,
you interpreted things in an extreme view never
advocated by anyone here, much less me.

The questions are, when does the structure get too
complicated and what substructures are present which
can be used to simplify the overall structure without
losing understanding? And how can it be presented so
that others can more easily understand what's going on?

Hmmm, Zen constrained by the details of a computing language. Some
philosophy! :) What I see in "flat is better" is the mind imposing
preferred structure on an algorithm which has its own structure
independent of any particular observer/mind.
But it's well known there are different but equivalent ways
to approach the same problem. For example, in mechanics you
can use a Newtonian approach or a Lagrangian one and you'll
end up with the same answer even though the approaches are quite
different in style. In quantum mechanics, Schrodinger's wave
equation is identical to Heisenberg's matrix formulation.
In analysis, you can use measure theory or infinitesimals to
solve the same problem. Or, as a simpler example, you can use
geometry to solve some problems more easily than you can with
algebra. (I recall a very nice, concise geometric proof of the
Pythagorean theorem.) Consider Ramanujan, who came up with very different
and powerful ways to think about problems, including alternate
ways to handle existing proofs. In computer science, all
recursive solutions can be made iterative, but there are times
when the recursive solution is easier to think about. We
know that NP-hard problems are identically hard (up to a
polynomial scaling factor) but using an algorithm for solving
a minimax problem might be more appropriate for a given task
than one meant for subgraph isomorphism, even if the two
solutions are equivalent.

By solving the problem using one of these alternatives, then
yes, your mind would impose a preferred structure to the solution.
So what? Sapir-Whorf is wrong and we can come up with
new languages and structures. It's hard to come up with new,
generally powerful solutions, but possible.
I am getting excellent results lately by always striving to conform my
code to the structure of the problem as it exists independently of me.
How can I know the structure independently of my knowing? I cannot, but
the problem will tell me if I screw up and maybe even suggest how I went
wrong.
Suppose you needed to find a given number in an unsorted list
of numbers. You know a priori that the number is in the list.
What algorithm do you use? My guess is you use a linear search,
which is O(N).

However, if you had a general purpose quantum computer available
to you then you could use Grover's algorithm and solve the problem
in O(sqrt(N)) time.

Did you even consider that alternative? Likely no, since
quantum algorithms still have little real life applicability.
(I think the biggest search so far has been in a list of 5
elements.)

If you didn't, then you must agree that your decision of how
to think of the problem and describe it on a computer is
constrained (not limited! - constraints can be broken) by
your training and the tools you have available to you.

Even if you did agree, can you prove there are no other
alternative solutions to your problem which are equally
graceful? If you say yes, I'm sure a Forth fan would disagree.

Python (I gather from what I read here) /deliberately/ interferes in my
attempts to conform my code to the problem at hand, because the
designers have decreed "flat is better". Python rips a tool from my
hands without asking if, in some cases (I would say most) it might be
the right tool (where an algorithm has a tree-like structure).


That's a false and deliberately antagonistic statement. Python
doesn't constrain you to squat - you're free to use any other
language you want to use. Python is another tool available to you,
and adding it to the universe of tools doesn't take any other one
away from you.

A programming language must balance between many and often
contrary factors. The fact that you might be good at deeply
hierarchical structures does not mean that others share your
abilities or preferences (and studies have shown that people
in general are not good at deep hierarchies - look at how
flat most user directories are) nor does it mean that every
programming language must support your favored way of thinking.

One of the balances is the expressability and power
available to a single user vs. the synergies available to
a group of people with different skills who have a consistent
worldview -- even if imposed from the outside. Ramanujan
made up his own notation for math, which has taken a lot
of time for others to decode, and so helped to reduce the impact
of his work on the rest of the world. Your viewpoint is
that there is no tradeoff, or rather that the balance point
is not much different than full single-person expressability.
Others here, including me, disagree.

In any case, your argument that mathematics equations can be
used as justification for preferring a deeply hierarchical
description has no basis in how people actually use and
express equations. If there was then FORTRAN would have
looked a lot different.

Andrew Dalke
da***@dalkescientific.com
Jul 18 '05 #106

Jason Creighton:
I just wish the Zen of Python (try "import this" on a Python interpreter
for those who haven't read it.) would make it clearer that "Explicit is
better than implicit" really means "Explicit is better than implicit _in
some cases_"


http://dictionary.reference.com/search?q=better

better
adj. Comparative of good.
1. Greater in excellence or higher in quality.

"X is better than Y" does not mean "eschew Y for X".

Andrew
da***@dalkescientific.com
Jul 18 '05 #107

Russell Wallace:
Also Python's syntax has a whole category of pitfalls that
C's lacks.
To be fair, there are a couple of syntax pitfalls Python doesn't
have that C has, like the dangling else

if (a)
    if (b)
        c++;
else            /* indented incorrectly but valid */
    c--;

and left-right token disambiguation which turns

a+++b

into

a++ + b

(in early days of C, I did a =+1 only to find that =+ was
a deprecated version of += )

or in C++ makes it harder to do double-level templates,
because the closing >> tokenizes as a right shift
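Python avoids the a+++b surprise simply by having no ++ operator: the tokens come out as a, +, +, +, b, which parses as a plus a doubly unary-plussed b. A quick check (illustrative only):

```python
# 'a+++b' tokenizes as: a, +, +, +, b  ->  a + (+(+b))
a, b = 3, 4
assert eval("a+++b") == a + b == 7
```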
If Python feels right to you, then you should by all means use it.


Yup, it fits my brain, but my brain ain't yours.

Andrew
da***@dalkescientific.com
Jul 18 '05 #108

On Sun, 05 Oct 2003 12:27:47 GMT, Kenny Tilton <kt*****@nyc.rr.com> wrote:


Andrew Dalke wrote:
Still have only made slight headway into learning Lisp since the
last discussion, so I've been staying out of this one. But

Kenny Tilton:
Take a look at the quadratic formula. Is that flat? Not. Of course
Python allows nested math (hey, how come!), but non-mathematical
computations are usually trees, too.

Since the quadratic formula yields two results, ...


I started this analogy, didn't I? <g>

I expect most
people write it more like

droot = sqrt(b*b-4*a*c) # square root of the discriminant
x_plus = (-b + droot) / (2*a)
x_minus = (-b - droot) / (2*a)


Not?:

(defun quadratic (a b c)
  (let ((rad (sqrt (- (* b b) (* 4 a c)))))
    (mapcar (lambda (plus-or-minus)
              (/ (funcall plus-or-minus (- b) rad) (+ a a)))
            '(+ -))))

:)

So cluttered ;-)
(and you evaluate a+a twice, why not (two-a (+ a a)) in the let ;-)

JFTHOI, a one-liner:

def quadr(a,b,c): return [op(-b, r)/d for r,d in [(sqrt(b*b-4*a*c),a+a)] for op in (add,sub)]

But that aside, don't you think the following 3 lines are more readable than your 5 above?

def quadratic(a,b,c):
    rad = sqrt(b*b-4*a*c); den = a+a
    return (-b+rad)/den, (-b-rad)/den

(Not tested beyond what you see below ;-)

====< quadratic.py >=========================================================
from operator import add,sub
from math import sqrt
def quadr(a,b,c): return [op(-b, r)/d for r,d in [(sqrt(b*b-4*a*c),a+a)] for op in (add,sub)]

def quadratic(a,b,c):
    rad = sqrt(b*b-4*a*c); den = a+a
    return (-b+rad)/den, (-b-rad)/den

def funs(a,b,c,q):
    def y(x): return a*x*x+b*x+c
    def invy(y): return a and q(a,b,c-y) or [(y-c)/b, None]
    return y,invy

def test():
    for q in (quadr, quadratic):
        for coeffs in [(1,2,3), (1,0,0), (0,5,0)]:
            print '\n ff(x)=%s*x*x + %s*x + %s, and inverse is using %s:'%(coeffs+(q.__name__,))
            ff,fi = funs(*(coeffs+(q,)))
            for x in range(-3,4):
                ffx = 'ff(%s)'%x; sy='%s, '%ff(x); fiy='fi(%s)'%(ffx)
                print '%8s => %6s %11s => %s'%(ffx,sy,fiy, fi(ff(x)))

if __name__ == '__main__':
    test()
=============================================================================

test result:

[16:51] C:\pywk\clp>quadratic.py

ff(x)=1*x*x + 2*x + 3, and inverse is using quadr:
ff(-3) => 6, fi(ff(-3)) => [1.0, -3.0]
ff(-2) => 3, fi(ff(-2)) => [0.0, -2.0]
ff(-1) => 2, fi(ff(-1)) => [-1.0, -1.0]
ff(0) => 3, fi(ff(0)) => [0.0, -2.0]
ff(1) => 6, fi(ff(1)) => [1.0, -3.0]
ff(2) => 11, fi(ff(2)) => [2.0, -4.0]
ff(3) => 18, fi(ff(3)) => [3.0, -5.0]

ff(x)=1*x*x + 0*x + 0, and inverse is using quadr:
ff(-3) => 9, fi(ff(-3)) => [3.0, -3.0]
ff(-2) => 4, fi(ff(-2)) => [2.0, -2.0]
ff(-1) => 1, fi(ff(-1)) => [1.0, -1.0]
ff(0) => 0, fi(ff(0)) => [0.0, 0.0]
ff(1) => 1, fi(ff(1)) => [1.0, -1.0]
ff(2) => 4, fi(ff(2)) => [2.0, -2.0]
ff(3) => 9, fi(ff(3)) => [3.0, -3.0]

ff(x)=0*x*x + 5*x + 0, and inverse is using quadr:
ff(-3) => -15, fi(ff(-3)) => [-3, None]
ff(-2) => -10, fi(ff(-2)) => [-2, None]
ff(-1) => -5, fi(ff(-1)) => [-1, None]
ff(0) => 0, fi(ff(0)) => [0, None]
ff(1) => 5, fi(ff(1)) => [1, None]
ff(2) => 10, fi(ff(2)) => [2, None]
ff(3) => 15, fi(ff(3)) => [3, None]

ff(x)=1*x*x + 2*x + 3, and inverse is using quadratic:
ff(-3) => 6, fi(ff(-3)) => (1.0, -3.0)
ff(-2) => 3, fi(ff(-2)) => (0.0, -2.0)
ff(-1) => 2, fi(ff(-1)) => (-1.0, -1.0)
ff(0) => 3, fi(ff(0)) => (0.0, -2.0)
ff(1) => 6, fi(ff(1)) => (1.0, -3.0)
ff(2) => 11, fi(ff(2)) => (2.0, -4.0)
ff(3) => 18, fi(ff(3)) => (3.0, -5.0)

ff(x)=1*x*x + 0*x + 0, and inverse is using quadratic:
ff(-3) => 9, fi(ff(-3)) => (3.0, -3.0)
ff(-2) => 4, fi(ff(-2)) => (2.0, -2.0)
ff(-1) => 1, fi(ff(-1)) => (1.0, -1.0)
ff(0) => 0, fi(ff(0)) => (0.0, 0.0)
ff(1) => 1, fi(ff(1)) => (1.0, -1.0)
ff(2) => 4, fi(ff(2)) => (2.0, -2.0)
ff(3) => 9, fi(ff(3)) => (3.0, -3.0)

ff(x)=0*x*x + 5*x + 0, and inverse is using quadratic:
ff(-3) => -15, fi(ff(-3)) => [-3, None]
ff(-2) => -10, fi(ff(-2)) => [-2, None]
ff(-1) => -5, fi(ff(-1)) => [-1, None]
ff(0) => 0, fi(ff(0)) => [0, None]
ff(1) => 5, fi(ff(1)) => [1, None]
ff(2) => 10, fi(ff(2)) => [2, None]
ff(3) => 15, fi(ff(3)) => [3, None]

Regards,
Bengt Richter
Jul 18 '05 #109



Bengt Richter wrote:
On Sun, 05 Oct 2003 12:27:47 GMT, Kenny Tilton <kt*****@nyc.rr.com> wrote:


Andrew Dalke wrote:

Still have only made slight headway into learning Lisp since the
last discussion, so I've been staying out of this one. But

Kenny Tilton:
Take a look at the quadratic formula. Is that flat? Not. Of course
Python allows nested math (hey, how come!), but non-mathematical
computations are usually trees, too.
Since the quadratic formula yields two results, ...
I started this analogy, didn't I? <g>

I expect most
people write it more like

droot = sqrt(b*b-4*a*c) # square root of the discriminant
x_plus = (-b + droot) / (2*a)
x_minus = (-b - droot) / (2*a)


Not?:

(defun quadratic (a b c)
  (let ((rad (sqrt (- (* b b) (* 4 a c)))))
    (mapcar (lambda (plus-or-minus)
              (/ (funcall plus-or-minus (- b) rad) (+ a a)))
            '(+ -))))

:)


So cluttered ;-)
(and you evaluate a+a twice, why not (two-a (+ a a)) in the let ;-)


I like to CPU-binge every once in a while. :)

JFTHOI, a one-liner:

def quadr(a,b,c): return [op(-b, r)/d for r,d in [(sqrt(b*b-4*a*c),a+a)] for op in (add,sub)]
I like it! reminds me of COBOL's perform varying from by until. DEC
Basics after Basic Plus also had statement modifiers:

print x, y for x = 1 to 3 for y = 2 to 6 step 2 until <etc>

But that aside, don't you think the following 3 lines are more readable than your 5 above?

def quadratic(a,b,c):
    rad = sqrt(b*b-4*a*c); den = a+a
    return (-b+rad)/den, (-b-rad)/den


Not bad at all. Now gaze upon true beauty:

(defun quadratic (a b c)
  (let ((rad (sqrt (- (* b b)
                      (* 4 a c)))) ;; much nicer than 4*a*c
        (den (+ a a)))
    (list (/ (+ (- b) rad) den)
          (/ (- (- b) rad) den))))

(Not tested, and without my Lisp editor, parens likely off.)

As for line counting and squeezing semantics into the smallest screen
area possible, (a) I have two monitors for a reason and (b) have you
seen the language K?

:)

kenny

Jul 18 '05 #110

Russell Wallace wrote:

OK, I'll bite --
I'll claim C's syntax is objectively better - it has a clean
definition whereas Python's hasn't.
It hasn't? What is unclean about it?
Python isn't even consistent - it
uses whitespace some of the time and delimiters some of the time;
Python uses indentation to indicate code blocks. I don't really see what is
inconsistent about it. Indentation doesn't matter inside list, dict and tuple
literals, but then again code blocks don't appear in those.
if
it stuck to the decision to use whitespace it might be a bit less
repellent. Also Python's syntax has a whole category of pitfalls that
C's lacks.


Like what? If you mean inconsistent indentation, that one bites you just the
same in other languages, just in different ways.

Curiously y'rs,

--
Hans (ha**@zephyrfalcon.org)
http://zephyrfalcon.org/

Jul 18 '05 #111

On Mon, 06 Oct 2003 01:07:32 GMT, Kenny Tilton wrote:
But that aside, don't you think the following 3 lines are more
readable than your 5 above?

def quadratic(a,b,c):
    rad = sqrt(b*b-4*a*c); den = a+a
    return (-b+rad)/den, (-b-rad)/den

Not bad at all. Now gaze upon true beauty:

(defun quadratic (a b c)
  (let ((rad (sqrt (- (* b b)
                      (* 4 a c)))) ;; much nicer than 4*a*c
        (den (+ a a)))
    (list (/ (+ (- b) rad) den)
          (/ (- (- b) rad) den))))

(Not tested, and without my Lisp editor, parens likely off.)


(defun ± (x y)
  (values (+ x y) (- x y)))

(defun quadratic (a b c)
  (± (/ (- b) (* 2 a))
     (/ (sqrt (- (* b b) (* 4 a c))) (* 2 a))))

--
Cogito ergo I'm right and you're wrong. -- Blair Houghton

(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))
Jul 18 '05 #112

(Note that I'm not usually in the habit of coming along to comp.lang.X
and posting criticism of X; I read this thread on comp.lang.lisp
without realizing a poster had set followups to this newsgroup only;
but I'll answer the questions below.)

On Sun, 05 Oct 2003 22:13:32 -0400, Hans Nowak <ha**@zephyrfalcon.org>
wrote:
Russell Wallace wrote:

OK, I'll bite --
I'll claim C's syntax is objectively better - it has a clean
definition whereas Python's hasn't.
It hasn't? What is unclean about it?


The relevant definitions in C:
A program is a stream of tokens, which may be separated by whitespace.
The sequence { (zero or more statements) } is a statement.

What's the equivalent for Python?
Python uses indentation to indicate code blocks. I don't really see what is
inconsistent about it. Indentation doesn't matter inside list, dict and tuple
literals, but then again code blocks don't appear in those.


Except that 'if', 'while' etc lines are terminated with delimiters
rather than newline. Oh, and doesn't Python have the option to use \
or somesuch to continue a regular line?
if
it stuck to the decision to use whitespace it might be a bit less
repellent. Also Python's syntax has a whole category of pitfalls that
C's lacks.


Like what? If you mean inconsistent indentation, that one bites you just the
same in other languages, just in different ways.


But in ways that are objectively less severe because:

- If the indentation is buggered up, the brackets provide the
information you need to figure out what the indentation should have
been.

- The whole tabs vs spaces issue doesn't arise.

--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
Jul 18 '05 #113

In comp.lang.functional Erann Gat <my************************@jpl.nasa.gov> wrote:
:> I can't see why a LISP programmer would even want to write a macro.
: That's because you are approaching this with a fundamentally flawed
: assumption. Macros are mainly not used to make the syntax prettier
: (though they can be used for that). They are mainly used to add features
: to the language that cannot be added as functions.

Really? Turing-completeness and all that... I presume you mean "cannot
so easily be added as functions", but even that would surprise me.
(Unless you mean cannot be added _to_Lisp_ as functions, because I don't
know as much as I'd like to about Lisp's capabilities and limitations.)

: For example, imagine you want to be able to traverse a binary tree and do
: an operation on all of its leaves. In Lisp you can write a macro that
: lets you write:
: (doleaves (leaf tree) ...)
: You can't do that in Python (or any other language).

My Lisp isn't good enough to answer this question from your code,
but isn't that equivalent to the Haskell snippet: (I'm sure
someone here is handy in both languages)

doleaves f (Leaf x) = Leaf (f x)
doleaves f (Branch l r) = Branch (doleaves f l) (doleaves f r)

I'd be surprised if Python couldn't do the above, so maybe doleaves
is doing something more complex than it looks to me to be doing.
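Indeed, the structural recursion ports straightforwardly to Python; a sketch using nested tuples as a made-up tree representation (the original posts don't fix one):

```python
# Leaves are plain values; branches are (left, right) tuples.
def doleaves(f, tree):
    if isinstance(tree, tuple):
        left, right = tree
        return (doleaves(f, left), doleaves(f, right))
    return f(tree)   # apply f at each leaf, preserving the tree's shape

tree = ((1, 2), (3, (4, 5)))
assert doleaves(lambda x: x * 10, tree) == ((10, 20), (30, (40, 50)))
```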

: Here's another example of what you can do with macros in Lisp:

: (with-collector collect
:   (do-file-lines (l some-file-name)
:     (if (some-property l) (collect l))))

: This returns a list of all the lines in a file that have some property.

OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".
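In Python the same collection is a list comprehension over the file's lines; the file object and predicate here are made-up stand-ins:

```python
import io

# Stand-ins for the file and for some-property:
some_file = io.StringIO("alpha\nbeta\ngamma\n")

def some_property(line):
    return line.startswith('a')

# "all the lines in a file that have some property":
collected = [line for line in some_file if some_property(line)]
assert collected == ["alpha\n"]
```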

: DO-FILE-LINES and WITH-COLLECTOR are macros, and they can't be implemented
: any other way because they take variable names and code as arguments.

What does it mean to take a variable-name as an argument? How is that
different to taking a pointer? What does it mean to take "code" as an
argument? Is that different to taking a function as an argument?

-Greg
Jul 18 '05 #114

gr***@cs.uwa.edu.au writes:
Really? Turing-completeness and all that... I presume you mean "cannot
so easily be added as functions", but even that would surprise me.
well you can pass around code full of lambdas so most macros (except
the ones which perform hairy source transformations) can be rewritten
as functions, but that isn't the point. Macros are about saying what
you mean in terms that makes sense for your particular app.
: Here's another example of what you can do with macros in Lisp:

: (with-collector collect
:     (do-file-lines (l some-file-name)
:       (if (some-property l) (collect l))))

: This returns a list of all the lines in a file that have some property.

OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".
no it's not, and the proof is that it wasn't written as a filter. For
whatever reason the author of that snippet decided that the code
should be written with WITH-COLLECTOR and not as a filter, some
languages give you this option, some don't, some people think this is
a good thing, some don't.
: DO-FILE-LINES and WITH-COLLECTOR are macros, and they can't be implemented
: any other way because they take variable names and code as arguments.

What does it mean to take a variable-name as an argument? How is that
different to taking a pointer? What does it mean to take "code" as an
argument? Is that different to taking a function as an argument?


You are confusing the times at which things happen. A macro is
expanded at compile time, there is no such thing as a pointer as far
as macros are concerned (more or less), macros are passed pieces of
source code in the form of lists and atoms and return _source code_ in
the form of lists and atoms. The source code is then compiled (with
further macro expansion if need be) and finally, after the macro has
long since finished working, the code is executed.

Another trivial example:

We often see code like this:

(let ((var (foo)))
  (if var
      (do-stuff-with-var)
      (do-other-stuff)))

So write a macro called IF-BIND which allows you to write this instead:

(if-bind var (foo)
    (do-stuff-with-var)
    (do-other-stuff))

The definition for IF-BIND is simply:

(defmacro if-bind (var condition then &optional else)
  `(let ((,var ,condition))
     (if ,var ,then ,else)))

But what if the condition form returns multiple values which we didn't
want to throw away? Well easy enough:

(defmacro if-bind (var condition then &optional else)
  (etypecase var
    (cons `(multiple-value-bind ,var ,condition
             (if ,(car var) ,then ,else)))
    (symbol `(let ((,var ,condition))
               (if ,var ,then ,else)))))

Notice how we use lisp to inspect the original code and decide what
code to produce depending on whether VAR is a cons or a symbol.

I could get the same effect (from an execution stand point) of if-bind
without the macro, but the source code is very different. Macros allow
me to say what I _mean_, not what the compiler wants.
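For comparison only: much later Python (3.8+) grew an assignment expression that covers the simple IF-BIND case, though without any macro layer. The names below are made up for the sketch:

```python
# Rough analogue of the simple IF-BIND using the ":=" assignment
# expression (Python 3.8+). foo is a stand-in; the multiple-value
# variant has no direct equivalent, since there is no macro layer.
def foo():
    return 42

if (var := foo()):
    result = "got %d" % var
else:
    result = "nothing"
```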

If you want more examples look in Paul Graham's OnLisp
(http://www.paulgraham.com/onlisp.html) book for the chapters on
continuations or multitasking.

--
-Marco
Ring the bells that still can ring.
Forget your perfect offering.
There is a crack in everything.
That's how the light gets in.
-Leonard Cohen
Jul 18 '05 #115

P: n/a
[cc'ed since I wasn't sure if you would be tracking the c.l.py thread]

Russell Wallace:
A program is a stream of tokens, which may be separated by whitespace.
The sequence { (zero or more statements) } is a statement.
Some C tokens may be separated by whitespace and some *must* be
separated by whitespace.

static const int i
static const inti

i + + + 1
i ++ + 1

The last case is ambiguous, so the tokenizer has some logic to
handle that -- specifically, a greedy match with no backtracking.
It throws away the ignorable whitespace and gives a stream of
tokens to the parser.
What's the equivalent for Python?
One definition is that "a program is a stream of tokens, some
of which may be separated by whitespace and some which
must be separated by whitespace." Ie, the same as my
reinterpretation of your C definition.

For a real answer, start with

http://python.org/doc/current/ref/line-structure.html
"A Python program is divided into a number of logical lines."

http://python.org/doc/current/ref/logical.html
"The end of a logical line is represented by the token NEWLINE.
Statements cannot cross logical line boundaries except where
NEWLINE is allowed by the syntax (e.g., between statements in
compound statements). A logical line is constructed from one or
more physical lines by following the explicit or implicit line joining
rules."

http://python.org/doc/current/ref/physical.html
"A physical line ends in whatever the current platform's convention
is for terminating lines. On Unix, this is the ASCII LF (linefeed)
character. On Windows, it is the ASCII sequence CR LF (return
followed by linefeed). On Macintosh, it is the ASCII CR (return)
character."

and so on.
Except that 'if', 'while' etc lines are terminated with delimiters
rather than newline. Oh, and doesn't Python have the option to use \
or somesuch to continue a regular line?
The C tokenizer turns the delimiter character into a token.

The Python tokenizer turns indentation level changes into
INDENT and DEDENT tokens. Thus, the Python parser just
gets a stream of tokens. I don't see a deep difference here.

Both tokenizers need to know enough about the respective
language to generate the appropriate tokens.
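The INDENT/DEDENT tokens are easy to observe with the standard tokenize module (the sample source here is arbitrary):

```python
# Watch the Python tokenizer turn indentation changes into INDENT and
# DEDENT tokens, so the parser sees a plain token stream.
import io
import tokenize

src = "if x:\n    y = 1\n"
names = [tokenize.tok_name[tok.type]
         for tok in tokenize.generate_tokens(io.StringIO(src).readline)]
# names contains one INDENT before "y = 1" and a matching DEDENT after it
```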
But in ways that are objectively less severe because:

- If the indentation is buggered up, the brackets provide the
information you need to figure out what the indentation should have
been.
As I pointed out, one of the pitfalls which does occur in C
is the dangling else

if (a)
if (b)
c++;
else /* indented incorrectly but valid */
c--;

That mistake does not occur in Python. I personally had
C++ code with a mistake based on indentation. I and
three other people spent perhaps 10-15 hours spread
over a year to track it down. We all knew where the bug
was supposed to be in the code, but the indentation threw
us off.
- The whole tabs vs spaces issue doesn't arise.


That's an issue these days? It's well resolved -- don't
use tabs.

And you know, I can't recall a case where it's every
been a serious problem for me. I have a couple of times
had a problem, but never such that the code actually
worked, unlike the if/else code I listed above for C.

Andrew
da***@dalkescientific.com
Jul 18 '05 #116

P: n/a
Joe Marshall <jr*@ccs.neu.edu> writes:
Alexander Schmolck <a.********@gmx.net> writes:
pr***********@comcast.net writes:
mi*****@ziplip.com writes:

> I think everyone who used Python will agree that its syntax is
> the best thing going for it.

I've used Python. I don't agree.


I'd be interested to hear your reasons. *If* you take the sharp distinction
that python draws between statements and expressions as a given, then python's
syntax, in particular the choice to use indentation for block structure, seems
to me to be the best choice among what's currently on offer (i.e. I'd claim
that python's syntax is objectively much better than that of the C and Pascal
descendants -- comparisons with smalltalk, prolog or lisp OTOH are an entirely
different matter).


(I'm ignoring the followup-to because I don't read comp.lang.python)

Indentation-based grouping introduces a context-sensitive element into
the grammar at a very fundamental level. Although conceptually a
block is indented relative to the containing block, the reality of the
situation is that the lines in the file are indented relative to the
left margin. So every line in a block doesn't encode just its depth
relative to the immediately surrounding context, but its absolute
depth relative to the global context. Additionally, each line encodes
this information independently of the other lines that logically
belong with it, and we all know that when some data is encoded in one
place it may be wrong, but it is never inconsistent.

There is yet one more problem. The various levels of indentation
encode different things: the first level might indicate that it is
part of a function definition, the second that it is part of a FOR
loop, etc. So on any line, the leading whitespace may indicate all
sorts of context-relevant information. Yet the visual representation
is not only identical between all of these, it cannot even be
displayed.


It's actually even worse than you think. Imagine you want "blank
lines" in your code, to act as paragraph separators. Do these require
indentation, even though there is no code on them? If so, how does
that interact with a listener? From what I can tell, the option chosen
in the Python (the language) community, the listener and the file
reader have different views on blank lines. This makes it harder than
necessary to edit stuff in one window and "just paste" code from
another. Bit of a shame, really.

//ingvar
--
When it doesn't work, it's because you did something wrong.
Try to do it the right way, instead.
Jul 18 '05 #117

P: n/a
Ingvar Mattsson wrote:
It's actually even worse than you think. Imagine you want "blank
lines" in your code, to act as paragraph separators. Do these require
indentation, even though there is no code on them? If so, how does
that interact with a listener? From what I can tell, the option chosen
in the Python (the language) community, the listener and the file
reader have different views on blank lines. This makes it harder than
necessary to edit stuff in one window and "just paste" code from
another. Bit of a shame, really.


Blank lines are ignored by Python.

Gerrit.

--
193. If the son of a paramour or a prostitute desire his father's
house, and desert his adoptive father and adoptive mother, and goes to his
father's house, then shall his eye be put out.
-- 1780 BC, Hammurabi, Code of Law
--
Asperger Syndroom - een persoonlijke benadering:
http://people.nl.linux.org/~gerrit/
Kom in verzet tegen dit kabinet:
http://www.sp.nl/

Jul 18 '05 #118

P: n/a
Jason Creighton wrote:
I agree with most of the rest of "The Zen of Python", except for the
"There should be one-- and preferably only one --obvious way to do it."
bit. I think it should be "There should be one, and preferably only one
, *easy* (And it should be obvious, if we can manage it) way to do it."


Of course, this rule is not to be taken literally.

a = 2
a = 1 + 1
a = math.sqrt(4)
a = int(round((sys.maxint + 1) ** (1.0/31)))

....all mean the same thing.

Gerrit.

--
243. As rent of herd cattle he shall pay three gur of corn to the
owner.
-- 1780 BC, Hammurabi, Code of Law
--
Asperger Syndroom - een persoonlijke benadering:
http://people.nl.linux.org/~gerrit/
Kom in verzet tegen dit kabinet:
http://www.sp.nl/

Jul 18 '05 #119

P: n/a
gr***@cs.uwa.edu.au writes:
In comp.lang.functional Erann Gat <my************************@jpl.nasa.gov> wrote:
:> I can't see why a LISP programmer would even want to write a macro.
: That's because you are approaching this with a fundamentally flawed
: assumption. Macros are mainly not used to make the syntax prettier
: (though they can be used for that). They are mainly used to add features
: to the language that cannot be added as functions.

Really? Turing-completeness and all that... I presume you mean "cannot
so easily be added as functions", but even that would surprise me.
(Unless you mean cannot be added _to_Lisp_ as functions, because I don't
know as much as I'd like to about Lisp's capabilities and limitations.)


IMHO, these discussions are less usefull when not accompanied by
specific examples. What are these macros good for? Some examples
where you might have difficulties with using ordinary functions:

1.) Inventing new control structures (implement lazy data structures,
implement declarative control structures, etc.)
=> This one is rarely needed in everyday application programming and
can easily be misused.

2.) Serve as abbreviation of repeating code. Ever used a code
generator? Discovered there was a bug in the generated code? Had
to fix it at a zillion places?
=> Macros serve as extremely flexible code generators, and there
is only one place to fix a bug.
=> Many Design Patterns can be implemented as macros, allowing you
to have them explicitly in your code. This makes for better
documentation and maintainability.

3.) Invent pleasant syntax in limited domains.
=> Some people don't like Lisp's prefix syntax. It's changeable if you
have macros.
=> This feature can also be misused.

4.) Do computations at compile time instead of at runtime.
=> Have heard about template metaprogramming in the C++ world?
People do a lot to get fast performance by shifting computation
to compile time. Macros do this effortlessly.

These are four specific examples which are not easy to do without
macros. In all cases, implementing them classically will lead to code
duplication with all the known maintainability issues. In some cases
misuse will lead to unreadable or buggy code. Thus, macros are
powerful tools for the hand of professionals. You have to know if you
want a sharp knife (which may hurt you when misused) or a less sharper
one (where it takes more effort to cut with).
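A small illustration of point 2 in Python terms, where the closest non-macro tool is a function-generating function; the field names are made up:

```python
# Generate a family of similar accessor functions from one template,
# so a fix to the shared pattern happens in a single place.
# Field names are illustrative.
def make_accessor(field):
    def accessor(record):
        return record[field]
    accessor.__name__ = "get_" + field
    return accessor

get_name = make_accessor("name")
get_status = make_accessor("status")
```

A macro would do the same generation at compile time (point 4); here it happens at run time.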
Jul 18 '05 #120

P: n/a
gr***@cs.uwa.edu.au writes:
What does it mean to take a variable-name as an argument? How is that
different to taking a pointer? What does it mean to take "code" as an
argument? Is that different to taking a function as an argument?


The difference is that you can declare (compilation-time) it and
associated variables or functions.

For example, I recently defined this macro, to declare at the same
time a class and a structure, and to define a couple of methods to
copy the objects to and from structures.

That's so useful that even cpp provide us with a ## operator to build
new symbols.

(DEFMACRO DEFCLASS-AND-STRUCT (NAME SUPER-CLASSES ATTRIBUTES OPTIONS)
(LET ((STRUCT-NAME (INTERN (FORMAT NIL "~A-STRUCT" NAME))))
`(PROG1
(DEFCLASS ,NAME ,SUPER-CLASSES ,ATTRIBUTES ,OPTIONS)
(DEFSTRUCT ,STRUCT-NAME
,@(MAPCAR (LAMBDA (ATTRIBUTE)
(CONS
(CAR ATTRIBUTE)
(CONS (GETF (CDR ATTRIBUTE) :INITFORM NIL)
(IF (GETF (CDR ATTRIBUTE) :TYPE NIL)
(LIST :TYPE (GETF (CDR ATTRIBUTE) :TYPE))
NIL))))
ATTRIBUTES))
(DEFMETHOD COPY-TO-STRUCT ((SELF ,NAME))
(,(INTERN (FORMAT NIL "MAKE-~A" STRUCT-NAME))
,@(MAPCAN (LAMBDA (ATTRIBUTE)
`(,(INTERN (STRING (CAR ATTRIBUTE)) "KEYWORD")
(COPY-TO-STRUCT (SLOT-VALUE SELF ',(CAR ATTRIBUTE)))))
ATTRIBUTES)))
(DEFMETHOD COPY-FROM-STRUCT ((SELF ,NAME) (STRUCT ,STRUCT-NAME))
,@(MAPCAR
(LAMBDA (ATTRIBUTE)
`(SETF (SLOT-VALUE SELF ',(CAR ATTRIBUTE))
(,(INTERN (FORMAT NIL "~A-~A"
STRUCT-NAME (CAR ATTRIBUTE))) STRUCT)))
ATTRIBUTES)
SELF)
))
);;DEFCLASS-AND-STRUCT
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Jul 18 '05 #121

P: n/a
Matthias <no@spam.pls> writes:
1.) Inventing new control structures (implement lazy data structures,
implement declarative control structures, etc.)
=> This one is rarely needed in everyday application programming and
can easily be misused.
This is, IMHO, wrong. One particular example is creating
macros (or read macros) for giving values to application-specific data
structures.
You have to know if you want a sharp knife (which may hurt you when
misused) or a less sharper one (where it takes more effort to cut
with).


It is easier to hurt yourself with a blunt knife than a sharp
one.

--
Raymond Wiker Mail: Ra***********@fast.no
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
Jul 18 '05 #122

P: n/a
On Mon, 6 Oct 2003 11:39:55 +0200, Gerrit Holl <ge****@nl.linux.org> wrote:
Ingvar Mattsson wrote:
It's actually even worse than you think. Imagine you want "blank
lines" in your code, to act as paragraph separators. Do these require
indentation, even though there is no code on them? If so, how does
that interact with a listener? From what I can tell, the option chosen
in the Python (the language) community, the listener and the file
reader have different views on blank lines. This makes it harder than
necessary to edit stuff in one window and "just paste" code from
another. Bit of a shame, really.

I think I agree somewhat. The problem is detecting end-of-chunk from the user input.
I think it would be possible to write a different listener that would accept
chunks terminated with an EOF from the user using some key binding, like a function key
or Ctl-z or Ctl-d. Then you could type away until you wanted it interpreted.
A zero length chunk followed by EOF would terminate the overall listener.

The source is open if you want to try it ;-) Let us know how it feels to use.
Alternatively, maybe interactively postponing blank-line dedent processing until the
next line would be better. Two blank lines at the end of an indented series of chunks
separated by a single blank line would be fairly natural. If there was no indent legal
on the next line, you'd process immediately, not go to ... prompt. But this is tricky
to get right.
Blank lines are ignored by Python.

You are right re the language, but the interactive command line interface (listener)
doesn't ignore them. (Its syntactical requirements are a somewhat separate issue from
Python the language, but they are part of the overall Python user experience):
>>> def foo(): print 'foo'
...
>>> foo()
foo
>>> if 1:
...     def foo(): print 'foo'
...     foo()
...
foo

Regards,
Bengt Richter
Jul 18 '05 #123

P: n/a
On Sat, 04 Oct 2003 16:48:00 GMT, <pr***********@comcast.net> wrote:
I agree that injudicious use of macros can destroy the readability of
code, but judicious use can greatly increase the readability. So
while it is probably a bad idea to write COND1 that assumes
alternating test and consequence forms, it is also a bad idea to
replicate boilerplate code because you are eschewing macros.


But it may also be a mistake to use macros for the boilerplate code when
what you really need is a higher-order function...

david rush
--
(\x.(x x) \x.(x x)) -> (s i i (s i i))
-- aki helin (on comp.lang.scheme)
Jul 18 '05 #124

P: n/a
On Sun, 5 Oct 2003 11:55:00 -0400, Terry Reedy <tj*****@udel.edu> wrote:
"Shriram Krishnamurthi" <sk@cs.brown.edu> wrote in message
"Terry Reedy" <tj*****@udel.edu> writes:
Lisp (and possibly other languages I am not familiar with) adds the
alternative of *not* evaluating arguments but instead passing them
as unevaluated expressions.
I'm sorry -- you appear to be hopelessly confused on this point.
Actually, I think I have just achieved clarity: the one S-expression
syntax is used for at least different evaluation protocols -- normal
functions and special forms, which Lispers have also called FSUBR and
FEXPR *functions*.


Well, I got it, and you raised a good point but used the wrong terminology.
To know how any particular sexp is going to be evaluated you must know
whether the head symbol is bound to either a macro or some other
(preferably
a function) value. The big difference between the CL and Scheme communities
in this respect is that Scheme requires far fewer macros because it has
first-class functions (no need for the context-sensitive funcall). So while
you have a valid point, and indeed a good reason for minimizing the number
of macros in a program, in practice this is *much* less of a problem in
Scheme.
There are no functions in Scheme whose arguments are not evaluated.


That depends on who defines 'function'.


In Scheme (and Lisp generally I suspect) function != macro for any values
of the above. Both are represented as s-expression 'forms' (which is the
correct local terminology)
"If anything I write below about Lisp does not apply to Scheme
specificly, my aplogies in advance."


No bother...

david rush
--
(\x.(x x) \x.(x x)) -> (s i i (s i i))
-- aki helin (on comp.lang.scheme)
Jul 18 '05 #125

P: n/a
David Rush <dr***@aol.net> writes:
On Sat, 04 Oct 2003 16:48:00 GMT, <pr***********@comcast.net> wrote:
I agree that injudicious use of macros can destroy the readability of
code, but judicious use can greatly increase the readability. So
while it is probably a bad idea to write COND1 that assumes
alternating test and consequence forms, it is also a bad idea to
replicate boilerplate code because you are eschewing macros.


But it may also be a mistake to use macros for the boilerplate code when
what you really need is a higher-order function...


Certainly.

One should be willing to use the appropriate tools: higher-order
functions, syntactic abstraction, and meta-linguistic abstraction
(embedding a domain-specific `tiny language' within the host
language). Macros come in handy for the latter two.

Jul 18 '05 #126

P: n/a
In article <bl**********@enyo.uwa.edu.au>, gr***@cs.uwa.edu.au wrote:
In comp.lang.functional Erann Gat <my************************@jpl.nasa.gov>
wrote:
:> I can't see why a LISP programmer would even want to write a macro.
: That's because you are approaching this with a fundamentally flawed
: assumption. Macros are mainly not used to make the syntax prettier
: (though they can be used for that). They are mainly used to add features
: to the language that cannot be added as functions.

Really? Turing-completeness and all that... I presume you mean
"cannot so easily be added as functions", but even that would
surprise me. (Unless you mean cannot be added _to_Lisp_ as
functions, because I don't know as much as I'd like to about Lisp's
capabilities and limitations.)


You know Haskell. Think about the do-notation for monads: it takes
what would be awkward, error-prone code (using >> and >>= manually)
and makes it pleasant and readable. Do-notation is basically a macro
(and can easily be expressed as such in Scheme or Lisp). Syntactic
convenience is very important; consider how many fewer programmers in
ML are willing to reach for a monadic solution, even when it would be
appropriate. Or for that matter, think how many fewer Java programmers
are willing to write a fold than in ML or Haskell, even when it would
be appropriate.
--
Neel Krishnaswami
ne***@cs.cmu.edu
Jul 18 '05 #127

P: n/a
On Mon, Oct 06, 2003 at 10:19:46AM +0000, gr***@cs.uwa.edu.au wrote:
In comp.lang.functional Marco Baringer <mb@bese.it> wrote:
: gr***@cs.uwa.edu.au writes:
:> Really? Turing-completeness and all that... I presume you mean "cannot
:> so easily be added as functions", but even that would surprise me.

: well you can pass around code full of lambdas so most macros (expect
: the ones which perform hairy source transformations) can be rewritten
: as functions, but that isn't the point. Macros are about saying what
: you mean in terms that makes sense for your particular app.

OK, so in some other application, they might allow you to extend the
syntax of the language to encode some problem domain more naturally?
Right.

:> OK, that's _definitely_ just a filter:
: no it's not, and the proof is that it wasn't written as a filter.

He was saying that this could not be done in Python, but Python has
a filter function, AFAIK.
He meant the way it was expressed. Java can ``do'' it too, but it's not
going to look as simple.

: For whatever reason the author of that snippet decided that the code
: should be written with WITH-COLLECTOR and not as a filter, some
: languages give you this option, some don't, some people think this is
: a good thing, some don't.

Naturally. I'm against extra language features unless they increase
the expressive power, but others care more for ease-of-writing and
less for ease-of-reading and -maintaining than I do.
Then you should like macros, because ease-of-reading and -maintaining is
precisely why I use them. Like with functions, being able to label
common abstractions is a great maintainability boost.

You don't write ((lambda (x) ...) 1) instead of (let ((x 1)) ...), right?
:> : DO-FILE-LINES and WITH-COLLECTOR are macros, and they can't be implemented
:> : any other way because they take variable names and code as arguments.
:> What does it mean to take a variable-name as an argument? How is that
:> different to taking a pointer? What does it mean to take "code" as an
:> argument? Is that different to taking a function as an argument?
: You are confusing the times at which things happen. A macro is
: expanded at compile time,

OK, yep. It should have occurred to me that that was the difference.
So now the question is "what does that give you that higher-order
functions don't?".
: Another trivial example:
: <IF-BIND>

: Macros allow me to say what I _mean_, not what the compiler wants.

Interesting. It would be interesting to see an example where it allows
you to write the code in a less convoluted way, rather than the three
obfuscating (or would the non-macro Lisp versions be just as obfuscated?
I know Lisp is fine for higher-order functions, but I guess the IF-BIND
stuff might be hard without pattern-matching.) examples I've seen so far.


Here's an example that I am currently using:

(definstruction move ((s register) (d register))
  :sources (s)
  :destinations (d)
  :template "movl `s0, `d0"
  :class-name move-instruction)

1. This expands into a
(defmethod make-instruction-move ((s register) (d register)) ...)
which itself is called indirectly, but most importantly it allows the
compiler to compile a multiple-dispatch method statically rather than
trying to replicate that functionality at runtime (which would require
parsing a list of parameters supplied by the &rest lambda-list keyword,
not to mention implementing multiple-dispatch).

2. Sources and destinations can talk about variable names rather than indices
into a sequence. (Templates cannot because they need the extra layer of
indirection--the source and destination lists are subject to change in
this system currently. Well, I suppose it could be worked out anyway,
perhaps if I have time I will try it).

3. Originally I processed the templates at run-time, and upon profiling
discovered that it was the most time-consuming function by far. I modified
the macro to process the template strings statically and produce a
function which could be compiled with the rest of the code, and the
overhead completely disappeared. I can imagine a way to do this with
functions: collect a list of functions which take the relevant values
as arguments, then map over them and apply the results to format.
This is less efficient because
(a) you need to do some extra steps, which the macro side-steps by
directly pasting the code into the proper place, and
(b) FORMATTER is a macro which lets you compile a format string into
a function, and this cannot be used in the functional version,
since you cannot say (FORMATTER my-control-string) but must supply
a string statically, as in (FORMATTER "my control string").
Could FORMATTER be implemented functionally? Probably, but either
you require the use of the Lisp compiler at run-time, which is
certainly possible though heavyweight usually, or you write a
custom compiler for that function. If you haven't figured it out
yet, Lispers like to leverage existing resources =)

4. The macro arranges all the relevant information about a machine instruction
in a simple way that is easy to write even if you don't understand
the underlying system. If you know anything about assembly language,
it is probably pretty easy to figure out what information is being encoded.
Here's another fun macro which I've been using as of yesterday afternoon,
courtesy of Faré Rideau:

(match s
...
((ir move (as temp (ir temp _)) b)
(reorder-stm (list b)
(mlambda ((list b) ;; match + lambda
(make-ir-move temp b)))))
...)

MATCH performs ML/Erlang-style pattern matching with a Lispy twist: patterns
are of the form: literal, variable, or (designator ...) where designator is a
symbol specified by some defining construct.

I wrote this to act like the ML `as' [meta-?]pattern:

(define-macro-matcher as
;; I think this lambda should be folded into the macro, but whatever
#'(lambda (var pat)
(multiple-value-bind (matcher vars)
(pattern-matcher pat)
(values `#'(lambda (form)
(m%and (funcall ,matcher form)
(setf ,var form)))
(merge-matcher-variables (list vars (list var)))))))

for example, which at macro-expansion time computes the pattern-matching code
of the pat argument, adds var to the list of variables (used by MATCH), and
creates a function which first checks the pattern and then sets the supplied
(lexical) variable var to the value of the form at this point. Calling
PATTERN-MATCHER yourself is quite enlightening on this:

* (pattern-matcher '(as x 1))
#'(LAMBDA (FORM)
(M%AND (FUNCALL #'(LAMBDA (#:FORM) (M%WHEN (EQL #:FORM '1))) FORM)
(SETF X FORM)))
(X)

MATCH (really implemented in terms of IFMATCH) computes this at macro-expansion
and the Lisp statically compiles it afterwards. Of course, MATCH could be
implemented functionally, but consider the IR matcher that
(a) looks up the first parameter in a table (created by another macro) to
see if it is a valid IR type and get the slot names
(b) which are used to create slot-accessing forms that can be optimized
by a decent CLOS implementation when the slot-name is a literal value
(as when constructed by the macro, something a functional version
could not do).

Not to mention that a functional version would have to look something like:

(match value
'(pattern involving x, y, and z) #'(lambda (x y z) ...)
... ...)

Rather annoying, don't you think? The variables need to be repeated.

The functional version would have to create some kind of structure to hold the
bound variable values and construct a list to apply the consequent function
with. The macro version can get away with modifying lexical variables.

Also the macro version can be extended to support multiple value forms, which
in Lisp are not first-class (but more efficient than returning lists).
A third quick example:

(ir-sequence
  (make-ir-move ...)
  (make-ir-jump ...)
  ...)

Which transforms a list of values into a list-like data structure. I wrote
this originally as a macro, because in my mind it was a static transformation.
I realized later that it could be implemented as a function, without changing
any uses, but I didn't because
(a) I wasn't using it with higher-order functions, or situations demanding
them.
(b) It would now have to cons a list every call and do the transformation;
added overhead and the only gain being that it was now a function which
I never even used in a functional way. Rather questionable.
(c) I can always write a separate functional version if I need it.
Basically, this boils down to:

* Macros can do ``compile-time meta-programming'' or whatever the buzzword
is these days, and those above are some real-life examples.
This allows for compiler optimization and static analysis where desired.
* Macros make syntax much more convenient and less cluttered. I really
don't understand the people who think that macros make things harder to read.
It is far better to have clear labelled markers in the source code rather
than having to sort through boilerplate to figure out the intended meaning
of code. Just because I understand lambda calculus doesn't mean I want to
sort through nested lambdas just to use some functionality, every time.
If you are afraid because you are unsure of what the macro does, or its
complete syntax, MACROEXPAND-1 is your friend. That, and an editor with
some hot-keys to find source/docs/expansion/etc.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #128

P: n/a
Another example:

Given a set of files containing database data, I want to create

- classes that represent (the interesting bits of) each table

- functions that parse lines from the files, create instances
of a given class, and returns the instance along with the "primary
key" of the instance.

The interface to this functionality is the macro

(defmacro define-record (name key args &body body)
...)

which I use like this:

(define-record category oid ((oid 0 integer)
                             (name 1 string)
                             (status 4 integer)
                             (deleted 5 integer)
                             (parent 9 integer)
                             active)
  (unless (zerop deleted)
    (return t)))

which expands into

(PROGN
(DEFCLASS CATEGORY-CLASS
NIL
((OID :INITARG :OID) (NAME :INITARG :NAME) (STATUS :INITARG :STATUS)
(DELETED :INITARG :DELETED) (PARENT :INITARG :PARENT)
(ACTIVE :INITARG :ACTIVE)))
(DEFUN HANDLE-CATEGORY (#:LINE-2524)
(WHEN #:LINE-2524
(LET ((#:FIELDS-2525
(SPLIT-LINE-COLLECT #:LINE-2524
'((0 . INTEGER) (1 . STRING) (4 . INTEGER)
(5 . INTEGER) (9 . INTEGER)))))
(WHEN #:FIELDS-2525
(PROGV
'(OID NAME STATUS DELETED PARENT)
#:FIELDS-2525
(BLOCK NIL
(LET (ACTIVE)
(UNLESS (ZEROP DELETED) (RETURN T))
(VALUES OID
(MAKE-INSTANCE 'CATEGORY-CLASS
:OID
OID
:NAME
NAME
:STATUS
STATUS
:DELETED
DELETED
:PARENT
PARENT
:ACTIVE
ACTIVE))))))))))

The implementation of this macro is probably not perfect (I've
learnt more about Common Lisp since I wrote it). This is OK, since I
can go back and change the innards of the macro whenever I want to :-)
Actually, this is probably something that calls for the use of MOP.
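For comparison with the thread's theme, here is a rough Python analogue of the record-definition idea: a runtime factory (invented names; none of the macro's compile-time expansion, type coercion, or PROGV machinery) that builds a class and a line-parsing function from a field specification:

```python
def define_record(name, fields):
    """Build a record class plus a line parser from (attr, column) pairs.

    A runtime sketch of the macro's effect; columns are simply
    whitespace-separated and no type conversion is attempted.
    """
    cls = type(name, (), {})

    def handle(line):
        if not line:
            return None
        cols = line.split()
        obj = cls()
        for attr, col in fields:
            setattr(obj, attr, cols[col])
        return obj

    return cls, handle

# Hypothetical usage mirroring the category table above:
Category, handle_category = define_record("Category", [("oid", 0), ("name", 1)])
rec = handle_category("42 books")
```

Unlike the macro, everything here happens at run time, and the field names are plain strings rather than bindings visible to a body of code.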

--
Raymond Wiker Mail: Ra***********@fast.no
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
Jul 18 '05 #129

P: n/a
In article <bl**********@enyo.uwa.edu.au>, gr***@cs.uwa.edu.au wrote:
In comp.lang.functional Erann Gat <my************************@jpl.nasa.gov> wrote:
:> I can't see why a LISP programmer would even want to write a macro.
: That's because you are approaching this with a fundamentally flawed
: assumption. Macros are mainly not used to make the syntax prettier
: (though they can be used for that). They are mainly used to add features
: to the language that cannot be added as functions.

Really? Turing-completeness and all that... I presume you mean "cannot
so easily be added as functions", but even that would surprise me.
No, I meant what I wrote. Turing-completeness is a red herring with
respect to a discussion of programming language features. If it were not
then there would be no reason to program in anything other than machine
language.

: For example, imagine you want to be able to traverse a binary tree and do
: an operation on all of its leaves. In Lisp you can write a macro that
: lets you write:
: (doleaves (leaf tree) ...)
: You can't do that in Python (or any other language).

My Lisp isn't good enough to answer this question from your code,
but isn't that equivalent to the Haskell snippet: (I'm sure
someone here is handy in both languages)

doleaves f (Leaf x) = Leaf (f x)
doleaves f (Branch l r) = Branch (doleaves f l) (doleaves f r)

I'd be surprised if Python couldn't do the above, so maybe doleaves
is doing something more complex than it looks to me to be doing.
You need to change your mode of thinking. It is not that other languages
cannot do what doleaves does. It is that other languages cannot do what
doleaves does in the way that doleaves does it, specifically allowing you
to put the code of the body in-line rather than forcing you to construct a
function.

Keep in mind also that this is just a trivial example. More sophisticated
examples don't fit well in newsgroup postings. Come see my ILC talk for
an example of what you can do with macros in Lisp that you will find more
convincing.
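A Python analogue makes the contrast concrete: the traversal itself is easy to write, but the per-leaf work must arrive as a function object, whereas the macro lets the body appear inline at the call site. (A toy sketch with tuples as interior nodes; names are my own.)

```python
def doleaves(fn, tree):
    # Interior nodes are tuples; anything else is a leaf.
    if isinstance(tree, tuple):
        return tuple(doleaves(fn, t) for t in tree)
    return fn(tree)

doubled = doleaves(lambda x: x * 2, ((1, 2), 3))
```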
: Here's another example of what you can do with macros in Lisp:

: (with-collector collect
: (do-file-lines (l some-file-name)
: (if (some-property l) (collect l))))

: This returns a list of all the lines in a file that have some property.

OK, that's _definitely_ just a filter: filter someproperty somefilename
Perhaps throw in a fold if you are trying to abstract "collect".
The net effect is a filter, but again, you need to stop thinking about the
"what" and start thinking about the "how", otherwise, as I said, there's
no reason to use anything other than machine language.
: DO-FILE-LINES and WITH-COLLECTOR are macros, and they can't be implemented
: any other way because they take variable names and code as arguments.

What does it mean to take a variable-name as an argument? How is that
different to taking a pointer? What does it mean to take "code" as an
argument? Is that different to taking a function as an argument?


These questions are answered in various books. Go seek them out and read
them. Paul Graham's "On Lisp" is a good place to start.

E.
Jul 18 '05 #130

P: n/a
|> def posneg(filter,iter):
|>     results = ([],[])
|>     for x in iter:
|>         results[not filter(x)].append(x)
|>     return results
|> collect_pos,collect_neg = posneg(some_property, some_file_name)

Pascal Costanza <co******@web.de> wrote previously:
|What about dealing with an arbitrary number of filters?

Easy enough:

def categorize_exclusive(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        for n, filter in enumerate(filters):
            if filter(x):
                results[n].append(x)
                break
    return results

Or if you want to let things fall in multiple categories:

def categorize_inclusive(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        for n, filter in enumerate(filters):
            if filter(x):
                results[n].append(x)
    return results

Or if you want something to satisfy ALL the filters:

def categorize_compose(filters, iter):
    results = tuple([[] for _ in range(len(filters))])
    for x in iter:
        results[compose(filters)(x)].append(x)
    return results

The implementation of 'compose()' is left as an exercise to readers :-).
Or you can buy my book, and read the first chapter.
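As a sanity check, here is a self-contained, runnable spelling of the exclusive variant (with `range()` around `len()`, and names chosen to avoid shadowing the built-ins `filter` and `iter`):

```python
def categorize_exclusive(predicates, items):
    # One bucket per predicate; each item goes to the first predicate
    # that accepts it, and is dropped if none do.
    results = tuple([] for _ in range(len(predicates)))
    for x in items:
        for n, pred in enumerate(predicates):
            if pred(x):
                results[n].append(x)
                break
    return results

evens, rest = categorize_exclusive(
    [lambda x: x % 2 == 0, lambda x: True], range(5))
```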

Yours, David...

--
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons. Intellectual
property is to the 21st century what the slave trade was to the 16th.
--
Buy Text Processing in Python: http://tinyurl.com/jskh

Jul 18 '05 #131


P: n/a


Hannu Kankaanpää wrote:
The problem with the example arises from the fact that indentation
is used for human readability, but parens are used by the parser.
And by the editor, meaning the buggy code with the extra parens had not
been written in a parens-aware editor (or the coder had stuck a parens
on without kicking off a re-indentation).
A clash between these two representations can lead to subtle bugs
like this one. But remove one of the representations, and there
can't be clashes.


No need. Better yet, with parens you do not have to do the indentation
yourself, you just have to look at what you are typing. Matching parens
highlight automatically as you close up nested forms (it's kinda fun
actually), and then a single key chord re-indents (if you have been
refactoring and things now belong at diff indentation levels).

I used to spend a /lot/ of time on indentation (in other languages). No
more. That is just one of the advantages of all those parentheses.

kenny

Jul 18 '05 #133

P: n/a

Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:
The main disadvantage of macros is that they either force the syntax
to look like Lisp, or the way to present code snippets to macros is
complicated (Template Haskell), or they are as limited as C preprocessor
(which can't examine parameters).

I find the Lisp syntax hardly readable when everything looks alike,
mostly words and parentheses, and when every level of nesting requires
parens. I understand that it's easier to work with by macros, but it's
harder to work with by humans like me.


I don't understand what you're complaining about.

When you have macros such as loop that allow you to write stuff like:

(loop for color in '(blue white red)
      with crosses = :crosses
      collect (rgb color) into rgb-list
      maximize (blue-component color) into max-blue
      until (color-pleases-user color)
      finally return (values color rgb-list max-blue))

where are the parentheses at EVERY level you're complaining about?
where is the lisp-like syntax?
Lisp is not commie-stuff, nobody forces you to program your macros
following any hypothetical party line.

--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Jul 18 '05 #134

P: n/a
My answer sucked in a couple ways.

(1) As Bengt Ricther pointed out up-thread, I should have changed David
Eppstein's names 'filter' and 'iter' to something other than the
built-in names.

(2) The function categorize_compose() IS named correctly, but it doesn't
DO what I said it would. If you want to fulfill ALL the filters, you
don't compose them, but... well, 'all()' them:

| def categorize_jointly(preds, it):
|     results = [[] for _ in range(len(preds))]
|     for x in it:
|         results[all(preds)(x)].append(x)
|     return results

Now if you wonder what the function 'all()' does, you could download:

http://gnosis.cx/download/gnosis/util/combinators.py

But the relevant part is:

from operator import mul, add, truth
apply_each = lambda fns, args=[]: map(apply, fns, [args]*len(fns))
bools = lambda lst: map(truth, lst)
bool_each = lambda fns, args=[]: bools(apply_each(fns, args))
conjoin = lambda fns, args=[]: reduce(mul, bool_each(fns, args))
all = lambda fns: lambda arg, fns=fns: conjoin(fns, (arg,))

For 'lazy_all()', look at the link.
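For concreteness, the conjunction trick can be checked with a small self-contained sketch (the two-bucket layout and names are my own, not from the post):

```python
from functools import reduce
from operator import mul, truth

# reduce(mul, ...) over truth values yields 1 only when every
# predicate holds -- the same idea as conjoin/all above.
def conjoin(preds, x):
    return reduce(mul, (truth(p(x)) for p in preds), 1)

def categorize_jointly(preds, items):
    # Two buckets: index 0 = failed some predicate, 1 = satisfied all.
    buckets = ([], [])
    for x in items:
        buckets[conjoin(preds, x)].append(x)
    return buckets

failed, passed = categorize_jointly(
    [lambda x: x > 0, lambda x: x % 2 == 0], [-2, 1, 2, 4])
```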

See, Python is Haskell in drag.

Yours, David...

--
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons. Intellectual
property is to the 21st century what the slave trade was to the 16th.

Jul 18 '05 #135

P: n/a
Joe Marshall <jr*@ccs.neu.edu> writes:

Alexander Schmolck <a.********@gmx.net> writes:
pr***********@comcast.net writes: (I'm ignoring the followup-to because I don't read comp.lang.python)


Well, I supposed this thread has spiralled out of control already anyway:)
Indentation-based grouping introduces a context-sensitive element into
the grammar at a very fundamental level. Although conceptually a
block is indented relative to the containing block, the reality of the
situation is that the lines in the file are indented relative to the
left margin. So every line in a block doesn't encode just its depth
relative to the immediately surrounding context, but its absolute
depth relative to the global context.
I really don't understand why this is a problem, since it's trivial to
transform python's 'globally context' dependent indentation block structure
markup into C/Pascal-style delimiter pair block structure markup.

Significantly, AFAICT you can easily do this unambiguously and *locally*, for
example your editor can trivially perform this operation on cutting a piece of
python code and its inverse on pasting (so that you only cut-and-paste the
'local' indentation). Prima facie I don't see how you lose any fine control.
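The claimed transformation is indeed easy to sketch (a toy version under my own names and assumptions: spaces-only indentation, no continuation lines or multi-line strings):

```python
def indent_to_braces(source):
    """Rewrite indentation-delimited blocks with explicit braces."""
    out, stack = [], [0]
    for line in source.splitlines():
        if not line.strip():
            continue
        depth = len(line) - len(line.lstrip(" "))
        if depth > stack[-1]:               # a block opened on the previous line
            out.append(" " * stack[-1] + "{")
            stack.append(depth)
        while depth < stack[-1]:            # one or more blocks closed
            stack.pop()
            out.append(" " * stack[-1] + "}")
        out.append(line)
    while len(stack) > 1:                   # close any blocks still open at EOF
        stack.pop()
        out.append(" " * stack[-1] + "}")
    return "\n".join(out)

braced = indent_to_braces("if x:\n    f(x)\n    g(x)\nh()")
```

The inverse (drop the brace lines, keep the indentation) is equally local, which is the point being made.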
Additionally, each line encodes this information independently of the other
lines that logically belong with it, and we all know that data
encoded in one place may be wrong, but it is never inconsistent.
Sorry, I don't understand this sentence, but maybe you mean that the potential
inconsistency between human and machine interpretation is a *feature* for Lisp,
C, Pascal etc!? If so I'm really puzzled.
There is yet one more problem. The various levels of indentation encode
different things: the first level might indicate that it is part of a
function definition, the second that it is part of a FOR loop, etc. So on
any line, the leading whitespace may indicate all sorts of context-relevant
information.
I don't understand why this is any different to e.g. ')))))' in Lisp. The
closing ')' for DEFUN just looks the same as that for IF.
Yet the visual representation is not only identical between all of these, it
cannot even be displayed.
I don't understand what you mean. Could you maybe give a concrete example of
the information that can't be displayed? AFAICT you can have 'sexp'-movement,
markup and highlighting commands all the same with whitespace delimited block
structure.

Is this worse than C, Pascal, etc.? I don't know.
I'm pretty near certain it is better: In Pascal, C etc. by and large block
structure delimitation is regulated in such a way that what has positive
information content for the human reader/programmer (indentation) has zero to
negative information content for the compiler and vice versa. This is a
remarkably bad design (and apart from cognitive overhead obviously also causes
errors).

Python removes this significant problem, at, as far as I'm aware, no real cost
and plenty of additional gain (less visual clutter, no waste of delimiter
characters ('{','}') or introduction of keywords that will be sorely missed as
user-definable names ('begin', 'end')).

In Lisp the situtation isn't quite as bad, because although most of the parens
are of course mere noise to a human reader, not all of them are and because of
lisp's simple but malleable syntactic structure a straightforward replacement
of parens with indentation would obviously result in unreadable code
(fragmented over countless lines and mostly past the 80th column :).

So unlike C and Pascal where a fix would be relatively easy, you would need
some more complicated scheme in the case of Lisp and I'm not at all sure it
would be worth the hassle (especially given that efforts in other areas would
likely yield much higher gains).

Still, I'm sure you're familiar with the following quote (with which I most
heartily agree):

"[P]rograms must be written for people to read, and only incidentally for
machines to execute."

People can't "read" '))))))))'.
Worse than Lisp, Forth, or Smalltalk? Yes.


Possibly, but certainly not due to the use of significant whitespace.
'as
Jul 18 '05 #136

P: n/a
Marco Antoniotti <ma*****@cs.nyu.edu> writes:
The best choice for code indentation in any language is M-C-q in Emacs.


You mean C-c> and C-c<.

'as
Jul 18 '05 #137

P: n/a
Marco Antoniotti <ma*****@cs.nyu.edu> writes:
Why do I feel like crying? :{


Could it be because you've actually got some rational argument against
significant whitespace a la python?!

'as
Jul 18 '05 #138

P: n/a
james anderson <ja************@setf.de> wrote:
Matthias Blume wrote:
Most of the things that macros can do can be done with HOFs with just
as little source code duplication as with macros. (And with macros
only the source code does not get duplicated, the same not being true
for compiled code. With HOFs even executable code duplication is
often avoided -- depending on compiler technology.)

is there no advantage to being able to do either - or both - as the
occasion dictates?
I can't parse this sentence, but of course you can also use HOFs in Lisp
(all flavours). The interesting part is that most Lisp'ers don't seem
to use them, or even to know that you can use them, and use macros instead.

The only real advantage of macros over HOFs is that macros are guaranteed
to be executed at compile time. A good optimizing compiler (like GHC
for Haskell) might actually also evaluate some expressions including
HOFs at compile time, but you have no control over that.
i'd be interested to read examples of things which are better done
with HOF features which are not available in CL.


HOFs can of course be used directly in CL, and you can use macros to
do everything one could use HOFs for (if you really want).

The advantage of HOFs over macros is simplicity: You don't need additional
language constructs (which may be different even for different Lisp
dialects, say), and other tools (like type checking) are available for
free; and the programmer doesn't need to learn an additional concept.

- Dirk
Jul 18 '05 #139

P: n/a
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:
I find the Lisp syntax hardly readable when everything looks alike,
mostly words and parentheses, and when every level of nesting requires
parens. I understand that it's easier to work with by macros, but it's
harder to work with by humans like me.


You find delimited words more difficult than symbols? For literate
people who use alphabet-based languages, I find this highly suspect.
Maybe readers of only ideogram languages might have different
preferences, but we are writing in English here...

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Jul 18 '05 #140

P: n/a
On Mon, 06 Oct 2003 17:12:18 GMT, Alex Martelli <al***@aleax.it>
wrote:
Imagine a group of, say, a dozen programmers, working together by
typical Agile methods to develop a typical application program of
a few tens of thousands of function points -- developing about
100,000 new lines of delivered code plus about as much unit tests,
and reusing roughly the same amount of code from various libraries,
frameworks, packages and modules obtained from the net and/or from
commercial suppliers. Nothing mind-boggling about this scenario,
surely -- it seems to describe a rather run-of-the-mill case.

Now, clearly, _uniformity_ in the code will be to the advantage
of the team and of the project it develops. Extreme Programming
makes a Principle out of this (no "code ownership"), but even if
you don't rate it quite that highly, it's still clearly a good
thing. Now, you can impose _some_ coding uniformity (within laxer
bounds set by the language) _for code originally developed by the
team itself_ by adopting and adhering to team-specific coding
guidelines; but when you're reusing code obtained from outside,
and need to adopt and maintain that code, the situation is harder.
Either having that code remain "alien", by allowing it to break
all of your coding guidelines; or "adopting" it thoroughly by,
in practice, rewriting it to fit your guidelines; is a serious
negative impact on the team's productivity.


Alex, this is pure unmitigated nonsense. Python's metaclasses are
far more dangerous than macros. Metaclasses allow you to globally
change the underlying semantics of a program. Macros only allow you
to locally change the syntax. Your comparison is spurious at best.
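A small sketch of that "globally change the semantics" claim (invented names, and modern Python 3 spelling rather than the `__metaclass__` form current in 2003): a metaclass that silently rewrites every method of any class defined with it.

```python
calls = []

class Recording(type):
    # Wrap each callable attribute so every call is logged --
    # the class author's code runs under changed semantics.
    def __new__(mcls, name, bases, namespace):
        for key, value in list(namespace.items()):
            if callable(value) and not key.startswith("__"):
                namespace[key] = Recording._wrap(key, value)
        return super().__new__(mcls, name, bases, namespace)

    @staticmethod
    def _wrap(key, fn):
        def wrapper(*args, **kwargs):
            calls.append(key)          # the silent, global side effect
            return fn(*args, **kwargs)
        return wrapper

class Greeter(metaclass=Recording):
    def hello(self):
        return "hi"

result = Greeter().hello()
```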

Your argument simply shows a serious mis-understanding of Macros.
Macros as has been stated to you *many* times are similar to
functions. They allow a certain type of abstraction to remove
extraneous code.

Based on your example you should be fully campaigning against
Metaclasses, FP constructs in python and Functions as first class
objects. All of these things add complexity to a given program,
however they also reduce the total number of lines. Reducing program
length is to date the only effective method I have seen of reducing
complexity.

If you truly believe what you are saying, you really should be
programming in Java. Everything is explicit, and most if not all of
these powerful constructs have been eschewed, because programmers are
just too dumb to use them effectively.
Doug Tolton
(format t "~a@~a~a.~a" "dtolton" "ya" "hoo" "com")
Jul 18 '05 #141

P: n/a
On Tue, 07 Oct 2003 21:59:11 +0200, Pascal Bourguignon wrote:
When you have macros such as loop that allow you to write stuff like:

(loop for color in '(blue white red)

[...]

Well, some people say the "loop" syntax is not very lispish - it's unusual
that it uses many words and few parentheses. It still uses only words and
parentheses, no other punctuation, and it introduces one pair of parentheses
for its one nesting level.

A richer alphabet is often more readable. Morse code can't be read as fast
as Latin alphabet because it uses too few different symbols. Japanese say
they won't abandon Kanji because it's more readable as soon as you know it -
you don't have to compose words from many small pieces which look alike
but each word is distinct. Of course *too* large alphabet requires long
learning and has technical difficulties, but Lisp expressions are too
little distinctive for my taste.

I know I can implement infix operators with Lisp macros, but I even don't
know how they feel because nobody uses them (do I have to explicitly open
infix region and explicitly escape from it to regular syntax?), and
arithmetic is not enough. All Lisp code I've read uses lots of parentheses
and they pile up at the end of each large subexpression so it's hard to
match them (an editor is not enough, it won't follow my eyes and won't
work with printed code).

Syntax is the thing I like the least in Lisp & Scheme.

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #142

P: n/a
I was never very fond of lisp. I guess I mean scheme technically, I
took the Abelson and Sussman course back in college, so that's what I
learned of scheme, lisp in general I've mostly used embedded in other
things. In general, it always seemed to me that a lot of the design
choices in lisp are driven more by elegance and simplicity than
usability. When it comes to programming languages, I really want the
language to be a good tool, and to do as much of the work for me as
possible. Using parentheses and rpn everywhere makes lisp very easy
to parse, but I'd rather have something easy for me to understand and
hard for the computer to parse. (Not to mention car, cdr, cadr, and
so on vs. index notation, sheesh.) That's why I prefer python, you
get a nice algebraic syntax with infix and equal signs, and it's easy
understand. Taking out ';' at the ends of lines and indenting for
blocks helps me by removing the clutter and letting me see the code.
And yes, I'm sure you can write macros in lisp to interpret infix
operators and indexing and whatever you want, but learning a core
language that's wildly non-intuitive so that I can make it more
intuitive never seemed like a good use of my time. Python is
intuitive to me out of the box, and it just keeps getting better, so I
think I'll stick with it.
Jul 18 '05 #143

P: n/a
On 7 Oct 2003 12:59:07 -0700, ha******@yahoo.com.au (Hannu Kankaanpää)
wrote:
pr***********@comcast.net wrote in message news:<8y**********@comcast.net>...
ha******@yahoo.com.au (Hannu Kankaanpää) writes:
> So getting to the point, doesn't the example show that indentation
> is in fact a *good* choice for block delimiting? It's an alternative that
> wouldn't have the subtle bug you introduced here. So how can you
> claim that this example shows indentation is a poor choice for block
> delimiting? It doesn't make sense.
The point was that even though I screwed up the indentation, it was
easily discovered and repaired with Emacs. If the program were
whitespace-sensitive, then the screwed-up version would be
mechanically indistinguishable from the intended one.

How do you "screw up indentation"? Actually, being more of a
Python fan, I thought you had screwed up parens, since indentation
is absolute for me :). Anyway, do you just go to some line and start
pressing spacebar randomly, and not notice this? Maybe
you do need a safety net, then.

Maybe by coding in an environment with multiple programmers?
Alex was saying that Python's syntax and simplicity is so much better
for large scale projects with multiple programmers. With just 4
programmers using Python at work we have problems with indentation
levels. One guy (me) uses emacs, another uses ultra-edit, another
uses Boa and the other uses PythonWin. Indenting can be a major
source of compilation bugs.
But the whitespace already has all the structural information needed.
The whitespace doesn't just disappear suddenly, much like parens don't
magically disappear or mutate.

There can be problems in cutting and pasting code if one mixes tabs
and spaces, like Ingvar said, but generally the rule in Python is to
use 4 spaces for each indentation level. If you break this rule,
you know you had it coming when something fails ;).


Just because it's defined that way, doesn't make it less a pain in the
ass to fix the problems.

of course it's not a big problem if you are the only consumer of your
code.
Doug Tolton
(format t "~a@~a~a.~a" "dtolton" "ya" "hoo" "com")
Jul 18 '05 #144

P: n/a
Matthias Blume <fi**@my.address.elsewhere> writes:
ra*****@mediaone.net (Raffael Cavallaro) writes:
Two words: code duplication.


Three words and a hyphen: Higher-Order Functions.


This covers most usages of macros for control structures, but not
all. (For example, goto.)

But there is an additional word to add, no hyphens: Bindings.

This can only be accomplished with functions if you're willing to
write a set of functions that defer evaluation, by, say parsing
input, massaging it appropriately, and then passing it to the
compiler. At that point, however, you've just written your own
macro system, and invoked Greenspun's 10th Law.


This is false. Writing your own macro expander is not necessary for
getting the effect. The only thing that macros give you in this
regard is the ability to hide the lambda-suspensions. To some
people this is more of a disadvantage than an advantage because,
when not done in a very carefully controlled manner, it ends up
obscuring the logic of the code. (Yes, yes, yes, now someone will
jump in an tell me that it can make code less obscure by "canning"
certain common idioms. True, but only when not overdone.)


(This hits one of the major differences between Lisp and Scheme -- in
Lisp I'm not as happy to use HOFs because of the different syntax
(which is an indication of a different mindset, which leads to
performance being optimized for a certain style). Scheme is much more
functional in this respect, for example -- using HOF versions of
with-... compared to Lisp where these are always macros.)
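The "lambda-suspensions" in question can be made concrete with a toy Python sketch (hypothetical names): without macros, delayed evaluation must be spelled out as zero-argument thunks at every call site, which is exactly the wrapping a macro would hide.

```python
def my_if(condition, then_thunk, else_thunk):
    # Only the chosen branch is ever evaluated, as with a
    # built-in conditional.
    return then_thunk() if condition else else_thunk()

# The 1 // 0 thunk is never called, so no ZeroDivisionError:
safe = my_if(True, lambda: "taken", lambda: 1 // 0)
```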

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://www.barzilay.org/ Maze is Life!
Jul 18 '05 #145

P: n/a


Eli Barzilay wrote:

Matthias Blume <fi**@my.address.elsewhere> writes:

...

This is false. Writing your own macro expander is not necessary for
getting the effect. The only thing that macros give you in this
regard is the ability to hide the lambda-suspensions. To some
people this is more of a disadvantage than an advantage because,
when not done in a very carefully controlled manner, it ends up
obscuring the logic of the code. (Yes, yes, yes, now someone will
jump in an tell me that it can make code less obscure by "canning"
certain common idioms. True, but only when not overdone.)
(This hits one of the major differences between Lisp and Scheme -- in
Lisp I'm not as happy to use HOFs because of the different syntax


which difference? different syntax?
(which is an indication of a different mindset, which leads to
performance being optimized for a certain style). Scheme is much more
functional in this respect, for example -- using HOF versions of
with-... compared to Lisp where these are always macros.)


in practice, as a rule, a with- is available at least optionally also as a
call-with-. not just for convenience, but also for maintainability.

in the standard there are but twelve of these. should they be an impediment,
in most cases there is nothing preventing one from writing call-with-...
equivalents. many expand fairly directly to special forms, so they are even
trivial to reimplement if the "double-nesting" were distasteful. i am curious,
however, about the HOF equivalents for macros which expand primarily to
changes to the lexical environment. not everything has a ready equivalence to
lambda. for instance with-slots and with-accessors. what is the HOF equivalent
for something like

? (defclass c1 () ((s1 )))

#<STANDARD-CLASS C1>
? (defmethod c1-s1 ((instance c1))
    (with-slots (s1) instance
      (if (slot-boundp instance 's1)
          s1
          (setf s1 (get-universal-time)))))

#<STANDARD-METHOD C1-S1 (C1)>
? (defparameter *c1* (make-instance 'c1))

*C1*
? (describe *c1*)
#<C1 #x69D95C6>
Class: #<STANDARD-CLASS C1>
Wrapper: #<CCL::CLASS-WRAPPER C1 #x69D95A6>
Instance slots
S1: #<Unbound>
? (c1-s1 *c1*)
3274553173
? (describe *c1*)
#<C1 #x69D95C6>
Class: #<STANDARD-CLASS C1>
Wrapper: #<CCL::CLASS-WRAPPER C1 #x69D95A6>
Instance slots
S1: 3274553173
?
Jul 18 '05 #146

P: n/a
james anderson <ja************@setf.de> writes:
Eli Barzilay wrote:

(This hits one of the major differences between Lisp and Scheme --
in Lisp I'm not as happy to use HOFs because of the different
syntax
which difference? different syntax?


Huh?

(which is an indication of a different mindset, which leads to
performance being optimized for a certain style). Scheme is much more
functional in this respect, for example -- using HOF versions of
with-... compared to Lisp where these are always macros.)


in practice, as a rule, a with- is available at least optionally also as a
call-with-. not just for convenience, but also for maintainability.
[...]


Yes, but I was talking about the difference approaches, for example:

(dolist (x foo)
  (bar x))

vs:

(mapc #'bar foo)

i am curious, however, about the HOF equivalents for macros which
expand primarily to changes to the lexical environment. [...]


That was the point I made in the beginning.

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://www.barzilay.org/ Maze is Life!
Jul 18 '05 #147

P: n/a


Eli Barzilay wrote:

james anderson <ja************@setf.de> writes:
Eli Barzilay wrote:

(This hits one of the major differences between Lisp and Scheme --
in Lisp I'm not as happy to use HOFs because of the different
syntax
which different [] syntax?


Huh?


that is, what is different about the syntax for higher-order functions in lisp?
(which is an indication of a different mindset, which leads to
performance being optimized for a certain style). Scheme is much more
functional in this respect, for example -- using HOF versions of
with-... compared to Lisp where these are always macros.)
in practice, as a rule, a with- is available at least optionally also as a
call-with-. not just for convenience, but also for maintainability.
[...]


Yes, but I was talking about the difference approaches, for example:

(dolist (x foo)
  (bar x))

vs:

(mapc #'bar foo)


are these not two examples of coding in common-lisp. how do they demonstrate
that "scheme is much more functional"?
i am curious, however, about the HOF equivalents for macros which
expand primarily to changes to the lexical environment. [...]


That was the point I made in the beginning.


sorry, i missed that.
Jul 18 '05 #148

P: n/a
co************@attbi.com (Corey Coughlin) writes:
Using parentheses and rpn everywhere makes lisp very easy to parse,
but I'd rather have something easy for me to understand and hard for
the computer to parse.


That would be a strong argument--seriously--if the only folks who
benefited from the trivial mapping between Lisp's surface syntax and
its underlying representation were the compiler writers. I certainly agree that if by
expending some extra effort once compiler writers can save their users
effort every time they write a program that is a good trade off.

If the only thing a "regular" programmer ever does with a language's
syntax is read and write it, then the only balance to be struck is
between the perhaps conflicting goals of readability and writability.
(For instance more concise code may be more "writable" but taken to an
extreme it may be "write only".) But "machine parsability" beyond,
perhaps, being amenable to normal machine parsing techniques (LL,
LALR, etc.) should not be a consideration. So I agree with you.

But Lisp's syntax is not the way it is to make the compiler writer's
job easier. In Lisp "regular" programmers also interact with the code
as data. We can easily write code generators (i.e. macros) that are
handed a data representation of some code, or parts of code, and have
only to return a new data structure representing the generated code.
This is such a useful technique that it's built into the compiler--it
will run our code generators when it compiles our code so we don't
have to screw around figuring out how to generate code at runtime and
get it loaded into our program. *That's* why we don't mind, and, in
fact, actively like, Lisp's syntax.
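[Editor's sketch for the Python side of the crosspost: Python has no compile-time macro expansion, but the "code as data" idea itself can be approximated with the standard ast module -- a program parses code into a tree, rewrites the tree as ordinary data, and hands the result back to the compiler. Far heavier than Lisp's plain lists, but it shows the shape of the technique:]

```python
import ast

# Parse a function definition into a data structure (the AST)...
tree = ast.parse("def add1(x):\n    return x + 1")

# ...then rewrite it as data: rename the function and bump the constant.
fn = tree.body[0]
fn.name = "add2"
fn.body[0].value.right = ast.Constant(2)
ast.fix_missing_locations(tree)  # new nodes need source positions

# Compile and load the generated code, as a macro expansion would be.
ns = {}
exec(compile(tree, "<generated>", "exec"), ns)
print(ns["add2"](40))  # -> 42
```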

The point is not that the syntax, taken in total isolation from the
rest of the language, is necessarily the best of all possible syntaxi.
The point is that the syntax makes other things possible that *way*
outweigh whatever negatives the syntax may have.

I'd humbly suggest that if you can't see *any* reason why someone
would prefer Lisp's syntax, then you're not missing some fact about
the syntax itself but about how other language features are supported
by the syntax.

-Peter

--
Peter Seibel pe***@javamonkey.com

Lisp is the red pill. -- John Fraser, comp.lang.lisp
Jul 18 '05 #149



Dirk Thierbach wrote:

james anderson <ja************@setf.de> wrote:
Matthias Blume wrote:
Most of the things that macros can do can be done with HOFs with just
as little source code duplication as with macros. (And with macros
only the source code does not get duplicated, the same not being true
for compiled code. With HOFs even executable code duplication is
often avoided -- depending on compiler technology.)
is the no advantage to being able to do either - or both - as the
occasion dictates?


I can't parse this sentence,


sorry: is the[re] no advantage to being able to do either - or both - as the
occasion dictates?


but of course you can also use HOFs in Lisp
(all flavours). The interesting part is that most Lisp'ers don't seem
to use them, or even to know that you can use them, and use macros instead.

while the first assertion might well be borne out by a statistical analysis of,
for example, open-source code, i'm curious how one reaches the second conclusion.
The only real advantage of macros over HOFs is that macros are guaranteed
to be executed at compile time. A good optimizing compiler (like GHC
for Haskell) might actually also evaluate some expressions including
HOFs at compile time, but you have no control over that.
i'd be interested to read examples of things which are better done
with HOF features which are not available in CL.
HOFs can of course be used directly in CL, and you can use macros to
do everything one could use HOFs for (if you really want).


that's a disappointment.

The advantage of HOFs over macros is simplicity: You don't need additional
language constructs
when did common-lisp macros become an "additional language construct"?
(which may be different even for different Lisp
dialects, say), and other tools (like type checking) are available for
free; and the programmer doesn't need to learn an additional concept.


doesn't that last phrase contradict the previous one?

i do admit to infrequent direct use of higher-order functions. one reason is
that there is little advantage to articulating the creation of functions which
have dynamic extent only, so in my use, most hof's are manifest through a
macro interface. it's the same distaste i have about inner and anonymous java
classes. the other reason is that when i moved from scheme to lisp, in the
process of porting the code which i carried over, it occurred to me that much
of what i was using higher-order functions for could be expressed more clearly
with abstract classes and appropriately defined generic function method combinations.
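[Python happens to illustrate the "macro interface over a HOF" preference described above: the with statement plays the role of the macro-style interface, while a call-with- style HOF is what one writes without it. A sketch, with call_with_resource as a made-up name for illustration:]

```python
from contextlib import contextmanager

log = []

# HOF style: the "call-with-" convention -- the caller passes a function.
def call_with_resource(fn):
    log.append("open")
    try:
        return fn("resource")
    finally:
        log.append("close")

print(call_with_resource(lambda r: r.upper()))  # -> RESOURCE

# "Macro interface" style: with-statement syntax over the same pattern.
@contextmanager
def resource():
    log.append("open")
    try:
        yield "resource"
    finally:
        log.append("close")

with resource() as r:
    print(r.upper())  # -> RESOURCE
```

Both spellings run the same open/close bracketing; the with form just hides the lambda.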

....
Jul 18 '05 #150
