
Python syntax in Lisp and Scheme

I think everyone who used Python will agree that its syntax is
the best thing going for it. It is very readable and easy
for everyone to learn. But, Python does not have very good
macro capabilities, unfortunately. I'd like to know if it may
be possible to add a powerful macro system to Python, while
keeping its amazing syntax, and if it could be possible to
add Pythonistic syntax to Lisp or Scheme, while keeping all
of the functionality and convenience. If the answer is yes,
would many Python programmers switch to Lisp or Scheme if
they were offered indentation-based syntax?
Jul 18 '05
699 Replies


Peter Seibel:
So, to write a new test function, here's what I
write:

(deftest foo-tests ()
  (check
    (= (foo 1 2 3) 42)
    (= (foo 4 5 6) 99)))
Python bases its unit tests on introspection. Including the
full scaffolding, the equivalent for Python would be

import unittest
import foo_module  # I'm assuming 'foo' is in some other module

class FooTestCase(unittest.TestCase):
    def testFoo(self):
        self.assertEquals(foo_module.foo(1, 2, 3), 42)
        self.assertEquals(foo_module.foo(4, 5, 6), 99)

if __name__ == '__main__':
    unittest.main()

Here's what it looks like:

>>> class FooTestCase(unittest.TestCase):
...     def testFoo(self):
...         self.assertEquals(foo(1,2,3), 42)
...         self.assertEquals(foo(4,5,6), 99)
...
>>> unittest.main()
F
======================================================================
FAIL: testFoo (__main__.FooTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<interactive input>", line 4, in testFoo
  File "E:\Python23\Lib\unittest.py", line 302, in failUnlessEqual
    raise self.failureException, \
AssertionError: 42 != 99

A different style of test can be done with doctest, which uses Python's
docstrings. I'll define the function and include an invalid example
in the documentation.

def foo(x, y, z):
    """Returns 42

    >>> foo(1,2,3)
    42
    >>> foo(5,6,7)
    99
    """
    return 42

Here's what I see when I run it:

>>> import doctest
>>> doctest.testmod()

*****************************************************************
Failure in example: foo(5,6,7)
from line #3 of __main__.foo
Expected: 99
Got: 42
*****************************************************************

Doctests are fun. ;)
Note that this is all about the problem domain, namely testing. Each
form within the body of the CHECK is evaluated as a separate test
case.
The unittest example I have makes them all part of the same test case.
To be a different test case it needs a name. If it has a name, it can
be tested independently of the other tests, e.g., if you want to tell the
regression framework to run only one of the tests, as when debugging.
Without that functionality you'd have to specify the test by number.
If a given form doesn't evaluate to true then a failure is
reported like this which tells me which test function the failure
was in, the literal form of the test case and then the values of any
non-literal values in the function call (i.e. the arguments to = in
this case.)
The Python code is more verbose in that regard because ==
is an operator rather than a function. I assume you also have tests for
things like "should throw exception of type X" and "should not
throw an exception" and "floating point within epsilon of expected value"?
Test Failure:

Test Name: (FOO-TESTS)
Test Case: (= (FOO 1 2 3) 42)
Values: (FOO 1 2 3): 6
Feel free to compare with the above. The main difference, as you
point out below, is that you get to see the full expression. Python
keeps track of the source line number, which you can see in the
traceback. If the text was in a file it would also show the contents
of that line in the traceback, which would provide equivalent output
to what you have. In this case the input was from a string and it
doesn't keep strings around for use in tracebacks.

(And the 'doctest' output includes the part of the text used to
generate the test; the previous paragraph only applies to unittest.)

I expect a decent IDE would make it easy to get to an
error line given the unittest output. I really should try one of
the Python IDEs, or even just experiment with python-mode.

I expect the usefulness of showing the full expression to be
smaller when the expression is large, because it could be
an intermediate in the expression which has the problem, and
you don't display those intermediates.
So what is the equivalent non-macro code? Well the equivalent code
to the DEFTEST form (i.e. the macro expansion) is not *that* much
more complex--it just has to do the stuff I mentioned; binding the
test name variable and registering the test function. But it's
complex enough that I sure wouldn't want to have to type it over and
over again each time I write a test:
Python's introspection approach works by looking for classes of a
given type (yes, classes, not instances), then looking for methods
in that class which have a given prefix. These methods become the
test cases. I imagine Lisp could work the same way, except that
because other solutions exist (like macros), there's a preference
for another style.
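Roughly, that discovery mechanism could be sketched as follows (a toy version in modern Python; the `discover` helper is hypothetical, and the real work is done by `unittest.TestLoader`):

```python
import unittest

class FooTestCase(unittest.TestCase):
    def testFoo(self):
        self.assertEqual(1 + 1, 2)

def discover(namespace, prefix="test"):
    """Introspection-based discovery: find TestCase subclasses in a
    namespace, then collect their methods named with `prefix`."""
    found = []
    for obj in namespace.values():
        if (isinstance(obj, type)
                and issubclass(obj, unittest.TestCase)
                and obj is not unittest.TestCase):
            for name in dir(obj):
                if name.startswith(prefix) and callable(getattr(obj, name)):
                    found.append((obj.__name__, name))
    return sorted(found)

print(discover(globals()))  # [('FooTestCase', 'testFoo')]
```

The class-plus-prefix convention is the only "registration" a test author has to do; the framework finds everything else by inspection.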

The Python code is the same number of lines as your code, except
that it is more verbose. It does include the ability for tests to have
a setup and teardown stage, which appears to be harder for your
code to handle.
Note that it's the ability, at macro expansion time, to treat the code
as data that allows me to generate test failure messages that contain
the literal code of the test case *and* the value that it evaluated
to. I could certainly write a HOF version of CHECK that accepts a list
of test-case-functions. But since each test case would be an opaque
function object by the time CHECK sees it, there'd be no good option
for nice reporting from the test framework.
You are correct in that Python's way of handling the output doesn't
include the expression which failed. Instead, it includes the location
(source + line number) in the stack trace and if that source is a file
which still exists it shows that line which failed.

A solution which would get what you want without macros is
the addition of more parse tree information, like the start/end positions
of each expression. In that way the function could look up the
stack, find the context from which it was called, then get the full
text of the call. This gets at the code "from the other direction",
that is, from looking at the code after it was parsed rather than
before.

Or as I said, let the IDE help you find the error location and
full context.
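Python's standard inspect module already allows a rough prototype of that "other direction" (a sketch, not what unittest actually does; `check` here is a made-up helper, and the call's source text is only recoverable when the code came from a file that still exists):

```python
import inspect

def check(value):
    """Return (ok, call_site): look one frame up the stack and recover
    the literal source text of the calling line, when available."""
    frame = inspect.currentframe().f_back
    info = inspect.getframeinfo(frame)
    if info.code_context:                 # None when source isn't on disk
        call_site = info.code_context[0].strip()
    else:
        call_site = "<source unavailable>"
    return bool(value), call_site

ok, site = check(2 + 2 == 5)
print(ok, site)  # False, plus the text of the line above if available
```

So the failing expression can be reported after the fact by reading the parsed-and-recorded source, rather than by capturing the code before evaluation as a macro does.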
but for me, the test, no pun intended, is: is the thing I have to
write to define a new test function much more complex than my original
DEFTEST form?


I'll let you decide if Lisp's introspection abilities provide an alternate
non-macro way to handle building test cases which is just as short.

Knowing roughly no Lisp and doing just pattern matching, here's
a related solution, which doesn't use classes.

(defun utest-foo ()
  (= (foo 1 2 3) 42)
  (= (foo 4 5 6) 99))

...
(run-unit-tests)

where run-unit-tests looks at all the defined symbols, finds
those which start with 'utest-', wraps the body of each one
inside a 'check', then runs

(eval-when (:compile-toplevel :load-toplevel :execute)
  ..)

on the body.

If that works, it appears to make the unit test code slightly
easier because the 'check' macro is no longer needed in each
of the test cases; it's been moved to 'run-unit-tests' and can
therefore work as a standard function.

Andrew
da***@dalkescientific.com
Jul 18 '05 #501

Andrew Dalke wrote:
Me:
My continued response is that [Lisp is] not optimal for all
domains.

Pascal Costanza:
Yes, we disagree in this regard.

*Shrug* The existence of awk/perl (great for 1-liners on the
unix command-line) or PHP (for simple web programming) or
Mathematica (for symbolic math) is strong enough evidence for
me to continue to disagree.


Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."
Pascal Costanza:
However, you have mentioned that someone has implemented a quantum
extension for Perl - and if that's possible then you can safely bet that
it's also possible in pure Lisp.

The fact that all computing can be programmed in Turing Machine
Language doesn't mean TML is the optimal programming language.


Right.
The fact that there is perl code for emulating *some* quantum
programming means that Lisp can handle that subset. It doesn't mean
that people have fully explored even in Lisp what it means to do all
of quantum computing.
....and you expect me to have fully explored it? If this topic is so
interesting to you, why don't you just grab a Common Lisp environment
and start working on it? ;)

I am pretty sure you can get very far.
Furthermore, it has been suggested more than once that a valid
working model is that a good Lisp programmer can provide a
domain-specific language for the non-professional programmer. It's very
likely that a DSL matches the needs of the user better than some
restricted general-purpose language.


Another *shrug* And a good C programmer can provide a
domain-specific language for the non-professional programmer.


Sure, but it's much more work.
Any good Python programmer could make an implementation
of a Lisp (slow, and not all of GC Lisp, but a Lisp) in Python, like

import lisp

def spam(distance):
    """(time_to_fall 9.8 distance)"""
spam = lisp.convert(spam)

def time_to_fall(g, distance):
    print "The spam takes", (2.0*distance/g)**(0.5), "seconds to fall"

print spam(10)
Sure, but inconvenient.
Ah, but then you need to constantly change the syntax and need to
remember the idiosyncrasies of several languages.

Yup. Just like remembering what macros do for different domains.


Sure, there is no way around that. But you can reduce the tediousness in
the long run.

I believe it is an accepted fact that uniformity in GUI design is a good
thing because users don't need to learn arbitrarily different ways of
using different programs. You only need different ways of interaction
when a program actually requires it for its specific domain.

That's pretty much the same when you program in Lisp. It takes some time
to get used to s-expressions, but afterwards you forget about syntax and
focus on the real problems.
I firmly believe people can in general easily handle much more
complicated syntax than Lisp has. There's plenty of room to
spare in people's heads for this subject.
Sure, but is it worth it?
I am not so sure whether this is a good idea. Personally, I prefer not
to think about syntax anymore. It's boring. But that's maybe just me.

wave equation vs. matrix approach
Newtonian mechanics or Lagrangian
measure theory or non-standard analysis
recursive algorithms vs. iterative ones
travelling salesman vs. maximum clique detection

Each is a pair of different but equivalent ways of viewing the same
problem. Is the difference just syntax?


Probably not. This question is too general though for my taste.
Ahh, but that assumes that behaviour is the only important thing
in a language.


No.

Thank you for your elaboration. You say the driving force is the
ability to handle unexpected events. I assumed that means you need
new styles of behaviour.


Convenience is what matters. If you are able to conveniently express
solutions for hard problems, then you win. In the long run, it doesn't
matter much how things behave in the background, only at first.

Or do you really still care about how sorting algorithms work? No, you
look for an API that has some useful sorting functions, and then you
just use them.

Macros are just another tool to create new abstractions that allow you
to conveniently express solutions for hard problems.

It seems to me that in Python, just as in most other languages, you
always have to be aware that you are dealing with classes and objects.
Why should one care? Why does the language force me to see that when it
really doesn't contribute to the solution?

That's why lambda expressions are sometimes also not quite right. When I
want to execute some code in some context, why should I care about it
being wrapped in a lambda expression to make it work? How does that
contribute to the problem I am trying to tackle?

I want to think in terms of the problem I need to solve. Lisp is one of
the very rare languages that doesn't force me to think in terms of its
native language constructs.
If it's only a syntactical issue, then it's a safe bet that you can add
that to the language. Syntax is boring.


Umm... Sure. C++ can be expressed as a parse tree, and that
parse tree converted to an s-exp, which can be claimed to be
a Lisp; perhaps with the right set of macros.


That's computational equivalence, and that's not interesting.
Still doesn't answer my question on how nicely Lisp handles
the 'unexpected' need of allocating objects from different
memory arenas.


If it's a good Lisp library I would expect it to work like this:

(with-allocation-from :shared-memory
...)

;)

Any more questions?
Sounds like a possible application for the CLOS MOP.

Or any other language's MOP.


Sure.
Seriously, I haven't invented this analogy, I have just tried to answer
a rhetorical question by Alex in a non-obvious way. Basically, I think
the analogy is flawed.

Then the right solution is to claim the analogy is wrong, not
go along with it as you did. ;)


Thanks for your kind words. ;)
Pascal

Jul 18 '05 #502

Kenny Tilton:
Aren't you the scientist who praised a study because statistics showed
the study's statistics had a better than even chance of not being
completely random? Credibility zero, dude. (if you now complain that it
was fully a 75% chance of not being completely random, you lose.)


Wow! You continue to be wrong in your summaries:

- I am not a scientist and haven't claimed to be one in about 8 years

- I didn't 'praise' the study, I pointed out that it exists and that it
offers
some interesting points to consider. At the very least it makes a
testable prediction.

- After someone asserted that that study had been "debunked" I asked for
more information on the debunking, and pointed out the results of
one experiment suggest that the language mapping was not complete
bunkum. (Note that since 100% correlation is also a 'better than even
chance' your statement above is meaningless. What is your
threshold?)

I would be *delighted* to see more studies on this topic,
even ones which state that COBOL is easier to use than Python.

- When I make statements of belief, I present where possible the
sources and the analyses used to justify the belief and, in an
attempt at rigour, the weaknesses of those arguments. As such,
I find it dubious that my credibility can be lower than someone
making claims based solely on gut feelings and illogical thought.
I take that back; a -1.0 credibility makes a wonderful oracle.

Given how imprecise you are in your use of language (where your
thoughtless turns of phrase gracelessly demean those who don't believe
that programming is the be-all and end-all of ambitions), your inability
to summarize matters correctly, and your insistence on ad hominem attacks
(dude!) over logical counter-argument and rational discourse, I'm surprised
you can make a living as a programmer or in any other field which
requires mental aptitude and the ability to communicate.

Andrew
da***@dalkescientific.com
Jul 18 '05 #503

Alex Martelli wrote:
[snip]
but we DO want to provide very clear and precise error diagnostics, of course,
and the language/metalanguage issue is currently open). You will note
that this use of macros involves none of the issues I have expressed about
them (except for the difficulty of providing good error-diagnostics, but
that's of course solvable).


finally, a breath of fresh air.

Jul 18 '05 #504


"Andrew Dalke" <ad****@mindspring.com> wrote in message
news:rr*****************@newsread4.news.pas.earthlink.net...
Furthermore, it has been suggested more than once that a valid
working model is that a good Lisp programmer can provide a
domain-specific language for the non-professional programmer. It's very
likely that a DSL matches better the needs of the user than some
restricted general-purpose language.
Another *shrug* And a good C programmer can provide a
domain-specific language for the non-professional programmer.


Now you're obviously just trying to be difficult! Reminds me of a thread a
while back where someone argued that C could so dynamically compile code and
posted an example that wrote a hello-world in a string, wrote it to a file,
called gcc on it and then system-called the .so file!

Or is there something else you mean besides design the language, implement a
parser, write a compiler? Yes you can do that in C.
Any good Python programmer could make an implementation
of a Lisp (slow, and not all of GC Lisp, but a Lisp) in Python, like

import lisp

def spam(distance):
    """(time_to_fall 9.8 distance)"""
spam = lisp.convert(spam)

def time_to_fall(g, distance):
    print "The spam takes", (2.0*distance/g)**(0.5), "seconds to fall"

print spam(10)
I don't get your point at all. At least not how it possibly applies to a
discussion of using macros to create domain specific languages.
Ah, but then you need to constantly change the syntax and need to
remember the idiosyncrasies of several languages.


Yup. Just like remembering what macros do for different domains.


This is just the same memorizing that applies to application specific
functions, classes etc. Not at all like memorizing @ % $ ; { } & etc.
Really, this argument is based on pure FUD. It is not the norm to use
macros to do anything at all obfuscating. One of the most controversial
macros has got to be loop, and it is ridiculed for something close to what
you seem to be arguing: it creates a very different and unlisp-like
sublanguage.

But even that, it is not:
(loop ^x %% foo==bar -> baz)

it is:
(loop for x from foo upto bar finally return baz)

Not hard to memorize what FROM and UPTO mean.
I firmly believe people can in general easily handle much more
complicated syntax than Lisp has. There's plenty of room to
spare in people's heads for this subject.
My head prefers to be full of more important things, that's all.
Lispniks are driven by the assumption that there is always the
unexpected. No matter what happens, it's a safe bet that you can make
Lisp behave the way you want it to behave, even in the unlikely event
that something happens that no language designer has ever thought of
before.

Ahh, but that assumes that behaviour is the only important thing
in a language.

No.


Thank you for your elaboration. You say the driving force is the
ability to handle unexpected events. I assumed that means you need
new styles of behaviour.


Just have another look. He did not even say behaviour is the most
important, let alone the only important thing.

But in the global trade of language design (because remember, everything is
a trade-off) behaviour *is* more important than syntax *if* your goals are
primarily technical rather than social.

Nothing wrong with social goals, but you should not be naive when it comes
to considering what you have traded in for personal notions of ease of
learning.
Still doesn't answer my question on how nicely Lisp handles
the 'unexpected' need of allocating objects from different
memory arenas.


I don't think this is a reasonable discussion point. You are presumably
trying to show us an example of a problem lisp is not flexible enough to
handle, but you have not presented a problem to solve, you have presented a
solution to implement. These are entirely different things.

--
Coby Beck
(remove #\Space "coby 101 @ big pond . com")

Jul 18 '05 #505


"Andrew Dalke" <ad****@mindspring.com> wrote in message
news:we*****************@newsread4.news.pas.earthlink.net...
The smartest people I know aren't programmers. What does
that say?


Nothing surprising! ;)

--
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
Jul 18 '05 #506


"Pascal Costanza" <co******@web.de> wrote in message
news:bm**********@newsreader2.netcologne.de...
Does Python allow local function definitions?
Module level functions are local to the module unless imported by
another module. Nested functions are local to the function they are
nested within unless explicitly returned. Methods are local to
classes and subclasses. Lambda expressions are very local unless
somehow passed around.

I am not sure which best meets your intention.
Can they shadow predefined functions?


Yes, named objects, including functions, can (locally) shadow
(override) builtins. It is considered a bad habit/practice unless
done intentionally with a functional reason.
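A trivial sketch of that shadowing (modern Python; `total` is just an illustrative name):

```python
def total(xs):
    sum = 0            # locally shadows the builtin sum()
    for x in xs:
        sum += x
    return sum

print(total([1, 2, 3]))  # 6 -- uses the local name
print(sum([1, 2, 3]))    # 6 -- the builtin is untouched outside
```

The shadow only lives in that scope, which is why it's legal but easy to do by accident.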

Terry J. Reedy
Jul 18 '05 #507

Pascal Costanza:
[quantum programming]
While an interesting topic, it's something I'm not going to worry about.
And if I did, it would be in Python ;)

I bring it up as a counter-example to the idea that all modes of
programming have been and can be explored in a current Lisp.
I conjectured one interesting possibility -- that of handling ensembles
of possible solutions to a given problem.

In retrospect I should have given a more obvious possibility.
As some point I hope to have computer systems I can program
by voice in English, as in "House? Could you wake me up
at 7?" That is definitely a type of programming, but Lisp is
a language designed for text, not speed.

Pascal Costanza:
I believe it is an accepted fact that uniformity in GUI design is a good
thing because users don't need to learn arbitrarily different ways of
using different programs. You only need different ways of interaction
when a program actually requires it for its specific domain.
My spreadsheet program looks different from my word processor
looks different from my chemical structure editor looks different from
my biosequence display program looks different from my image
editor looks different from my MP3 player looks different from my
email reader looks different from Return to Castle Wolfenstein ....

There are a few bits of commonality; they can all open files. But
not much more. Toss out the MP3 player and RtCW and there
is more in common. Still, the phrase "practicality beats purity"
seems appropriate here.
I firmly believe people can in general easily handle much more
complicated syntax than Lisp has. There's plenty of room to
spare in people's heads for this subject.


Sure, but is it worth it?


Do you have any doubt to my answer? :)
Convenience is what matters. If you are able to conveniently express
solutions for hard problems, then you win. In the long run, it doesn't
matter much how things behave in the background, only at first.
Personally, I would love to write equations on a screen like I
would on paper, with integral signs, radicals, powers, etc. and
not have to change my notation to meet the limitations of computer
input systems.

For Lisp is a language tuned to keyboard input and not the full
range of human expression. (As with speech.)

(I know, there are people who can write equations in TeX as
fast as they can on paper. But I'm talking about lazy ol' me
who wants the convenience.)

Or, will there ever be a computer/robot combination I can
teach to dance? Will I do so in Lisp?
It seems to me that in Python, just as in most other languages, you
always have to be aware that you are dealing with classes and objects.
Why should one care? Why does the language force me to see that when it
really doesn't contribute to the solution?
Hmmm.. Is the number '1' an object? Is a function an object?
What about a module? A list? A class?
>>> print sum(range(100))
4950
Where in that example are you aware that you are dealing with classes
and objects?
If it's only a syntactical issue, then it's a safe bet that you can add
that to the language. Syntax is boring.


Umm... Sure. C++ can be expressed as a parse tree, and that
parse tree converted to an s-exp, which can be claimed to be
a Lisp; perhaps with the right set of macros.


That's computational equivalence, and that's not interesting.


Which is why I didn't see the point of original statement. My
conjecture is that additional syntax can make some things easier.
That a problem can be solved without new syntax does not
contradict my conjecture.
If it's a good Lisp library I would expect it to work like this:

(with-allocation-from :shared-memory
...)

;)

Any more questions?


Yes. Got a URL for documentation on a Lisp providing access
to shared memory? My guess is that the Lisp runtime needs
to be told about the arenas and that the multiple instances of
Lisp sharing the arena must use some extra IPC to handle
the distributed gc.

It gets worse if program X forks copies Y and Z, with shared
memory XY between X and Y (but not Z) and XZ between
X and Z (but not Y). X needs to be very careful on which
data is copied, and it isn't immediately obvious what happens
when some object from XZ is inserted into a list accessible
to Y via XY.

Consider also a "persistent memory" server running in C
(or hardware access to some sort of non-volatile memory.)
You can use standard IPC to get an initially zeroed memory
block and are free to use that memory without restrictions.
It's persistent after program exit so when the program restarts
it can reconnect to shared memory and get the data as it
was at exit.

This service is straight-forward to support in C/C++. It
sounds like for Lisp you are dependent on the implementation,
in that if the implementation doesn't support access to its
memory allocator/gc subsystem then it's very hard to
write code for this hardware on your own. It may be
possible to use an extension (written in C? ;) to read/write
to that persistent memory using some sort of serialization,
but that's the best you can do -- you don't have live objects
running from nonvolatile store -- which is worse than C++.
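(As an aside, the file-backed flavour of this is easy to sketch in modern Python with the standard mmap module -- a toy stand-in for a real persistent-memory device, not the hardware case described above:)

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "toy_arena.bin")

# "Allocate" an initially zeroed block, as the persistent server would.
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Write through a memory mapping.
with open(path, "r+b") as f:
    arena = mmap.mmap(f.fileno(), 4096)
    arena[0:5] = b"hello"
    arena.flush()
    arena.close()

# "Reconnect" after the mapping is gone: the data survived.
with open(path, "rb") as f:
    data = f.read(5)
os.remove(path)
print(data)  # b'hello'
```

Of course, this only serializes bytes; it doesn't give you live objects backed by the store, which is exactly the gap being discussed.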
Andrew
da***@dalkescientific.com
Jul 18 '05 #508

Coby Beck:
Now you're obviously just trying to be difficult!


Hmm, I think you are correct. This discussion has worn me out
and I'm reacting now more out of crabbishness than thoughtfulness.

I hereby withdraw from this thread. Or at least from cross-posting
outside of c.l.py ;)

Andrew
da***@dalkescientific.com
Jul 18 '05 #509



Andrew Dalke wrote:
Given how imprecise you are in your use of language


Ah, but there was a 75% chance that my remarks were not /completely/
random, so my unfairness towards you can't be complete bunkum.

:)
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Jul 18 '05 #510


"Peter Seibel" <pe***@javamonkey.com> wrote in message
news:m3************@javamonkey.com...
Note that it's the ability, at macro expansion time, to treat the code
as data that allows me to generate test failure messages that contain
the literal code of the test case *and* the value that it evaluated
to. I could certainly write a HOF version of CHECK that accepts a list
of test-case-functions: .... But since each test case would be an opaque
function object by the time CHECK sees it, there'd be no good option
for nice reporting from the test framework.


But can't you explicitly quote the test cases for input to the HOF and
eval them within the HOF, so you again have both the literal code and
value generated? Not as pretty, admittedly, and perhaps less
efficient, but workable?
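A rough Python rendering of Terry's suggestion (illustrative names; eval-based checks have obvious caveats, but they do keep both the literal text and the value):

```python
def check(namespace, *cases):
    """Each test case arrives as a source string, so the framework keeps
    the literal code and can evaluate it for the result."""
    results = []
    for src in cases:
        ok = bool(eval(src, namespace))   # evaluate in the caller's namespace
        results.append((src, ok))
    return results

def foo(x, y, z):
    return 42

report = check(globals(), "foo(1, 2, 3) == 42", "foo(4, 5, 6) == 99")
for src, ok in report:
    print("%s: %s" % ("pass" if ok else "FAIL", src))
```

The quoting is manual where a macro would do it implicitly, which is Terry's "not as pretty" caveat.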

Terry J. Reedy
Jul 18 '05 #511

Andrew Dalke wrote:
In retrospect I should have given a more obvious possibility.
As some point I hope to have computer systems I can program
by voice in English, as in "House? Could you wake me up
at 7?" That is definitely a type of programming, but Lisp is
a language designed for text, not speed.
I don't understand that last sentence. Could you clarify this a bit? You
don't want to say that there is an inherent dichotomy between text and
speed, do you?!?
Pascal Costanza:
I believe it is an accepted fact that uniformity in GUI design is a good
thing because users don't need to learn arbitrarily different ways of
using different programs. You only need different ways of interaction
when a program actually requires it for its specific domain.

My spreadsheet program looks different from my word processor
looks different from my chemical structure editor looks different from
my biosequence display program looks different from my image
editor looks different from my MP3 player looks different from my
email reader looks different from Return to Castle Wolfenstein ....

There are a few bits of commonality; they can all open files. But
not much more.


....but you probably know from the start where to find the menus, what
the shortcuts are for opening and saving files, how to find the online
help, and so forth.

Lisp also has this to a certain degree: It's always clear what
constitutes the meaning of an s-expression, namely its car, no matter
what language "paradigm" you are currently using.
Toss out the MP3 player and RtCW and there
is more in common. Still, the phrase "practicality beats purity"
seems appropriate here.

I firmly believe people can in general easily handle much more
complicated syntax than Lisp has. There's plenty of room to
spare in people's heads for this subject.
Sure, but is it worth it?

Do you have any doubt to my answer? :)


No, not really. :)
Convenience is what matters. If you are able to conveniently express
solutions for hard problems, then you win. In the long run, it doesn't
matter much how things behave in the background, only at first.

Personally, I would love to write equations on a screen like I
would on paper, with integral signs, radicals, powers, etc. and
not have to change my notation to meet the limitations of computer
input systems.


I know people who have even started to use s-expressions for mathematical
notation (on paper), because they find it more convenient.
For Lisp is a language tuned to keyboard input and not the full
range of human expression. (As with speech.)
There is some research going on to extend Lisp even in this regard
(incorporating more ways of expression).
(I know, there are people who can write equations in TeX as
fast as they can on paper. But I'm talking about lazy ol' me
who wants the convenience.)

Or, will there ever be a computer/robot combination I can
teach to dance? Will I do so in Lisp?
?!?
It seems to me that in Python, just as in most other languages, you
always have to be aware that you are dealing with classes and objects.
Why should one care? Why does the language force me to see that when it
really doesn't contribute to the solution?

Hmmm.. Is the number '1' an object? Is a function an object?
What about a module? A list? A class?

>>> print sum(range(100))
4950
Where in that example are you aware that you are dealing with classes
and objects?


Well, maybe I am wrong. However, in a recent example, a unit test
expressed in Python apparently needed to say something like
"self.assertEqual ...". Who is this "self", and what does it have to do
with testing? ;)
If it's only a syntactical issue, then it's a safe bet that you can add
that to the language. Syntax is boring.

Umm... Sure. C++ can be expressed as a parse tree, and that
parse tree converted to an s-exp, which can be claimed to be
a Lisp; perhaps with the right set of macros.


That's computational equivalence, and that's not interesting.

Which is why I didn't see the point of original statement. My
conjecture is that additional syntax can make some things easier.
That a problem can be solved without new syntax does not
contradict my conjecture.


If additional syntax makes specific things easier, then in god's name
just add it! The loop macro in Common Lisp is an example of how you can
add syntax to make certain things easier. This is not rocket science.

The point here is that for most languages, if you want to add some
syntax, you have to change the definition of the language, extend the
grammar, write a parser, extended a compiler and/or interpreter, maybe
even the internal bytecode representation, have wars with other users of
the language whether it's a good idea to change the language that way,
and so forth. In Lisp, you just write a bunch of macros and you're done.
No problems with syntax except if you want them, most of the time no
problems with changes to the language (far less than in other
languages), no messing around with grammars and related tools, no need
to know about compiler/interpreter internals and internal
representation, no wars with other language users, and so forth.

Syntax is boring. ;)
If it's a good Lisp library I would expect it to work like this:

(with-allocation-from :shared-memory
...)

;)

Any more questions?

Yes. Got a URL for documentation on a Lisp providing access
to shared memory?


OK, I am sorry that I have lost focus here. You have given this example
as one that shows what probably cannot be done in Lisp out of the
box. However, most Lisp implementations provide a way to access native
code and in that way deal with specific features of the operating
system. And there is a de-facto standard for so-called foreign function
calls called UFFI that you can use if you are interested in a
considerable degree of portability.

I don't know a lot about the specifics of shared memory, so I can't
comment on your specific questions.
This service is straight-forward to support in C/C++. It
sounds like for Lisp you are dependent on the implementation,
in that if the implementation doesn't support access to its
memory allocator/gc subsystem then it's very hard to
write code for this hardware on your own. It may be
possible to use an extension (written in C? ;) to read/write
to that persistent memory using some sort of serialization,
but that's the best you can do -- you don't have live objects
running from nonvolatile store -- which is worse than C++.


This should be possible as a combination of a FFI/UFFI and the CLOS MOP.
AFAIK, you can define the memory layout and the allocation of memory for
specific metaclasses. However, I really don't know the details.

The paper at
http://www-db.stanford.edu/~paepcke/...ts/mopintro.ps might
be interesting. For UFFI, see http://uffi.b9.com/

As Paul Graham put it, yes, there is some advantage when you use the
language the operating system is developed in, or it++.

Pascal

Jul 18 '05 #512

Terry Reedy wrote:
"Pascal Costanza" <co******@web.de> wrote in message
news:bm**********@newsreader2.netcologne.de...
Does Python allow local function definitions?

Module level functions are local to the module unless imported by
another module. Nested functions are local to the function they are
nested within unless explicitly returned. Methods are local to
classes and subclasses. Lambda expressions are very local unless
somehow passed around.

I am not sure which best meets your intention.

Can they shadow predefined functions?

Yes, named objects, including functions can (locally) shadow
(override) builtins. It is considered a bad habit/practice unless
done intentionally with a functional reason.


Well, this proves that Python has a language feature that is as
dangerous as many people seem to think macros are.

<irony>
What you say is that local function definitions can obscure the meaning
of the Python language and/or its standard library, and this has the
potential to split the language community and make it impossible to read
each other's code. Heck, you really can't rely on the fact that sum(...)
sums its arguments? This means that a local function definition is
really a dangerous language feature, isn't it? Shouldn't it better be
abandoned?
</irony>

That last paragraph sounds as non-sensical to your ears as the arguments
against the inclusion of macros into a language because of their
expressive power sound to our ears.
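To make the point concrete, here is a minimal Python sketch of the kind of shadowing being discussed (names are made up for illustration):

```python
def total(xs):
    # Locally shadow the builtin sum -- perfectly legal Python.
    def sum(seq):
        return 42          # deliberately wrong, to illustrate the danger
    return sum(xs)

print(total([1, 2, 3]))    # prints 42, not 6: the local definition wins
print(sum([1, 2, 3]))      # prints 6: the builtin is untouched elsewhere
```

The shadowing is lexically scoped, just as a well-behaved macro would be: the damage is confined to the function that does it.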

BTW, do you know the talk "Growing a Language" by Guy Steele? See
http://www.research.avayalabs.com/us...e-oopsla98.pdf - read
it, it's very insightful (even though it talks about Java. ;)

Pascal

Jul 18 '05 #513

"Andrew Dalke" <ad****@mindspring.com> writes:
pr***********@comcast.net:
The smartest programmers I know all prefer Lisp (in some form or
another). Given that they agree on very little else, that's saying
a lot.
Guess you don't know Knuth.


Never met him.
The smartest people I know aren't programmers. What does
that say?


You hang out with dumb programmers?
Jul 18 '05 #514

Andrew Dalke wrote:
Pascal Costanza:
[quantum programming]
While an interesting topic, it's something I'm not going to worry about.


Me neither, for now.
And if I did, it would be in Python ;)
I suspect no existing language would be anywhere near adequate.
But if any current programming concept could stretch there, it might
be that of "unrealized until looked-into set of things", as in, Haskell's
"lazy" (nonstrict) lists. Now lists are sequential and thus quantumly
inappropriate, but perhaps it's a start.
I bring it up as a counter-example to the idea that all modes of
programming have been and can be explored in a current Lisp.
I conjectured one interesting possibility -- that of handling ensembles
of possible solutions to a given problem.
I suspect we may have to map the 'ensembles' down to sets of
items, just as we generally map concurrency down to sets of
sequential actions, in order to be able to reason about them (though
I have no proof of that conjecture). IF you have to map more
complicated intrinsics down to sequential, deterministic, univocal
"things", I'm sure you could do worse than Lisp. As to whether
that makes more sense than dreaming up completely different
languages having (e.g.) nondeterminism or multivocity as more
intrinsic concepts, I pass: it depends mostly on what human beings
will find they need to use in order to reason most effectively in
this new realm -- and quite likely different humans will find they
have different needs in this matter.

In retrospect I should have given a more obvious possibility.
As some point I hope to have computer systems I can program
by voice in English, as in "House? Could you wake me up
at 7?" That is definitely a type of programming, but Lisp is
Yeah, well, I fear the answer will be yes (it could), but it won't
do so since you haven't _asked_ it to wake you up, only if it
could. ME, I definitely don't want to use natural language with
all of its ambiguity for anything except communicating with
other human beings, thankyouverymuch.
a language designed for text, not speed.
*blink* what does THAT doubtful assertion have to do with anything
else we were discussing just now...? I think lisp was designed for
lists (as opposed to, say, snobol, which WAS "designed for text") and
that they're a general enough data structure (and supplemented in
today's lisps with other good data structures) that they'll be quite good
for all kinds of 'normal' (deterministic &c) programming. As for speed,
I'm sure it's easier to get it out of lisp than out of python right now.
So what's your point, and its relation to the above...?

Pascal Costanza:
I believe it is an accepted fact that uniformity in GUI design is a good
thing because users don't need to learn arbitrarily different ways of
using different programs. You only need different ways of interaction
when a program actually requires it for its specific domain.
Yes, I agree this IS generally accepted (with, of course, some dissenters,
but in a minority).
My spreadsheet program looks different from my word processor
Sure, so do mine, but their menus are quite similar -- in as much as
it makes sense for them to have similar operations -- and ditto ditto
for their toolbars, keyboard shortcuts, etc etc. I.e. the differences
only come "when needed for a specific domain" just as Pascal just
said. So I don't know what you're intending with this answer.
is more in common. Still, the phrase "practicality beats purity"
seems appropriate here.
Uniformity is more practical than diversity: e.g. ctrl-c as the Copy
operation everywhere means my fingers, even more than my brain, get
used to it. If you assign ctrl-c to some totally different operation in
your gui app "because you think it's more practical" you're gonna
drive me crazy, assuming I have to use your app. (That already
happens to me with the -- gnome based i think -- programs using
ctrl-z for minimize instead of undo -- I'm starting to have frayed
nerves about that even for GVIM, one of the programs I use most
often...:-).
> I firmly believe people can in general easily handle much more
> complicated syntax than Lisp has. There's plenty of room to
> spare in people's heads for this subject.


Sure, but is it worth it?


Do you have any doubt to my answer? :)


Given the difficulty I'm having understanding your stance[s] in
this post, I do. My own answer would be that syntax sugar is
in people's head anyway because of different contexts -- infix
arithmetic taught since primary school, indentation in outlines
and pseudocode, etc etc -- so, following that already-ingrained
set of conventions takes no "room to spare in people's heads" --
indeed, the contrary -- it saves them effort. If people's head
started as "tabula rasa" it might be different, but they don't, so
that's a moot issue.

That much being said, I _do_ like prefix syntax. In some cases
I need to sum a+b+c+d and repeating that silly plus rather than
writing (+ a b c d) grates. Or I need to check a<b<c<d and
again I wish I could more summarily write (< a b c d). When I
designed my own language for bridge-hands evaluation, BBL, I
used prefix notation, though in the form operator ( operands )
[which I thought would have been easier for other bridge players
to use], e.g.:

& ( # weak NT opener requires AND of two things:
s ( 1 g 4 3 3 3 # shape 4333 (any), or
2 g 4 4 3 2 # 4432 (any), or
3 3- 3- 3- 5 # 5332 with 5 clubs, or
4 3- 3- 5 3- # 5332 with 5 diamonds
)
< ( 12 # as well as, 13-15 range for
\+ SHDC c( 4 3 2 1 0) # normal Milton-Work pointcount
16
)
)

Maybe readers are starting to understand why I don't WANT to
use a language I design myself;-). Anyway, the language was
NOT enthusiastically taken up, until I wrote code generators with
a GUI accepting conditions in more "ordinary looking" notations
and building this, ahem, intrinsically "better" one;-) -- then, but only
then, did other players start using this to customize hand generators
and the like. (Yes, I did have macros, but puny enough that they
still required operator(operands) syntax -- they basically served only
to reduce duplication, or provide some little abstraction, not to
drastically change the language syntax at all). Ah well -- maybe I
should just put the BBL (Turbo Pascal) implementation and (Italian-
language) reference manual online -- it still moves nostalgia in me!-)

Convenience is what matters. If you are able to conveniently express
solutions for hard problems, then you win. In the long run, it doesn't
My APL experience tells me this is false: conveniently expressing
solutions is HALF the problem -- you (and others!) have to be
able to read them back and maintain and alter them later too.
matter much how things behave in the background, only at first.


Personally, I would love to write equations on a screen like I
would on paper, with integral signs, radicals, powers, etc. and
not have to change my notation to meet the limitations of computer
input systems.


So jot your equations on a tablet-screen and look for a good
enriched text recognition system. What's programming gotta
do with it?
For Lisp is a language tuned to keyboard input and not the full
range of human expression. (As with speech.)
Python even more so on the output side -- try getting a screen-reader to
do a halfway decent job with it. But what does this matter here?

(I know, there are people who can write equations in TeX as
fast as they can on paper. But I'm talking about lazy ol' me
who wants the convenience.)

Or, will there ever be a computer/robot combination I can
teach to dance? Will I do so in Lisp?
You may want to teach by showing and having the computer
infer more general rules from example. Whether the inference
engine will be best built in lisp, prolog, ocaml, mozart, whatever,
I dunno. I don't think it will be optimally done in Python, though.
"Horses for courses" is my philosophy in programming.

It seems to me that in Python, just as in most other languages, you
always have to be aware that you are dealing with classes and objects.
Given the "everything is an object" (classes included) and every object
belongs to a class, you could indeed say that -- in much the same sense
as you may be said to always be aware that you're breathing air in
everyday life. Such awareness is typically very subconscious, of course.
Why should one care? Why does the language force me to see that when it
really doesn't contribute to the solution?


I'm not sure in what sense "python forces you to see" that, e.g.,
the number 2 is an object -- or how can that fail to "contribute to
the solution". Care to exemplify?
Hmmm.. Is the number '1' an object? Is a function an object?
What about a module? A list? A class?
Yes to all of the above, in Python. I don't get your point.
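A quick sketch that checks the answer directly (any imported module will do for the module case):

```python
import types

# Numbers, functions, modules, lists and classes are all first-class objects.
checks = [
    isinstance(1, object),        # the number 1
    isinstance(len, object),      # a (builtin) function
    isinstance(types, object),    # a module
    isinstance([], object),       # a list
    isinstance(list, object),     # a class -- classes are objects too
]
print(all(checks))                # True
```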
print sum(range(100))

4950

Where in that example are you aware that you are dealing with classes
and objects?


Me? At every step -- I know 'sum' names a builtin object that is a
function (belonging to the class of builtin functions) taking one argument
which is a sequence, 'range' names another builtin object returning
a list object, etc. I'm not directly dealing with any of their classes --
I know they belong to classes, like any object does, but I have no need
to think about them in this specific statement (in fact, I hardly ever do;
signature-based polymorphism is what I usually care about, not class
membership, far more often than not).

But I don't get your points -- neither Andrew's nor Pascal's. How does
this differ from the awareness I might have in some macro-enhanced
lisp where I would type (print (sum (range 100))) or the like?
conjecture is that additional syntax can make some things easier.
That a problem can be solved without new syntax does not
contradict my conjecture.


But even if we provisionally concede your conjecture we are still
left wondering: is the degree of easing so high that it overcomes
the inevitable increase in complication, needed for a language to
have N+1 syntax forms where previously it only had N? I.e., it's
in general a difficult engineering tradeoff, like many in language
design -- which is why I'd rather delegate the decisions on these
tradeoffs to individuals, groups and processes with a proven track
record for making a lot of them with complexive results that I find
delightful, rather than disperse them to myself & many others
(creating lots of not-quite-congruent language dialects).
Alex

Jul 18 '05 #515

"Andrew Dalke" <ad****@mindspring.com> writes:
Pascal Costanza:
[quantum programming]
While an interesting topic, it's something I'm not going to worry about.
And if I did, it would be in Python ;)

I bring it up as a counter-example to the idea that all modes of
programming have been and can be explored in a current Lisp.
I conjectured one interesting possibility -- that of handling ensembles
of possible solutions to a given problem.


Oops, try again.

http://hampshire.edu/lspector/qgame.html
http://www.het.brown.edu/people/andre/qlambda/
http://mitpress.mit.edu/sicp/full-te...ok/node88.html

In retrospect I should have given a more obvious possibility.
As some point I hope to have computer systems I can program
by voice in English, as in "House? Could you wake me up
at 7?" That is definitely a type of programming, but Lisp is
a language designed for text, not speed.
Oops, try again.

http://www.hpl.hp.com/techreports/94/HPL-94-30.html
http://dynamo.ecn.purdue.edu/~qobi/software.html
http://citeseer.nj.nec.com/siskind93screamer.html

If you were to select a language that has been used for *more* different
kinds of programming paradigms, you'd be hard pressed to find something
better than lisp.
Personally, I would love to write equations on a screen like I
would on paper, with integral signs, radicals, powers, etc. and
not have to change my notation to meet the limitations of computer
input systems.

For Lisp is a language tuned to keyboard input and not the full
range of human expression. (As with speech.)
Oops, it turns out that Lisp is used for handwriting recognition
in the banking industry. I've heard rumors of Lisp being used
for voice recognition.
Or, will there ever be a computer/robot combination I can
teach to dance? Will I do so in Lisp?
You want to teach a robot to dance? I would prefer a cute woman,
myself. Suit yourself, but you can use Lisp:

http://citeseer.nj.nec.com/lee95programming.html

Yes. Got a URL for documentation on a Lisp providing access
to shared memory? My guess is that the Lisp runtime needs
to be told about the arenas and that the multiple instances of
Lisp sharing the arena must use some extra IPC to handle
the distributed gc.
http://www.double.co.nz/creatures/de...aredmemory.htm
It gets worse if program X forks copies Y and Z, with shared
memory XY between X and Y (but not Z) and XZ between
X and Z (but not Y). X needs to be very careful on which
data is copied, and it isn't immediately obvious what happens
when some object from XZ is inserted into a list accessible
to Y via XY.
Actually, it is obvious with a little thought. Objects sharable
between X and Y must reside in the XY address space. Objects
sharable between X and Z must reside in the XZ address space.

Since neither Y nor Z share their entire address space with X, there
exist outgoing edges in the heap of Y that do not refer to objects
within Y. Presumably the author of the system thought of that.
(If not, he's in trouble before Z even exists.) An object common
with X and Z is as opaque to Y as any other object in X.
Consider also a "persistent memory" server running in C
(or hardware access to some sort of non-volatile memory.)
You can use standard IPC to get an initially zeroed memory
block and are free to use that memory without restrictions.
It's persistent after program exit so when the program restarts
it can reconnect to shared memory and get the data as it
was at exit.

This service is straight-forward to support in C/C++. It
sounds like for Lisp you are dependent on the implementation,
in that if the implementation doesn't support access to its
memory allocator/gc subsystem then it's very hard to
write code for this hardware on your own.


Why do you think this? The API would be straightforward. A
shared persistent memory would at least have these calls:

allocate, which takes an allocation amount and returns
some name for the object allocated,

retrieve, which takes some name for an object and recovers
the object itself (for use upon restarting)

dereference, which takes an object and a field and returns
or assigns the field.

An obvious way to integrate this into lisp is to make a
displaced array.
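For what it's worth, the service being described can be sketched in Python too, with a file-backed mmap standing in for the non-volatile region (the file name and size here are made up for the sketch):

```python
import mmap
import os

PATH = "persist.bin"   # stands in for the non-volatile memory region
SIZE = 4096

def connect(path=PATH, size=SIZE):
    # (Re)connect to the persistent block, creating and zeroing it on first use.
    created = not os.path.exists(path)
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    if created:
        os.write(fd, b"\x00" * size)
    mem = mmap.mmap(fd, size)
    os.close(fd)        # mmap keeps its own duplicate of the descriptor
    return mem

mem = connect()
mem[0:5] = b"hello"     # "dereference": read/write fields by offset
mem.flush()             # push the bytes to the backing store
mem.close()

mem2 = connect()        # simulates the program restarting and reconnecting
recovered = bytes(mem2[0:5])
mem2.close()
print(recovered)        # b'hello'
```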
Jul 18 '05 #516

"Andrew Dalke" <ad****@mindspring.com> writes:
The smartest people I know aren't programmers. What does
that say?


I think this is a vital point. CL's inaccessibility is painted as a feature of
CL by many c.l.l denizens (keeps the unwashed masses out), but IMO the CL
community stunts and starves itself intellectually big time because CL is (I
strongly suspect) an *extremely* unattractive language for smart people
(unless they happen to be computer geeks).

Apart from the fact that this yields a positive feedback loop, I'd think that
even the smart computer geeks are likely to suffer from this incestuousness in
the midrun.

'as
Jul 18 '05 #517

"Terry Reedy" <tj*****@udel.edu> writes:
"Peter Seibel" <pe***@javamonkey.com> wrote in message
news:m3************@javamonkey.com...

Note that it's the ability, at macro expansion time, to treat the
code as data that allows me to generate test failure messages that
contain the literal code of the test case *and* the value that it
evaluated to. I could certainly write a HOF version of CHECK that
accepts a list of test-case-functions:

...
But since each test case would be an opaque function object by the
time CHECK sees it, there'd be no good option for nice reporting
from the test framework.


But can't you explicitly quote the test cases for input to the HOF and
eval them within the HOF, so you again have both the literal code and
value generated? Not as pretty, admittedly, and perhaps less
efficient, but workable?


Well, if you eval the test cases, they are evaled in what's known as
the "null" lexical environment. So if you do something like:

(let ((x 10))
(check '(= (foo x) (* 2 x))))

where check is a function that evals its argument, rather than a
macro, then the evaluation will not be able to "see" the local binding
of the variable x so is unlikely to do what you think.

Python's eval--as I understand it--handles this differently. Common
Lisp's EVAL may be the way it is partially because it is not needed
for things like this given the existence of macros. There are also
some semantic difficulties of what lexical environment the EVAL should
occur in. Given that the quoted expression above is just a piece of
data there's no particular way to attach the lexical environment to it
so that a subsequent EVAL can use it.
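For comparison, a small sketch of the Python behavior: eval called with no explicit environment uses the calling frame's globals and locals, so local bindings are visible; passing environments explicitly puts you in control of what the evaluated expression can see.

```python
def f():
    x = 10
    # No environment passed: eval sees this frame's locals, including x.
    return eval("x * 2")

print(f())                          # 20

# Explicit environments: x here comes from the supplied locals dict.
print(eval("x * 2", {}, {"x": 3}))  # 6
```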

-Peter

--
Peter Seibel pe***@javamonkey.com

Lisp is the red pill. -- John Fraser, comp.lang.lisp
Jul 18 '05 #518

Peter Seibel <pe***@javamonkey.com> writes:
If for some reason you believe that macros will have a different
effect--perhaps decreasing simplicity, clarity, and directness then
I'm not surprised you disapprove of them. But I'm not sure why you'd
think they have that effect.
Well, maybe he's seen things like IF*, MVB, RECEIVE, AIF, (or as far as
simplicity is concerned LOOP)...?

I'm not saying that macros always have ill-effects, but the actual examples
above demonstrate that they *are* clearly used to by people to create
idiosyncratic versions of standard functionality. Do you really think clarity,
interoperability or expressiveness is served if person A writes
MULTIPLE-VALUE-BIND, person B MVB and person C RECEIVE?
(deftest foo-tests ()
(check
(= (foo 1 2 3) 42)
(= (foo 4 5 6) 99)))

Note that this is all about the problem domain, namely testing.


I think the example isn't a bad one, in principle, in practice however I guess
you could handle this superiorly in python.

I develop my testing code like this:

# like python's unittest.TestCase, only that it doesn't "disarm"
# exceptions
TestCase = awmstest.PermeableTestCase
#TestCase = unittest.TestCase

class BarTest(TestCase):
    ...
    def test_foos(self):
        assert foo(1, 2, 3) == 42
        assert foo(4, 5, 6) == 99

Now if you run this in emacs/ipython with '@pdb on' a failure will raise an
Exception, the debugger is entered and emacs automatically will jump to the
right source file and line of code (I am not mistaken in thinking that you
can't achieve this using emacs/CL, right?) and I can interactively inspect the
stackframes and objects that were involved in the failure.

I find this *very* handy (much handier than having the wrong result printed
out, because in many cases I'm dealing with objects such as large arrays wich
are not easily visualized).

Once the code and test code works I can easily switch to mere reporting
behavior (as described by andrew dalke) by uncommenting unittest.TestCase back
in.
'as
Jul 18 '05 #519

"Andrew Dalke" <ad****@mindspring.com> writes:
Peter Seibel:
So, to write a new test function, here's what I
write:

(deftest foo-tests ()
(check
(= (foo 1 2 3) 42)
(= (foo 4 5 6) 99)))
Python bases its unit tests on introspection. Including the
full scaffolding, the equivalent for Python would be

import unittest
import foo_module # I'm assuming 'foo' is in some other module

class FooTestCase(unittest.TestCase):
def testFoo(self):
self.assertEquals(foo_module.foo(1, 2, 3), 42)
self.assertEquals(foo_module.foo(4, 5, 6), 99)

if __name__ == '__main__':
unittest.main()


Yup. I'd certainly be loath to claim that there's anything that *only*
be done using macros. And it *is* the case that dynamic and
introspective features make lots of things that might otherwise be
done with macros less painful.

So here's another example of using macros to define a domain specific
language. In this case the domain is lexing and parsing. I wrote a
parser generator (like ANTLR in Java or YACC in C) by defining the
macros DEFPROD, DEFCHARTYPE, and DEFLEXER, which allow me to define
the grammatical rules for a lexer in an s-expression notation and then
stitch them together into a procedure that implements a tokenizer based
on the grammar rules. The notation may look a bit funny if you're not
used to Lisp syntax but you can probably observe that what I have here
is pretty much a translation
from the BNF in the Java language standard to my own s-exp notation.
(I'll give just a sample of the production rules, the rest are
similar. The code required to implement the lexer proper is 91 lines
of actual code, not counting a preprocessor function that translates
Java's special unicode escape syntax.)

In other languages parser generators usually have to define their own
grammar language and write a program that parses *that* format in
order to generate the parsing code. Macros let me do the same thing
with much less work because Lisp does parsing for me--what follows
*is* a Lisp program given that I've defined the macros it uses.

As with DEFTEST, the point is not that this is the only way to do it
but rather that macros give me a way to concisely and directly express
the stuff I *care* about (i.e. what is the grammar I'm trying to
parse) while abstracting away the mechanims by which it gets
translated into efficient code that actually does the work.
(A sample of the productions from the code that generates a Java lexer.)

;; 3.4 Line terminators

(defprod line-terminator () (/ #\newline (#\return (? #\newline))))

(defchartype input-character
'(and character (not (member #\newline #\return))))

;; 3.5 Input Elements and Tokens

(defprod input () ((* input-element) (? #\Sub)))

(defprod input-element () (/ white-space comment token))

(defprod token () (/ identifier java-keyword literal separator operator))

;; 3.6 White space

(defprod white-space () (/ #\space #\tab #\page line-terminator))

;; 3.9 Keywords
(defprod java-keyword ()
(/
"abstract" "boolean" "break" "byte" "case" "catch" "char" "class" "const"
"continue" "default" ("do" (? "uble")) "else" "extends" ("final" (? "ly"))
"float" "for" "goto" "if" "implements" "import" "instanceof"
("int" (? "erface")) "long" "native" "new" "package" "private" "protected"
"public" "return" "short" "static" "strictfp" "super" "switch"
"synchronized" "this" ("throw" (? "s")) "transient" "try" "void"
"volatile" "while"))

;; 3.10.2 Floating-Point Literals

(defprod floating-point-literal ()
(/
((+ digit)
(/
(#\. (* digit) (? exponent-part) (? float-type-suffix))
(exponent-part (? float-type-suffix))
float-type-suffix))
(#\. (+ digit) (? exponent-part) (? float-type-suffix))))

(defprod exponent-part () (exponent-indicator signed-integer))
(defchartype exponent-indicator '(member #\e #\E))
(defprod signed-integer () ((? sign) (+ digit)))
(defchartype sign '(member #\+ #\-))
(defchartype float-type-suffix '(member #\f #\F #\d #\D))

;; 3.12 Operators

(defprod operator ()
(/
":"
"?"
"~"
("!" (? "="))
("%" (? "="))
("&" (? (/ "=" "&")))
("*" (? "="))
("+" (? (/ "=" "+")))
("-" (? (/ "=" "-")))
("/" (? "="))
("<" (? "<") (? "="))
("=" (? "="))
(">" (? ">" (? ">")) (? "="))
("^" (? "="))
("|" (? (/ "=" "|")))))
;; This macro actually expands into a function java-lexer that takes
;; a string of text and tokenizes it according to the rules defined
;; above, starting from the "input" rule and returning as tokens
;; objects that represent identifiers, java-keywords, literals,
;; separators, and operators. Comments and whitespace are, thus,
;; discarded.

(deflexer java-lexer
((:start-rule input)
(:tokens identifier java-keyword literal separator operator)))
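As a rough Python analogue (a sketch, not Seibel's actual generator): higher-order functions can express the same declarative grammar, trading macro-time code generation for closures built at runtime. The combinator names below mirror the s-expression operators above.

```python
# Each combinator returns a matcher: (text, pos) -> next pos, or None on failure.
def lit(s):
    def match(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return match

def seq(*parts):                       # juxtaposition in the Lisp rules
    def match(text, pos):
        for p in parts:
            pos = p(text, pos)
            if pos is None:
                return None
        return pos
    return match

def alt(*parts):                       # like (/ ...)
    def match(text, pos):
        for p in parts:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return match

def opt(p):                            # like (? ...)
    def match(text, pos):
        result = p(text, pos)
        return pos if result is None else result
    return match

# Section 3.4's rule: (/ #\newline (#\return (? #\newline)))
line_terminator = alt(lit("\n"), seq(lit("\r"), opt(lit("\n"))))

print(line_terminator("\r\n", 0))      # 2: consumed both characters
```

The difference Seibel is pointing at remains, of course: the Lisp version wires the grammar together once, at compile time, while these closures dispatch at runtime.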
The Python code is more verbose in that regard because == isn't a
way to write a function. I assume you also have tests for things
like "should throw exception of type X" and "should not throw
expection" and "floating point within epsilon of expected value"?
Sure, you can put any boolean expression in a check and have it
treated as a test case. The macro code can figure out whether it's a
function call and if it is arranges to evaluate the arguments itself
so it can see what their values are and then passes the values to the
function. Because all the comparators such as =, EQL (object
equality), <, >, string<, etc. are functions, this works great.

[snip]
I expect the usefulness of showing the full expression to be smaller
when the expression is large, because it could be an intermediate in
the expression which has the problem, and you don't display those
intermediates.
Yeah. Though that's just because I didn't get around to implementing
that. As long as the expression consists of a tree of function calls,
I could do the same thing to sub-expressions I do to the top-level
expressions in the CHECK, and display them as well. If I really wanted
I could write a GUI to browse through the whole tree of function
calls. But I decided that the 80/20 rule applies and if I can't figure
out from the top-level expressions what's going on then I can always
resort to normal debugging. (Also the test framework can dump you into
the debugger at the point of a test failure so you can poke around
with the debugger to see any values you care about.)
So what is the equivalent non-macro code? Well the equivalent code
to the DEFTEST form (i.e. the macro expansion) is not *that* much
more complex--it just has to do the stuff I mentioned; binding the
test name variable and registering the test function. But it's
complex enough that I sure wouldn't want to have to type it over
and over again each time I write a test:


Python's introspection approach works by looking for classes of a
given type (yes, classes, not instances), then looking for methods
in that class which have a given prefix. These methods become the
test cases. I imagine Lisp could work the same way, except that
because other solutions exist (like macros), there's a prefered
reason to choose another style.
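A stripped-down sketch of that introspection style (a toy, not the real unittest machinery):

```python
class FooTests:
    # Methods whose names start with "test" become test cases.
    def test_add(self):
        return 1 + 1 == 2
    def test_concat(self):
        return "a" + "b" == "ab"
    def helper(self):
        return None            # ignored by discovery: no "test" prefix

def run_tests(cls):
    # Discover test methods by name at runtime and invoke each one.
    results = {}
    for name in dir(cls):
        if name.startswith("test"):
            results[name] = getattr(cls(), name)()
    return results

print(run_tests(FooTests))
```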


The other difference is that runtime introspection happens at runtime.
That's probably fine for a test framework. I've written several Java
test frameworks that use reflection in similar ways so I know about
that approach too. But for other tasks, such as my parsing example
above, even if they could be done using introspection, there's a big
advantage for doing a lot of the work once, at compile time.

[snip]
A solution which would get what you want without macros is the
addition of more parse tree information, like the start/end
positions of each expression. In that way the function could look up
the stack, find the context from which it was called, then get the
full text of the call. This gets at the code "from the other
direction", that is, from looking at the code after it was parsed
rather than before.
Heh. That's just a sort of, pardon my French, kludgy reimplementation
of macros. Macros are nothing more (or less) than a mechanism whereby
you can--at compile time--get a hold of the parse tree in a form that
your code can manipulate it to generate other code. (Yours is still
at runtime which makes it somewhat less useful, though it does work
for the test framework case.)
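That stack-walking approach can be sketched in Python with the inspect module (a toy version; the call-site text is only recoverable when the source file is on disk):

```python
import inspect

def check(value):
    # Look one frame up the stack and recover the source text of the call site.
    caller = inspect.stack()[1]
    context = caller.code_context          # None when source isn't available
    call_text = context[0].strip() if context else "<source unavailable>"
    print("%s  =>  %r" % (call_text, value))
    return value

result = check(2 + 2 == 4)
```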
I'll let you decide if Lisp's introspection abilities provide an
alternate non-macro way to handle building test cases which is just
as short.


Sure. I could use Lisp's meta object protocol (MOP) to define all
sorts of crazy things. But I'd probably still end up wrapping them in
some nice syntactic abstractions (macros) to make them as expressive
as possible. The thing is, I'm not looking for a "non-macro way" to do
these things because macros (in Lisp) are no more problematic than
functions--they're just another way of definining abstractions. Like
any mechanism for defining abstractions they need to be used with good
design sense because bad abstractions can be worse than no
abstractions at all--at least if things are concrete you can see how
they work, even if you do have to wade through pages of code to see
it. But good abstractions--whether functions, classes, or macros--make
it even easier to understand the code.

-Peter

P.S. Pythonistas--I was originally following this thread in c.l.lisp.
Somewhere along the lines the follow-ups got set to c.l.python which
is fine with me as the Lisp guys (should) already know this. But feel
free let me know if you want me to shut up about this.

--
Peter Seibel pe***@javamonkey.com

Lisp is the red pill. -- John Fraser, comp.lang.lisp
Jul 18 '05 #520

P: n/a
Pascal Costanza wrote:
...
Does Python allow local function definitions? ... Can they shadow
predefined functions? ...

Yes, named objects, including functions, can (locally) shadow
(override) builtins. It is considered a bad habit/practice unless
done intentionally with a functional reason.


Well, this proves that Python has a language feature that is as
dangerous as many people seem to think macros are.


Indeed, a chorus of "don't do that" is the typical comment each
and every time a newbie falls into that particular mis-use. Currently,
the --shadow option of PyChecker only warns about shadowing of
_variables_, not shadowing of _functions_, but there's really no
reason why it shouldn't warn about both. Logilab's pylint does
diagnose "redefining built-in" with a warning (I think they mean
_shadowing_, not actually _redefining_, but this may be an issue
of preferred usage of terms).

"Nailing down" built-ins (at first with a built-in warning for overriding
them, later in stronger ways -- slowly and gradually, like always, to
maintain backwards compatibility and allow slow, gradual migration of the
large existing codebase) is under active consideration for the next version
of Python, expected (roughly -- no firm plans yet) in early 2005.

So, yes, Python is not perfect today (or else, we wouldn't be planning a
2.4 release...:-). While it never went out of its way to give the user "as
much rope as needed to shoot oneself in the foot", neither did it ever
spend enormous energy in trying to help the user avoid many possible errors
and dubious usage. Such tools as PyChecker and pylint are a start, and
some of their functionality should eventually be folded back into the
core, just as tabnanny's was in the past with the -t switch. I don't think
the fundamental Python will ever nag you for missing comments or
docstrings, too-short names, etc, the way pylint does by default (at
least, I sure hope not...!-), but there's quite a bit I _would_ like to have
it do in terms of warnings and, eventually, error messages for
"feechurs" that only exist because it was once simpler to allow than
to forbid them, not by a deliberate design decision to have them there.

Note that SOME built-ins exist SPECIFICALLY for the purpose of
letting you override them. Consider, for example, __import__ -- this
built-in function just exposes the inner mechanics of the import
statement (and friends) to let you get modules from some other
place (e.g., when your program must run off a relational database
rather than off a filesystem). In other words, it's a rudimentary hook
in a "Template Method" design pattern (it's also occasionally handy
to let you import a module whose name is in a string, without
going to the bother of an 'exec', so it will surely stay for that purpose
even though we now have a shiny brand-new architecture for
import hooks -- but that's another story). Having a single hook of
global effect has all the usual downsides, of course (which is exactly
why we DO have that new architecture;-): two or more complicated
packages doing import-hooks can't necessarily coexist within the
same Python application program (the only saving grace which let
us live with that simplistic hook for so many years is that importing
from strange places is typically a need of a certain deployment of
an overall application, not of a package -- still, such packages DO
exist, so the previous solution was far from perfect).

Anyway, back to your contention: I do not think that the fact that
the user can, within his functions, choose very debatable names,
such as those which shadow built-ins, is anywhere as powerful,
and therefore as dangerous, as macros. My own functions using
'sum' will get the built-in one even if yours do weird things with
that same name as a local variable of their own. The downsides
of shadowing are essentially as follows...

a newbie posts some fragment of his code asking for guidance,
and among other things that fragment has
for i in range(len(thenumbers)):
total = total + thenumbers[i]
he will receive many suggestions on how to make it better,
including the ideal one:
total = sum(thenumbers, total)
But then he tries it out and reports "it breaks" (newbies rarely
are clueful enough to just copy and paste error messages). And
we all waste lots of time finding out that this is because... the
hapless newbie had named HIS OWN FUNCTION 'sum', so
this was causing runaway recursion. Having met similar issues
over and over, one starts to warn newbies against shadowing
and get sympathetic with the idea of forbidding it:-).
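The trap in miniature (reconstructed from the story above, not the newbie's actual code):

```python
# A module-level function named `sum` shadows the builtin, so the
# suggested one-liner recurses instead of summing.

def sum(thenumbers, total=0):      # shadows the builtin!
    return sum(thenumbers, total)  # calls itself, not builtins.sum

try:
    sum([1, 2, 3])
    outcome = "no error"
except RecursionError:
    outcome = "runaway recursion"

del sum                    # drop the shadow...
restored = sum([1, 2, 3])  # ...and the builtin is visible again
```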

That doesn't really compare to an extra feature in the language
that is deliberately designed to let reasonably-clueful users do
their thing, isn't deprecated nor warned against by anybody at
all (with a few isolated voices speaking about "abuse" of macros
in this thread, but still with an appreciation for macros when
_well_ used), and is MEANT to do what newbies _accidentally_
do with shadowing & much more besides;-).
Alex

Jul 18 '05 #521

P: n/a
Alex:
Yeah, well, I fear the answer will be yes (it could), but it won't
do so since you haven't _asked_ it to wake you up, only if it
could.
Pshaw. My hypothetical house of the 2050s or so will know
that "could" in this context is a command. :)
ME, I definitely don't want to use natural language with
all of its ambiguity for anything exept communicating with
other human beings, thankyouverymuch.
But what if computers someday become equally capable
as humans in understanding unconstrained speech? It
can be a dream, yes?
a language designed for text, not speed.


*blink* what does THAT doubtful assertion have to do with anything
else we were discussing just now...?


An unfortunate typo. I meant "speech" instead of "speed" but
my fingers are too used to typing the latter. Here I would like
a computer to ask "um, did you really mean that?" -- so long as
the false positive rate was low enough.
My spreadsheet program looks different from my word processor


Sure, so do mine, but their menus are quite similar --

... So I don't know what you're intending with this answer.
Losing ... focus ... mind .. numb .. must stop answering thread.

It's a knee-jerk reaction and an example that I'm no longer
thinking before I start to reply.
For Lisp is a language tuned to keyboard input and not the full
range of human expression. (As with speech.)


Python even more so on the output side -- try getting a screen-reader to
do a halfway decent job with it. But what does this matter here?


The conjecture that computer programming languages are
constrained by the form of I/O and that other languages, based
on speech, free-form 2D writing, or other forms of input may
be more appropriate, at least for some domain.

This was in response to the idea that Lisp is the most appropriate
language for all forms of programming.
Hmmm.. Is the number '1' an object? Is a function an object?
What about a module? A list? A class?


Yes to all of the above, in Python. I don't get your point.


It was a rhetorical set of questions, to see what Pascal meant
about all the time being aware that we are dealing with classes
and object. When I wrote it, I didn't see the ambiguity that I
could be pointing out that these aren't classes/objects; in part
because I just didn't think about that alternative.
But I don't get your points -- neither Andrew's nor Pascal's. How does
this differ from the awareness I might have in some macro-enhanced
lisp where I would type (print (sum (range 100))) or the like?
That was my point.
But even if we provisionally concede your conjecture we are still
left wondering: is the degree of easing so high that it overcomes
the inevitable increase in complication, needed for a language to
have N+1 syntax forms where previously it only had N?


Good point. And as I no longer feel like following up on this
thread, I'll leave it at that.

Andrew
da***@dalkescientific.com
Jul 18 '05 #522

P: n/a
Andrew Dalke writes:
Still doesn't answer my question on how nicely Lisp handles
the 'unexpected' need of allocating objects from different
memory arenas.


If I understand things correctly, ITA Software's Orbitz system does
that. See the slides of Rodney Daughtrey's ILC 2002 talk "ITA Software
and Orbitz: Lisp in the Online Travel World" (International Lisp
Conference 2002 Proceedings, page 606).

A toy example is contained in Paul Graham's book "ANSI Common
Lisp". See section 13.5 "Example: Pools" on page 226.
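For readers without the book at hand, the gist of an object pool can be sketched in a few lines of Python (a loose analogue only; Graham's Lisp version preallocates structures specifically to avoid consing):

```python
class Pool:
    """Preallocate objects once and recycle them, instead of allocating
    fresh ones each time -- a rough analogue of a memory arena."""

    def __init__(self, factory, size):
        self._factory = factory
        self._free = [factory() for _ in range(size)]

    def acquire(self):
        # reuse a pooled object if available, else fall back to allocating
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        self._free.append(obj)

pool = Pool(dict, 2)
a = pool.acquire()
pool.release(a)
b = pool.acquire()  # the recycled object, not a new allocation
```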
Paolo
--
Paolo Amoroso <am*****@mclink.it>
Jul 18 '05 #523

P: n/a
Pascal Costanza <co******@web.de> wrote in message news:<bm**********@newsreader2.netcologne.de>...
Lispniks are driven by the assumption that there is always the
unexpected. No matter what happens, it's a safe bet that you can make
Lisp behave the way you want it to behave, even in the unlikely event
that something happens that no language designer has ever thought of
before. And even if you cannot find a perfect solution in some cases,
you will at least be able to find a good approximation for hard
problems.


This I believe is the very crux of the matter. The problem domain to
which lisp has historically been applied, artificial intelligence,
more or less guaranteed that lisp hackers would run up against the
sorts of problems that no one had ever seen before. The language
therefore evolved into a "programmable programming language," to quote
John Foderaro (or whoever first said or wrote this now famous line).

Lisp gives the programmer who knows he will be working in a domain
that is not completely cut and dried, the assurance that his language
will not prevent him for doing something that has never been done
before. Python gives me the distinct impression that I might very well
run up against the limitations of the language when dealing with very
complex problems.

For 90% of tasks, even large projects, Python will certainly have
enough in its ever expanding bag of tricks to provide a clean,
maintainable solution. But that other 10% keeps lisp hackers from
using Python for exploratory programming - seeking solutions in
problem domains that have not been solved before.
Jul 18 '05 #524

P: n/a
Alex Martelli wrote:
Pascal Costanza wrote:
...
Does Python allow local function definitions?
...
Can they shadow predefined functions?
...
Yes, named objects, including functions can (locally) shadow
(override) builtins. It is considered a bad habit/practice unless
done intentionally with a functional reason.
Well, this proves that Python has a language feature that is as
dangerous as many people seem to think macros are.

Indeed, a chorus of "don't do that" is the typical comment each
and every time a newbie falls into that particular mis-use. Currently,
the --shadow option of PyChecker only warns about shadowing of
_variables_, not shadowing of _functions_, but there's really no
reason why it shouldn't warn about both. Logilab's pylint does
diagnose "redefining built-in" with a warning (I think they mean
_shadowing_, not actually _redefining_, but this may be an issue
of preferred usage of terms).

"Nailing down" built-ins (at first with a built-in warning for overriding
them, later in stronger ways -- slowly and gradually, like always, to
maintain backwards compatibility and allow slow, gradual migration of the
large existing codebase) is under active consideration for the next version
of Python, expected (roughly -- no firm plans yet) in early 2005.


OK, I understand that the Python mindset is really _a lot_ different
than the Lisp mindset in this regard.
Note that SOME built-ins exist SPECIFICALLY for the purpose of
letting you override them. Consider, for example, __import__ -- this
built-in function just exposes the inner mechanics of the import
statement (and friends) to let you get modules from some other
place (e.g., when your program must run off a relational database
rather than off a filesystem). In other words, it's a rudimentary hook
in a "Template Method" design pattern (it's also occasionally handy
to let you import a module whose name is in a string, without
going to the bother of an 'exec', so it will surely stay for that purpose
even though we now have a shiny brand-new architecture for
import hooks -- but that's another story).
Ah, you want something like final methods in Java, or better probably
final implicitly as the default and means to make select methods
non-final, right?
Anyway, back to your contention: I do not think that the fact that
the user can, within his functions, choose very debatable names,
such as those which shadow built-ins, is anywhere as powerful,
and therefore as dangerous, as macros. My own functions using
'sum' will get the built-in one even if yours do weird things with
that same name as a local variable of their own. The downsides
of shadowing are essentially as follows...


What makes you think that macros have farther reaching effects in this
regard than functions? If I call a method and pass it a function object,
I also don't know what the method will do with it.

Overriding methods can also be problematic when they break contracts.
(Are you also considering to add DBC to Python? I would expect that by
now given your reply above.)

Can you give an example for the presumably dangerous things macros
supposedly can do that you have in mind?
Pascal

Jul 18 '05 #525

P: n/a
Pascal Costanza wrote:
...
Where in that example are you aware that you are dealing with classes
and objects?


Well, maybe I am wrong. However, in a recent example, a unit test
expressed in Python apparently needed to say something like
"self.assertEqual ...". Who is this "self", and what does it have to do
with testing? ;)


Here self is 'the current test case' (instance of a class specializing
TestCase), and what it has to do with the organization of unit-testing
as depicted in Kent Beck's framework (originally for Smalltalk, adapted
into a lot of different languages as it, and test-driven design, became
deservedly popular) is "just about everything". I think you might like
reading Kent's book on TDD -- and to entice you, you'll even find him
criticizing the fact that 'self' is explicit in Python (it's kept implicit
in Smalltalk). If you don't like O-O architecture, you doubtlessly won't
like Kent's framework -- he IS a Smalltalk-thinking OO-centered guy.
Alex

Jul 18 '05 #527

P: n/a
Raffael Cavallaro wrote:
For 90% of tasks, even large projects, Python will certainly have
enough in its ever expanding bag of tricks to provide a clean,
maintainable solution. But that other 10% keeps lisp hackers from
using Python for exploratory programming - seeking solutions in
problem domains that have not been solved before.


I would like to add to that by pointing out that it is even a good idea
to use Lisp for problem domains that others have solved before but that
_I_ (or "you") don't understand completely yet.

To me, programming languages are tools that help me to explore domains
by writing models for them and interactively testing how they react to
commands. In a certain sense, they are an extension of my brain that
help me to manage the tedious and repetitive tasks, and let me focus on
the essential problems.

Many programming languages require you to build a model upfront, on
paper or at least in your head, and then write it down as source code.
This is especially one of the downsides of OOP - you need to build a
class hierarchy very early on without actually knowing if it is going to
work in the long run.

What you actually do when you build a model of something is that you
start _somewhere_, see how far you can get, take some other route, and
so on, until you have a promising conceptualization. The cool thing
about Lisp is that I can immediately sketch my thoughts as little
functions and (potential) macros from the very beginning, and see how
far I can get, exactly as I would if I were restricted to working
on paper. Except that in such an exploratory programming mode, I can get
immediate feedback by trying to run the functions and expanding the
macros and see what they do.

I know that OOP languages have caught up in this regard by providing
refactoring tools and other IDE features. And I definitely wouldn't want
to get rid of OOP in my toolbox because of its clear advantages in
certain scenarios.

But I haven't yet seen a programming language that supports exploratory
thinking as well as Lisp. It's like that exactly because of the
s-expressions, or more specifically because of the fact that programs
and data are the same in Lisp.

Computers are there to make my tasks easier. Not using them from the
very beginning to help me solve programming tasks is a waste of
computing resources.

In one particular case, I have used the CLOS MOP to implement some
special case of a method combination. At a certain stage, I have
realized that I have made a conceptual mistake - I have tried to resolve
the particular method combination at the wrong stage. Instead of doing
it inside of the method combination it had to be done at the call site.
It was literally just a matter of placing a quote character at the right
place - in front of the code to be executed - that allowed me to pass it
to the right place as data, and then expand it at the call site. I can't
describe in words what an enlightening experience this was. In any other
language I have known until then, this change would have required a
complete restructuring of the source code, of the phases in which to
execute different parts of the code, of the representation for that
code, and so on. In Lisp, it was just one keystroke!

It's because of such experiences that Lispniks don't want to switch to
lesser languages anymore. ;-)
(More seriously, there are probably very different ways to think about
problems. So Lisp might not be the right language for everyone, because
other people might find completely different things helpful when they
try to tackle a problem. It would be interesting to do some research on
this topic. As much as I don't think that there is a single programming
paradigm that is best suited for all possible problems I also don't
think that there is a single programming style that is best suited for
all programmers.)
Pascal

Jul 18 '05 #528

P: n/a

"Peter Seibel" <pe***@javamonkey.com> wrote in message
news:m3************@javamonkey.com...
....
Python's eval--as I understand it--handles this differently. Common
Lisp's EVAL may be the way it is partially because it is not needed
for things like this given the existence of macros. There's are also
some semantic difficulties of what lexical environment the EVAL should occur in.


Now that you mention it, it seems sensible that a concept like 'eval
expression' might have slight but significantly different
implementations in different environments with different name-space
systems and different alternatives for doing similar jobs. Thanks for
the answer.
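The name-space point is concrete in Python: eval takes explicit globals/locals mappings, so "which lexical environment?" is answered by making the caller say so. A small sketch:

```python
# eval with explicit namespaces: the caller chooses the environment.

env = {"x": 10}
result = eval("x + 1", {"__builtins__": {}}, env)  # evaluates to 11

def outer():
    y = 5  # a local of outer...
    try:
        # ...is invisible to eval when explicit (empty) namespaces are given:
        return eval("y", {"__builtins__": {}}, {})
    except NameError:
        return "not visible"
```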

Terry J. Reedy


Jul 18 '05 #529

P: n/a

Pascal Costanza wrote:
I wrote:
Yes, named objects, including functions can (locally) shadow
(override) builtins. It is considered a bad habit/practice unless
done intentionally with a functional reason.
Well, this proves that Python has a language feature that is as
dangerous as many people seem to think macros are.


There are two reasons to not generally prohibit overriding builtins.

1. Every time a new builtin is added with a nice name like 'sum',
there is existing code that uses the same nice name. For 2.3 to have
broken every program with a 'sum' variable would have been nasty and
unpopular.

2. There are sometimes good reasons to add additional or alternative
behavior. This is little different from a subclass redefining a
method in one of its base classes, and perhaps calling the base class
method as part of the subclass method.

The most dangerous and least sensible overriding action, anything like
import something; something.len = 'haha'
will probably become illegal.
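The contrast is already visible today: module attributes can be rebound freely (the `something.len = 'haha'` case), while attributes of built-in types are nailed down. A quick sketch:

```python
# Module attributes are rebindable today; attributes of built-in *types*
# already raise TypeError -- the kind of lockdown being discussed.
import math

saved = math.pi
math.pi = 3.0        # legal today, analogous to `something.len = 'haha'`
patched = math.pi
math.pi = saved      # undo the monkeypatch

try:
    int.flavor = "vanilla"   # setting an attribute on a built-in type
    locked = False
except TypeError:
    locked = True
```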

Terry J. Reedy

Jul 18 '05 #530

P: n/a
Pascal Costanza wrote:
...
Well, this proves that Python has a language feature that is as
dangerous as many people seem to think macros are.
... Indeed, a chorus of "don't do that" is the typical comment each
and every time a newbie falls into that particular mis-use. Currently, ... large existing codebase) is under active consideration for the next
version of Python, expected (roughly -- no firm plans yet) in early 2005.
OK, I understand that the Python mindset is really _a lot_ different
than the Lisp mindset in this regard.


As in, no lisper will ever admit that a currently existing feature is
considered a misfeature?-)

Ah, you want something like final methods in Java, or better probably
final implicitly as the default and means to make select methods
non-final, right?
Not really, the issue I was discussing was specifically with importing.

Normally, an import statement "looks" for a module [a] among those
already loaded, [b] among the ones built-in to the runtime, [c] on
the filesystem (files in directories listed in sys.path). "import hooks"
can be used to let you get modules from other places yet (a database,
a server over the network, an encrypted version, ...). The new architecture
I mentioned lets many import hooks coexist and cooperate, while the
old single-hook architecture made that MUCH more difficult, that's all.

"final implicitly as the default and means to make select methods
non-final" is roughly what C++ has -- the "means" being the "virtual"
attribute of methods. Experience proves that's not what we want.
Rather, builtin (free, aka toplevel) _names_ should be locked down
just as today names of _attributes_ of builtin types are mostly
locked down (with specific, deliberate exceptions, yes). But I think
I'm in a minority in wanting similar mechanisms for non-built-ins,
akin to the 'freeze' mechanism of Ruby (and I'm dismayed by reading
that experienced Rubystas say that freeze LOOKS like a cool idea
but in practice it's almost never useful -- they have the relevant
experience, I don't, so I have to respect their evaluation).
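For the curious, a Ruby-style freeze can be imitated in Python (purely illustrative; this is not a language feature, just a `__setattr__` trick):

```python
class Freezable:
    """Illustrative freeze: after freeze(), any further attribute
    rebinding on the instance raises AttributeError."""

    _frozen = False  # class default; flipped per-instance by freeze()

    def freeze(self):
        object.__setattr__(self, "_frozen", True)

    def __setattr__(self, name, value):
        if self._frozen:
            raise AttributeError("frozen object: cannot set %r" % name)
        object.__setattr__(self, name, value)

obj = Freezable()
obj.x = 1
obj.freeze()
try:
    obj.x = 2
    blocked = False
except AttributeError:
    blocked = True
```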

What makes you think that macros have farther reaching effects in this
regard than functions? If I call a method and pass it a function object,
I also don't know what the method will do with it.
Of course not -- but it *cannot possibly* do what Gat's example of macros,
WITH-MAINTAINED-CONDITION, is _claimed_ to do... "reason" about the
condition it's meant to maintain (in his example a constraint on a variable
named temperature), about the code over which it is to be maintained
(three functions, or macros, that start, run, and stop the reactor),
presumably infer from that code a model of how a reactor _works_, and
rewrite the control code accordingly to ensure the condition _is_ in fact
being maintained. A callable passed as a parameter is _atomic_ -- you
call it zero or more times with arguments, and/or you store it somewhere
for later calling, *THAT'S IT*. This is _trivially simple_ to document and
reason about, compared to something that has the potential to dissect
and alter the code it's passed to generate completely new one, most
particularly when there are also implicit models of the physical world being
inferred and reasoned about. Given that I've seen nobody say, for days!,
that Gat's example was idiotic, as I had first I thought it might be, and
on the contrary I've seen many endorse it, I use it now as the simplest
way to show why macros are obviously claimed by their proponents to
be _scarily_ more powerful than functions. (and if a few voices out of
the many from the macro-lovers camp should suddely appear to claim
that the example was in fact idiotic, while most others keep concurring
with it, that will scale down "their proponents" to "most of their
proponents", not a major difference after all).
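The "atomic callable" point can be made concrete: the closest runtime analogue of WITH-MAINTAINED-CONDITION (all names below invented) can only *call* the condition around the block; it never sees, much less rewrites, the block's code:

```python
from contextlib import contextmanager

@contextmanager
def maintained_condition(check, label):
    """All this can do with `check` is call it on entry and exit --
    no reasoning about, or rewriting of, the enclosed code."""
    if not check():
        raise ValueError("violated on entry: " + label)
    yield
    if not check():
        raise ValueError("violated on exit: " + label)

temperature = [20]
with maintained_condition(lambda: temperature[0] < 100, "temp < 100"):
    temperature[0] += 5   # fine: condition holds before and after
```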

Overriding methods can also be problematic when they break contracts.
That typically only means an exception ends up being raised when
the method is used "inappropriately" - i.e. in ways depending on the
contract the override violates. The only issue is ensuring that the
diagnostics of the error are clear and complete (and giving clear and
complete error diagnostics is often not trivial, but that is common to
just about _any_ classes of errors that programmers do make).
(Are you also considering to add DBC to Python? I would expect that by
now given your reply above.)
Several different implementations of DBC for Python are around, just
like several different architectures for interfaces (or, as I hope,
Haskell-like typeclasses, a more powerful concept). [Note that the
lack of macros stops nobody from playing around with concepts they
would like to see in Python: they just don't get to make new syntax
to go with them, and, thus, to fragment the language thereby:-)].

Guido has already declared that ONE concept of interfaces (or
typeclasses, or protocols, etc) _will_ eventually get into Python -- but
_which one_, it's far too early to tell. I would be surprised if whichever
version does make it into Python doesn't let you express contracts.
A contract violation will doubtlessly only mean a clear and early error
diagnostic, surely a good thing but not any real change in the
power of the language.
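One of the many possible DBC sketches for today's Python is a plain decorator (illustrative only, not any of the existing implementations Alex mentions):

```python
# Tiny design-by-contract sketch: check a precondition on the arguments
# and raise a clear, early diagnostic on violation.

def require(pred, message):
    def decorate(fn):
        def wrapper(*args, **kwargs):
            if not pred(*args, **kwargs):
                raise ValueError("contract violated: " + message)
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@require(lambda x: x >= 0, "x must be non-negative")
def isqrt_floor(x):
    return int(x ** 0.5)
```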

Can you give an example for the presumably dangerous things macros
supposedly can do that you have in mind?


I have given this repeatedly: they can (and in fact have) tempt programmers
using a language which offers macros (various versions of lisp) to, e.g.,
"embed a domain specific language" right into the general purpose language.
I.e., exactly the use which is claimed to be the ADVANTAGE of macros. I
have seen computer scientists with modest grasp of integrated circuit design
embed half-baked hardware-description languages (_at least_ one different
incompatible such sublanguage per lab) right into the general-purpose
language, and tout it at conferences as the holy grail -- while competitors
were designing declarative languages intended specifically for the purpose
of describing hardware, with syntax and specifically limited semantics that
seemed to be designed in concert with the hardware guys who'd later be
USING the gd thing (and were NOT particularly interested in programming
except in as much it made designing hardware faster and cheaper). The
effort of parsing those special-purpose languages was of course trivial (even
at the time -- a quarter of a century ago -- yacc and flex WERE already
around...!), there was no language/metalanguage confusion, specialists in
the domain actually did a large part of the domain-specific language design
(without needing macro smarts for the purpose) and ended up eating our
lunch (well, except that I jumped ship before then...;-).

Without macros, when you see you want to design a special-purpose
language you are motivated to put it OUTSIDE your primary language,
and design it WITH its intended users, FOR its intended purposes, which
may well have nothing at all to do with programming. You parse it with a
parser (trivial these days, trivial a quarter of a century ago), and off you
go. With macros, you're encouraged to do all the wrong things -- or, to
be more precise, encouraged to do just the things I saw causing the
many costly failures (one or more per lab, thanks to the divergence:-)
back in that early formative experience of mine.
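The "parse it with a parser (trivial)" claim is easy to demonstrate: here is an invented hardware-ish mini-format kept *outside* the host language, parsed in a handful of lines:

```python
# An external special-purpose language, as plain text with its own
# trivial grammar (format invented for illustration).

spec = """
gate and2 inputs=2 delay=3
gate or2  inputs=2 delay=2
"""

def parse(text):
    """Parse 'gate NAME key=value ...' lines into a dict of dicts."""
    gates = {}
    for line in text.strip().splitlines():
        kind, name, *pairs = line.split()
        if kind != "gate":
            raise ValueError("unknown declaration: " + kind)
        gates[name] = dict((k, int(v))
                           for k, v in (p.split("=") for p in pairs))
    return gates

gates = parse(spec)
```

The DSL's users never touch the host language, and the host language never grows new syntax.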

I have no problem with macros _in a special-purpose language_ where
they won't tempt you to embed what you _should_ be "out-bedding",
so to speak -- if the problem of giving clear diagnostics of errors can be
mastered, denoting that some functions are to be called at compile
time to produce code in the SPL has no conceptual, nor, I think,
particular "sociological" problem. It's only an issue of weighing their
costs and usefulness -- does the SPL embody other ways to remove
duplication and encourage refactoring thereby, are there overlap
among various such ways, etc, etc. E.g., a purely declarative SPL,
with the purpose of describing some intricate structure, may have no
'functions' and thus no other real way to remove duplication than
macros (well, it depends on whether there may be other domain
specific abstractions that would be more useful than mere textual
expansions, of course -- e.g. inheritance/specialization, composition
of parts, and the like -- but they need not be optimal to capture some
inevitable "quasi-accidental duplications of SPL code" where a macro
might well be).
Alex

Jul 18 '05 #531

P: n/a
Andrew Dalke wrote:
Alex:
Yeah, well, I fear the answer will be yes (it could), but it won't
do so since you haven't _asked_ it to wake you up, only if it
could.


Pshaw. My hypothetical house of the 2050s or so will know
that "could" in this context is a command. :)


Good luck the first time you want to ask it about its capabilities,
and my best wishes that you'll remember to use VERY precise
phrasing then.

ME, I definitely don't want to use natural language with
all of its ambiguity for anything exept communicating with
other human beings, thankyouverymuch.


But what if computers someday become equally capable
as humans in understanding unconstrained speech? It
can be a dream, yes?


If computers become as complicated as human beings, and
I think that IS necessary for the understanding you mention,
I'll treat them as human beings. I also think we have enough
human beings, and very fun ways to make new ones, as is,
so I don't see it as a dream to have yet more but made of
(silicon or whatever material is then fashionable).

> a language designed for text, not speed.


*blink* what does THAT doubtful assertion have to do with anything
else we were discussing just now...?


An unfortunate typo. I meant "speech" instead of "speed" but
my fingers are too used to typing the latter. Here I would like
a computer to ask "um, did you really mean that?" -- so long as
the false positive rate was low enough.


Well, I and other humans didn't even think that you might have made
a simple 'fingero' (not quite a typo but equivalent)...!-)

> For Lisp is a language tuned to keyboard input and not the full
> range of human expression. (As with speech.)


Python even more so on the output side -- try getting a screen-reader to
do a halfway decent job with it. But what does this matter here?


The conjecture that computer programming languages are
constrained by the form of I/O and that other languages, based
on speech, free-form 2D writing, or other forms of input may
be more appropriate, at least for some domain.

This was in response to the idea that Lisp is the most appropriate
language for all forms of programming.


The syntax of Python would surely have to be changed drastically
if speech were the primary means of I/O for it, yes. As for lisp, that's
less certain to me (good ways to pronounce open and closed
parens look easier to find than good ways to pronounce whitespace
AND the [expletivedeleted] case-sensitive identifiers...:-).
Alex

Jul 18 '05 #532

P: n/a
On Sat, 11 Oct 2003 20:42:57 -0400, Hans Nowak <ha**@zephyrfalcon.org>
wrote:
Doesn't it belong to the group that includes 'fructus'? Of course this has
nothing to do with the plural used in English, but still... :-)
It sort of does: for nouns with a Latin plural, that plural is often
brought into English too.
This page, which has a lot of info on this issue, seems to think so:

http://www.perl.com/language/misc/virus.html


Thanks for the link; interesting reading.

The Oxford Latin Dictionary and the Perseus project classify 'virus' as an
irregular second declension noun (of which there are a few, like virus).
Betts and others argue it is a 4th declension noun like 'census' and
'fructus' (though Betts still lists it as 2nd declension irregular in
his Latin textbook, which I was using). The matter turns on a
couple of surviving references to the genitive singular.

And that argument doesn't affect the plural not being used. After
reading the page I will change my 'such nouns were usually only used in the
nominative and accusative singular in Latin' to 'were only used in the
singular'.

dewatf.

Jul 18 '05 #533

P: n/a
Alex Martelli <al*****@yahoo.com> writes:
OK, I understand that the Python mindset is really _a lot_ different
than the Lisp mindset in this regard.
As in, no lisper will ever admit that a currently existing feature is
considered a misfeature?-)


You might want to search google groups for threads about "logical
pathnames" in cll :-)
Guido has already declared that ONE concept of interfaces (or
typeclasses, or protocols, etc) _will_ eventually get into Python -- but
_which one_, it's far too early to tell.


A propos interfaces in Python: The way they were done in earlier Zope
(with "magic" docstrings IIRC) was one of the things that led me to
believe language extensibility was a must, together with the plethora
of SPLs the Java community came up with, either in comments (like
JavaDoc and XDoclet) or ad-hoc XML "configuration" files that grow and
grow until they are at least Turing-complete at some point. (blech)

People /will/ extend the base language if it's not powerful enough
for everything they want to do, macros or not. You can either give
them a powerful, documented and portable standard way to do so, or
ignore it, hoping that the benevolent dictator will someday change the
core language in a way that blesses one of the extensions (most likely
a polished variant of an existing one) as the "obvious", official one.

It is the difference between implementing a proper type system and
extending lint to check for consistent use of the hungarian notation
at the end of the day.
Jul 18 '05 #534

P: n/a
Alex Martelli wrote:
Pascal Costanza wrote:
What makes you think that macros have farther reaching effects in this
regard than functions? If I call a method and pass it a function object,
I also don't know what the method will do with it.

Of course not -- but it *cannot possibly* do what Gat's example of macros,
WITH-MAINTAINED-CONDITION, is _claimed_ to do... "reason" about the
condition it's meant to maintain (in his example a constraint on a variable
named temperature), about the code over which it is to be maintained
(three functions, or macros, that start, run, and stop the reactor),
presumably infer from that code a model of how a reactor _works_, and
rewrite the control code accordingly to ensure the condition _is_ in fact
being maintained.


...but such a macro _exclusively_ reasons about the code that is passed
to it. So the effects are completely localized. You don't do any damage
to the rest of the language because of such a macro.

Of course, such a macro needs to be well-defined and well-documented.
But that's the case for any code, isn't it?
(Are you also considering adding DBC to Python? I would expect that by
now, given your reply above.) [...]
Guido has already declared that ONE concept of interfaces (or
typeclasses, or protocols, etc) _will_ eventually get into Python -- but
_which one_, it's far too early to tell. I would be surprised if whichever
version does make it into Python doesn't let you express contracts.


OK, I think I understand the Python mindset a little bit better. Thanks.
Can you give an example for the presumably dangerous things macros
supposedly can do that you have in mind?

I have given this repeatedly: they can (and in fact have) tempt programmers
using a language which offers macros (various versions of lisp) to, e.g.,
"embed a domain specific language" right into the general purpose language.
I.e., exactly the use which is claimed to be the ADVANTAGE of macros. I
have seen computer scientists with modest grasp of integrated circuit design
embed half-baked hardware-description languages (_at least_ one different
incompatible such sublanguage per lab) right into the general-purpose
language, and tout it at conferences as the holy grail


That's all? And you really think this has anything to do with macros?

Yes, macros allow you to write bad programs - but this is true for any
language construct.

Your proposed model means that for each DSL you might need you also need
to implement it as a separate language. Well, this has also been done
over and over again, with varying degrees of success. You can probably
name several badly designed "out-bedded" little languages. Does this
mean that your proposal sucks as well? It doesn't seem to guarantee good
little languages, does it?

In reality, in both approaches we can just find both badly and well
designed DSLs.

Bad languages, no matter whether embedded or "out-bedded", exist not
because of the technology that is being used to implement them but
because of the fact that humans can fail when they undertake something.

You surely can name some badly designed libraries, even for Python. Does
this mean that the respective languages suck? That the very concept of
libraries suck?

Here is a one of my favorite quotes, by Guy Steele and Gerald Sussman:
"No amount of language design can _force_ a programmer to write clear
programs. If the programmer's conception of the problem is badly
organized, then his program will also be badly organized. The extent to
which a programming language can help a programmer to organize his
problem is precisely the extent to which it provides features
appropriate to his problem domain. The emphasis should not be on
eliminating “bad” language constructs, but on discovering or inventing
helpful ones." (from "Lambda - The Ultimate Imperative")
Pythonistas seem to think otherwise wrt language design that can force
programmers to write clear programs. If you think that this is a good
summary of the Python mindset then we can stop the discussion. I simply
don't buy into such a mindset.
Pascal

Jul 18 '05 #535

P: n/a
In article <bm**********@newsreader2.netcologne.de>,
Pascal Costanza <co******@web.de> wrote:
Many programming languages require you to build a model upfront, on
paper or at least in your head, and then write it down as source code.
This is especially one of the downsides of OOP - you need to build a
class hierarchy very early on without actually knowing if it is going to
work in the long run.


This parallels Paul Graham's critique of the whole idea of program
"specifications." To paraphrase Graham, for any non-trivial software,
there is no such thing as a specification. For a specification to be
precise enough that programmers can convert it directly into code, it
must already be a working program! What specifications are in reality is
a direction in which programmers must explore, finding in the process
what doesn't work and what does, and how, precisely, to implement that.

Once you've realized that there is really no such thing as the waterfall
method, it follows inevitably that you'll prefer bottom up program
development by exploratory methods. Once you realize that programs are
discovered, not constructed from a blueprint, you'll inevitably prefer a
language that gives you freedom of movement in all directions, a
language that makes it difficult to paint yourself into a corner.
Jul 18 '05 #536

P: n/a
In article <pO**********************@news1.tin.it>,
Alex Martelli <al*****@yahoo.com> wrote:
And you would be wrong: forks are much less frequent than your theory
predicts. Read some Eric Raymond to understand this, he's got good
ideas about these issues.


Because the forks happen at a much higher level. People know that
they'll catch hell if they try to fork Perl, so they design Python or
Ruby instead.
Jul 18 '05 #537

P: n/a
Alex Martelli wrote:
Andrew Dalke wrote:
Alex:
Yeah, well, I fear the answer will be yes (it could), but it won't
do so since you haven't _asked_ it to wake you up, only if it
could.


Pshaw. My hypothetical house of the 2050s or so will know
that "could" in this context is a command. :)


Good luck the first time you want to ask it about its capabilities,
and my best wishes that you'll remember to use VERY precise
phrasing then.


Hehe, I hope I never scream "NAMESPACE-MAPPED-SYMBOLS" at my house :P

- DS

Jul 18 '05 #538

P: n/a
Dave Benjamin wrote:
Here's my non-PEP for such a feature:

return { |x, y|
    print x
    print y
}


This is scary! Some years ago I devised a language called "P" that
was translated into Postscript. Its parameterised code blocks looked
EXACTLY like that!

I wouldn't like to see this in Python, though -- it doesn't quite
look Pythonic enough, somehow.

Maybe instead of trying to find a way of shoehorning a compound
statement into an expression, we should be trying to find a way
of writing a procedure call, which would normally be an expression,
as a statement... maybe something like

with_directory "/foo/blarg" do:
    print os.listdir(".")

which would be equivalent to

def _somefunc():
    print os.listdir(".")

with_directory("/foo/blarg", do = _somefunc)

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Jul 18 '05 #539

P: n/a
Andrew Dalke wrote:
It has sometimes been said that Lisp should use first and
rest instead of car and cdr


I used to think something like that would be more logical, too.
Until one day it occurred to me that building lists is only
one possible, albeit common, use for cons cells. A cons cell
is actually a completely general-purpose two-element data
structure, and as such its accessors should have names that
don't come with any preconceived semantic connotations.

From that point of view, "car" and "cdr" are as good
as anything!
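To make the point concrete outside Lisp: a cons cell is just an ordered pair, which can be mimicked in a few lines of Python (purely illustrative, of course -- nothing here is a standard facility):

```python
def cons(a, d):
    # a completely general two-element structure
    return (a, d)

def car(cell):
    return cell[0]

def cdr(cell):
    return cell[1]

# Used as a list: (1 2 3) is nested pairs ending in None
lst = cons(1, cons(2, cons(3, None)))
assert car(lst) == 1
assert car(cdr(lst)) == 2

# Used as a tree node, where "first/rest" has no meaning:
tree = cons(cons(1, 2), cons(3, 4))
assert cdr(car(tree)) == 2   # the "cdar" of the tree
```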

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Jul 18 '05 #540

P: n/a
Dave Benjamin wrote:
In that case, why do we eschew code blocks, yet have no problem with the
implicit invocation of an iterator,
I don't think code blocks per se are regarded as a bad thing.
The problem is that so far nobody has come up with an entirely
satisfactory way of fitting them into the Python syntax as
expressions.
What if I wrote:

for byte in file('input.dat'):
    do_something_with(byte)

That would be a bit misleading, no?


It certainly would! But I think the implicitness which is
tripping the reader up here lies in the semantics of the
file object when regarded as a sequence, not in the for-loop
construct. There is more than one plausible way that a file
could be iterated over -- it could be a sequence of bytes
or a sequence of lines. An arbitrary choice has been made
that it will (implicitly) be a sequence of lines. If this
example shows anything, it's that this was perhaps a bad
idea, and that it might have been better to make it explicit
by requiring, e.g.

for line in f.iterlines():
    ...

or

for byte in f.iterbytes():
    ...

then if you wrote

for byte in f.iterlines():
    ...

the mistake would stick out just as much as it does in
Ruby.

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Jul 18 '05 #541

P: n/a


Pascal Costanza wrote:
Many programming languages require you to build a model upfront, on
paper or at least in your head, and then write it down as source code.
This is especially one of the downsides of OOP - you need to build a
class hierarchy very early on without actually knowing if it is going to
work in the long run.


Whoa! The MOP and CLOS went to a lot of trouble to create an OOP for
Lisp that lived up to the Lisp heritage of figuring things out as we go.
I am forever refactoring class hierarchies, dragging slots from here to
there, adding some, erasing others, changing initforms and inheritance.
Best of all I can do all this for a couple of hours after landing in a
backtrace and then simply pick an appropriate stack frame from which to
restart and all the existing instances adjust themselves on the fly.

kenny

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Jul 18 '05 #542

P: n/a

In article <ue**********************@news2.tin.it>, Alex Martelli
<al*****@yahoo.com> wrote:
Let's start with that WITH-CONDITION-MAINTAINED example of Gat. Remember
it? OK, now, since you don't appear to think it was an idiotic example,
then SHOW me how it takes the code for the condition it is to maintain and
the (obviously very complicated: starting a reactor, operating the reactor,
stopping the reactor -- these three primitives in this sequence) program
over which it is to maintain it, and how does it modify that code to ensure
this purpose. Surely, given its perfectly general name, that macro does not
contain, in itself, any model of the reactor; so it must somehow infer it
(guess it?) from the innards of the code it's analyzing and modifying.
It is not necessary to exhibit a theory of how WITH-CONDITION-MAINTAINED
actually works to understand that if one had such a theory one can package
that theory for use more attractively as a macro than as a function. It
is not impossible to package up this functionality as a function, but it's
very awkward. Control constructs exist in programming languages for a
reason, despite the fact that none of them are really "necessary". For
example, we can dispense with IF statements and replace them with a purely
functional IF construct that takes closures as arguments. Or we can do
things the Java way and create a new Conditional object or some such
thing. But it's more convenient to write an IF statement.
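The purely functional IF described above is a two-line exercise in Python, for what it's worth (a toy illustration, not anyone's proposal verbatim):

```python
def functional_if(condition, then_thunk, else_thunk):
    # Each branch is a zero-argument callable (a "thunk"),
    # so only the selected branch is ever evaluated.
    if condition:
        return then_thunk()
    else:
        return else_thunk()

# The dead branch's 1/0 is never evaluated:
result = functional_if(True, lambda: "yes", lambda: 1 / 0)
```

It works, but every call site has to spell out lambda:, which is exactly the convenience point being made here.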

The claim that macros are useful is nothing more and nothing less than the
claim that the set of useful control constructs is not closed. You can
believe that or not. To me it is self-evidently true, but I don't know
how to convince someone that it's true who doesn't already believe it.
It's rather like arguing over whether the Standard Model of Physics covers
all the useful cases. There's no way to know until someone stumbles
across a useful case that the Standard Model doesn't cover.
For example, the fact that Gat himself says that if what I want to write
are normal applications, macros are not for me: only for those who want
to push the boundaries of the possible are they worthwhile. Do you think
THAT is idiotic, or wise? Please explain either the reason of the drastic
disagreements in your camp, or why most of you do keep trying pushing
macros (and lisp in general) at those of us who are NOT particularly
interested in "living on the edge" and running big risks for their own sake,
accordingly to your answer to the preceding question, thanks.
I can't speak for anyone but myself of course, but IMO nothing worthwhile
is free of risks. I also think you overstate the magnitude of the risk.
You paint nightmare scenarios of people "changing the language"
willy-nilly in all sorts of divergent ways, but 1) in practice on a large
project people tend not to do that and 2) Lisp provides mechanisms for
isolating changes to the language and limiting the scope of their effect.
So while the possibility exists that someone will change the language in a
radical way, in practice this is not really a large risk. The risk of
memory corruption in C is vastly larger than the risk of "language
corruption" in Lisp, and most people seem to take that in stride.
...and there's another who has just answered in the EXACTLY opposite
way -- that OF COURSE macros can do more than HOF's. So, collectively
speaking, you guys don't even KNOW whether those macros you love so
much are really necessary to do other things than non-macro HOFs allow
(qualification inserted to try to divert the silly objection, already made
by others on your side, that macros _are_ functions), or just pretty things
up a little bit.


But all any high level language does is "pretty things up a bit". There's
nothing you can do in any language that can't be done in machine
language. "Prettying things up a bit" is the whole point. Denigrating
"prettying things up a bit" is like denigrating cars because you can get
from here to there just as well by walking, and all the car does is "speed
things up a bit".

E.
Jul 18 '05 #543

P: n/a
In article <29**********************@news1.tin.it>, Alex Martelli
<al*****@yahoo.com> wrote:
What makes you think that macros have farther reaching effects in this
regard than functions? If I call a method and pass it a function object,
I also don't know what the method will do with it.


Of course not -- but it *cannot possibly* do what Gat's example of macros,
WITH-MAINTAINED-CONDITION, is _claimed_ to do... "reason" about the
condition it's meant to maintain (in his example a constraint on a variable
named temperature), about the code over which it is to be maintained
(three functions, or macros, that start, run, and stop the reactor),
presumably infer from that code a model of how a reactor _works_, and
rewrite the control code accordingly to ensure the condition _is_ in fact
being maintained. A callable passed as a parameter is _atomic_ -- you
call it zero or more times with arguments, and/or you store it somewhere
for later calling, *THAT'S IT*. This is _trivially simple_ to document and
reason about, compared to something that has the potential to dissect
and alter the code it's passed to generate completely new one, most
particularly when there are also implicit models of the physical world being
inferred and reasoned about. Given that I've seen nobody say, for days!,
that Gat's example was idiotic, as I had at first thought it might be, and
on the contrary I've seen many endorse it, I use it now as the simplest
way to show why macros are obviously claimed by their proponents to
be _scarily_ more powerful than functions.


Why "scarily"?

E.
Jul 18 '05 #544

P: n/a
In article <m1************@tti5.uchicago.edu>,
Matthias Blume <fi**@my.address.elsewhere> wrote:
Most of the things that macros can do can be done with HOFs with just
as little source code duplication as with macros.
Most, but not all. From <http://okmij.org/ftp/papers/Macros-talk.pdf>

"One sometimes hears that higher-order functions (and
related non-strictness) make macros unnecessary. For
example, In Haskell, 'if' is a regular function. However,
every language with more syntax than lambda-calculus
has phrases that are not expressions. Examples of such
second-class forms are: type, module, fixity and other
declarations; binding forms; statements. Only macros
can expand into a second-class object. The result of a
function is limited to an expression or a value."

(And with macros
only the source code does not get duplicated, the same not being true
for compiled code. With HOFs even executable code duplication is
often avoided -- depending on compiler technology.)


So you're willing here to trade code size for readability. The pro-macro
camp (myself included) find that macros make source code easier to read
and write than the equivalent HOF solution. We're willing to trade that
ease of use for a little compiled code size, especially when this means
you can write your code in what amounts to a domain specific language.

This can only be accomplished with functions if you're
willing to write a set of functions that defer evaluation, by, say
parsing input, massaging it appropriately, and then passing it to the
compiler. At that point, however, you've just written your own macro
system, and invoked Greenspun's 10th Law.


This is false. Writing your own macro expander is not necessary for
getting the effect. The only thing that macros give you in this
regard is the ability to hide the lambda-suspensions.


But this hiding of the lambda-suspensions is the whole point. Why look
at how the code works unless you have to? Why not work in a syntax, a
domain specific language, that matches the problem? Put the complexity
into one place (the macro) and make the rest of the code easier to
write, and clearer to read.

For me, macros are about making the code one writes match the problem
one is thinking about. HOFs seem to me to be about looking cleverly
functional, not making the code look like the problem domain.
Jul 18 '05 #545

P: n/a
Alex Martelli wrote:
Besides,
"if you want PL/I you know where to find it" has nice precedents (in
the only other language which was widely successful in the real world
while adhering to "provide only one way to perform an operation" as
one of its guiding principles -- not perfectly, but, close enough:-).


Pardon? Wasn't PL/I the language that had two wildly different
syntaxes for declaring variables, one bearing a close resemblance
to Fortran, and the other looking suspiciously like Cobol?
(All right, two... there are *two* ways to do it...)

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Jul 18 '05 #546

P: n/a
On Fri, 10 Oct 2003 16:28:11 GMT, Alex Martelli <al***@aleax.it> wrote:
Bengt Richter wrote:
...
This way lambda would only be needed for backwards compatibility, and
since "def(" is a syntax error now, IWT it could be introduced cleanly.


In theory, yes, I think it could (and wrt my similar idea with 'do' has
the advantage of not requiring a new keyword). In practice, trying to
hack the syntax to allow it seems a little nightmare. Wanna try your
hand at it? I'm thinking of Grammar/Grammar and Modules/parsermodule.c ...

Also tokenizer.c, so as not to ignore indentation when tokenizing a nameless
def inside a bracketed expression where (in|de)dents are otherwise ignored.

The thing is, the current tokenizer doesn't know def from foo, just that they're
names. So either indenting has to be generated all the time, and the job of
ignoring it passed on upwards, or the single keyword 'def' could be recognized
by the parser in a bracketed context, and it would generate a synthetic indent token
in front of the def name token as wide as if all spaces preceded the def, and then
continue doing indent/dedent generation like for a normal def, until the def suite closed,
at which point it would resume ordinary expression processing (if it was within brackets --
otherwise it would just be a discarded expression evaluated in statement context, and
in/de/dent processing would be on anyway. (This is speculative until really getting into it ;-)
Special-casing on a keyword in the tokenizer might be a practical implementation shortcut,
but it wouldn't be very aesthetic ;-/

IWT changes also in the compiler/code generator so it can handle generating code
for an anonymous def which will plug in like lambda in an expression, as
opposed to binding a name in a statement context, hopefully a slight change,
since it would be stacking a code object and calling makefunction either way.
The anonymous def just won't do the store to bind a name. Generating the code
object for the nameless def should be identical to the normal def IWT,
almost by definition ;-)

But the whole thing will be a bit of a chore I'm sure ;-)
Would it have a chance of getting adopted, do you think?
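For comparison, here is what one has to write today in place of the proposed inline nameless def: the statement-bodied function must be defined and named beforehand, since a lambda is limited to a single expression (all names below are made up for illustration):

```python
# The proposed (hypothetical) syntax would allow something like:
#   handlers = {"show": def(x):
#                   text = str(x)
#                   return text.upper()}
# Today, a multi-statement body needs a separate named def:
def _show(x):
    text = str(x)          # more than one statement:
    return text.upper()    # impossible inside a lambda

handlers = {"show": _show,              # named workaround
            "double": lambda x: x * 2}  # expressions are fine inline
```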

(Recent [cross]postings from the friendly :-)))) people got me playing
with writing a little toy scheme environment in python, which is sooo
pleasant compared to using MASM on a 16-mhz 386 with 2mb ram (whoo, time flies).

Regards,
Bengt Richter
Jul 18 '05 #547

P: n/a
Alex Martelli wrote:
What makes you think that macros have farther reaching effects in this
regard than functions? If I call a method and pass it a function object,
I also don't know what the method will do with it.
Of course not -- but it *cannot possibly* do what Gat's example of macros,
WITH-MAINTAINED-CONDITION, is _claimed_ to do... "reason" about the
condition it's meant to maintain (in his example a constraint on a
variable named temperature), about the code over which it is to be
maintained (three functions, or macros, that start, run, and stop the
reactor), presumably infer from that code a model of how a reactor
_works_, and rewrite the control code accordingly to ensure the condition
_is_ in fact
being maintained. A callable passed as a parameter is _atomic_ -- you
call it zero or more times with arguments, and/or you store it somewhere
for later calling, *THAT'S IT*. This is _trivially simple_ to document
and reason about, compared to something that has the potential to dissect
and alter the code it's passed to generate completely new one, most
particularly when there are also implicit models of the physical world
being
inferred and reasoned about.


Don't fear code that rewrites code. Hell, if Psyco didn't exist (or if I
didn't think it did a good enough job), I'd sure like to say:

optimized:
    for x in range(2**63):
        pass

and let an 'optimized' special form rewrite my code into just "pass".

Or maybe I don't want to manually delete unexported functions in my modules:

clean_module exporting [a, b, d]
        and exporting_from {some.module:[fun1,fun2]}:
    import some.module
    def a(): pass
    def b(): pass
    def c(): pass
    def d(): pass

becomes:

import some.module
def a(): pass
def b(): pass
def c(): pass
def d(): pass
fun1 = some.module.fun1
fun2 = some.module.fun2
del c

Or cleanly write code using another module's namespace:

using_module some.module:
    print x, some.other.module.y
using_module __main__:
    print x

In this case using_module inspects the code and rewrites it, leaving all
qualified identifiers untouched but modifying the global references, so the
example becomes:

print some.module.x, some.other.module.y
print x

Is that so dangerous?
Without macros, when you see you want to design a special-purpose
language you are motivated to put it OUTSIDE your primary language,
and design it WITH its intended users, FOR its intended purposes, which
may well have nothing at all to do with programming. You parse it with a
parser (trivial these days, trivial a quarter of a century ago), and off
you
go.


Hmm, but isn't every program a language?

http://blogs.gotdotnet.com/emeijer/PermaLink.aspx
ea13a4da-7421-44af-99e8-fc86de84e29c

Guy Steele agrees:

http://java.sun.com/features/2003/05/steele_qa.html

////////////////////
Q: You have written that "a language design can no longer be a thing. It
must be a pattern -- a pattern for growth - a pattern for growing the
pattern for defining the patterns that programmers can use for their real
work and their main goal." You said that a good programmer does not just
write programs, but engages in language design, building on the frame of a
base language. Could you elaborate on this?

A: Sure. Every time you write a new function, a new method, and give it a
name, you have invented a new word. If you write a library for a new
application area, then the methods in that library are a collection of
related words, a new technical jargon for that application domain. Look at
the Collection API: it adds new words (or new meanings for words) such as
"add", "remove", "contains", "Set", "List", and "LinkedHashSet". With that
API added to Java, you have a bigger vocabulary, a richer set of concepts
to work with.

Some concepts are more powerful, more general, more widely used than others
-- "liberty" and "mortgage" are more widely used than "belly button ring"
or "faucet wrench". But every new word, every new meaning, every new idiom
enriches the language.
////////////////////////
But talk is cheap. Maybe my next project should be a Python preprocessor :)

- Daniel
Jul 18 '05 #548

P: n/a
> From that point of view, "car" and "cdr" are as good
as anything!

Well, if you're going to call the thing a 'cons' you might as well go
all the way and use 'car' and 'cdr' as operators. A little flavor is
nice, although I think that "4th" would be easier to read than
"cadddr"...
Jul 18 '05 #549

P: n/a
Rayiner Hashem wrote:
From that point of view, "car" and "cdr" are as good
as anything!


Well, if you're going to call the thing a 'cons' you might as well go
all the way and use 'car' and 'cdr' as operators. A little flavor is
nice, although I think that "4th" would be easier to read than
"cadddr"...


...but cadddr might not be "fourth". It might be some leaf in a tree. Or
something completely different. "fourth" doesn't always make sense.

(And just for the sake of completeness, Common Lisp does have FOURTH and
also (NTH 3 ...).)

Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #550
