Bytes | Developer Community
Python syntax in Lisp and Scheme

I think everyone who has used Python will agree that its syntax is
the best thing going for it. It is very readable and easy
for everyone to learn. But Python does not have very good
macro capabilities, unfortunately. I'd like to know if it may
be possible to add a powerful macro system to Python, while
keeping its amazing syntax, and if it could be possible to
add Pythonistic syntax to Lisp or Scheme, while keeping all
of the functionality and convenience. If the answer is yes,
would many Python programmers switch to Lisp or Scheme if
they were offered indentation-based syntax?
Jul 18 '05
Corey Coughlin wrote:
I was never very fond of lisp. I guess I mean scheme technically, I
took the Abelson and Sussman course back in college, so that's what I
learned of scheme, lisp in general I've mostly used embedded in other
things. In general, it always seemed to me that a lot of the design
choices in lisp are driven more by elegance and simplicity than
usability.


You should give Common Lisp a try. Many Lispers think that that's
exactly one of the advantages of Common Lisp over Scheme: it focuses
more on usability than on elegance.

Of course, mileages vary, but you shouldn't draw conclusions about
Common Lisp from your experience with Scheme, and vice versa. Common
Lisp and Scheme are as similar as, say, C++ and Pascal.
Pascal

Jul 18 '05 #151
james anderson <ja************@setf.de> writes:
Eli Barzilay wrote:

james anderson <ja************@setf.de> writes:
Eli Barzilay wrote:
>
> (This hits one of the major differences between Lisp and
> Scheme -- in Lisp I'm not as happy to use HOFs because of the
> different syntax

which different [] syntax?


Huh?


that is, what is different about the syntax for higher-order
functions in lisp?


funcall, function, #', the double namespace.

Yes, but I was talking about the different approaches, for
example:

(dolist (x foo)
  (bar x))

vs:

(mapc #'bar foo)


are these not two examples of coding in common-lisp. how do they
demonstrate that "scheme is much more functional"?


The first is very popular, the second is hardly known. R5RS has
`for-each' which is exactly like `mapc', but no `dolist' equivalent.
In Scheme, this is not a problem, in Lisp, the syntax makes me worry
for the extra effort in creating a closure.
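For what it's worth, the same stylistic split can be sketched in Python: a statement form with an inline body versus a higher-order function that is handed a callable. A rough sketch (the variable names are mine):

```python
# Statement style (like DOLIST): the loop body is written inline,
# no separate function object is needed.
doubled_loop = []
for x in [1, 2, 3]:
    doubled_loop.append(x * 2)

# HOF style (like MAPC / for-each): the operation is passed
# as a first-class function.
doubled_hof = list(map(lambda x: x * 2, [1, 2, 3]))

assert doubled_loop == doubled_hof == [2, 4, 6]
```

Both compute the same thing; the difference is purely whether the body is inline syntax or a function object.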

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://www.barzilay.org/ Maze is Life!
Jul 18 '05 #152
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:
On Tue, 07 Oct 2003 21:59:11 +0200, Pascal Bourguignon wrote:
When you have macros such as loop that allow you to write stuff like:

(loop for color in '(blue white red) [...]

Well, some people say the "loop" syntax is not very lispish - it's unusual
that it uses many words and few parentheses. It still uses only words and
parentheses, no other punctuation, and it introduces one pair of parentheses
for its one nesting level.


Yes. The point is that the language is rather agnostic about any
topic, even about the syntax. Personally I don't much like LOOP, but
I take it as an example by the language designers showing us that it's
even possible to avoid parentheses if you don't want them.

A richer alphabet is often more readable. Morse code can't be read as fast
as Latin alphabet because it uses too few different symbols. Japanese say
they won't abandon Kanji because it's more readable as soon as you know it -
you don't have to compose words from many small pieces which look alike
but each word is distinct. Of course *too* large alphabet requires long
learning and has technical difficulties, but Lisp expressions are too
little distinctive for my taste.
Well, I would say that kanji is badly designed, compared to the Latin
alphabet. The vowels are composed with consonants (with diacritical
marks) and consonants are written following four or five groups with
additional diacritical marks to distinguish within the groups. It's
more a phonetic code than a true alphabet.

I know I can implement infix operators with Lisp macros, but I don't even
know how they feel because nobody uses them (do I have to explicitly open
an infix region and explicitly escape from it to regular syntax?), and
arithmetic is not enough.
Most probably, you would write a macro named WITH-INFIX and thus
automatically scope the infix part:

(with-infix 1 / x + 1 / ( x ^ 3 ) + 1 / ( x ^ 5 ) )

and if you write it well:

(with-infix
  if a = b then format t "equal ~D~%" a ;
  else format t "diff ~D /= ~D~%" a b ; endif ;
  for i = 1 to 10 ; print i ; next i )

All Lisp code I've read uses lots of parentheses
and they pile up at the end of each large subexpression so it's hard to
match them (an editor is not enough, it won't follow my eyes and won't
work with printed code).

Syntax is the thing I like the least in Lisp & Scheme.


I'll tell you the secret: yes, there are a lot of parentheses. This is a
price all Lispers pay for greater benefits. It gives us such
advantages that we gladly pay the price.

Otherwise, I would say:

1- don't pay so much attention to the parentheses!

2- if you have to and it's hard, then it means the code is badly
structured (probably too big a function). Rewrite it, factorize.

3- what do you mean "printed"? A double-click on any parenthesis
selects the enclosed list so it's quite easy to see what it encloses.

--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Jul 18 '05 #153


Eli Barzilay wrote:

...
Yes, but I was talking about the different approaches, for
example:

(dolist (x foo)
  (bar x))

vs:

(mapc #'bar foo)
are these not two examples of coding in common-lisp. how do they
demonstrate that "scheme is much more functional"?


The first is very popular, the second is hardly known.


somehow i wonder if we're discussing the same language.
R5RS has
`for-each' which is exactly like `mapc', but no `dolist' equivalent.
In Scheme, this is not a problem, in Lisp, the syntax makes me worry
for the extra effort in creating a closure.


what me worry? about syntax?

? (defmacro Barzilay (type operator &rest lists)
    `(map ',type (function ,operator) ,@lists))
BARZILAY
? (BARZILAY vector evenp '(1 2 3 4))
#(NIL T NIL T)
? (defmacro Barzilac (operator &rest lists)
    `(mapc (function ,operator) ,@lists))
BARZILAC
? (Barzilac (lambda (x) (print x)) '(1 2 3 4))

1
2
3
4
(1 2 3 4)
? (defmacro Barzilar (operator &rest lists)
    `(mapcar (function ,operator) ,@lists))
BARZILAR
? (Barzilar + '(1 2 3 4) '(5 6 7 8))
(6 8 10 12)
?

....
Jul 18 '05 #154
Alexander Schmolck <a.********@gmx.net> writes:
Joe Marshall <jr*@ccs.neu.edu> writes:

Alexander Schmolck <a.********@gmx.net> writes:
> pr***********@comcast.net writes: (I'm ignoring the followup-to because I don't read comp.lang.python)


Well, I supposed this thread has spiralled out of control already anyway:)
Indentation-based grouping introduces a context-sensitive element into
the grammar at a very fundamental level. Although conceptually a
block is indented relative to the containing block, the reality of the
situation is that the lines in the file are indented relative to the
left margin. So every line in a block doesn't encode just its depth
relative to the immediately surrounding context, but its absolute
depth relative to the global context.


I really don't understand why this is a problem, since it's trivial to
transform Python's 'globally context-dependent' indentation block structure
markup into C/Pascal-style delimiter-pair block structure markup.


Of course it can. Any unambiguous grammar has a parse tree.
Significantly, AFAICT you can easily do this unambiguously and *locally*, for
example your editor can trivially perform this operation on cutting a piece of
Python code and its inverse on pasting (so that you only cut-and-paste the
'local' indentation). Prima facie I don't see how you lose any fine control.
Only if your cut boundaries are at the same lexical level. If you cut
across boundaries, it is no longer clear what should happen at the paste.

Also, it is frequently the case that you need to `tweak' the code after
you paste it.
Additionally, each line encodes this information independently of the other
lines that logically belong with it, and we all know that data
encoded in one place may be wrong, but it is never inconsistent.


Sorry, I don't understand this sentence, but maybe you mean that the potential
inconsistency between human and machine interpretation is a *feature* for Lisp,
C, Pascal etc!? If so I'm really puzzled.


You misunderstand me. In a python block, two expressions are
associated with each other if they are the same distance from the left
edge. This is isomorphic to having a nametag identifying the scope
of the line. Lines are associated with each other iff they have the
same nametag. Change one, and all must change.

If, instead, you use balanced delimiters, then a subexpression no
longer has to encode its position within the containing expression.

Let me demonstrate the isomorphism. A simple python expression:
(grrr.. I cut and paste it, but it lost its indentation between
the PDF file and Emacs. I hope I redo it right...)

def index(directory):
    # like os.listdir, but traverses directory trees
    stack = [directory]
    files = []
    while stack:
        directory = stack.pop()
        for file in os.listdir(directory):
            fullname = os.path.join(directory, file)
            files.append(fullname)
            if os.path.isdir(fullname) and not os.path.islink(fullname):
                stack.append(fullname)
    return files

Now the reason we know that ` files.append(fullname)' and
` fullname = os.path.join(directory, file)' are part of the
same block is because they both begin with 12 spaces. The first
four spaces encode the fact that they belong to the same function,
the next four indicate that they belong in the while loop, and
the final four indicate that they belong in the for loop.
The ` return files', on the other hand, only has four spaces, so
it cannot be part of the while or for loop, but it is still part
of the function. I can represent this same information as a code:

t    - def index(directory):
d    - # like os.listdir, but traverses directory trees
d    - stack = [directory]
d    - files = []
d    - while stack:
dw   - directory = stack.pop()
dw   - for file in os.listdir(directory):
dwf  - fullname = os.path.join(directory, file)
dwf  - files.append(fullname)
dwf  - if os.path.isdir(fullname) and not os.path.islink(fullname):
dwfi - stack.append(fullname)
d    - return files

The letter in front indicates what lexical group the line belongs to. This
is simply a different visual format for the leading spaces.

Now, suppose that I wish to protect the body of the while statement
within a conditional. Simply adding the conditional won't work:

d    - while stack:
dw   - if copacetic():
dw   - directory = stack.pop()
dw   - for file in os.listdir(directory):
dwf  - fullname = os.path.join(directory, file)
dwf  - files.append(fullname)
dwf  - if os.path.isdir(fullname) and not os.path.islink(fullname):
dwfi - stack.append(fullname)

because the grouping information is replicated on each line, I have to
fix this information in the six different places it is encoded:

d     - while stack:
dw    - if copacetic():
dwi   - directory = stack.pop()
dwi   - for file in os.listdir(directory):
dwif  - fullname = os.path.join(directory, file)
dwif  - files.append(fullname)
dwif  - if os.path.isdir(fullname) and not os.path.islink(fullname):
dwifi - stack.append(fullname)

The fact that the information is replicated, and that there is nothing
but programmer discipline keeping it consistent is a source of errors.
There is yet one more problem. The various levels of indentation encode
different things: the first level might indicate that it is part of a
function definition, the second that it is part of a FOR loop, etc. So on
any line, the leading whitespace may indicate all sorts of context-relevant
information.


I don't understand why this is any different to e.g. ')))))' in Lisp. The
closing ')' for DEFUN just looks the same as that for IF.


That is because the parentheses *only* encode the grouping information;
they do not do double duty and encode what they are grouping. The key
here is to realize that the words `DEFUN' and `IF' themselves look
very different.
Yet the visual representation is not only identical between all of these, it
cannot even be displayed.


I don't understand what you mean. Could you maybe give a concrete example of
the information that can't be displayed?


Sure. Here are five parens ))))) How much whitespace is there here:

Still, I'm sure you're familiar with the following quote (with which I most
heartily agree):

"[P]rograms must be written for people to read, and only incidentally for
machines to execute."

People can't "read" '))))))))'.


Funny, the people you just quoted would disagree with you about parentheses.
I expect that they would disagree with you about whitespace as well.
Jul 18 '05 #155
james anderson <ja************@setf.de> writes:

The advantage of HOFs over macros is simplicity: You don't need additional
language constructs
when did common-lisp macros become an "additional language construct"?


That's what macros do: they add new language constructs.

I think that many Scheme students inadvertently get taught `macros = evil'.
the other reason is that when i moved from scheme to lisp, in the
process of porting the code which i carried over, it occurred to me that much
of what i was using higher-order functions for could be expressed more clearly
with abstract classes and appropriately defined generic function method combinations.


I also think that many Scheme students are misled and inadvertently
taught that one should avoid everything but LAMBDA.
Jul 18 '05 #156
Pascal Costanza <co******@web.de> writes:
Corey Coughlin wrote:
I was never very fond of lisp. I guess I mean scheme technically, I
took the Abelson and Sussman course back in college, so that's what I
learned of scheme, lisp in general I've mostly used embedded in other
things. In general, it always seemed to me that a lot of the design
choices in lisp are driven more by elegance and simplicity than
usability.


You should give Common Lisp a try. Many Lispers think that that's
exactly one of the advantages of Common Lisp over Scheme: it focuses
more on usability than on elegance.


S&ICP (Abelson and Sussman) use Scheme to illustrate basic concepts by
stripping away things to get at the core of what's going on. But I
think too many people assume that the stripped-down version is
supposed to be `better' in some sense. (Other than the pedagogic
sense, that is.)

It is like learning about internal combustion engines by disassembling
a lawnmower. It is a very simple illustration of everything important
about 4-stroke engines. However, no one would suggest you actually
build a car that way!
Jul 18 '05 #157
Pascal Bourguignon wrote:
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:

On Tue, 07 Oct 2003 21:59:11 +0200, Pascal Bourguignon wrote: [SNIP]

A richer alphabet is often more readable. Morse code can't be read as fast
as Latin alphabet because it uses too few different symbols. Japanese say
they won't abandon Kanji because it's more readable as soon as you know it -
you don't have to compose words from many small pieces which look alike
but each word is distinct. Of course *too* large alphabet requires long
learning and has technical difficulties, but Lisp expressions are too
little distinctive for my taste.

Well, I would say that kanji is badly designed, compared to the Latin
alphabet. The vowels are composed with consonants (with diacritical
marks) and consonants are written following four or five groups with
additional diacritical marks to distinguish within the groups. It's
more a phonetic code than a true alphabet.

[SNIP]

Admittedly, I found the above paragraph pretty hard to parse and my
never-stellar knowledge of Japanese has mostly evaporated over time, but
I'm pretty sure you are talking about Hiragana (or Katakana), not Kanji.
Japanese has three alphabets, which they mix and match in ordinary
writing. Kanji aren't phonetic at all, they're ideograms, and can
typically be read at least two completely different ways depending on
the context, making reading Japanese extra confusing for the non-fluent.

A random web search supplies this basic description of Hiragana, Katakana
and Kanji:

http://www.kanjisite.com/html/wak/wak1.html

-tim

Jul 18 '05 #158
james anderson <ja************@setf.de> writes:
Eli Barzilay wrote:
> Yes, but I was talking about the different approaches, for
> example:
>
> (dolist (x foo)
>   (bar x))
>
> vs:
>
> (mapc #'bar foo)

are these not two examples of coding in common-lisp. how do they
demonstrate that "scheme is much more functional"?


The first is very popular, the second is hardly known.


somehow i wonder if we're discussing the same language.


These are both Lisp examples. The first is much more popular than the
second. Scheme has an equivalent for the second, not for the first.

Conclusion: the first one is stylistically preferred in Lisp, the
(equivalent of the) second is stylistically preferred in Scheme.

R5RS has `for-each' which is exactly like `mapc', but no `dolist'
equivalent. In Scheme, this is not a problem, in Lisp, the syntax
makes me worry for the extra effort in creating a closure.


what me worry? about syntax?
[...]


You completely missed my point.

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://www.barzilay.org/ Maze is Life!
Jul 18 '05 #159
In article <xc*************@famine.ocf.berkeley.edu>,
tf*@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:
I find the Lisp syntax hardly readable when everything looks alike,
mostly words and parentheses, and when every level of nesting requires
parens. I understand that it's easier to work with by macros, but it's
harder to work with by humans like me.


You find delimited words more difficult than symbols? For literate
people who use alphabet-based languages, I find this highly suspect.
Maybe readers of only ideogram languages might have different
preferences, but we are writing in English here...


well, there are a few occasions where symbols are preferable. just
imagine mathematics with words only

hs

--

ceterum censeo SCO esse delendam
Jul 18 '05 #160
| 3- what do you mean "printed"? A double-click on any parenthesis
| selects the enclosed list so it's quite easy to see what it encloses.

I kept clicking on the parenthesis in all your samples. All my
newsreader did was go to text-select mode!

Btw. I believe the word "printed" means "printed"... you've seen that
stuff where ink is applied to paper, yes?

Yours, Lulu...

--
---[ to our friends at TLAs (spread the word) ]--------------------------
Echelon North Korea Nazi cracking spy smuggle Columbia fissionable Stego
White Water strategic Clinton Delta Force militia TEMPEST Libya Mossad
---[ Postmodern Enterprises <me***@gnosis.cx> ]--------------------------
Jul 18 '05 #161
Pascal Bourguignon <sp**@thalassa.informatimago.com> writes:
Most probably, you would write a macro named WITH-INFIX and thus
automatically scope the infix part:

(with-infix 1 / x + 1 / ( x ^ 3 ) + 1 / ( x ^ 5 ) )

and if you write it well:

(with-infix
  if a = b then format t "equal ~D~%" a ;
  else format t "diff ~D /= ~D~%" a b ; endif ;
  for i = 1 to 10 ; print i ; next i )
CGOL is an ALGOL-like syntax for MacLisp that was designed by Vaughan
Pratt. I believe it has been ported to CL.

ABSTRACT

MACLISP programmers who feel comfortable with ALGOL-like
notation, that is, an algebraic style in which one might write a
matrix multiply routine as

for i in 1 to n do
    for k in 1 to n do
        (ac := 0;
         for j in 1 to n do
             ac := ac + a(i,j)*b(j,k);
         c(i,k) := ac)

can now write LISP programs in just such a notation. This notation is
essentially transparent to the MACLISP system, and files containing
CGOL code (possibly mixed in with standard code) can be read by the
interpreter and compiled by the compiler just as though they were
written in straight LISP notation.
It has never caught on, though.

I'll tell you the secret: yes, there are a lot of parentheses. This is a
price all Lispers pay for greater benefits. It gives us such
advantages that we gladly pay the price.


Excerpts from real C header files:

#define FromHex(n) (((n) >= 'A') ? ((n) + 10 - 'A') : ((n) - '0'))
#define StreamFromFOURCC(fcc) ((WORD) ((FromHex(LOBYTE(LOWORD(fcc))) << 4) + (FromHex(HIBYTE(LOWORD(fcc))))))
#define ToHex(n) ((BYTE) (((n) > 9) ? ((n) - 10 + 'A') : ((n) + '0')))
#define MAKEAVICKID(tcc,stream) MAKELONG((ToHex((stream) & 0x0f) << 8) | (ToHex(((stream) & 0xf0) >> 4)),tcc)

#define va_arg(AP, TYPE) \
(AP = (__gnuc_va_list) ((char *) (AP) + __va_rounded_size (TYPE)), \
*((TYPE *) (void *) ((char *) (AP) \
- ((sizeof (TYPE) < __va_rounded_size (char) \
? sizeof (TYPE) : __va_rounded_size (TYPE))))))

#define PLAUSIBLE_BLOCK_START_P(addr, offset) \
((*((format_word *) \
(((char *) (addr)) + ((offset) - (sizeof (format_word)))))) == \
((BYTE_OFFSET_TO_OFFSET_WORD(offset))))

Jul 18 '05 #162


Eli Barzilay wrote:

james anderson <ja************@setf.de> writes:
Eli Barzilay wrote:
...

R5RS has `for-each' which is exactly like `mapc', but no `dolist'
equivalent. In Scheme, this is not a problem, in Lisp, the syntax
makes me worry for the extra effort in creating a closure.


what me worry? about syntax?
[...]


You completely missed my point.


if the point was about something other than when and if a closure is created
and whether that differentiates lisp from scheme, then yes, i did.

....
Jul 18 '05 #163
Tim Hochberg <ti**********@ieee.org> writes:
Pascal Bourguignon wrote:
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:
On Tue, 07 Oct 2003 21:59:11 +0200, Pascal Bourguignon wrote: [SNIP]

A richer alphabet is often more readable. Morse code can't be read as fast
as Latin alphabet because it uses too few different symbols. Japanese say
they won't abandon Kanji because it's more readable as soon as you know it -
you don't have to compose words from many small pieces which look alike
but each word is distinct. Of course *too* large alphabet requires long
learning and has technical difficulties, but Lisp expressions are too
little distinctive for my taste.

Well, I would say that kanji is badly designed, compared to the Latin
alphabet. The vowels are composed with consonants (with diacritical
marks) and consonants are written following four or five groups with
additional diacritical marks to distinguish within the groups. It's
more a phonetic code than a true alphabet.

[SNIP]

Admittedly, I found the above paragraph pretty hard to parse and my
never-stellar knowledge of Japanese has mostly evaporated over time,
but I'm pretty sure you are talking about Hiragana (or Katakana), not
Kanji. Japanese has three alphabets, which they mix and match in
ordinary writing. Kanji aren't phonetic at all, they're ideograms, and
can typically be read at least two completely different ways depending
on the context, making reading Japanese extra confusing for the
non-fluent.


Absolutely. My mistake, sorry. I wrote about katakana and that was
not the subject.

A random web search supplies this basic description of Hiragana,
Katakana and Kanji:

http://www.kanjisite.com/html/wak/wak1.html

-tim


--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Jul 18 '05 #164
pr***********@comcast.net writes:
Excerpts from real C header files:

#define FromHex(n) (((n) >= 'A') ? ((n) + 10 - 'A') : ((n) - '0'))
#define StreamFromFOURCC(fcc) ((WORD) ((FromHex(LOBYTE(LOWORD(fcc))) << 4) + (FromHex(HIBYTE(LOWORD(fcc))))))
#define ToHex(n) ((BYTE) (((n) > 9) ? ((n) - 10 + 'A') : ((n) + '0')))
#define MAKEAVICKID(tcc,stream) MAKELONG((ToHex((stream) & 0x0f) << 8) | (ToHex(((stream) & 0xf0) >> 4)),tcc)

#define va_arg(AP, TYPE) \
(AP = (__gnuc_va_list) ((char *) (AP) + __va_rounded_size (TYPE)), \
*((TYPE *) (void *) ((char *) (AP) \
- ((sizeof (TYPE) < __va_rounded_size (char) \
? sizeof (TYPE) : __va_rounded_size (TYPE))))))

#define PLAUSIBLE_BLOCK_START_P(addr, offset) \
((*((format_word *) \
(((char *) (addr)) + ((offset) - (sizeof (format_word)))))) == \
((BYTE_OFFSET_TO_OFFSET_WORD(offset))))

:-)
However, taking sources written by the same programmer (me), at about
the same period, I get two or three times as many parentheses in Lisp
as in C:

-------   ------   -----   ------------------------------------
(-paren      loc   (/lin   Language
-------   ------   -----   ------------------------------------
  160 /     627 =  0.255   COBOL sources.
 6697 /   18968 =  0.353   C sources (.h + .c)
  327 /     825 =  0.396   Java sources.
 9071 /   18968 =  0.478   C sources, counting both ( and {
   15 /      31 =  0.484   FORTRAN-IV (I've got only one toy program).
15224 /   29990 =  0.508   Modula-2 sources (* comments! *).
14754 /   13890 =  1.062   Lisp sources.
-------   ------   -----   ------------------------------------

That's not much more...


for f in *.{h,c,lisp} ; do printf "%6d " $( sed -e 's/[^(]//g'<$f|tr -d '\012'|wc -c ) ; wc $f ; done|awk '{p=$1;l=$2;w=$3;c=$4;n=$5;printf "%6d / %6d = %6.3f %s\n",p,l,p/l,n;pt+=p;lt+=l;}END{printf "%6d / %6d = %6.3f %s\n",pt,lt,pt/lt,"Total";}'
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Jul 18 '05 #165

pr***********@comcast.net writes:
You misunderstand me. In a python block, two expressions are
associated with each other if they are the same distance from the left
edge. This is isomorphic to having a nametag identifying the scope
of the line. Lines are associated with each other iff they have the
same nametag. Change one, and all must change.
Exactly. What was that language where you wrote tags to indicate the
indenting of data structures:

01 DREP.
02 NO-REPR PIC 9999.
02 NO-SOTR PIC 9999.
02 NOM PIC X(20).
02 PRENOM PIC X(15).
02 A PIC 9.
02 B PIC 9.
02 FILLER PIC X.
02 D PIC 9.
02 FILLER PIC X(33).
01 DVTE.
02 NO-SOTV PIC 9999.
02 NO-REPV PIC 9999.
02 MTV PIC 9(6)V99.
02 FILLER PIC X(64).

If, instead, you use balanced delimiters, then a subexpression no
longer has to encode its position within the containing expression.
[...]


--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Jul 18 '05 #166
Doug Tolton wrote:
If you truly believe what you are saying, you really should be
programming in Java. Everything is explicit, [...]


<nitpick>
Not really. 'this' is implicit, for example. In fact, Java people have been
known to criticize Python because its 'self' construct is explicit. ^_^
</nitpick>

--
Hans (ha**@zephyrfalcon.org)
http://zephyrfalcon.org/

Jul 18 '05 #167
Daniel P. M. Silva wrote:
Haven't some people implemented an entire class system as one huge macro?


YES! Been there, done that -- about 3 or 4 times, actually.
I went through a bit of a phase where writing OO implementations
for Scheme was one of my principal hobbies. :-)

By the way, Scheme was my Favourite Cool Language for quite
a while. Then I discovered Python, and while I still appreciate
all the things about Scheme that I appreciated then, I wouldn't
want to go back to using it on a regular basis now. So it's not
a given that any person who likes Scheme must inevitably dislike
Python!

I do "get" macros, and I appreciate having them available in
languages like Scheme, where they seem to fit naturally. But
I can't say I've missed them in Python, probably because Python
provides enough facilities of its own for constructing kinds of
mini-languages (keyword arguments, operator overloading,
iterators, etc.) to satisfy my needs without having to resort
to macros.
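One of those facilities is worth a concrete illustration: operator overloading alone is enough to host a small expression mini-language in Python, no macros involved. A rough sketch with my own hypothetical names (Var, Expr):

```python
# A tiny "mini-language" built from ordinary Python syntax:
# overloaded operators construct an expression tree that is
# evaluated later against an environment.
class Var(object):
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        return Expr('+', self, other)
    def __mul__(self, other):
        return Expr('*', self, other)
    def eval(self, env):
        return env[self.name]

class Expr(object):
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def eval(self, env):
        a, b = self.left.eval(env), self.right.eval(env)
        return a + b if self.op == '+' else a * b

x, y = Var('x'), Var('y')
formula = x + y          # written with ordinary syntax...
assert formula.eval({'x': 2, 'y': 3}) == 5   # ...evaluated lazily
```

The point is that `x + y` never computes anything up front; it builds a structure that can be walked, printed, or evaluated later, which covers a fair slice of what macros are used for.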

And I do regard macros as something that one "resorts" to, for
all the reasons discussed here, plus another fairly major one
that nobody has mentioned: Unless both the macro system and
the macros written in it are *extremely* well designed, error
reporting in the presence of macros of any complexity tends to
be abysmal, because errors get reported in terms of the expansion
of the macro rather than what the programmer originally wrote.

ALL macro systems of any kind that I have ever used have suffered
from this - cpp, C++ templates, Lisp/Scheme macros, TeX,
you name it. I'd hate to see Python grow the same problems.

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Jul 18 '05 #168
Hans Nowak wrote:
Hmm, if I recall correctly, in Latin the plural of 'virus' is 'virus'.


Actually, the last discussion of this that I saw (can't remember where)
came to the conclusion that the word 'virus' didn't *have* a plural
in Latin at all, because its original meaning didn't refer to something
countable.

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Jul 18 '05 #169
On 07 Oct 2003 14:50:04 +0200, Matthias <no@spam.pls> wrote:
bo**@oz.net (Bengt Richter) writes:
On 06 Oct 2003 12:54:30 +0200, Matthias <no@spam.pls> wrote:
>1.) Inventing new control structures (implement lazy data structures,

You mean like an object olazy with olazy.x and olazy.y, where x might
be an integer 123 and y might be the text of the latest bug python report
on sourceforge? Not so hard with properties (you'd probably want to cache
the latest for y and not re-check for some reasonable interval, but that
doesn't change client code (unless you later decide you want the value to
be a tuple of (timestamp, text), etc.)


Actually, I meant more lazy-like-lazy-in-Haskell. Infinite data
structures and such. "primes" being a list representing _all_ prime
numbers for instance. You can build this as soon as you have closures
but making the construction easy to use for the application programmer
might be a challenge without macros. But I don't know what
"properties" are in Python, possibly they are built for exactly that.

No, generators are closer to "exactly that" e.g., in the following function,
"yield" is the keyword that makes it into a generator. The initial call effectively
becomes a factory function call that returns an initialized generator, and calls
to the generator's .next() method resume execution first at the beginning, running
up to the first yield, where execution is suspended and the yield value returned to
the next() method's caller. A subsequent .next() call resumes execution right after the
last yield, and so on forever or until the generator exits without hitting a yield,
which terminates the sequence.
>>> def lazyprimes(prime_range_top):
...     import array
...     primes = array.array('l',[2])
...     yield 2
...     for prime_candidate in xrange(3,prime_range_top,2):
...         for p in primes:
...             if prime_candidate%p==0: break
...         else:
...             primes.append(prime_candidate)
...             yield prime_candidate
...
>>> primegen = lazyprimes(1000) # primes under 1000
>>> primegen.next()
2

Properties allow you to create a class whose instances have a property attribute that
is accessed just like an ordinary attribute, but may invoke arbitrary functions to get
and/or set the apparent state. E.g.,
>>> class Foo(object):
...     def __init__(self, k, ptop):
...         self.k = k
...         self.pnext = lazyprimes(ptop).next
...     p = property(lambda self: self.pnext())
...
>>> foo = Foo(123, 32)

Here k is an ordinary instance attribute and p looks like another in the syntax of the
access in an expression, but it is hooked into a lazy prime generator:

>>> foo.k
123
>>> foo.p
2
>>> foo.p, foo.p
(3, 5)
>>> foo.k
123
>>> [(foo.k, foo.p, c) for c in 'abcd']
[(123, 7, 'a'), (123, 11, 'b'), (123, 13, 'c'), (123, 17, 'd')]
>>> foo.pnext()
19
>>> foo.pnext()
23
>>> foo.p
29
>>> foo.p
31
>>> foo.p
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 5, in <lambda>
StopIteration

Well, I should have provided for dealing with the end-of-sequence exception, most likely,
unless reaching it was an error. Not hard.
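For what it's worth, here is one way to do that "not hard" fix: a sketch in present-day Python (using the `next(gen, default)` builtin rather than the old `.next()` method), where the property swallows the end-of-sequence exception and returns None instead. The class and helper names are just restatements for illustration, not the original post's code.

```python
def lazyprimes(top):
    # same trial-division generator idea as in the thread's example
    primes = [2]
    yield 2
    for cand in range(3, top, 2):
        if all(cand % p for p in primes):
            primes.append(cand)
            yield cand

class Foo(object):
    def __init__(self, k, ptop):
        self.k = k
        self._gen = lazyprimes(ptop)

    @property
    def p(self):
        # next() with a default returns None instead of raising StopIteration
        return next(self._gen, None)

foo = Foo(123, 10)
print([foo.p for _ in range(6)])  # primes under 10, then None once exhausted
```

Once the generator is exhausted, further accesses of `foo.p` simply keep returning None rather than blowing up mid-expression.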

Getting back to primegen, which last yielded the first prime 2 above,
I'll bind a shorter name to the next method (bound to the particular generator),
for easier typing ;-)
>>> pn = primegen.next
>>> pn()
3
Here we'll enumerate:
>>> for i,p in enumerate(iter(pn,31)): print i,p
...
0 5
1 7
2 11
3 13
4 17
5 19
6 23
7 29
>>> pn() # we used up 31 as the sentinel ending a series of calls to pn(), so
37       # here we got the next one after 31
>>> for i in xrange(10): print pn(), # 10 more
...
41 43 47 53 59 61 67 71 73 79
>>> pn() # one more
83
>>> primegen.next()
89
>>> for p in primegen: print p, # the rest
...
97 101 103 107 109 113 127 131 137 139 149 151 157 163 167 173 179 181 191 193 197 199 211 223 2
27 229 233 239 241 251 257 263 269 271 277 281 283 293 307 311 313 317 331 337 347 349 353 359 3
67 373 379 383 389 397 401 409 419 421 431 433 439 443 449 457 461 463 467 479 487 491 499 503 5
09 521 523 541 547 557 563 569 571 577 587 593 599 601 607 613 617 619 631 641 643 647 653 659 6
61 673 677 683 691 701 709 719 727 733 739 743 751 757 761 769 773 787 797 809 811 821 823 827 8
29 839 853 857 859 863 877 881 883 887 907 911 919 929 937 941 947 953 967 971 977 983 991 997
(Inopportune line wrap on my console screen, but I assume it's clear enough.)
> implement declarative control structures, etc.)

You mean like case or such?


No. I was thinking about Prolog and such. Or nondeterministic
programming. Or multimethod dispatch.

For the latter in Python, see
http://www-106.ibm.com/developerwork.../l-pydisp.html
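For a flavor of the idea, here is a minimal sketch of type-based multiple dispatch in a few lines of (present-day) Python. This is not the linked article's implementation; the registry scheme and the names (`defmethod`, `call`, `combine`) are invented for illustration, and dispatch is on exact argument classes only.

```python
registry = {}

def defmethod(name, types, func):
    # register an implementation keyed by operation name + argument classes
    registry[(name,) + tuple(types)] = func

def call(name, *args):
    # dispatch on the exact classes of all arguments
    func = registry.get((name,) + tuple(type(a) for a in args))
    if func is None:
        raise TypeError("no method %s for %r" % (name, args))
    return func(*args)

defmethod('combine', (int, int), lambda a, b: a + b)
defmethod('combine', (str, str), lambda a, b: a + ' ' + b)

print(call('combine', 1, 2))          # dispatches to the int/int method
print(call('combine', 'foo', 'bar'))  # dispatches to the str/str method
```

A real multimethod system would also walk the classes' MROs to find applicable methods for subclasses; the sketch only shows the dispatch-on-all-arguments idea itself.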
>2.) Serve as abbreviation of repeating code. Ever used a code
> generator? Discovered there was a bug in the generated code? Had
> to fix it at a zillion places?
> => Macros serve as extremely flexible code generators, and there
> is only one place to fix a bug.
> => Many Design Patterns can be implemented as macros, allowing you
> to have them explicitly in your code. This makes for better
> documentation and maintainability.

You can generate code many ways in Python. What use case are you thinking of?


I was not talking about generating code /in/ Python but generating
code /for/ Python /within/ it. For the Design Pattern use case take a
look at http://norvig.com/design-patterns/

I was also talking about "generating code /for/ Python /within/ it" -- even in
a recursive routine, see answer to 4. below ;-)
>3.) Invent pleasant syntax in limited domains.
> => Some people don't like Lips' prefix syntax. It's changeable if you
> have macros.
> => This feature can also be misused.

You can do this also.


You can change Python's syntax? Easily?

No, I didn't mean that literally, but just as you can create source dynamically
and compile it for subsequent execution (as with the trivial text source template below),
you could define an app-specific mini language and translate and compile it for use later
in the same program run. If you create a class and populate class variables from values
in a config file, are you doing this? I think so, at a trivial level. If you wrote an
import function that could import a subset of scheme and have it dynamically translated
to something that looks like a python module to the python user, have you changed python's
syntax? No. Have you made other syntax available to the python programmer? Yes.
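As a trivial sketch of that last point (in present-day Python, with invented names), here is a "mini language" -- in this case just plain Python assignments in a config text -- compiled at runtime and used to populate a class's variables:

```python
# a tiny "config language" as source text (here it happens to be valid Python)
config_text = """
width = 80
height = 25
title = 'demo'
"""

namespace = {}
exec(config_text, namespace)  # compile and run the config source at runtime

# build a class whose class variables come from the config values
Config = type('Config', (object,), {k: v for k, v in namespace.items()
                                    if not k.startswith('__')})

print(Config.width, Config.title)
```

Translating some other surface syntax into this form first (as the scheme-import thought experiment suggests) is just one more string-to-string step before the `exec`.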

I had an infatuation with scheme, some years back now. I still think she was sweet ;-)
>4.) Do computations at compile time instead of at runtime.
> => Have heard about template metaprogramming in the C++ world?
> People do a lot to get fast performance by shifting computation
> to compile time. Macros do this effortlessly.

This also, but Python has so many possible compile times ;-)


I'm not sure I understand.

>>> import time
>>> def multicomp(maxrec=5, level=0):
...     fooname = 'foo_level_%s'%level
...     source = """
... def %s(): print 'I was compiled at level %s on %s.'
... """% (fooname, level, time.ctime())
...     d={}
...     exec source in d
...     time.sleep(2) # to guarantee time stamp change
...     if level<maxrec: return (d[fooname],)+ multicomp(maxrec,level+1)
...     return (d[fooname],)
...
>>> mc = multicomp()
>>> for f in mc: print 'My name is %r and'%(f.__name__,),; f()
...
My name is 'foo_level_0' and I was compiled at level 0 on Tue Oct 07 21:19:18 2003.
My name is 'foo_level_1' and I was compiled at level 1 on Tue Oct 07 21:19:20 2003.
My name is 'foo_level_2' and I was compiled at level 2 on Tue Oct 07 21:19:22 2003.
My name is 'foo_level_3' and I was compiled at level 3 on Tue Oct 07 21:19:24 2003.
My name is 'foo_level_4' and I was compiled at level 4 on Tue Oct 07 21:19:26 2003.
My name is 'foo_level_5' and I was compiled at level 5 on Tue Oct 07 21:19:28 2003.


Ok, the recursion was gratuitous, except that it shows compilation happening dynamically,
and you can easily see you could leave such routines compiled at the outer level for
execution any time you wanted, and thus get "many compile times" ;-)
Python is pretty sharp ;-)
I think we need some realistic use cases for your "specific" [categories of]
examples in order to compare how problems would be approached.


Well, if you don't want to learn about syntactic abstraction you'll
probably never miss it during your programming. Just keep in mind
that before oo-abstraction became fashionable people didn't miss OOP
either.


Actually, I would like to learn more about it. I am fascinated by the
interplay between the worlds of concrete representations and abstract entities,
and their interrelated transformations. ISTM macros definitely have a place in the pantheon.
I have yet to grok scheme's hygienic macro stuff, though ;-) One of these days...

Regards,
Bengt Richter
Jul 18 '05 #170
According to Pascal Bourguignon <sp**@thalassa.informatimago.com>:
Well, I would say that kanji is badly designed, compared to the Latin
alphabet. The vowels are composed with consonants (with diacritical
marks) and consonants are written following four or five groups with
additional diacritical marks to distinguish within the groups. It's
more a phonetic code than a true alphabet.


Kanji are ideograms borrowed from Chinese. Kanji literally means "Han
character".

I think the diacritical marks you mention are pronunciation guides, much
like Hanyu Pinyin is a Mandarin pronunciation guide for Chinese.

In Hanyu Pinyin, Kanji (read as a Chinese word phrase) is rendered "han4
zi4".

In Korean, Kanji is pronounced Hanja.

Same two-character word phrase, different pronunciations.
--
Ng Pheng Siong <ng**@netmemetic.com>

http://firewall.rulemaker.net -+- Manage Your Firewall Rulebase Changes
http://sandbox.rulemaker.net/ngps -+- Open Source Python Crypto & SSL
Jul 18 '05 #171
hs@heaven.nirvananet (Hartmann Schaffer) writes:
In article <xc*************@famine.ocf.berkeley.edu>,
tf*@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:
I find the Lisp syntax hardly readable when everything looks alike,
mostly words and parentheses, and when every level of nesting requires
parens. I understand that it's easier to work with by macros, but it's
harder to work with by humans like I.


You find delimited words more difficult than symbols? For literate
people who use alphabet-based languages, I find this highly suspect.
Maybe readers of only ideogram languages might have different
preferences, but we are writing in English here...


well, there are a few occasions where symbols are preferable. just
imagine mathematics with words only


Oh, certainly. Unlike most languages, Lisp lets you use symbols for
your own names (which is easily abused, but not very often). A bad
example:

;; Lets you swear in your source code, cartoonishly
(define-symbol-macro $%^&!
(error "Aw, $%^&! Something went wrong..."))

;; An example use
(defun foo (...)
(cond
...
(t $%^&!)))

And, although you generally use symbols from the KEYWORD package for
keyword arguments, you don't have to, and they don't have to be words:

(defgeneric convert-object (object new-type)
(:documentation "Like an extensible COERCE."))

(defun convert (object &key ((-> to)))
"Sugary"
(convert-object object to))

(defconstant -> '-> "More sugar")

;; Example usage
(convert *thing* -> (class-of *other-thing*))

Of course, these are lame examples, but they show that Lisp *can*
incorporate little ascii-picture-symbols. Good examples would
necessarily be very domain-dependant.

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Jul 18 '05 #172
Greg Ewing (using news.cis.dfn.de) wrote:
Daniel P. M. Silva wrote:
Haven't some people implemented an entire class system as one huge macro?
YES! Been there, done that -- about 3 or 4 times, actually.
I went through a bit of a phase where writing OO implementations
for Scheme was one of my principal hobbies. :-)


Nice! I was alluding to MzScheme's class.ss but I guess that's a fun hobby
to have. :) Do you have your class systems available anywhere to download?
I would be especially interested in them if they allow multiple
inheritance, run-time pillaging of class contracts, and explicit "this"
arguments to methods...

By the way, Scheme was my Favourite Cool Language for quite
a while. Then I discovered Python, and while I still appreciate
all the things about Scheme that I appreciated then, I wouldn't
want to go back to using it on a regular basis now. So it's not
a given that any person who likes Scheme must inevitably dislike
Python!

I do "get" macros, and I appreciate having them available in
languages like Scheme, where they seem to fit naturally. But
I can't say I've missed them in Python, probably because Python
provides enough facilities of its own for constructing kinds of
mini-languages (keyword arguments, operator overloading,
iterators, etc.) to satisfy my needs without having to resort
to macros.
You still can't add new binding constructs or safe parameterizations like a
with_directory form:

with_directory("/tmp", do_something())

Where do_something() would be evaluated with the current directory set to
"/tmp" and the old pwd would be restored afterward (even in the event of an
exception).

Last year -- I think at LL2 -- someone showed how they added some sort of
'using "filename":' form to Python... by hacking the interpreter.
And I do regard macros as something that one "resorts" to, for
all the reasons discussed here, plus another fairly major one
that nobody has mentioned: Unless both the macro system and
the macros written in it are *extremely* well designed, error
reporting in the presence of macros of any complexity tends to
be abysmal, because errors get reported in terms of the expansion
of the macro rather than what the programmer originally wrote.
I've yet to encounter this problem in the standard library included with my
Scheme implementation of choice, but ok.
ALL macro systems of any kind that I have ever used have suffered
from this - cpp, C++ templates, Lisp/Scheme macros, TeX,
you name it. I'd hate to see Python grow the same problems.


Some people use Python's hooks to create little languages inside Python (eg.
to change the meaning of instantiation), which are not free of problems:

class Object(object):
    def __init__(this, *args, **kwargs):
        this.rest = args
        this.keys = kwargs

def new_obj_id(count=[0]):
    count[0] = count[0] + 1
    return count[0]

def tag_obj(obj, id):
    obj.object_id = id
    return obj

def obj_id(obj): return obj.object_id

type.__setattr__(Object, "__new__", staticmethod(lambda type, *args:
    tag_obj(object.__new__(type), new_obj_id())))
Great, now all object instantiations (of our own Object class) will also tag
new objects with an ID:

obj = Object()
print "id: ", obj_id(obj)
print "another id: ", obj_id(Object())

Which gives you 1 and then 2. Hurrah.
Have you caught the bug yet?

# forgot to check for this case...
print Object(foo="bar")

This is of course illegal and I get the following error message:

Traceback (most recent call last):
  File "n.py", line 27, in ?
    print Object(foo="bar").rest
TypeError: <lambda>() got an unexpected keyword argument 'foo'

Hmm...

Jul 18 '05 #173
I'd humbly suggest that if you can't see *any* reason why someone
would prefer Lisp's syntax, then you're not missing some fact about
the syntax itself but about how other language features are supported
by the syntax.


sure, but it seems like noone was able to let CLOS have
(virtual) inner classes,
methods inside methods,
virtual methods (yeah I know about those stupid generic functions :),
method overloading,
A decent API (I tried playing with it.. it doesn't even have a freaking
date library as standard ;-p

Yes I agree with the compile time macro expansion is a nice thing.
However, if I want to do some serious changes to the structure of objects
and classes (i.e. create a new kind of objects) then I have to spend a
long time finding out how the CLOS people hacked together their
representation of classes, methods, method call etc... it has been far
easier for me to just do some small changes using __getattribute__ and
metaclasses in python. So in that respect Im not really sure the macro
idea is advantageous for other than 'straight away' macros...

yes this mail is provocative.. please count slowly to 10 before replying
if you disagree with my point of view (and I know Pascal will disagree ;-)
.... not that I ever seen him angry ;-)
Carlo van Dango

--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
Jul 18 '05 #174


"Carlo v. Dango" wrote:
I'd humbly suggest that if you can't see *any* reason why someone
would prefer Lisp's syntax, then you're not missing some fact about
the syntax itself but about how other language features are supported
by the syntax.


sure, but it seems like noone was able to let CLOS have
(virtual) inner classes,
methods inside methods,
virtual methods (yeah I know about those stupid generic functions :),
method overloading,
A decent API (I tried playing with it.. it doesn't even have a freaking
date library as standard ;-p

Yes I agree with the compile time macro expansion is a nice thing.
However, if I want to do some serious changes to the structure of objects
and classes (i.e. create a new kind of objects) then I have to spend a
long time finding out how the CLOS people hacked together their
representation of classes, methods, method call etc... it has been far
easier for me to just do some small changes using __getattribute__ and
metaclasses in python. So in that respect Im not really sure the macro
idea is advantageous for other than 'straight away' macros...

yes this mail is provocative.. please count slowly to 10 before replying
if you disagree with my point of view (and I know Pascal will disagree ;-)
... not that I ever seen him angry ;-)


one might benefit more from reasoned examples, comparisons, and questions than
from vacuous vitriol.

....
Jul 18 '05 #175
Peter Seibel <pe***@javamonkey.com> writes:
co************@attbi.com (Corey Coughlin) writes:
Using parentheses and rpn everywhere makes lisp very easy to parse,
but I'd rather have something easy for me to understand and hard for
the computer to parse.


Intrestingly enough, I think this is a question of getting used to
it. The notation is so relentlessly regular that once you got it,
there are no more syntactical ambiguities. None. Ever.

It is the difference (for me) between reading roman numerals (that
would be the baroque-ish syntax of other languages, full of
irregularities, special cases, and interference patterns), and arabic
numerals (that would be lisp). I never have a doubt about what the
S-expr. representation encodes.
Jul 18 '05 #176
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:
All Lisp code I've read uses lots of parentheses
and they pile up at the end of each large subexpression so it's hard to
match them (an editor is not enough, it won't follow my eyes and won't
work with printed code).


The parenthesis argument. Are there really no new things under the sun? ;-)

Well, in Lisp as in most other languages (esp. Python) you read source
code by indentation, not by matching parentheses. That is why some
Lispers are a bit intolerant of non-standard indentation. They
use it (mentally) to parse the language. The parentheses really are a
Jul 18 '05 #177
On 08 Oct 2003 <sp**@thalassa.informatimago.com> wrote:
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:
A richer alphabet is often more readable. Morse code can't be read as
fast as Latin alphabet because it uses too few different symbols.
Japanese say they won't abandon Kanji because it's more readable
as soon as you know it -
Rather like Lispers ;)
you don't have to compose words from many small pieces which look alike
but each word is distinct.
Very -ish. Kanji is pictographic and the Japanese borrowed their usage
from China several times over the course of a thousand years so from
what a westerner might call reading POV, it's a mess. Having learned
to read Kanji; however, I have to say that the leverage you get from
the pictograms is amazing. I found myself quite able to navigate myself
to (and through) government offices whose name I didn't even begin to
know, but whose function was clear from the kanji on the door.
Of course *too* large alphabet requires long
learning and has technical difficulties,
Indeed Japanese children spend most of grade school learning the first 2000
or so Kanji. By the time you finish university it can be necessary to know
up to 10,000 (or so my wife tells me).
but Lisp expressions are too
little distinctive for my taste.

I'll grant that Lisp is rough in vanilla VI, but who uses that anymore?
Syntax coloring and auto-indenting make it virtually identical to Python.
I would go so far as to say that I *read* lisp via indentation.
Well, I would say that kanji is badly designed, compared to the Latin
alphabet. The vowels are composed with consonants (with diacritical
marks) and consonants are written following four or five groups with
additional diacritical marks to distinguish within the groups. It's
more a phonetic code than a true alphabet.


Sorry, but that's all any alphabet is. The Kana are particularly aptly
suited to the Japanese language which is phonetically *very* simple. The
Kana encode the basic syllables - *all* of them. English/Latin is a
disaster by comparison.

All of which goes to show something like: languages make sense in the
context
where they are used - otherwise they wouldn't be used...

david rush
--
(\x.(x x) \x.(x x)) -> (s i i (s i i))
-- aki helin (on comp.lang.scheme)
Jul 18 '05 #178
On Tue, 07 Oct 2003, Peter Seibel <pe***@javamonkey.com> wrote:
But Lisp's syntax is not the way it is to make the compiler writer's
job easier. <macros> *That's* why we don't mind, and, in
fact, actively like, Lisp's syntax.


In fact, I have also noticed that programming in nearly all other languages
(Smalltalk, APL, and FORTH are the exceptions) tends to degenerate towards
a fully-parenthesized prefix notation in direct proportion to the size of
the code. This fully-parenthesized prefix notation is just everyday
function calls, BTW. Method invocations aren't really any different.

So if you're going to write in parenthesized prefix notation *anyway*, you
might as well get some benefit out of it -> s-expressions and macros

david rush
--
(\x.(x x) \x.(x x)) -> (s i i (s i i))
-- aki helin (on comp.lang.scheme)
Jul 18 '05 #179
On Tue, 07 Oct 2003 21:25:53 +0100, Alexander Schmolck wrote:
Python removes this significant problem, at as far as I'm aware no real cost
and plenty of additional gain (less visual clutter, no waste of delimiter
characters ('{','}') or introduction of keywords that will be sorely missed as
user-definable names ('begin', 'end')).


There are three choices for a language syntax:
1. Line breaks and indents are significant (Haskell, Python).
2. Line breaks only are significant (Ruby, Unix shell, Visual Basic).
3. Neither is significant (most languages).

I found the syntax of Haskell and Python aesthetic, and tried to introduce
significant whitespace into my own little language. It was surprisingly hard.

The first attempt used a quite minimalistic syntax and had significant
indents. In effect indentation errors usually went undetected and the
program suddenly had a different meaning. Since you wouldn't want to
consistently use indentation for all local functions, they used either
braces or indentation - but not both! so it looked very differently
depending on whether you wanted to make use of significant indents or not.
And since it was a functional language, it used quite a lot of nesting.
I quickly abandoned this version.

Although misindenting Haskell code can produce a valid parse, the error
is usually caught either by scoping rules or by the typechecker; my
language was dynamically typed. Haskell doesn't have the "inconsistency"
problem because when you omit a line break which would introduce or close
indentation, you usually don't have to insert braces - syntax rules say
that virtual closing braces are inserted when not inserting it would cause
a parse error. Unfortunately this rule is almost impossible to implement
correctly (current compilers fail to use it correctly in some subtle cases).
There are cases when the language requires a different indentation than
I would like to use (mostly 'if-then-else' and 'let' inside 'do').

Python has a simpler syntax, where indentation is used on the level of
statements as the only delimiting mechanism, and not on the level of
expressions - which can't contain statements. It doesn't allow to replace
indentation with explicit delimiters. Since most expressions have pending
open parens or brackets when cut in the middle (because of mandatory
parens around function arguments), most line breaks inside expressions are
identifiable as insignificant without explicit marking. So it's easy to
design rules which use indentation, at the cost of the inability to
express various things as expressions. It's an imperative language and
such syntax won't fit a functional language where you would want to have
a deeper nesting, and where almost everything can be used inside an
expression.

Moral: Haskell and Python happen to succeed with significant indents
but their rules are hard to adapt to other languages. Significant
indentation constrains the syntax - if you like these constraints, fine,
but it would hurt if a language were incompatible with these constraints.

Having failed with significant indents, I tried to use significant line
breaks in next incarnations of my language, which looked like a good
compromise. The language had a richer syntax this time and it worked
quite well, except that one too often wanted to break a line in a place
which had to be explicitly marked as an insignificant break. I had
troubles with designing a good syntax for some constructs, mainly
if-then-else, being constrained to syntaxes which can be nicely split
into lines.

After experimenting with various syntaxes which used significant line
breaks to separate declarations and statements (delimiting was first done
by an opening word and 'end', later by braces), I tried how it would look
like with explicit semicolons. Surprisingly this opened new ways to build
some syntactic constructs. I finally got an 'if' which I was happy with,
I was no longer forced to choose to either not break a particular long
line or to mark the line break as insignificant, and I could abandon
designing a built-in syntax for catching exceptions because using a
suitable function no longer interfered with line breaking.

Moral is the same. Although designing and implementing a syntax with
significant line breaks and insignificant indentation is much easier than
with significant indentation, it still takes away some freedom of syntax
design which might be noticeable. Perhaps there are subtle ways to apply
significant line breaks to various languages, which you might find with
some luck or experience... I've given up, designing a syntax with
insignificant whitespace is much safer.

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #180
Daniel P. M. Silva wrote:
...
You still can't add new binding constructs or safe parameterizations like
a with_directory form:

with_directory("/tmp", do_something())

Where do_something() would be evaluated with the current directory set to
" tmp" and the old pwd would be restored afterward (even in the event of
an exception).
Right: you need to code this very differently, namely:
with_directory("/tmp", do_something)
*deferring* the call to do_something to within the with_directory
function. Python uses strict evaluation order, so if and when you
choose to explicitly CALL do_something() it gets called.

So, I would code:

import os

def with_directory(thedir, thefunc, *args, **kwds):
    pwd = os.getcwd()
    os.chdir(thedir)
    try: return thefunc(*args, **kwds)
    finally: os.chdir(pwd)

this is of course a widespread idiom in Python, e.g. see
unittest.TestCase.assertRaises for example.
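A quick usage sketch of the idiom (written to run under current Python; note the os.chdir(thedir) step, without which the deferred callable would not actually execute in thedir):

```python
import os, tempfile

def with_directory(thedir, thefunc, *args, **kwds):
    # switch to thedir, call the deferred callable, always restore the old cwd
    pwd = os.getcwd()
    os.chdir(thedir)
    try: return thefunc(*args, **kwds)
    finally: os.chdir(pwd)

before = os.getcwd()
# pass the *function* os.getcwd, not the result of calling it
inside = with_directory(tempfile.gettempdir(), os.getcwd)

# the callable saw the temp dir; afterwards the old cwd is back
print(os.path.realpath(inside) == os.path.realpath(tempfile.gettempdir()))
print(os.getcwd() == before)
```

The key point of the idiom is visible in the call: `os.getcwd` is passed uncalled, and `with_directory` decides when (and in which directory) to invoke it.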

The only annoyance here is that there is no good 'literal' form for
a code block (Python's lambda is too puny to count as such), so you
do have to *name* the 'thefunc' argument (with a 'def' statement --
Python firmly separates statements from expressions).
Last year -- I think at LL2 -- someone showed how they added some sort of
'using "filename":' form to Python... by hacking the interpreter.
A "using" statement (which would take a specialized object, surely not
a string, and call the object's entry/normal-exit/abnormal-exit methods)
might often be a good alternative to try/finally (which makes no provision
for 'entry', i.e. setting up, and draws no distinction between normal
and abnormal 'exits' -- often one doesn't care, but sometimes yes). On
this, I've seen some consensus on python-dev; but not (yet?) enough on
the details. Consensus is culturally important, even though in the end
Guido decides: we are keen to ensure we all keep using the same language,
rather than ever fragmenting it into incompatible dialects.

Some people use Python's hooks to create little languages inside Python
(eg. to change the meaning of instantiation), which are not free of
problems:

class Object(object):
    def __init__(this, *args, **kwargs):
[invariably spelt as 'self', not 'this', but that's another issue]
        this.rest = args
        this.keys = kwargs

def new_obj_id(count=[0]):
    count[0] = count[0] + 1
    return count[0]

def tag_obj(obj, id):
    obj.object_id = id
    return obj

def obj_id(obj): return obj.object_id

type.__setattr__(Object, "__new__", staticmethod(lambda type, *args:
    tag_obj(object.__new__(type), new_obj_id())))
...
# forgot to check for this case...
print Object(foo="bar")


It's not an issue of "checking": you have written (in very obscure
and unreadable fashion) a callable which you want to accept (and
ignore) keyword arguments, but have coded it in such a way that it
in fact refuses keyword arguments. Just add the **kwds after the
*args. This bug is not really related to "little languages" at all:
you might forget to specify arguments which you do want your callable
to accept and ignore in a wide variety of other contexts, too.

A small but important help to avoid such mistakes is to express
your intentions more readably. Let's suppose the 'Object' class is
externally defined (so we don't want change its metaclass, which would
be a more usual approach in Python), that we know it does not
implement a significant __new__ nor __slots__, and that we want to add
to it the "tag all objects on creation" feechur, which must use/support
the also-existing and specified functions new_obj_id (for generating
new ids), tag_obj (to set them) and obj_id (to access them) -- each
of these specs is significant (we'd probably code differently if any
or all of them were changed).

Given all this, the normal way to code this functionality in Python
would be something like:

def tagging_new(cls, *args, **kwds):
    new_object = object.__new__(cls)
    new_id = new_obj_id()
    return tag_obj(new_object, new_id)

Object.__new__ = staticmethod(tagging_new)

The use of "type.__setattr__(Object, "__new__", staticmethod(lambda ..."
in lieu of the elementarily simple "Object.__new__ = staticmethod(..."
would be quite peculiar. The use of a complicated lambda instead of
a simple def also decreases readability (and thus makes mistakes
such as forgetting that the new __new__ must also accept and ignore
**kwds more likely). Another helpful idiom is to name the first arg
of staticmethod's as 'cls', NOT 'type' (which would cause confusion
with the builtin 'type', often needed in similar contexts).

Moreover, it is possible that the auxiliary functions new_obj_id and
tag_obj were not externally specified, but, rather, coded ad hoc just
to enable the new __new__ to be written as a lambda (i.e., within the
strictures of Python's lambda -- just one expression, no statements).

If that is the case, then they might also be easily refactored out, e.g.:

class Tagger(object):
    def __init__(self):
        self.next_id = 0
    def tag_new(self, cls, *args, **kwds):
        new_object = object.__new__(cls)
        self.next_id += 1
        new_object.object_id = self.next_id
        return new_object

tagging_new = Tagger().tag_new
Object.__new__ = staticmethod(tagging_new)
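To see the refactored version in action, here is a self-contained sketch (re-stating the thread's Object class so the snippet runs on its own, under current Python) showing that instances get sequential ids and that keyword arguments no longer blow up:

```python
class Tagger(object):
    def __init__(self):
        self.next_id = 0
    def tag_new(self, cls, *args, **kwds):
        # create the instance and tag it with the next id in one step
        new_object = object.__new__(cls)
        self.next_id += 1
        new_object.object_id = self.next_id
        return new_object

class Object(object):
    def __init__(self, *args, **kwargs):
        self.rest = args
        self.keys = kwargs

Object.__new__ = staticmethod(Tagger().tag_new)

a, b = Object(), Object(foo="bar")   # keyword arguments are accepted now
print(a.object_id, b.object_id)      # sequential ids from the shared Tagger
```

Because the Tagger instance is created once and its bound `tag_new` installed as `__new__`, the counter state is shared by all instantiations of Object.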

How best to subdivide the task "tag a new object" is debatable (since
a new tag should never be generated except during tagging, I do prefer
to have the generation of the tag and the affixion of the tag to an
object as inseparable -- but in some cases one might surely prefer to
have them separate from the __new__ so as to be able to tag already
existing objects -- that would, of course, be easy to achieve). But
the point is that the preferred way in Python to package up some state
(particularly mutable state) and some behavior is within a class; the
use of a mutable default argument in new_obj_id is non-idiomatic -- it
seems better to make a class for the purpose. This has the advantage
of grouping related state and behavior in ways that any Pythonista
will immediately recognize (the general advantage of using any given
language's idiomatic approach: by doing what readers expect you to do,
you make your code more readable than by being original and creative).

I have extracted the tag_new method of an instance of Tagger and
named it tagging_new in the assumption that we may want to then use
the SAME tagsequence for other classes as well (and may prefer to
avoid doing so by e.g. "OtherClass.__new__ = Object.__new__"). If
that is not the case, then merging the last two lines into

Object.__new__ = staticmethod(Tagger().tag_new)

would probably be preferable.
Alex

Jul 18 '05 #181
Terry Reedy wrote:
"Pascal Costanza" <co******@web.de> wrote in message
news:bl***********@f1node01.rhrz.uni-bonn.de...
What about dealing with an arbitrary number of filters?

[macro snipped]

What about it? Using macros for somewhat simple functions strikes me
as overkill.


You're right. The use of with-collectors makes it more appropriate to
express it as a macro, but of course, one can use a simple function when
you don't stick to with-collectors.
An example:
> (predicate-collect '(-5 -4 -3 -2 -1 0 1 2 3 4 5)
    (function evenp)
    (lambda (n) (< n 0))
    (lambda (n) (> n 3)))
(-4 -2 0 2 4)
(-5 -3 -1)
(5)
(1 3)

In Python:

def multisplit(seq, *preds):
    predn = len(preds)
    bins = [[] for i in range(predn+1)]
    predpends = [(p,b.append) for (p,b) in zip(preds,bins)]
    rpend = bins[predn].append
    for item in seq:
        for pred,pend in predpends:
            if pred(item):
                pend(item)
                break
        else: rpend(item)
    return bins

multisplit(range(-5,6), lambda i: not i%2, lambda i: i<0, lambda i: i>3)

[[-4, -2, 0, 2, 4], [-5, -3, -1], [5], [1, 3]]


For the sake of completeness, here is the Lisp version:

(defun predicate-collect (list &rest predicates)
  (let ((table (make-hash-table))
        (preds (append predicates
                       (list (constantly t)))))
    (dolist (elem list)
      (loop for pred in preds
            until (funcall pred elem)
            finally (push elem (gethash pred table))))
    (mapcar (lambda (pred)
              (nreverse (gethash pred table)))
            preds)))
? (predicate-collect
   '(-5 -4 -3 -2 -1 0 1 2 3 4 5)
   (function evenp)
   (lambda (n) (< n 0))
   (lambda (n) (> n 3)))

((-4 -2 0 2 4) (-5 -3 -1) (5) (1 3))
Pascal

Jul 18 '05 #182
Carlo v. Dango wrote:
I'd humbly suggest that if you can't see *any* reason why someone
would prefer Lisp's syntax, then you're not missing some fact about
the syntax itself but about how other language features are supported
by the syntax.

sure, but it seems like no one was able to let CLOS have
(virtual) inner classes,
methods inside methods,
virtual methods (yeah I know about those stupid generic functions :),
method overloading,


+ Inner classes only make sense when the language requires you to put
method definitions inside of class definitions. It doesn't make a lot of
sense to put a class definition inside another class when it only
consists of field definitions, as is the case in CLOS. (Except for
having the benefit of additional namespaces, but namespaces are handled
differently in Common Lisp.)

+ So what you want is method definitions inside of other methods. Of
course, this is possible. Here is a little toy example that sketches how
you can achieve this:

(defclass person ()
  ((name :accessor name :initarg :name)
   (address :accessor address :initarg :address)))

(defun make-out-method (person)
  (with-slots (name address) person
    (defmethod out ((p (eql person)))
      (format t "Name: ~A; address: ~A~%" name address))))

(defvar *pascal* (make-instance 'person :name "Pascal" :address "Bonn"))

(make-out-method *pascal*)

(out *pascal*)

=> Name: Pascal; address: Bonn

+ All methods in CLOS are virtual. What do you mean?

+ Method overloading is a way to have static dispatch, and this doesn't
fit well with a dynamic language. (Apart from that, static dispatch is a
source for some nasty bugs.)

What you probably really mean here is that there are some strict
compatibility requirements wrt the lambda lists of methods that belong
to the same generic function. I don't think Common Lispers have serious
issues with these requirements.

In general, dynamic type checking in Common Lisp makes these things much
easier than you might think in case you have only considered statically
typed languages so far.
A decent API (I tried playing with it.. it doesn't even have a freaking
date library as standard ;-p
No language with an official standard (ANSI, ISO, etc.) defines
everything you might ever need in its standard. That's simply not
possible. Standardized languages rely on vendor support, and more often
than not, community-driven de-facto standards emerge.

Single-vendor languages follow a totally different approach in this
regard. You are comparing apples and oranges here.

One can have a debate about language standards vs. single-vendor
languages, but that's a different issue altogether.

Baseline: If you are looking for a decent date library, check out what
Common Lisp vendors have to offer and/or what is available from third
parties.
Yes, I agree that compile-time macro expansion is a nice thing.
However, if I want to do some serious changes to the structure of
objects and classes (i.e. create a new kind of objects) then I have to
spend a long time finding out how the CLOS people hacked together their
representation of classes, methods, method call etc... it has been far
easier for me to just do some small changes using __getattribute__ and
metaclasses in Python. So in that respect I'm not really sure the macro
idea is advantageous for other than 'straight away' macros...
Are you sure that you are not confusing macros and the CLOS MOP here?
(Your remarks are too general to be able to comment on this.)
yes this mail is provocative.. please count slowly to 10 before replying
if you disagree with my point of view (and I know Pascal will disagree
;-) ... not that I ever seen him angry ;-)


grrr

;)
Pascal

Jul 18 '05 #183
Dirk Thierbach wrote:

you can use macros to
do everything one could use HOFs for (if you really want).


Really? What about arbitrary recursion?

--
Andreas Rossberg, ro******@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.

Jul 18 '05 #184
james anderson <ja************@setf.de> wrote:
> is the[re] no advantage to being able to do either - or both - as the
> occasion dictates?

Of course it's good to be able to do either. And Lisp macros are a
powerful tool, and it's good to have it when you need it. But it's
somewhat annoying to frequently read grossly exaggerated claims like
"lots of things you can do with Lisp macros are impossible to do in
any other language" when you can do a good part of these things even in
Lisp without macros.
The interesting part is that most Lisp'ers don't seem
to use them, or even to know that you can use them, and use macros instead.
while the first assertion might well be born out by a statistical
analysis of, for example, open-source code, i'm curious how one
reaches the second conclusion.
I don't have statistical evidence, this is just my personal
impression. On the one hand, there are the above claims, on the other
hand, I see code that would (IMHO) much more simple and elegant with
HOFs, but instead imperative assignment is used, or macros. So I then
I have the impression that the programmer doesn't really know how to
use HOFs, otherwise he would have used them. Maybe that's wrong and
they all do that on purpose, or because they don't like HOFs (like you
seem to do), but somehow this is difficult to imagine.
The advantage of HOFs over macros is simplicity: You don't need
additional language constructs

when did common-lisp macros become an "additional language construct"?
It's "additional" in the sense that you can write programs without it,
and that different Lisp dialects use a different syntax and semantics
for macros. HOFs on the other hand are pure lambda calculus, and every
functional language has them.
(which may be different even for different Lisp
dialects, say), and other tools (like type checking) are available for
free; and the programmer doesn't need to learn an additional concept.

doesn't that last phrase contradict the previous one?
I don't see any contradiction; maybe you can be more explicit?
i do admit to infrequent direct use of higher-order functions. one
reason is that there is little advantage to articulating the
creation of functions which have dynamic extent only,
Why? The whole point (or one of them) of functional languages is that
functions are first class, and you can easily use them (and you
frequently do) in 'map', 'fold', or more complicated HOFs. It's simple,
elegant, reusable, type-safe, and avoids unnecessary state. A definition
like

sum = foldr (+) 0

in Haskell is a lot better than doing an explicit loop. If you don't
use HOFs at all, then IMHO you're not doing proper functional
programming.
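For Python readers following this thread, the same fold can be sketched
with the standard 'reduce' HOF ('my_sum' is just an illustrative name,
not anything from the posts above):

```python
from functools import reduce
import operator

# A rough Python analogue of the Haskell one-liner 'sum = foldr (+) 0':
# 'reduce' plays the role of the fold, 'operator.add' the role of (+),
# and 0 is the initial accumulator.
my_sum = lambda xs: reduce(operator.add, xs, 0)

print(my_sum([1, 2, 3, 4]))  # 10
print(my_sum([]))            # 0
```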
so in my use, most hof's are manifest through a macro
interface. it's the same distaste i have about inner and anonymous
java classes.
I don't like inner and anonymous classes either: They're just a clumsy
substitute for anonymous functions, and they have too much syntactic
garbage associated with them.
the other reason is that when i moved from scheme to lisp, in the
process of porting the code which i carried over, it occurred to me
that much of what i was using higher-order functions for could be
expressed more clearly with abstract classes and appropriately
defined generic function method combinations.


Sometimes it is more convenient to use other forms of parametrization
(like classes). Sometimes HOFs are more natural. It really depends on
the concrete example. And of course 'classes' or similar language
features are independent from macros, and many languages provide them,
even if they don't have macros.

- Dirk
Jul 18 '05 #185
Pascal Bourguignon <sp**@thalassa.informatimago.com> writes:
Most probably, you would write a macro named WITH-INFIX and thus
automatically scope the infix part:

(with-infix 1 / x + 1 / ( x ^ 3 ) + 1 / ( x ^ 5 ) )


How about a with-algebraic macro? Someone mentioned that Python uses a
nice algebraic syntax. That would obviously be different from the infix
syntax you illustrated. With infix syntax, it takes some examination to
notice that the above expression is the sum of three fractions. You
have to think about operator precedence and everything. With algebraic
syntax you could see it at a glance:

  1       1       1
----- + ----- + -----
           3       5
x + 1     x       x
Of course, prefix notation would also make it obvious at a glance that
you have the sum of three fractions:

(+ (/ 1 (+ x 1))
(/ 1 (expt x 3))
(/ 1 (expt x 5)))

You're already pretty close to algebraic syntax once you upgrade from
infix to prefix notation. But that actual algebraic syntax must be
really cool in Python. Isn't there some math tool out there that also
does it?

When Python programmers put algebraic formulas in their code, does it
mess up the indentation at all? I'm curious exactly how it works.
Jul 18 '05 #186
Doug Tolton wrote:
...
Alex, this is pure un-mitigated non-sense.
Why, thanks! Nice to see that I'm getting on the nerves of _some_
people, too, not just having them get on mine.
Python's Metaclasses are
far more dangerous than Macro's. Metaclasses allow you to globally
change the underlying semantics of a program.
Nope: a metaclass only affects (part of the semantics of) the
classes that instantiate it. No "globally" about it. When I
write a class I can explicity control what metaclass it uses,
or inherit it. E.g., by writing 'class My(object):', with no
explicit metaclass, I ensure type(My)==type(object). The type
of the built-in named 'object' is the built-in named 'type'
(which most custom metaclasses subclass), which is also the
type of most other built-in types (numbers, strings, list,
tuple, dict, functions, methods, module, file, ...). I.e.,
your assertion is pure un-mitigated FUD.
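A minimal sketch of the locality claim, in modern Python 3 spelling
(the class names Meta, My and Other are made up for illustration):

```python
# A custom metaclass only affects the classes that explicitly use it;
# every other class keeps the built-in metaclass 'type'.
class Meta(type):
    def __new__(mcs, name, bases, ns):
        ns['tagged'] = True          # behaviour injected by this metaclass
        return super().__new__(mcs, name, bases, ns)

class My(metaclass=Meta):
    pass

class Other(object):                 # untouched by Meta
    pass

print(type(My) is Meta)              # True
print(type(Other) is type)           # True
print(hasattr(Other, 'tagged'))      # False: no "global" effect
```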
Macros only allow you
to locally change the Syntax.
"Locally"? Not globally? Care to explain? Each 'invocation'
(expansion) of a macro must occur in a particular locus, sure,
but isn't the _point_ of (e.g.) defining a 'loop-my-way' macro
that every occurrence of 'loop-my-way' is such an expansion?

As for that mention of "the Syntax" AS IF somehow contrasted
with the "underlying semantics" just before, don't even bother
to try explaining: one example of macros' wonders offered by a
particularly vocal and emotional advocate was a macro
'with-condition-maintained' that was somehow supposed to make
whatever alterations might be needed in the control program of
a reactor in order to regulate temperature -- and it was
passed that code as three calls to functions (or expansions
of macros) NOT defined inside it, so how it could possibly
work "only...locally", when to do anything at all it MUST
necessarily find, "expand", and alter the local instantiations
of those functions (or macros)...?!

If that's an example of "only allow you to locally change
the syntax", what would be an example of *EVEN DEEPER
AND MORE PERVASIVE* changes ?!
Your comparison is spurious at best.
What "my comparison" are you blabbering about? My text that
you quoted, and called "pure un-mitigated non-sense", had no
"comparisons", neither my own nor others'. I see YOU attempting
to draw some "comparison" (eminently spurious, to be sure)
between metaclasses and macros...

Your argument simply shows a serious mis-understanding of Macros.
Macros as has been stated to you *many* times are similar to
functions. They allow a certain type of abstraction to remove
extraneous code.
Yeah, right. Kindly explain that 'with-condition-maintained'
example and the exalted claims made and implied for it, then.

Based on your example you should be fully campaigning against
Metaclasses, FP constructs in python and Functions as first class
objects. All of these things add complexity to a given program,
"FUD" and "nonsense" (with or without a hyphen) would be serious
understatements in an attempt to characterize *THIS*. *HOW* do
"functions as first class objects" perform this devilish task of
"adding complexity to a given program", for example?! The extra
complexity would be in rules trying to FORBID normal usage of an
object (passing as argument, returning as value, appending to a
list, ...) based on the object's type. There is obviously no
complexity in saying "_WHATEVER_ x happens to stand for, you
can correctly call somelist.append(x)" [passing x as argument to
a method of the somelist which appends x to the list], for example.
The added complexity would come if you had to qualify this with
"UNLESS ..." for whatever value of ``...''.
however they also reduce the total number of lines. Reducing program
length is to date the only effective method I have seen of reducing
complexity.
For some (handwaving-defined) "appropriate" approach to measuring
"length" (and number of lines is most definitely not it), it is ONE
important way. But you're missing another crucial one, which is
the count of interactions, actual and potential, between "parts"
of the program -- the key reason why global effects do not in fact
effectively reduce complexity, but rather bid fair to increase it,
even though they might often "reduce the total [[length]]", is
exactly this. E.g., if large parts of my program needed all kinds
of comparisons between strings (including comparison-related
functionality such as hashing) to be case-insensitive, it might
make my program 'shorter' if I could set case insensitivity as
the global default -- but it might easily mess up totally unrelated
and otherwise stable modules that rely on the usual case sensitive
operations, causing weird, hard-to-trace malfunctionings. I've
mentioned my youthful APL experiences: with its quad-IO to globally
set index origin for arrays, and its quad-I forget what to globally
set comparison tolerance in all comparisons between floating point
numbers, APL was a prime example of this (among other things,
reusability-destroying) global-effects risk. Sure, it was cool
and made my program shorter to be able to check if "a < b" and
have this IMPLICITLY mean "to within N significant digits" (or
whatever) -- but it regularly broke other otherwise-stable modules
and thus destroyed reuse. Not to mention the mind-boggling effects
when a<b, a>b and a=b can ALL be 'true' at once thanks to the
"to within N significant digits" IMPLICIT proviso...

Complexity is not just program length, and reducing program length
not the only important thing in reducing complexity. Removing
*repetition* (boilerplate), sure, that's nice -- and if there was
a way to constrain macros to ONLY do that (as opposed to ending up
with examples such as 'with-condition-maintained', see above) I
would be very interested in seeing it. I doubt there is one, though.

If you truly believe what you are saying, you really should be
programming in Java. Everything is explicit, and most if not all of


Hmmm, one wonders -- are you a liar, or so totally ignorant of what
you're talking about that you don't even KNOW that one of Java's
most "cherished" features is that the "self." is just about ALWAYS
implicit...? Anyway, in my text which you quoted and characterized
as "pure un-mitigated non-sense" I was speaking of UNIFORMITY as
a plus -- and Java's use of { } for example ensures NON-uniformity
on a lexical plane, since everybody has different ideas about where
braces should go:-).

But I've NEVER argued in favour of boilerplate, of repetitiousness.
I think that the occasional error that you can catch by forcing
redundancy is generally outweighed by all the errors that just
would not be there if the language let me state things "once, and
only once". So, for example, when I write
x = 23
I most definitely don't WANT to have to redundantly state that,
by the way, there is a variable x, and, whaddyaknow, x refers
to an integer. As to whether it makes more sense to later let
the same name x in the same scope refer to OTHER objects (of
the same type; or, of any type) -- I still don't know; maybe
a single-assignment kind of functional language would in fact be
preferable, or maybe Python's relaxed attitude about re-bindings
is best, or maybe something in-between, allowing re-bindings but
only within a single type's items (for "re-bindings" you may
choose to read "assignments" if you wish, I'm not trying to
reopen THAT particular lexical flamewar for further debate;-).

So far, I'm pretty happy with Python's permissive approach to
mutation and re-binding, but I notice I don't mind (differently
from many others) the inability to re-bind SOME references
(e.g., items of tuples, or lexically-outer names) -- and in
Haskell or ML I don't recall ever feeling confined by the
inability to have the same name refer to different values at
successive times (in the same scope). [I _do_ recall some
unease at being unable to mutate "large" data structures, as
opposed to rebinding simple names, so it's not as if I can
claim any natural affinity for the functional [immutable-data]
approach to programming -- I just wonder if perhaps the current
widespread _emphasis_ on rebinding and mutation may not be a
TAD overdone -- but, enough for this aside].

I do, of course, truly believe in what I'm saying -- what
WOULD have stopped me from taking up any of a zillion different
languages, instead of Python, when I started studying it
about four years ago? Indeed, my opportunities for making
money, and the audience for my books, would be vaster if I
had stuck with what I was mainly using at work then (mostly C++,
some Java, VB, Perl), my academic respectability higher if I
had stuck with Haskell or some ML. But while I don't mind
money, nor fans, I care most about other values -- and the
amount to which "Python fits my brain" and makes me most
comfortable and productive meets and exceeds all claims I had
heard to this effect, PLUS, I have experiential proof (enough
to convince me personally, if nobody else:-) that it's just
as comfortable and productive for many others, from programming
newbies to highly experienced professionals. Sure, Java would
let me program my cellphone (which currently doesn't support
Python) -- oh well, I'll have to eschew that crucial pursuit
for a while longer now...
Alex

Jul 18 '05 #187


David Mertz wrote:
My answer sucked in a couple ways.

(1) As Bengt Richter pointed out up-thread, I should have changed David
Eppstein's names 'filter' and 'iter' to something other than the
built-in names.

(2) The function categorize_compose() IS named correctly, but it doesn't
DO what I said it would. If you want to fulfill ALL the filters, you
don't to compose them, but... well, 'all()' them:

| def categorize_jointly(preds, it):
|     results = [[], []]   # results[1]: passes ALL preds; results[0]: the rest
|     for x in it:
|         results[all(preds)(x)].append(x)
|     return results

Now if you wonder what the function 'all()' does, you could download:

http://gnosis.cx/download/gnosis/util/combinators.py

But the relevant part is:

from operator import mul, add, truth
apply_each = lambda fns, args=[]: map(apply, fns, [args]*len(fns))
bools = lambda lst: map(truth, lst)
bool_each = lambda fns, args=[]: bools(apply_each(fns, args))
conjoin = lambda fns, args=[]: reduce(mul, bool_each(fns, args))
all = lambda fns: lambda arg, fns=fns: conjoin(fns, (arg,))

For 'lazy_all()', look at the link.
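For readers on a current interpreter, here is a Python 3 reconstruction
of the combinators quoted above ('reduce' moved to functools and the
old 'apply' builtin is gone, so apply_each is respelled; 'all_' avoids
shadowing the modern 'all' builtin -- these renamings are mine):

```python
from functools import reduce
from operator import mul, truth

# Combinators in the spirit of gnosis.util.combinators:
# conjoin multiplies the truth values of all predicate results,
# so all_(fns)(x) is 1 iff every fn(x) is truthy.
apply_each = lambda fns, args=[]: [f(*args) for f in fns]
bools = lambda lst: list(map(truth, lst))
bool_each = lambda fns, args=[]: bools(apply_each(fns, args))
conjoin = lambda fns, args=[]: reduce(mul, bool_each(fns, args))
all_ = lambda fns: lambda arg: conjoin(fns, (arg,))

is_even = lambda n: n % 2 == 0
is_positive = lambda n: n > 0
print(all_([is_even, is_positive])(4))    # 1: passes every predicate
print(all_([is_even, is_positive])(-4))   # 0: fails the positivity test
```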

See, Python is Haskell in drag.
Come on. Haskell has a nice type system. Python is an application of
Greenspun's Tenth Rule of programming.

Cheers
--
Marco


Yours, David...

--
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons. Intellectual
property is to the 21st century what the slave trade was to the 16th.


Jul 18 '05 #188


Carlo v. Dango wrote:
I'd humbly suggest that if you can't see *any* reason why someone
would prefer Lisp's syntax, then you're not missing some fact about
the syntax itself but about how other language features are supported
by the syntax.

sure, but it seems like no one was able to let CLOS have
(virtual) inner classes,
methods inside methods,
virtual methods (yeah I know about those stupid generic functions :),
method overloading,


From what you are saying it is obvious that you do not know what you
are talking about.

True, you do not have "inner" classes, but that has never stopped
anybody from writing good code. As for your comments on methods and
generic functions it is obvious that you do not know what multiple
dispatching is (yes, there is an ugly hacked up Python library to do
that floating around; I do not know if it will make it into 3.0), so your
comment loses value immediately.
A decent API (I tried playing with it.. it doesn't even have a freaking
date library as standard ;-p

Apart from the fact that the language has GET-UNIVERSAL-TIME,
DECODE-UNIVERSAL-TIME etc etc, you can get a nice and portable (across
all n > 1.8 CL implementations) date parsing library at

http://www.cliki.net/net-telent-date

Yes I agree with the compile time
The term "compile" should already make you think.
macro expansion is a nice thing.
However, if I want to do some serious changes to the structure of
objects and classes (i.e. create a new kind of objects) then I have to
spend a long time finding out how the CLOS people hacked together their
representation of classes, methods, method call etc... it has been far
easier for me to just do some small changes using __getattribute__ and
metaclasses in python. So in that respect Im not really sure the macro
idea is advantageous for other than 'straight away' macros...
And how exactly does CLOS forbid you to do the same? You can do that
using accessor, reader and writer generic functions (ooops, I forgot
that you do not know enough about them :) ) If that is not enough, the
CLOS Metaobject Protocol is available in practically all major CL
implementations (and that is more than the 1.8 Python implementations
out there). And it seems to me that

yes this mail is provocative.. please count slowly to 10 before replying
if you disagree with my point of view (and I know Pascal will disagree
;-) ... not that I ever seen him angry ;-)

I counted until 42 :)

Cheers
--
Marco
Jul 18 '05 #189
On Wed, 08 Oct 2003 14:22:43 GMT, Alex Martelli <al***@aleax.it>
wrote:
Doug Tolton wrote:
...
Alex, this is pure un-mitigated non-sense.
Why, thanks! Nice to see that I'm getting on the nerves of _some_
people, too, not just having them get on mine.


Yes, this discussion is frustrating. It's deeply frustrating to hear
someone without extensive experience with macros arguing why they are
so destructive. Particularly to hear the claim that macros make it
impossible to have large teams work on a project, while at the same time
supporting features in Python that make it far more difficult to share
code between people (i.e., whitespace delimiting, the ability to
rebind internals, metaclasses). Personally I'm not against any of
these features; however, they all suffer from serious potential
drawbacks.
Python's Metaclasses are
far more dangerous than Macro's. Metaclasses allow you to globally
change the underlying semantics of a program.
Nope: a metaclass only affects (part of the semantics of) the
classes that instantiate it. No "globally" about it. When I
write a class I can explicity control what metaclass it uses,
or inherit it. E.g., by writing 'class My(object):', with no
explicit metaclass, I ensure type(my)==type(object). The type
of the built-in named 'object' is the built-in named 'type'
(which most custom metaclasses subclass), which is also the
type of most other built-in types (numbers, strings, list,
tuple, dict, functions, methods, module, file, ...). I.e.,
your assertion is pure un-mitigated FUD.

Please explain to me how according to your logic, a semantic change to
the language is good, but a syntactic change is bad. Your logic is
extremely confusing to me. On the one hand you think Macro's are bad
because I can define a construct such as
(do-something-useful-here
arg1
arg2
arg3)
which operates according to all the regular semantic rules. In fact
there is even an explicit construct that will show me *exactly* what
this code is doing. Yet you apparently think using Metaclasses to
change the underlying semantics is somehow ok, because you can check
to see if it's built-in? So are you saying that using only built-in
constructs are good to use? If someone else gives you a class to use
which uses Metaclasses to change how it operates for some reason or
another, are you ok with that? What if they need to re-bind some of
the builtins to do something? Because you can't prevent that in python
either. In fact any piece of python code that runs on your system
could do that, yet you are ok with that?
Macros only allow you
to locally change the Syntax.


"Locally"? Not globally? Care to explain? Each 'invocation'
(expansion) of a macro must occur in a particular locus, sure,
but isn't the _point_ of (e.g.) defining a 'loop-my-way' macro
that every occurrence of 'loop-my-way' is such an expansion?

yes, but the point is that if I just wrote 'loop-my-way', it doesn't
change existing code in unexpected ways. It only affects new code
that is written using 'loop-my-way'. Whereas re-binding the builtins
*will* change existing code.
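The point about rebinding builtins can be sketched as follows
(count_items is a hypothetical example function, made up here to
illustrate the hazard):

```python
import builtins

# Rebinding a builtin is process-global: code written long before the
# rebinding, in any module, silently changes behaviour too.
def count_items(seq):
    return len(seq)                    # relies on the builtin 'len'

original_len = builtins.len
builtins.len = lambda seq: 0           # the rebinding affects ALL callers
rebound_result = count_items([1, 2, 3])
builtins.len = original_len            # restore the real builtin
print(rebound_result, count_items([1, 2, 3]))   # 0 3
```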
As for that mention of "the Syntax" AS IF somehow contrasted
with the "underlying semantics" just before, don't even bother
to try explaining: one example of macros' wonders offered by a
particularly vocal and emotional advocate was a macro
'with-condition-maintained' that was somehow supposed to make
whatever alterations might be needed in the control program of
a reactor in order to regulate temperature -- and it was
passed that code as three calls to functions (or expansions
of macros) NOT defined inside it, so how it could possibly
work "only...locally", when to do anything at all it MUST
necessarily find, "expand", and alter the local instantiations
of those functions (or macros)...?!
I'm not going to attempt to explain with-condition-maintained, as you
are clearly ignoring the general idea in favor of nitpicking
non-essential details of a hypothetical construct. When you descend
to the statements you have made about that construct, it's no longer
worth continuing discussion of it.

That's my exact problem though, your statements continually brand you
as ignorant of what macros are and how they operate on a fundamental
level, yet for some reason you feel qualified to extol their evils as
if you actually have significant experience with them.
If that's an example of "only allow you to locally change
the syntax", what would be an example of *EVEN DEEPER
AND MORE PERVASIVE* changes ?!
No idea what you are talking about here.
Your comparison is spurious at best.


What "my comparison" are you blabbering about? My text that
you quoted, and called "pure un-mitigated non-sense", had no
"comparisons", neither my own nor others'. I see YOU attempting
to draw some "comparison" (eminently spurious, to be sure)
between metaclasses and macros...

The only reason you think it's spurious is because of your
fundamentally flawed conception of Macros.
Your argument simply shows a serious mis-understanding of Macros.
Macros as has been stated to you *many* times are similar to
functions. They allow a certain type of abstraction to remove
extraneous code.
Yeah, right. Kindly explain that 'with-condition-maintained'
example and the exhalted claims made and implied for it, then.

I thought you didn't want it explained to you? If you are serious
about wanting to know what the *point* of the code snippet was, then
I'll explain it. If on the other hand you are going to make some
ridiculous argument about how with-condition-maintained isn't in fact
hooked to any control room circuitry, then forget it.
Based on your example you should be fully campaigning against
Metaclasses, FP constructs in python and Functions as first class
objects. All of these things add complexity to a given program,
"FUD" and "nonsense" (with or without a hyphen) would be serious
understatements in an attempt to characterize *THIS*. *HOW* do
"functions as first class objects" perform this devilish task of
"adding complexity to a given program", for example?! The extra
complexity would be in rules trying to FORBID normal usage of an
object (passing as argument, returning as value, appending to a
list, ...) based on the object's type. There is obviously no
complexity in saying "_WHATEVER_ x happens to stand for, you
can correctly call somelist.append(x)" [passing x as argument to
a method of the somelist which appends x to the list], for example.
The added complexity would come if you had to qualify this with
"UNLESS ..." for whatever value of ``...''.

Hmm...what if using a class with Metaclasses behaves in a totally
non-standard way? What if a function re-binds the builtins? What if
they overuse FP constructs and nest 50 maps and filters? Are you ok
with all of these things? They are certainly more confusing than
Macros. To make the statement that *any* technology can't be abused
is foolish. To make that claim implies there is no correct usage,
only usage. In other words if there is no correct way to use
Metaclasses or re-bind builtins then any way that someone sees fit to
do it *is* the right way. We all know that is a ridiculous claim.
Macros are like any other sufficiently powerful technology. If they
aren't used right, they will complicate a program not simplify it.

I believe the crux of our difference is that you don't want to give
expressive power because you believe it will be misused. I on the
other hand want to give expressive power because I believe it could be
used correctly most of the time. For the times when it's not, well
that's why I have debugging skills. Sadly not everyone uses looping
the way I would, but using my brain I can figure out what they are
doing.
however they also reduce the total number of lines. Reducing program
length is to date the only effective method I have seen of reducing
complexity.


For some (handwaving-defined) "appropriate" approach to measuring
"length" (and number of lines is most definitely not it), it is ONE


Both from my own experience and from Fred Brooks, it's the only actual
way I've seen of measuring the time it will take to write a program.
important way. But you're missing another crucial one, which is
the count of interactions, actual and potential, between "parts"
of the program -- the key reason why global effects do not in fact
effectively reduce complexity, but rather bid fair to increase it,
even though they might often "reduce the total [[length]]", is
exactly this.
I understand this point very well. That's why I believe in building
layered software, and using good higher-order constructs to achieve
this. As I've said before, your statements reveal your fundamental
misunderstanding of the way macros work. To support metaclasses,
classes, functions, first-class functions etc. as tools for this
concept while at the same time reviling macros is simply showing an
uneducated bias about macros. I wouldn't be surprised to hear you
respond with some argument about how you've read the writings by
people who have used Macros (as you've done in the past), but I
believe you do not have sufficient understanding to make the claims
you are making. If you really understood macros, I don't believe you
would be making such statements.
E.g., if large parts of my program needed all kinds
of comparisons between strings (including comparison-related
functionality such as hashing) to be case-insensitive, it might
make my program 'shorter' if I could set case insensitivity as
the global default -- but it might easily mess up totally unrelated
and otherwise stable modules that rely on the usual case sensitive
operations, causing weird, hard-to-trace malfunctionings. I've
mentioned my youthful APL experiences: with its quad-IO to globally
set index origin for arrays, and its quad-I forget what to globally
set comparison tolerance in all comparisons between floating point
numbers, APL was a prime example of this (among other things,
reusability-destroying) global-effects risk. Sure, it was cool
and made my program shorter to be able to check if "a < b" and
have this IMPLICITLY mean "to within N significant digits" (or
whatever) -- but it regularly broke other otherwise-stable modules
and thus destroyed reuse. Not to mention the mind-boggling effects
when a<b, a>b and a=b can ALL be 'true' at once thanks to the
"to within N significant digits" IMPLICIT proviso...

Well, that was a long-winded digression into something that is
completely unrelated to macros. It seems like a good argument for why
rebinding the builtins is bad, though.
Complexity is not just program length, and reducing program length is
not the only important thing in reducing complexity. Removing
*repetition* (boilerplate), sure, that's nice -- and if there was
a way to constrain macros to ONLY do that (as opposed to ending up
with examples such as 'with-condition-maintained', see above) I
would be very interested in seeing it. I doubt there is one, though.
I agree that reducing complexity is the goal. I disagree that you can
*ever* guarantee a higher-order construct is always used correctly,
though.
If you truly believe what you are saying, you really should be
programming in Java. Everything is explicit, and most if not all of
Hmmm, one wonders -- are you a liar, or so totally ignorant of what
you're talking about that you don't even KNOW that one of Java's
most "cherished" features is that the "self." is just about ALWAYS
implicit...? Anyway, in my text which you quoted and characterized
as "pure un-mitigated non-sense" I was speaking of UNIFORMITY as
a plus -- and Java's use of { } for example ensures NON-uniformity
on a lexical plane, since everybody has different ideas about where
braces should go:-).

Where braces should go is a trivial issue. However, if braces are an
issue that seriously concerns you, then I can see why macros are giving
you a heart attack.
But I've NEVER argued in favour of boilerplate, of repetitiousness.
I think that the occasional error that you can catch by forcing
redundancy is generally outweighed by all the errors that just
would not be there if the language let me state things "once, and
only once". So, for example, when I write
x = 23
I most definitely don't WANT to have to redundantly state that,
by the way, there is a variable x, and, whaddyaknow, x refers
to an integer. As to whether it makes more sense to later let
the same name x in the same scope refer to OTHER objects (of
the same type; or, of any type) -- I still don't know; maybe
a single-assignment kind of functional language would in fact be
preferable, or maybe Python's relaxed attitude about re-bindings
is best, or maybe something in-between, allowing re-bindings but
only within a single type's items (for "re-bindings" you may
choose to read "assignments" if you wish, I'm not trying to
reopen THAT particular lexical flamewar for further debate;-).

So far, I'm pretty happy with Python's permissive approach to
mutation and re-binding, but I notice I don't mind (differently
from many others) the inability to re-bind SOME references
(e.g., items of tuples, or lexically-outer names) -- and in
Haskell or ML I don't recall ever feeling confined by the
inability to have the same name refer to different values at
successive times (in the same scope). [I _do_ recall some
unease at being unable to mutate "large" data structures, as
opposed to rebinding simple names, so it's not as if I can
claim any natural affinity for the functional [immutable-data]
approach to programming -- I just wonder if perhaps the current
widespread _emphasis_ on rebinding and mutation may not be a
TAD overdone -- but, enough for this aside].

I do, of course, truly believe in what I'm saying -- what
WOULD have stopped me from taking up any of a zillion different
languages, instead of Python, when I started studying it
about four years ago? Indeed, my opportunities for making
money, and the audience for my books, would be vaster if I
had stuck with what I was mainly using at work then (mostly C++,
some Java, VB, Perl), my academic respectability higher if I
had stuck with Haskell or some ML. But while I don't mind
money, nor fans, I care most about other values -- and the
amount to which "Python fits my brain" and makes me most
comfortable and productive meets and exceeds all claims I had
heard to this effect, PLUS, I have experiential proof (enough
to convince me personally, if nobody else:-) that it's just
as comfortable and productive for many others, from programming
newbies to highly experienced professionals. Sure, Java would
let me program my cellphone (which currently doesn't support
Python) -- oh well, I'll have to eschew that crucial pursuit
for a while longer now...

The ironic thing is that I'm not bashing Python. I really like
python. It's a great language. I think we both use Python and Lisp
(for me) for the same reasons. If I wanted a higher paying job I'd be
using Java. I have aspirations to write books as well, I agree that
Python and Lisp aren't the biggest markets, and yet I use them because
they fit my brain also.

What I am in point of fact upset by is your constant barrage against
macros. I feel that your stance is based on ignorance and
misinformation. You certainly don't have significant first-hand
exposure to Lisp-style macros or you wouldn't be making statements
that are so obviously incorrect. Why don't you seriously try to
learn them? If you don't care to, why argue about them so much? I
haven't seen anyone bring up (I could've missed it) putting macros
into Python again. I personally don't think macros would work very
well in Python, at least not as well as they do in Lisp. So
understanding that I'm not pushing for macros in Python, why are you
so vehement against them? Are you on a campaign to get macros out of
Lisp?

Doug
Doug Tolton
(format t "~a@~a~a.~a" "dtolton" "ya" "hoo" "com")
Jul 18 '05 #190
Andreas Rossberg <ro******@ps.uni-sb.de> wrote:
Dirk Thierbach wrote:
you can use macros to do everything one could use HOFs for (if you
really want).


I should have added: As long as it should execute at compile time, of
course.
Really? What about arbitrary recursion?


I don't see the problem. Maybe you have an example? I am sure the
Lisp'ers here can come up with a macro solution for it.

- Dirk

Jul 18 '05 #191
On Wed, 08 Oct 2003 15:31:21 GMT, Doug Tolton <do**@nospam.com> wrote:
On Wed, 08 Oct 2003 14:22:43 GMT, Alex Martelli <al***@aleax.it> wrote:
Doug Tolton wrote:
...
Alex, this is pure un-mitigated non-sense.


Why, thanks! Nice to see that I'm getting on the nerves of _some_
people, too, not just having them get on mine.


Yes, this discussion is frustrating. It's deeply frustrating to hear
someone without extensive experience with macros arguing why they are
so destructive.


You know, I think that this thread has so far set a comp.lang.* record for
civility in the face of a massively cross-posted language comparison
thread. I was even wondering if it was going to die a quiet death, too.

Ah well, We all knew it was too good to last. Have at it, lads!

Common Lisp is an ugly language that is impossible to understand with
crufty semantics

Scheme is only used by ivory-tower academics and is irrelevant to real
world programming

Python is a religion that worships at the feet of Guido van Rossum,
combining the syntactic flaws of lisp with a bad case of feeping
creaturisms taken from languages more civilized than itself

There. Is everyone pissed off now?

david rush
--
Taking bets on how many more messages before Godwin's law kicks in...
Jul 18 '05 #192
<posted & mailed>

Alex Martelli wrote:
Daniel P. M. Silva wrote:
...
You still can't add new binding constructs or safe parameterizations like
a with_directory form:

with_directory("/tmp", do_something())

Where do_something() would be evaluated with the current directory set to
"/tmp" and the old pwd would be restored afterward (even in the event of
an exception).
Right: you need to code this very differently, namely:
with_directory("/tmp", do_something)
*deferring* the call to do_something to within the with_directory
function. Python uses strict evaluation order, so if and when you
choose to explicitly CALL do_something() it gets called,

So, I would code:

def with_directory(thedir, thefunc, *args, **kwds):
    pwd = os.getcwd()
    os.chdir(thedir)
    try: return thefunc(*args, **kwds)
    finally: os.chdir(pwd)

this is of course a widespread idiom in Python, e.g. see
unittest.TestCase.assertRaises for example.
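For concreteness, here is a self-contained sketch of the idiom (not from the original posts; it uses `tempfile.gettempdir()` rather than a hard-coded "/tmp" so it runs anywhere), including the chdir into the target directory and the restore-on-exception behaviour the original request asked for:

```python
import os
import tempfile

def with_directory(thedir, thefunc, *args, **kwds):
    """Run thefunc with the current directory temporarily set to thedir."""
    pwd = os.getcwd()
    os.chdir(thedir)
    try:
        return thefunc(*args, **kwds)
    finally:
        os.chdir(pwd)  # old pwd restored even if thefunc raises

start = os.getcwd()
inside = with_directory(tempfile.gettempdir(), os.getcwd)
print(inside)                  # the temporary directory
print(os.getcwd() == start)    # True: old pwd restored
```

The callable is passed uncalled -- `os.getcwd`, not `os.getcwd()` -- and `with_directory` decides when to invoke it.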

The only annoyance here is that there is no good 'literal' form for
a code block (Python's lambda is too puny to count as such), so you
do have to *name* the 'thefunc' argument (with a 'def' statement --
Python firmly separates statements from expressions).


That was my point. You have to pass a callable object to with_directory,
plus you have to save in that object any variables you might want to use,
when you'd rather say:

x = 7
with_directory("/tmp",
print "well, now I'm in ", os.getpwd()
print "x: ", x
x = 3
)
Last year -- I think at LL2 -- someone showed how they added some sort of
'using "filename":' form to Python... by hacking the interpreter.


A "using" statement (which would take a specialized object, surely not
a string, and call the object's entry/normal-exit/abnormal-exit methods)
might often be a good alternative to try/finally (which makes no provision
for 'entry', i.e. setting up, and draws no distinction between normal
and abnormal 'exits' -- often one doesn't care, but sometimes yes). On
this, I've seen some consensus on python-dev; but not (yet?) enough on
the details. Consensus is culturally important, even though in the end
Guido decides: we are keen to ensure we all keep using the same language,
rather than ever fragmenting it into incompatible dialects.


The point is that the language spec itself is changed (along with the
interpreter in C!) to add that statement. I would be happier if I could
write syntax extensions myself, in Python, and if those extensions worked
on CPython, Jython, Python.Net, Spy, etc.

Some people use Python's hooks to create little languages inside Python
(eg. to change the meaning of instantiation), which are not free of
problems:

class Object(object):
    def __init__(this, *args, **kwargs):


[invariably spelt as 'self', not 'this', but that's another issue]
        this.rest = args
        this.keys = kwargs

def new_obj_id(count=[0]):
    count[0] = count[0] + 1
    return count[0]

def tag_obj(obj, id):
    obj.object_id = id
    return obj

def obj_id(obj): return obj.object_id

type.__setattr__(Object, "__new__", staticmethod(
    lambda type, *args: tag_obj(object.__new__(type), new_obj_id())))

...
# forgot to check for this case...
print Object(foo="bar")


It's not an issue of "checking": you have written (in very obscure
and unreadable fashion) a callable which you want to accept (and
ignore) keyword arguments, but have coded it in such a way that it
in fact refuses keyword arguments. Just add the **kwds after the
*args. This bug is not really related to "little languages" at all:
you might forget to specify arguments which you do want your callable
to accept and ignore in a wide variety of other contexts, too.
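Alex's fix in miniature (a made-up toy, not code from the thread): a callable declared with `*args` alone refuses keyword arguments, while adding `**kwds` after `*args` lets it accept and ignore them.

```python
# The bug Alex points out: *args alone rejects keyword arguments.
broken = lambda *args: len(args)

# The fix: add **kwds after *args to accept (and ignore) them.
fixed = lambda *args, **kwds: len(args)

try:
    broken(1, 2, foo="bar")
except TypeError:
    print("broken() refuses keyword arguments")

print(fixed(1, 2, foo="bar"))  # prints 2
```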


I think changing the meaning of __new__ is a pretty big language
modification...

- Daniel
Jul 18 '05 #193
Pascal Bourguignon <sp**@thalassa.informatimago.com> writes:
Well, I would say that kanji is badly designed, compared to latin
alphabet. The voyels are composed with consones (with diacritical
marks) and consones are written following four or five groups with
additional diacritical marks to distinguish within the groups. It's
more a phonetic code than a true alphabet.


Huh? You seem to be confused (BTW French is misleading here: it's vowels and
consonants in English). *Kanji* are not phonetic, you seem to be talking about
*kana*. And the blanket claim that Japanese spelling in kana is badly designed
compared to say, English orthography seems really rather dubious to me.

'as
Jul 18 '05 #194


Dirk Thierbach wrote:
james anderson <ja************@setf.de> wrote:
Matthias Blume wrote:
Most of the things that macros can do can be done with HOFs with just
as little source code duplication as with macros. (And with macros
only the source code does not get duplicated, the same not being true
for compiled code. With HOFs even executable code duplication is
often avoided -- depending on compiler technology.)


is the no advantage to being able to do either - or both - as the
occasion dictates?

I can't parse this sentence, but of course you can also use HOFs in Lisp
(all flavours). The interesting part is that most Lisp'ers don't seem
to use them, or even to know that you can use them, and use macros instead.

The only real advantage of macros over HOFs is that macros are guaranteed
to be executed at compile time. A good optimizing compiler (like GHC
for Haskell) might actually also evaluate some expressions including
HOFs at compile time, but you have no control over that.

i'd be interested to read examples of things which are better done
with HOF features which are not available in CL.

HOFs can of course be used directly in CL, and you can use macros to
do everything one could use HOFs for (if you really want).

The advantage of HOFs over macros is simplicity:


As R^nRS shows, simplicity leads to language specs without useful things
(like records/struct) in them.

You want to make things simple, not any simpler (was it Einstein who
said that?)
You don't need additional
language constructs (which may be different even for different Lisp
dialects, say),
As we well know, there is now one dominant Lisp, which is Common by
name. (No. ELisp does not count as you do (require 'cl) in your .emacs
file) This argument is moot.
and other tools (like type checking) are available for
free;
Yes. Type Checking is in CMUCL/SBCL.
and the programmer doesn't need to learn an additional concept.


The programmer needs to learn to use the tool at its best. If your tool
is limited you just have to learn less.

Cheers
--
Marco

Jul 18 '05 #195

tf*@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
(defconstant -> '-> "More sugar")

;; Example usage
(convert *thing* -> (class-of *other-thing*))

Of course, these are lame examples, but they show that Lisp *can*
incorporate little ascii-picture-symbols. Good examples would
necessarily be very domain-dependant.


Have a look at DrScheme. There, you can use real images (gif, jpg) as
values. It should not be too difficult to use them as symbol names
too...

--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Jul 18 '05 #196
Daniel P. M. Silva wrote:
...
with_directory("/tmp", do_something())
... Right: you need to code this very differently, namely:
with_directory("/tmp", do_something) ... The only annoyance here is that there is no good 'literal' form for
a code block (Python's lambda is too puny to count as such), so you ...
That was my point. You have to pass a callable object to with_directory,
plus you have to save in that object any variables you might want to use,
when you'd rather say:

x = 7
with_directory("/tmp",
print "well, now I'm in ", os.getpwd()
print "x: ", x
x = 3
)
I'm definitely NOT sure I'd "rather" use this specific syntax to pass
a block of code to with_directory (even in Ruby, I would delimit the
block of code, e.g. with do/end).

I *AM* sure I would INTENSELY HATE not knowing whether

foo('bar', baz())

is being evaluated by normal strict rules -- calling baz and passing
the result as foo's second argument -- or rather a special form in
which the 'baz()' is "a block of code" which foo may execute zero or
more times in special ways -- depending on how foo happens to be
bound at this time. *SHUDDER*. Been there, done that, will NEVER
again use a language with such ambiguities if I can possibly help it.
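Python's strict rule is easy to demonstrate (a toy sketch; `foo` and `baz` here are hypothetical names): in `foo('bar', baz())`, `baz` always runs exactly once, before `foo` is even entered, regardless of what `foo` is bound to.

```python
calls = []

def baz():
    calls.append("baz")
    return 42

def foo(tag, value):
    calls.append("foo")
    return (tag, value)

result = foo("bar", baz())
print(calls)    # ['baz', 'foo']: baz evaluated first, exactly once
print(result)   # ('bar', 42)
```

No binding of `foo` can turn its second argument into an unevaluated block of code -- that is precisely the ambiguity Alex says he wants to avoid.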

the details. Consensus is culturally important, even though in the end
Guido decides: we are keen to ensure we all keep using the same language,
rather than ever fragmenting it into incompatible dialects.


The point is that the language spec itself is changed (along with the
interpreter in C!) to add that statement. I would be happier if I could
write syntax extensions myself, in Python, and if those extensions worked
on CPython, Jython, Python.Net, Spy, etc.


So, I hope the cultural difference is sharply clear. To us, consensus
is culturally important, we are keen to ensure we all keep using the
same language; *you* would be happier if you could use a language that
is different from those of others, thanks to syntax extensions you
write yourself. Since I consider programming to be mainly a group
activity, and the ability to flow smoothly between several groups to
be quite an important one, I'm hardly likely to appreciate the divergence
in dialects encouraged by such possibilities, am I?

I think changing the meaning of __new__ is a pretty big language
modification...


Surely you're jesting? Defining a class's __new__ and __init__
just means defining the class's *constructor*, to use the term
popular in C++ or Java; as I can change any other method, so I
can change the constructor, of course (classes being mutable
objects -- by design, please note, not by happenstance). "Pretty
big language modification" MY FOOT, with all due respect...
Alex

Jul 18 '05 #197
"Carlo v. Dango" <oe**@soetu.eu> writes:
I'd humbly suggest that if you can't see *any* reason why someone
would prefer Lisp's syntax, then you're not missing some fact about
the syntax itself but about how other language features are supported
by the syntax.
sure, but it seems like no one was able to let CLOS have
(virtual) inner classes,


This is kind of like saying we weren't able to have setjmp/longjmp;
yeah, but doing so makes no sense.
methods inside methods,
There was a proposal to add lexically-scoped methods, but it got
tossed because no one liked it.
virtual methods (yeah I know about those stupid generic functions :),
As has been already stated, we only have "virtual methods".
method overloading,
How could you have both noncongruent argument lists, and multiple
dispatch? With an either/or like that, Lisp chose the right one.
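Thomas's point can be sketched in Python (a toy with invented names like `defmethod` and `collide`, nothing like a full CLOS): multiple dispatch selects a method on the runtime classes of *all* arguments, which only works cleanly when every method of a generic function has a congruent argument list.

```python
# A toy generic function with CLOS-style multiple dispatch, keyed on
# the classes of both arguments.
_methods = {}

def defmethod(*types):
    def register(fn):
        _methods[types] = fn
        return fn
    return register

def collide(a, b):
    for (ta, tb), fn in _methods.items():
        if isinstance(a, ta) and isinstance(b, tb):
            return fn(a, b)
    raise TypeError("no applicable method")

class Asteroid: pass
class Ship: pass

@defmethod(Asteroid, Ship)
def _(a, b): return "ship destroyed"

@defmethod(Asteroid, Asteroid)
def _(a, b): return "rocks bounce"

print(collide(Asteroid(), Ship()))  # dispatches on both argument types
```

With overloading-style noncongruent signatures, "the classes of all the arguments" would not even be a well-defined dispatch key -- hence the either/or Thomas describes.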
A decent API (I tried playing with it.. it doesn't even have a freaking
date library as standard ;-p)
Who does? Have all that stuff standard, I mean. Python doesn't even
have a standard. We have some date support in ANSI -- get the rest from your
vendor (commercial or free).
yes this mail is provocative..


Seems more ignorant, to me. I guess when you're conversing on an
archived forum, that can seem like the same thing, though.

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Jul 18 '05 #198
Alexander Schmolck <a.********@gmx.net> writes:
Pascal Bourguignon <sp**@thalassa.informatimago.com> writes:
The voyels are composed with consones


And the blanket claim that Japanese spelling in kana is badly
designed compared to say, English orthography seems really rather
dubious to me.


And you can tell by his phrasing that he was straining against writing
that in English ;-)

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Jul 18 '05 #199
Alex Martelli <al***@aleax.it> writes:
Yeah, right. Kindly explain that 'with-condition-maintained'
example and the exalted claims made and implied for it, then.


I posted some lisp macros used in a production environment.
(the msgid <ll**********@comcast.net> )

The `with-condition-maintained' was a hypothetical, but the ones I
posted are in actual use. The claim is that these macros
significantly *reduce* the intellectual burden of the people that
write and maintain the code that uses them.

Jul 18 '05 #200
