Python syntax in Lisp and Scheme

I think everyone who used Python will agree that its syntax is
the best thing going for it. It is very readable and easy
for everyone to learn. But, Python does not have very good
macro capabilities, unfortunately. I'd like to know if it may
be possible to add a powerful macro system to Python, while
keeping its amazing syntax, and if it could be possible to
add Pythonic syntax to Lisp or Scheme, while keeping all
of the functionality and convenience. If the answer is yes,
would many Python programmers switch to Lisp or Scheme if
they were offered indentation-based syntax?
Jul 18 '05
On Wed, 8 Oct 2003, Alex Martelli wrote:
Daniel P. M. Silva wrote:
with_directory("/tmp", do_something()) ...

Right: you need to code this very differently, namely:

with_directory("/tmp", do_something) ...

The only annoyance here is that there is no good 'literal' form for
a code block (Python's lambda is too puny to count as such), so you ...
That was my point. You have to pass a callable object to with_directory,
plus you have to save in that object any variables you might want to use,
when you'd rather say:

x = 7
with_directory("/tmp",
print "well, now I'm in ", os.getcwd()
print "x: ", x
x = 3
)


I'm definitely NOT sure I'd "rather" use this specific syntax to pass
a block of code to with_directory (even in Ruby, I would delimit the
block of code, e.g. with do/end).

I *AM* sure I would INTENSELY HATE not knowing whether

foo('bar', baz())

is being evaluated by normal strict rules -- calling baz and passing
the result as foo's second argument -- or rather a special form in
which the 'baz()' is "a block of code" which foo may execute zero or
more times in special ways -- depending on how foo happens to be
bound at this time. *SHUDDER*. Been there, done that, will NEVER
again use a language with such ambiguities if I can possibly help it.
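The callable-passing style being discussed can be sketched in today's Python. Note that `with_directory` is a hypothetical helper written for this illustration, not a standard-library function; the point is that a closure captures free variables like `x` automatically, so nothing has to be saved into the callable by hand:

```python
import os

def with_directory(path, func):
    """Call func with the working directory temporarily set to path."""
    old = os.getcwd()
    os.chdir(path)
    try:
        return func()
    finally:
        os.chdir(old)  # restore the old directory no matter what func does

x = 7

def body():
    # A closure: the free variable x is captured automatically.
    print("well, now I'm in", os.getcwd())
    print("x:", x)

with_directory("/tmp", body)
```

Because `body` is passed uncalled, `with_directory("/tmp", body)` is unambiguously a normal strict call: `body` is evaluated to a function object and the helper decides when to invoke it.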


My mistake in choosing stx_id(stmt). Maybe foo{'bar', baz()} would be
better, or
foo bar:
baz()
the details. Consensus is culturally important, even though in the end
Guido decides: we are keen to ensure we all keep using the same language,
rather than ever fragmenting it into incompatible dialects.


The point is that the language spec itself is changed (along with the
interpreter in C!) to add that statement. I would be happier if I could
write syntax extensions myself, in Python, and if those extensions worked
on CPython, Jython, Python.Net, Spy, etc.


So, I hope the cultural difference is sharply clear. To us, consensus
is culturally important, we are keen to ensure we all keep using the
same language; *you* would be happier if you could use a language that
is different from those of others, thanks to syntax extensions you
write yourself. Since I consider programming to be mainly a group
activity, and the ability to flow smoothly between several groups to
be quite an important one, I'm hardly likely to appreciate the divergence
in dialects encouraged by such possibilities, am I?


Yes, we disagree about how we use a language. It's too bad the choice
isn't left to the programmer, though.
I think changing the meaning of __new__ is a pretty big language
modification...


Surely you're jesting? Defining a class's __new__ and __init__
just means defining the class's *constructor*, to use the term
popular in C++ or Java; as I can change any other method, so I
can change the constructor, of course (classes being mutable
objects -- by design, please note, not by happenstance). "Pretty
big language modification" MY FOOT, with all due respect...


__init__ is the constructor, but isn't __new__ the allocator or factory?

A mutable __new__ means you are not guaranteed that Cls() gives you a Cls
object.
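Daniel's point can be made concrete in a few lines (a contrived sketch; no sane program rebinds __new__ like this, but the language permits it):

```python
class Cls:
    pass

# __new__ is the allocator/factory: its return value is what the call
# Cls() actually produces, and __init__ only runs if that value is a
# Cls instance.  Since classes are mutable, __new__ can be rebound:
Cls.__new__ = staticmethod(lambda cls, *args: "not a Cls at all")

obj = Cls()
print(type(obj))  # <class 'str'> -- Cls() no longer returns a Cls
```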

- Daniel
Jul 18 '05 #201
Doug Tolton:
I believe the crux of our difference is that you don't want to give
expressive power because you believe it will be misused. I on the
other hand want to give expressive power because I believe it could be
used correctly most of the time. For the times when it's not, well
that's why I have debugging skills. Sadly not everyone uses looping
the way I would, but using my brain I can figure out what they are
doing.
That point has been made over and over to you. The argument is
that expressive power for a single developer can, for a group of
developers and especially those comprised of people with different
skill sets and mixed expertise, reduce the overall effectiveness of the
group.

If this is indeed the crux, then any justification which says "my brain"
and "I" is suspect, because that explicitly ignores the argument. By
comparison, Alex's examples bring up
- teaching languages to others
- interference between his code and others' (the APL example)
- production development
"Imagine a group of, say, a dozen programmers, working together ...
to develop a typical application program of a few tens of thousands
of function points -- developing about 100,000 new lines of delivered
code plus about as much unit tests, and reusing roughly the same amount"
- writing books for other people

which at the very least suggests the expertise and background
by which to evaluate the argument. It may be that his knowledge of
how and when to use macros is based on the statements of people he
respects rather than personal experience, but given the discussions on
this topic and the exhibited examples of when macros are appropriately
used, it surely does seem that metaclasses, higher-level functions, and
iterators can be used to implement a solution with a roughly equal amount
of effort and clarity. The only real advantage to macros I've seen is the
certainty of "compile-time" evaluation, hence better performance than
run-time evaluation.

Alex:
For some (handwaving-defined) "appropriate" approach to measuring
"length" (and number of lines is most definitely not it), it is ONE
You: Both from my experience and Fred Brooks it's the only actual way I've
seen of measuring the time it will take to write a program.
You mean "estimating"; for measuring I suspect you can use a
combination of a clock and a calendar. (This from a guy who recently
posted that the result of 1+1 is 4. ;)

You should use McConnell as a more recent reference than Brooks.
(I assume you are arguing from Mythical Man Month? Or from his
more recent writings?) In any case, in Rapid Development McConnell
considers various alternatives then suggests using LOC, on the view
that LOC is highly correlated with function points (among 3rd
generation programming languages! see below) and that LOC has a
good correlation to development time, excluding extremes like APL
and assembly. However, his main argument is that LOC is an easy
thing to understand.

The tricky thing about using McConnell's book is the implications
of table 31-2 in the section "Using Rapid Development Languages",
which talks about languages other than the 3rd generation ones used
to make his above estimate.

Table 31-2 shows the approximate "language levels" for a wider
variety of languages than Table 31-1. The "language level" is
intended to be a more specific replacement for the level implied
by the phrases "third-generation language" and "fourth-generation
language." It is defined as the number of assembler statements
that would be needed to replace one statement in the higher-level
language. ...

The numbers ... are subject to a lot of error, but they are the best
numbers available at this time, and they are accurate enough to
support this point: from a development point of view, you should
implement your projects in the highest-level language possible. If
you can implement something in C, rather than assembler, C++
rather than C, or Visual Basic rather than C++, you can develop
faster.

And here's Table 31-2

                                 Statements per
    Language             Level   Function Point
    --------             -----   --------------
    Assembler              1          320
    Ada 83                 4.5         70
    AWK                   15           25
    C                      2.5        125
    C++                    6.5         50
    Cobol (ANSI 85)        3.5         90
    dBase IV               9           35
    spreadsheets         ~50            6
    Focus                  8           40
    Fortran 77             3          110
    GW Basic               3.25       100
    Lisp                   5           65
    Macro assembler        1.5        215
    Modula 2               4           80
    Oracle                 8           40
    Paradox                9           35
    Pascal                 3.5         90
    Perl                  15           25
    Quick Basic 3          5.5         60
    SAS, SPSS, etc.       10           30
    Smalltalk (80 & V)    15           20
    Sybase                 8           40
    Visual Basic 3        10           30

Source: Adapted from data in 'Programming Languages
Table' (Jones 1995a)
I'll use Perl as a proxy for Python; given that that was pre-OO
Perl I think it's reasonable that that sets a minimum level for
Python. Compare the Lisp and Perl numbers

Lisp 5 65
Perl 15 25

and the difference in "statements per function point" (which isn't
quite "LOC per function point") is striking. It suggests that
Python is more than twice as concise as Lisp, so if LOC is
used as the estimate for implementation time then it's a strong
recommendation to use Python instead of Lisp because it
will take less time to get the same thing done. And I do believe
Lisp had macros back in the mid-1990s.
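The arithmetic behind "more than twice as concise" is a simple ratio of the two figures quoted above, with Perl standing in for Python (a back-of-the-envelope sketch, nothing more):

```python
# Statements per function point, from Table 31-2 (Jones 1995a):
lisp_stmts_per_fp = 65
perl_stmts_per_fp = 25  # Perl used here as a proxy for Python

# Lisp statements needed per Perl statement for the same functionality:
ratio = lisp_stmts_per_fp / perl_stmts_per_fp
print(ratio)  # 2.6
```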

Sadly, this is a secondary reference and I don't have a
copy of
Jones, Capers, 1995a. "Software Productivity Research
Programming Languages Table," 7th ed. March 1995.
and the referenced URL of www.spr.com/library/langtbl.htm
is no longer valid and I can't find that table on their site.

Well that was a long winded digression into something that is
completely unrelated to Macros. Seems like a good argument why
re-binding the builtins is bad, though.


It was a long winded digression into how LOC can be a
wrong basis by which to judge the appropriateness of a
language feature.
a plus -- and Java's use of { } for example ensures NON-uniformity
on a lexical plane, since everybody has different ideas about where
braces should go:-).

Where braces should go is a trivial issue. However if braces are an
issue that seriously concerns you then I can see why macros are giving
you a heart attack.


See that smiley and the "--"? This is a throwaway point at the end
of the argument, and given Alex's noted verboseness, if it was a
serious point he would have written several pages on the topic.

Andrew
da***@dalkescientific.com
Jul 18 '05 #202
Marco Antoniotti:
Come on. Haskell has a nice type system. Python is an application of
Greenspun's Tenth Rule of programming.


Oh? Where's the "bug-ridden" part? :)

That assertion also ignores the influence of user studies (yes,
research into how the syntax of a language affects its readability
and understandability) on Python's development; a topic which
is for the most part ignored in Lisp.

Andrew
da***@dalkescientific.com
Jul 18 '05 #203
"Andrew Dalke" <ad****@mindspring.com> writes:
That point has been made over and over to you. The argument is that
expressive power for a single developer can, for a group of
developers and especially those comprised of people with different
skill sets and mixed expertise, reduce the overall effectiveness of
the group.


This is true for all abstractions: Syntactic, linguistic, or
functional. Good abstractions are good, bad abstractions are bad,
regardless.

--
Frode Vatvedt Fjeld
Jul 18 '05 #204
In article <6C****************@newsread4.news.pas.earthlink.n et>,
"Andrew Dalke" <ad****@mindspring.com> wrote:

snip
And here's Table 31-2

                                 Statements per
    Language             Level   Function Point
    --------             -----   --------------
    Assembler              1          320
    Ada 83                 4.5         70
    AWK                   15           25
    C                      2.5        125
    C++                    6.5         50
    Cobol (ANSI 85)        3.5         90
    dBase IV               9           35
    spreadsheets         ~50            6
    Focus                  8           40
    Fortran 77             3          110
    GW Basic               3.25       100
    Lisp                   5           65
    Macro assembler        1.5        215
    Modula 2               4           80
    Oracle                 8           40
    Paradox                9           35
    Pascal                 3.5         90
    Perl                  15           25
    Quick Basic 3          5.5         60
    SAS, SPSS, etc.       10           30
    Smalltalk (80 & V)    15           20
    Sybase                 8           40
    Visual Basic 3        10           30

Source: Adapted from data in 'Programming Languages
Table' (Jones 1995a)


I thought these numbers were bogus. Weren't many
of them just guesses with actually zero data
or methodology behind them???
snip
Jul 18 '05 #205
In article <xc*************@famine.OCF.Berkeley.EDU>,
tf*@famine.OCF.Berkeley.EDU (Thomas F. Burdick) wrote:
method overloading,


How could you have both noncongruent argument lists, and multiple
dispatch?


C++ seems to manage it somehow.

#include <stdio.h>

void foo(int x, int y) { printf("1\n"); }
void foo(double x, int y) { printf("2\n"); }
void foo(const char* x) { printf("3\n"); }

int main() {
foo(1,2);
foo(1.2,2);
foo("foo");
return 0;
}

compiles and runs without complaint.

E.
Jul 18 '05 #206
Marco Antoniotti <ma*****@cs.nyu.edu> wrote previously:
|As for your comments on methods and generic functions it is obvious
|that you do not know what multiple dispatching is (yes, there is an ugly
|hacked up Python library to do that floating around; I do not know if it
|will make it into 3.0), so your comment loses value immediately.

This is absolutely replete with ignorance. The multimethods module I
present at:

http://www-106.ibm.com/developerwork.../l-pydisp.html

Is neither ugly nor hacked, and is just as general as what you would do
in Lisp (but with nicer looking syntax). Neel Krishnaswami, and
probably other folks, have written similar modules before I did; I don't
claim to be all that original--it's not that difficult to do, after all.

You can grab it directly, btw, at:

http://www.gnosis.cx/download/gnosis...ultimethods.py

Now OF COURSE, multimethods.py will not "make it into 3.0"--it's not in
2.3 or any other version either. It's a 3rd party library... and it
will continue to work just fine whenever 3.0 comes out. Most likely
with no changes, but I'll update it if needed (unless I get hit by a
bus, I suppose, but someone else could easily do the same).
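The general shape of such a multiple-dispatch module is easy to sketch in modern Python. This is an illustration only, not the actual API of the gnosis multimethods module linked above; a real library would also walk base classes rather than matching exact types:

```python
class MultiMethod:
    """Minimal multiple dispatch on the exact types of all positional
    arguments (an illustrative sketch, not the gnosis library's API)."""

    def __init__(self):
        self.registry = {}

    def register(self, *types):
        # Returns a decorator that records func under the given type tuple.
        def decorator(func):
            self.registry[types] = func
            return func
        return decorator

    def __call__(self, *args):
        # Exact-type lookup; a real library would also consider supertypes.
        func = self.registry.get(tuple(type(a) for a in args))
        if func is None:
            raise TypeError("no applicable method")
        return func(*args)

area = MultiMethod()

@area.register(int, int)
def _(w, h):
    return w * h

@area.register(float, float)
def _(w, h):
    return w * h / 2  # pretend floats mean a triangle, to show dispatch

print(area(3, 4))      # 12
print(area(3.0, 4.0))  # 6.0
```

The dispatch decision happens at call time, on the dynamic types of all arguments at once, which is exactly what distinguishes multimethods from C++-style overloading resolved at compile time.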

Yours, David...

--
mertz@ _/_/_/_/_/_/_/ THIS MESSAGE WAS BROUGHT TO YOU BY:_/_/_/_/ v i
gnosis _/_/ Postmodern Enterprises _/_/ s r
..cx _/_/ MAKERS OF CHAOS.... _/_/ i u
_/_/_/_/_/ LOOK FOR IT IN A NEIGHBORHOOD NEAR YOU_/_/_/_/_/ g s
Jul 18 '05 #207
Alexander Schmolck wrote:
Huh? You seem to be confused (BTW French is misleading here: it's
vowels and consonants in English). *Kanji* are not phonetic, you seem
to be talking about *kana*. And the blanket claim that Japanese
spelling in kana is badly designed compared to, say, English
orthography seems really rather dubious to me.


Yes, indeed. Kana is a far more efficient alphabet -- in terms of
graphemes mapping to the same sounds consistently -- than English. Of
course, its orthography fits the Japanese language, not English, so kana
is not nearly as effective for writing English words (of course, the
Japanese happily do it).

But if someone is writing Japanese in kana (rather than kanji), it is
far easier to accurately read back what is written based only on the
kana. The exceptions are some transformations done for brevity, usually
at the end of a sentence (desu ~= "dess"; "shita" ~= "shta"), and the
grammatical particles which for historical reasons are written
differently from how they're pronounced.

I saw a figure quoted for the efficiency (in that grapheme-phoneme
consistency mapping I mentioned earlier) of kana that put it well above
95%. I can't fathom how low English's efficiency would be.

--
Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ That's what I'm about / Holding out / Holding out for my baby
\__/ Sandra St. Victor
Jul 18 '05 #208
Alexander Schmolck <a.********@gmx.net> writes:
Pascal Bourguignon <sp**@thalassa.informatimago.com> writes:
Well, I would say that kanji is badly designed, compared to latin
alphabet. The voyels are composed with consones (with diacritical
marks) and consones are written following four or five groups with
additional diacritical marks to distinguish within the groups. It's
more a phonetic code than a true alphabet.
Huh? You seem to be confused (BTW French is misleading here: it's vowels and
consonants in English). *Kanji* are not phonetic, you seem to be talking about
*kana*.


Yes, I was. Thank you.

And the blanket claim that Japanese spelling in kana is badly designed
compared to say, English orthography seems really rather dubious to me.

'as


I was criticising the graphical aspect from a discrimination standpoint.

For example, the difference between the latin glyphs for "ka" and "ga"
is bigger than that between the corresponding kana glyphs.

But it does not matter since the topic was kanji...
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Jul 18 '05 #209
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:
Moral: Haskell and Python happen to succeed with significant indents
but their rules are hard to adapt to other languages. Significant
indentation constrains the syntax - if you like these constraints, fine,
but it would hurt if a language were incompatible with these constraints.


I'm not surprised -- as I said it is not straightforward to map the significant
indentation scheme to a language that doesn't have python(C/Pascal)'s
statement/expression distinction. My main point is that it is an excellent
choice for a language such as python (and far superior to the alternatives a
la Pascal/C).

I'm no whitespace bigot: although I abhor C/Pascal I like the syntaxes of
smalltalk, prolog and lisp (maybe I could even partially warm to something in
the APL family) :)

'as
Jul 18 '05 #210


Andreas Rossberg wrote:
Dirk Thierbach wrote:

you can use macros to
do everything one could use HOFs for (if you really want).

Really? What about arbitrary recursion?


When you need arbitrary recursion you use HOFs. When macros are more
appropriate you use them. If you don't have both you are out of luck.
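The HOF side of that trade-off can be shown in a few lines of Python (a deliberately trivial sketch):

```python
def compose(f, g):
    # A higher-order function: the composed function is constructed at
    # run time, from values chosen at run time -- something a purely
    # compile-time macro expansion cannot do by itself.
    return lambda x: f(g(x))

inc = lambda n: n + 1
double = lambda n: n * 2

print(compose(inc, double)(10))  # inc(double(10)) == 21
```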

Cheers
--
Marco

Jul 18 '05 #211
Rainer Joswig wrote:
In article <6C****************@newsread4.news.pas.earthlink.n et>,
"Andrew Dalke" <ad****@mindspring.com> wrote:

snip

And here's Table 31-2

                                 Statements per
    Language             Level   Function Point
    --------             -----   --------------
    Assembler              1          320
    Ada 83                 4.5         70
    AWK                   15           25
    C                      2.5        125
    C++                    6.5         50
    Cobol (ANSI 85)        3.5         90
    dBase IV               9           35
    spreadsheets         ~50            6
    Focus                  8           40
    Fortran 77             3          110
    GW Basic               3.25       100
    Lisp                   5           65
    Macro assembler        1.5        215
    Modula 2               4           80
    Oracle                 8           40
    Paradox                9           35
    Pascal                 3.5         90
    Perl                  15           25
    Quick Basic 3          5.5         60
    SAS, SPSS, etc.       10           30
    Smalltalk (80 & V)    15           20
    Sybase                 8           40
    Visual Basic 3        10           30

Source: Adapted from data in 'Programming Languages
Table' (Jones 1995a)

I thought these numbers were bogus. Weren't many
of them just guesses with actually zero data
or methodology behind them???


Apart from that, what does the author mean by "Lisp"? Is it Common Lisp
or some other Lisp dialect? Scheme?

According to this table, Modula-2 and Lisp are in the same league - I
have used both languages, and this just doesn't align with my experience.

Furthermore, CLOS can be regarded as a superset of Smalltalk. How can it
be that Smalltalk is more than three times better than Lisp? Even if you
take Scheme that doesn't come with an object system out of the box, you
can usually add one that is at least as powerful as Smalltalk. Or did
they add the LOC of infrastructure libraries to their results?
Pascal

Jul 18 '05 #212
"Andrew Dalke" <ad****@mindspring.com> writes:
And here's Table 31-2
Sorted by statement per function point:

                                 Statements per
    Language             Level   Function Point
    --------             -----   --------------
    spreadsheets         ~50            6
    Smalltalk (80 & V)    15           20
    AWK                   15           25
    Perl                  15           25
    SAS, SPSS, etc.       10           30
    Visual Basic 3        10           30
    Paradox                9           35
    dBase IV               9           35
    Focus                  8           40
    Oracle                 8           40
    Sybase                 8           40
    C++                    6.5         50
    Quick Basic 3          5.5         60
    Lisp                   5           65
    Ada 83                 4.5         70
    Modula 2               4           80
    Cobol (ANSI 85)        3.5         90
    Pascal                 3.5         90
    GW Basic               3.25       100
    Fortran 77             3          110
    C                      2.5        125
    Macro assembler        1.5        215
    Assembler              1          320

Source: Adapted from data in 'Programming Languages
Table' (Jones 1995a)
I'll use Perl as a proxy for Python; given that that was pre-OO
Perl I think it's reasonable that that sets a minimum level for
Python. Compare the Lisp and Perl numbers

Lisp 5 65
Perl 15 25

and the difference in "statements per function point" (which isn't
quite "LOC per function point") is striking. It suggests that
Python is more than twice as concise as Lisp, so if LOC is
used as the estimate for implementation time then it's a strong
recommendation to use Python instead of Lisp because it
will take less time to get the same thing done. And I do believe
Lisp had macros back in the mid-1990s.

Some differences in this table look suspect to me. Perhaps they did
not take into account other important factors, such as the use of
libraries.

For example, when I write awk code, I really don't feel like I'm
programming in a higher level language than LISP... (and I won't
mention perl).

Also, the ordering of Fortran vs. C feels strange (given the libraries
I use in C and the fact that I don't use Fortran).
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Jul 18 '05 #213
Rainer Joswig <jo****@lispmachine.de> writes:
In article <6C****************@newsread4.news.pas.earthlink.n et>,
"Andrew Dalke" <ad****@mindspring.com> wrote:

snip
And here's Table 31-2

                                 Statements per
    Language             Level   Function Point
    --------             -----   --------------
    Assembler              1          320
    Ada 83                 4.5         70
    AWK                   15           25
    C                      2.5        125
    C++                    6.5         50
    Cobol (ANSI 85)        3.5         90
    dBase IV               9           35
    spreadsheets         ~50            6
    Focus                  8           40
    Fortran 77             3          110
    GW Basic               3.25       100
    Lisp                   5           65
    Macro assembler        1.5        215
    Modula 2               4           80
    Oracle                 8           40
    Paradox                9           35
    Pascal                 3.5         90
    Perl                  15           25
    Quick Basic 3          5.5         60
    SAS, SPSS, etc.       10           30
    Smalltalk (80 & V)    15           20
    Sybase                 8           40
    Visual Basic 3        10           30

Source: Adapted from data in 'Programming Languages
Table' (Jones 1995a)


I thought these numbers were bogus. Weren't many of them just
guesses with actually zero data or methodology behind them???


Well, here are some other interesting entries (from the table on p.89
of Jones's _Applied Software Measurement_):

    Language    Level   Function Point
    --------    -----   --------------
    CLOS         12.0        27
    KSH          12.0        27
    PERL         12.0        27   [it had 27, while the other table had 25]
    MAKE         15.0        21
I'm not sure what to make of CLOS being separate from Common Lisp, but
there it is. But it's sort of moot because by this measure, MAKE is a
higher level language than either Lisp, Perl, or C++. Personally, I
think I'll be looking for another metric.

-Peter

--
Peter Seibel pe***@javamonkey.com

Lisp is the red pill. -- John Fraser, comp.lang.lisp
Jul 18 '05 #214
Andrew Dalke wrote:
Doug Tolton:
I believe the crux of our difference is that you don't want to give
expressive power because you believe it will be misused. I on the
other hand want to give expressive power because I believe it could be
used correctly most of the time. For the times when it's not, well
that's why I have debugging skills. Sadly not everyone uses looping
the way I would, but using my brain I can figure out what they are
doing.

That point has been made over and over to you. The argument is
that expressive power for a single developer can, for a group of
developers and especially those comprised of people with different
skill sets and mixed expertise, reduce the overall effectiveness of the
group.


Do you have some empirical data and/or references that back this claim?
Pascal

Jul 18 '05 #215
David Rush <dr***@aol.net> writes:
You know I think that this thread has so far set a comp.lang.* record
for civility in the face of a massively cross-posted language
comparison thread. I was even wondering if it was going to die a quiet
death, too.

Ah well, We all knew it was too good to last. Have at it, lads!

Common Lisp is an ugly language that is impossible to understand with
crufty semantics

Scheme is only used by ivory-tower academics and is irrelevant to
real world programming

Python is a religion that worships at the feet of Guido van Rossum,
combining the syntactic flaws of lisp with a bad case of feeping
creaturisms taken from languages more civilized than itself

There. Is everyone pissed off now?


No, that seems about right.
Jul 18 '05 #216
"Carlo v. Dango" <oe**@soetu.eu> writes:
I'd humbly suggest that if you can't see *any* reason why someone
would prefer Lisp's syntax, then you're not missing some fact about
the syntax itself but about how other language features are supported
by the syntax.
Sure, but it seems like no one was able to let CLOS have
(virtual) inner classes,


Um. What on earth would that mean? I know what it means in Java
and such, but since CLOS classes are not conflated with lexical
scope, there's nothing to be `inner' to.
methods inside methods,
What would one do with one of those? How would that differ
from, say, FLET or LABELS?
virtual methods (yeah I know about those stupid generic functions :),
Since all CLOS is dynamic dispatch (i.e. virtual), what are you
talking about?
method overloading,
Now I'm *really* confused. I thought method overloading involved
having a method do something different depending on the type of
arguments presented to it. CLOS certainly does that.
A decent API (I tried playing with it.. it doesn't even have a
freaking date library as standard ;-p)
I was unaware that a date library was so critical to an
object-oriented implementation.
yes this mail is provocative.. please count slowly to 10 before
replying if you disagree with my point of view (and I know Pascal will
disagree ;-)


I'll wait until you have a coherent point of view to disagree with.
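For Python readers following along: the closest stock-Python analogue of a CLOS-style generic function today is functools.singledispatch, added to the standard library many years after this thread. It dispatches dynamically, but only on the type of the first argument:

```python
from functools import singledispatch

@singledispatch
def describe(x):
    # Default implementation, used when no registered type matches.
    return "something else"

@describe.register
def _(x: int):
    return "an integer"

@describe.register
def _(x: str):
    return "a string"

print(describe(42))    # an integer
print(describe("hi"))  # a string
print(describe(3.5))   # something else
```

As in CLOS, the applicable method is chosen at call time from the dynamic type, not resolved statically as in C++ or Java overloading.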
Jul 18 '05 #217
Andrew Dalke wrote:
Doug Tolton:
I believe the crux of our difference is that you don't want to give
expressive power because you believe it will be misused. I on the
other hand want to give expressive power because I believe it could be
used correctly most of the time. For the times when it's not, well
that's why I have debugging skills. Sadly not everyone uses looping
the way I would, but using my brain I can figure out what they are
doing.

That point has been made over and over to you. The argument is
that expressive power for a single developer can, for a group of
developers and especially those comprised of people with different
skill sets and mixed expertise, reduce the overall effectiveness of the
group.

Yes and I have repeatedly stated that I disagree with it. I simply do
not buy that allowing expressiveness via high level constructs detracts
from the effectiveness of the group. That argument is plainly
ridiculous; if it were true then Python would be worse than Java,
because Python is *far* more expressive.
If this is indeed the crux, then any justification which says "my brain"
and "I" is suspect, because that explicitly ignores the argument.

Apparently you can't read very well. I simply stated that I believe our
point of contention to be that issue; I never stated I believe that
because it's some vague theory inside my head.
By
comparison, Alex's examples bring up
- teaching languages to others
- interference between his code and others' (the APL example)
- production development
"Imagine a group of, say, a dozen programmers, working together ...
to develop a typical application program of a few tens of thousands
of function points -- developing about 100,000 new lines of delivered
code plus about as much unit tests, and reusing roughly the same amount"
- writing books for other people

With the exception of writing books for other people, I have done all of
those things. I have worked on fairly large development teams (> 20
people). I have built multi-million dollar systems. I have taught
people programming languages, both on the job and as a University
course. So don't come off with this attitude that I have no idea what I'm
talking about.

Macros are precisely better for large groups of people. Any time you
are building systems with large groups of people, and you want to have
re-usable code, you abstract it. There are all kinds of ways to do
that; Macros are just one. I have never seen any large successful
coding project that does not abstract things well. If you are incapable
of abstracting software successfully and usefully then no project will be
successful, not if it's non-trivial.
which at the very least suggests the expertise and background
by which to evaluate the argument. It may be that his knowledge of
how and when to use macros is based on the statements of people he
respects rather than personal experience, but given the discussions on
this topic and the exhibited examples of when macros are appropriately
used, it surely does seem that metaclasses, higher-level functions, and
iterators can be used to implement a solution with a roughly equal amount
of effort and clarity. The only real advantage to macros I've seen is the
certainty of "compile-time" evaluation, hence better performance than
run-time evaluation.

As I said to Alex, that's because you don't understand Macros. Relying
on what someone else says about Macros only gets you so far. At some
point, if you don't want to look like a complete idiot, you might want
to really learn them or just shut up about them. It's very difficult to
have a conversation with someone who really doesn't know what they are
talking about, but is instead just spouting an opinion they picked up
from someone else. The discussion doesn't go anywhere at that point.

Macros are like any things else, a tool in your tool box. If you know
how to use them they can be used very effectively. If you don't, you
can probably work around the problem and solve it a different way.
However as the toolset differential gets bigger, the person with more
tools in their arsenal will be able to outperform the people with fewer
tools.
Alex:
For some (handwaving-defined) "appropriate" approach to measuring
"length" (and number of lines is most definitely not it), it is ONE

You:
Both from my experience and Fred Brooks it's the only actual way I've
seen of measuring the time it will take to write a program.

You mean "estimating"; for measuring I suspect you can use a
combination of a clock and a calendar. (This from a guy who recently
posted that the result of 1+1 is 4. ;)

No, what I was referring to wasn't estimation. Rather I was referring
to the study that found that programmers on average write the same
number of lines of code per year regardless of the language they write
in. Therefore the only way to increase productivity is to write
software in a language that uses fewer lines to accomplish something
productive. See Paul Graham's site for a discussion.
You should use McConnell as a more recent reference than Brooks.
(I assume you are arguing from Mythical Man Month? Or from his
more recent writings?) In any case, in Rapid Development McConnell
considers various alternatives then suggests using LOC, on the view
that LOC is highly correlated with function points (among 3rd
generation programming languages! see below) and that LOC has a
good correlation to development time, excluding extremes like APL
and assembly. However, his main argument is that LOC is an easy
thing to understand.

The tricky thing about using McConnell's book is the implications
of table 31-2 in the section "Using Rapid Development Languages",
which talks about languages other than the 3rd generation ones used
to make his above estimate.

Table 31-2 shows the approximate "language levels" for a wider
variety of languages than Table 31-1. The "language level" is
intended to be a more specific replacement for the level implied
by the phrases "third-generation language" and "fourth-generation
language." It is defined as the number of assembler statements
that would be needed to replace one statement in the higher-level
language. ...

The numbers ... are subject to a lot of error, but they are the best
numbers available at this time, and they are accurate enough to
support this point: from a development point of view, you should
implement your projects in the highest-level language possible. If
you can implement something in C, rather than assembler, C++
rather than C, or Visual Basic rather than C++, you can develop
faster.

And here's Table 31-2

                                 Statements per
    Language             Level   Function Point
    --------             -----   --------------
    Assembler              1          320
    Ada 83                 4.5         70
    AWK                   15           25
    C                      2.5        125
    C++                    6.5         50
    Cobol (ANSI 85)        3.5         90
    dBase IV               9           35
    spreadsheets         ~50            6
    Focus                  8           40
    Fortran 77             3          110
    GW Basic               3.25       100
    Lisp                   5           65
    Macro assembler        1.5        215
    Modula 2               4           80
    Oracle                 8           40
    Paradox                9           35
    Pascal                 3.5         90
    Perl                  15           25
    Quick Basic 3          5.5         60
    SAS, SPSS, etc.       10           30
    Smalltalk (80 & V)    15           20
    Sybase                 8           40
    Visual Basic 3        10           30

Source: Adapted from data in 'Programming Languages
Table' (Jones 1995a)
I'll use Perl as a proxy for Python; given that that was pre-OO
Perl, I think it's reasonable to take that as a minimum level for
Python. Compare the Lisp and Perl numbers

Lisp      5     65
Perl     15     25

You are saying that Python and Perl are similarly compact?!?
You have got to be kidding, right?
Perl is *far* more compact than Python is. That is just ludicrous.
and the difference in "statements per function point" (which isn't
quite "LOC per function point") is striking. It suggests that
Python is more than twice as concise as Lisp, so if LOC is
used as the estimate for implementation time then it's a strong
recommendation to use Python instead of Lisp because it
will take less time to get the same thing done. And I do believe
Lisp had macros back in the mid-1990s.

Sadly, this is a secondary reference and I don't have a
copy of
Jones, Capers, 1995a. "Software Productivity Research
Programming Languages Table," 7th ed. March 1995.
and the referenced URL of www.spr.com/library/langtbl.htm
is no longer valid and I can't find that table on their site.
It's always nice just to chuck some arbitrary table into the
conversation which conveniently backs some point you were trying to
make, and also conveniently can't be located for anyone to check the
methodology.

If you want some real world numbers on program length check here:
http://www.bagley.org/~doug/shootout/

Most of those programs are trivially small, and didn't use Macros.
Macros as well as high order functions etc only come into play in
non-trivial systems.

I just don't buy these numbers or the chart from McConnell on faith. I
would have to see his methodology, and understand what his motivation in
conducting the test was.

Well that was a long winded digression into something that is
completely unrelated to Macros. Seems like a good argument why
re-binding the builtins is bad, though.

It was a long winded digression into how LOC can be a
wrong basis by which to judge the appropriateness of a
language feature.

It still wasn't relevant to Macros. However, because neither of you
understand Macros, you of course think it is relevant.
a plus -- and Java's use of { } for example ensures NON-uniformity
on a lexical plane, since everybody has different ideas about where
braces should go:-).

Where braces should go is a trivial issue. However, if braces are an
issue that seriously concerns you, then I can see why macros are giving
you a heart attack.

See that smiley and the "--"? This is a throwaway point at the end
of the argument, and given Alex's noted verboseness, if it was a
serious point he would have written several pages on the topic.

This is something we are very much in agreement on.
My response was just a little dig, because it does seem to be indicative
of his attitude in general IMO.
Andrew
da***@dalkescientific.com

--
Doug Tolton
(format t "~a@~a~a.~a" "dtolton" "ya" "hoo" "com")

Jul 18 '05 #218
There's something pathological in my posting untested code. One more
try:

def categorize_jointly(preds, it):
    results = [[] for _ in preds]
    for x in it:
        results[all(preds)(x)].append(x)
    return results
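For what it's worth, here is a working sketch of one plausible reading of this function, assuming the intent is to group each item under every predicate it satisfies (the posted `all(preds)(x)` would need an `all` combinator returning an index, which isn't shown here):

```python
def categorize_jointly(preds, it):
    # One bucket per predicate; each x lands in every bucket
    # whose predicate it satisfies.
    results = [[] for _ in preds]
    for x in it:
        for bucket, pred in zip(results, preds):
            if pred(x):
                bucket.append(x)
    return results

evens_odds = categorize_jointly(
    [lambda x: x % 2 == 0, lambda x: x % 2 == 1],
    range(5))
# evens_odds == [[0, 2, 4], [1, 3]]
```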

|Come on. Haskell has a nice type system. Python is an application of
|Greenspun's Tenth Rule of programming.

Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell
does them much better, for example... and so does Python.

Yours, David...

--
mertz@ _/_/_/_/_/_/_/ THIS MESSAGE WAS BROUGHT TO YOU BY:_/_/_/_/ v i
gnosis _/_/ Postmodern Enterprises _/_/ s r
..cx _/_/ MAKERS OF CHAOS.... _/_/ i u
_/_/_/_/_/ LOOK FOR IT IN A NEIGHBORHOOD NEAR YOU_/_/_/_/_/ g s
Jul 18 '05 #219
Alex Martelli:
|>Why, thanks! Nice to see that I'm getting on the nerves of _some_
|>people, too, not just having them get on mine.

Doug Tolton <do**@nospam.com> wrote previously:
|Yes, this discussion is frustrating. It's deeply frustrating to hear
|someone without extensive experience with Macros arguing why they are
|so destructive.

If that is meant to address Alex Martelli, it is very deeply misguided.
If there is anyone who I can say with confidence has much more
experience--and much better understanding--of macros (or of all things
Lisp) than does Doug Tolton, it is Alex.

Yours, Lulu...

Jul 18 '05 #220
David Mertz wrote:
There's something pathological in my posting untested code. One more
try:

def categorize_jointly(preds, it):
    results = [[] for _ in preds]
    for x in it:
        results[all(preds)(x)].append(x)
    return results

|Come on. Haskell has a nice type system. Python is an application of
|Greenspun's Tenth Rule of programming.

Btw. This is more nonsense. HOFs are not a special Lisp thing. Haskell
does them much better, for example... and so does Python.
What is your basis for that statement? I personally like the way Lisp
does it much better, and I program in both Lisp and Python. With Python
it's not immediately apparent if you are passing in a simple variable
or a HOF. Whereas in lisp with #' it's immediately obvious that you are
receiving or sending a HOF that will potentially alter how the call
operates.

IMO, that syntax is far cleaner.
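A tiny illustration of the point being made (hypothetical names): in Python a function is passed like any other value, with nothing at the call site playing the role of Lisp's #' marker.

```python
def apply_twice(f, x):
    # f is an ordinary positional argument; nothing here says
    # it must be a function rather than a plain value.
    return f(f(x))

def inc(n):
    return n + 1

apply_twice(inc, 3)  # inc is passed bare, exactly like any other value
```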
Yours, David...

--
mertz@ _/_/_/_/_/_/_/ THIS MESSAGE WAS BROUGHT TO YOU BY:_/_/_/_/ v i
gnosis _/_/ Postmodern Enterprises _/_/ s r
.cx _/_/ MAKERS OF CHAOS.... _/_/ i u
_/_/_/_/_/ LOOK FOR IT IN A NEIGHBORHOOD NEAR YOU_/_/_/_/_/ g s

--
Doug Tolton
(format t "~a@~a~a.~a" "dtolton" "ya" "hoo" "com")

Jul 18 '05 #221
Lulu of the Lotus-Eaters wrote:
Alex Martelli:
|>Why, thanks! Nice to see that I'm getting on the nerves of _some_
|>people, too, not just having them get on mine.

Doug Tolton <do**@nospam.com> wrote previously:
|Yes, this discussion is frustrating. It's deeply frustrating to hear
|someone without extensive experience with Macros arguing why they are
|so destructive.

If that is meant to address Alex Martelli, it is very deeply misguided.
If there is anyone who I can say with confidence has much more
experience--and much better understanding--of macros (or of all things
Lisp) than does Doug Tolton, it is Alex.

Yours, Lulu...


That was an interestingly ignorant statement. I'm very surprised that
you would feel the *least* bit qualified to offer that statement. You
don't know me or my background. Alex has stated on many occasions that
he has not worked with Macros, but that he is relying on second hand
information.

I don't claim to be a guru on Lisp, however I believe I understand it
far better than Alex does. If the people who actually know and use
Common Lisp think I am mis-speaking and mis-representing Lisp, please
let me know and I will be quiet.

What is your background with Common Lisp David? Why do you feel so
eminently qualified to offer yourself as the expert on Lisp? I have
seen some FP from you, but I haven't seen much in the way of Lisp code.
Did you study it in school? Have you really tried to build production
quality system with Lisp?

Like I said, I'm not an expert at Lisp, but I think I understand the
spirit and semantics of Lisp far better than Alex, and from what I've
seen you say I wouldn't be surprised if I knew it better than you do as well.

--
Doug Tolton
(format t "~a@~a~a.~a" "dtolton" "ya" "hoo" "com")

Jul 18 '05 #222
>>>>> On 08 Oct 2003 11:47:45 -0700, Thomas F Burdick ("Thomas") writes:
Thomas> How could you have both noncongruent argument lists, and multiple
Thomas> dispatch? With an either/or like that, Lisp chose the right one.

The reason that Common Lisp made this choice is not that non-congruent
multiple-dispatch methods are impossible. To dispatch: instead of just
matching the types of the args, consider only those handler entries that
have the correct shape (length) as well.
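A minimal sketch of the dispatch scheme just described: handlers are keyed on both the shape (arity) and the types of the arguments, so methods with non-congruent argument lists coexist without trouble. (Exact-type matching only; a real CLOS-style dispatcher would also walk the class hierarchy. Purely illustrative.)

```python
# Toy dispatch table keyed on (name, argument-type tuple); because the
# key includes the full tuple of types, its length -- the "shape" -- is
# part of the match as well.
_registry = {}

def defmethod(name, types, fn):
    _registry[(name, tuple(types))] = fn

def call(name, *args):
    key = (name, tuple(type(a) for a in args))
    return _registry[key](*args)

# Two 'write' methods with non-congruent argument lists:
defmethod('write', (str,), lambda b: len(b))
defmethod('write', (str, int, int), lambda b, off, ln: ln)
```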

JAVA has multi-methods with non-congruent arglists.
(It doesn't have multiple inheritance, but that doesn't
matter to dynamic type dispatching, except maybe in how
you implement searching your handler tables.) In JAVA,
the correspondence of the "signature" of the function call
and the method definition is what's important;
and there are no restricting "generic functions".

public class RandomAccessFile extends Object implements DataOutput {
    public void write(byte[] b) throws IOException;
    public void write(byte[] b, int off, int len) throws IOException;
    ...
}

CLOS imposes an aesthetic design restriction on the programmer:
methods with the same name should be conceptually doing the
same function, and therefore should be all taking the same args.
The generic function is the documentation of that mono protocol.
This is one of the few places that Lisp takes a fascist attitude.
Jul 18 '05 #223
Alex Martelli wrote:
The only annoyance here is that there is no good 'literal' form for
a code block (Python's lambda is too puny to count as such), so you
do have to *name* the 'thefunc' argument (with a 'def' statement --
Python firmly separates statements from expressions).


Here's my non-PEP for such a feature:

return { |x, y|
    print x
    print y
}

Which would be the equivalent of:

def anonymous_function(x, y):
    print x
    print y
return anonymous_function

It's unambiguous because no dictionary literal would ever start with
'{|', it looks almost identical to a certain other language <g>, and
instead of being a special case for a function, it would just be a plain
old HOF. No more talk of puny lambda:, and we can all go home and
happily write visitor patterns and event callbacks all day long.

Then, merge map, filter, and reduce into the list type, so we can play
Smalltalk and write stuff like:
print mylist.map({ |x| return x + 2 }, range(5))

0, 2, 4, 6, 8

If you wanted to span multiple lines, just take the first indentation
you find and start from there:

closure = True
some_screen_object.on_click = { |e|
    print 'got an event: ' + e
    handle_screen_object_click()
    if closure == True:
        go_home_happy()
}

You're welcome,
Dave ;)

Jul 18 '05 #224
Dave Benjamin wrote:
>>> print mylist.map({ |x| return x + 2 }, range(5))

0, 2, 4, 6, 8


Duh... sorry, that should read:
print range(5).map({ |x| return x + 2 })

(and yes, I realize a lambda would work here)

Dave

Jul 18 '05 #225
In article <m3************@javamonkey.com>,
Peter Seibel <pe***@javamonkey.com> wrote:
Rainer Joswig <jo****@lispmachine.de> writes:
In article <6C****************@newsread4.news.pas.earthlink.n et>,
"Andrew Dalke" <ad****@mindspring.com> wrote:

snip
And here's Table 31-2

                            Statements per
Language            Level   Function Point
--------            -----   --------------
Assembler             1          320
Ada 83                4.5         70
AWK                  15           25
C                     2.5        125
C++                   6.5         50
Cobol (ANSI 85)       3.5         90
dBase IV              9           35
spreadsheets        ~50            6
Focus                 8           40
Fortran 77            3          110
GW Basic              3.25       100
Lisp                  5           65
Macro assembler       1.5        215
Modula 2              4           80
Oracle                8           40
Paradox               9           35
Pascal                3.5         90
Perl                 15           25
Quick Basic 3         5.5         60
SAS, SPSS, etc.      10           30
Smalltalk (80 & V)   15           20
Sybase                8           40
Visual Basic 3       10           30

Source: Adapted from data in 'Programming Languages
Table' (Jones 1995a)


I thought these numbers were bogus. Weren't many of them just
guesses with actually zero data or methodology behind them???


Well, here are some other interesting entries (from the table on p.89
of Jones's _Applied Software Measurement_):

Language Level Function Point
-------- ----- --------------
CLOS 12.0 27
KSH 12.0 27
PERL 12.0 27 [it had 27, while the other table had 25]
MAKE 15.0 21

I'm not sure what to make of CLOS being separate from Common Lisp, but
there it is. But it's sort of moot because by this measure, MAKE is a
higher level language than either Lisp, Perl, or C++. Personally, I
think I'll be looking for another metric.

-Peter


Just look at this:

http://www.theadvisors.com/langcomparison.htm

And read:

The languages and levels in Table 2 were gathered in four ways.

* Counting Function Points and Source Code
* Counting Source Code
* Inspecting Source Code
* Researching Languages

and

Researching Languages

Research was done by reading descriptions and genealogies
of languages and making an educated guess as to their levels. KL,
CLOS, TWAICE, and FASBOL are examples of languages that were
assigned tentative levels merely from descriptions of the
language, rather than from actual counts.

Well, I guess CLOS is, ... about, say, hmm, scratching my head, hmm,
let's say 73.

Right?

Let's say it differently, the comparison is a BAD joke.
Jul 18 '05 #226
Christopher C. Stacy wrote:
JAVA has multi-methods with non-congruent arglists.
(It doesn't have multiple inheritance, but that doesn't
matter to dynamic type dispatching, except maybe in how
you implement searching your handler tables.) In JAVA,
the correspondence of the "signature" of the function call
and the method definition is what's important;
and there are no restricting "generic functions".

public class RandomAccessFile extends Object implements DataOutput {
    public void write(byte[] b) throws IOException;
    public void write(byte[] b, int off, int len) throws IOException;
    ...
}


Maybe this is just a terminological issue, but in my book these are not
multi-methods. In Java, methods with the same name but different
signatures are selected at compile time, and this is rather like having
the parameter types as part of the method name.

This can lead to subtle bugs. Here is an example in Java:

public class test {

    static void m(Object o) {
        System.out.println("m/Object");
    }

    static void m(String s) {
        System.out.println("m/String");
    }

    static void n(Object o) {
        m(o);
    }

    public static void main(String[] args) {
        n("test");
    }

}

This prints "m/Object", instead of "m/String" as you might expect. (This
is one of the reasons why the Visitor pattern is relatively tedious to
implement in Java.)
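For contrast, a sketch in Python of the run-time behaviour: because the type check happens when the call executes, the indirection through n cannot lose the more specific type. (The isinstance test stands in for what a dynamically dispatched method selection would do; names mirror the Java example.)

```python
def m(o):
    # Selected at run time, on the actual type of o.
    if isinstance(o, str):
        return "m/String"
    return "m/Object"

def n(o):
    return m(o)  # resolved when m runs, not when n is compiled
```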
Pascal

Jul 18 '05 #227
On 08 Oct 2003 22:17:48 +0200, Pascal Bourguignon wrote:
Alexander Schmolck <a.********@gmx.net> writes:
And the blanket claim that Japanese spelling in kana is badly designed
compared to, say, English orthography seems really rather dubious to me.
I was criticising the graphical aspect from a discrimination standpoint.

For example, the difference between the latin glyphs for "ka" and "ga"
is bigger than that between the corresponding kana glyphs.


But the actual pronunciation difference (consonantal vocalization) is far
closer to the graphical difference in kana.
But it does not matter since the topic was kanji...


But Kanji are not, and no-one has ever even remotely claimed that Kanji
are phonetic. In fact their main advantage is that they are *not* phonetic.
They were made that way by the Chinese so that all parts of the empire
could communicate even though they spoke widely varying dialects.

david rush
--
(\x.(x x) \x.(x x)) -> (s i i (s i i))
-- aki helin (on comp.lang.scheme)
Jul 18 '05 #228
Peter Seibel:
PERL 12.0 27 [it had 27, while the the other table had 25]

I double checked. McConnell lists "25".
MAKE 15.0 21

I'm not sure what to make of CLOS being separate from Common Lisp, but
there it is. But it's sort of moot because by this measure, MAKE is a
higher level language than either Lisp, Perl, or C++. Personally, I
think I'll be looking for another metric.


McConnell said that if you *could* do it in a given higher level
language then you should. Eg, spreadsheets are listed as a very
high level language, but you wouldn't write a word processor
in one.

I don't know the background behind the paper you referenced.
I suspect is has the same qualifier. If so, it's quite true that
using make for its domain is more appropriate than a more general
general purpose language.

Andrew
da***@dalkescientific.com
Jul 18 '05 #229
>>>>> On Wed, 08 Oct 2003 16:51:44 -0400, Joe Marshall ("Joe") writes:
Joe> "Carlo v. Dango" <oe**@soetu.eu> writes:
method overloading,


Joe> Now I'm *really* confused. I thought method overloading involved
Joe> having a method do something different depending on the type of
Joe> arguments presented to it. CLOS certainly does that.

He probably means "operator overloading" -- in languages where
there is a difference between built-in operators and functions,
their OOP features let them put methods on things like "+".

Lisp doesn't let you do that, because it turns out to be a bad idea.
When you go reading someone's program, what you really want is for
the standard operators to be doing the standard and completely
understood thing. Most commonly, the operators that people in C++
like to overload are the mathematical ones, like "*" and "+".
Lisp has carefully defined those operations to do all the things
that are both well-understood (like complex numbers) and missing
from languages like C++. And in Lisp if you want to do some
other kind of arithmetic, you must make up your names for those
operators. This is considered to be a good feature.

Not surprisingly, the designers of JAVA understood this as well:
JAVA doesn't let you overload operators, either.
Jul 18 '05 #230
On Wed, 08 Oct 2003 16:41:44 -0400, Joe Marshall <jr*@ccs.neu.edu> wrote:
David Rush <dr***@aol.net> writes:
You know, I think that this thread has so far set a comp.lang.* record
for civility in the face of a massively cross-posted language
comparison thread. I was even wondering if it was going to die a quiet
death, too.

Ah well, We all knew it was too good to last. Have at it, lads!

Common Lisp is an ugly language that is impossible to understand with
crufty semantics

Scheme is only used by ivory-tower academics and is irrelevant to
real world programming

Python is a religion that worships at the feet of Guido van Rossum
combining the syntactic flaws of lisp with a bad case of feeping
creaturisms taken from languages more civilized than itself

There. Is everyone pissed off now?


No, that seems about right.

LOL ;-)

Regards,
Bengt Richter
Jul 18 '05 #231

Rainer Joswig
I thought these numbers were bogus. Weren't many
of them just guesses with actually zero data
or methodology behind them???

Pascal Costanza Apart from that, what does the author mean by "Lisp"? Is it Common Lisp
or some other Lisp dialect? Scheme?
As I mentioned, I don't have the primary sources, so I can't
give any more information about the data.

In general, there are very few methodological studies on the
effectiveness of different languages on general purpose
programming problems when used by sets of people with
roughly comparable experience. The only other one I have
is Prechelt's "An empirical comparison ..."
http://www.ipd.uka.de/~prechelt/Biblio/
with the followup of Java v. Lisp at
http://www.flownet.com/gat/papers/lisp-java.pdf
That data also suggests that Tcl/Perl/Python/Lisp development time
is comparable.
According to this table, Modula-2 and Lisp are in the same league - I
have used both languages, and this just doesn't align with my experience.
Good thing I didn't try to skew the data by leaving that out. ;)
Furthermore, CLOS can be regarded as a superset of Smalltalk. How can it
be that Smalltalk is more than three times better than Lisp? Even if you
take Scheme that doesn't come with an object system out of the box, you
can usually add one that is at least as powerful as Smalltalk. Or did
they add the LOC of infrastructure libraries to their results?


Again, I don't have the primary source.

Andrew
da***@dalkescientific.com
Jul 18 '05 #232
Pascal Bourguignon:
Some differences in this table look suspect to me. Perhaps they did
not take into account other important factors, such as the use of
libraries.
Anyone here know more about the original reference?
For example, when I write awk code, I really don't feel like I'm
programming in a higher level languange than LISP... (and I won't
mention perl).
The specific definition of "higher level language" is the number
of assembly instructions replaced. Eg, spreadsheets are in that
table despite not being a good general purpose programming
language.

I also suspect the numbers were taken from the analysis
of existing programs, and people would have used awk
for cases where it was appropriate.
Also, the ordering of Fortran vs. C fell strange (given the libraries
I use in C and the fact that I don't use Fortran).


Hmm... You don't do scientific programming, do you? ;)

Andrew
da***@dalkescientific.com
Jul 18 '05 #233
In comp.lang.lisp David Rush <dr***@aol.net> wrote:
Ah well, We all knew it was too good to last. Have at it, lads!

Common Lisp is an ugly language that is impossible to understand with
crufty semantics

Scheme is only used by ivory-tower academics and is irrelevant to real
world programming

Python is a religion that worships at the feet of Guido van Rossum
combining the syntactic flaws of lisp with a bad case of feeping
creaturisms taken from languages more civilized than itself

There. Is everyone pissed off now?


This seems like a good juncture to post my list of common myths and
misconceptions about popular programming languages. Contributions are
welcome; flames only if they're funny. Anyone who needs to see :) on
things to know they're meant in jest should stop reading now.
== Programming Language Myths ==

BASIC Myth: People who learn BASIC go on to learn other languages.
Reality: Most people who learn BASIC go on to find less nerdy ways of
writing "Mr. Gzabowski is a lame teacher" over and over again.

C Myth: C programs are insecure, full of buffer overflows and such.
Reality: C programs are only insecure if written by imperfect programmers.
Since all C programmers know that they are perfect, there's no
problem.

COBOL Myth: COBOL is dead.
Reality: It stalks from out the ancient vaults of death, its putrid mind
drawn to the blood of the living.

Forth Myth: Forth makes no sense.
Reality: backwards. think to have just you sense, perfect makes Forth

Java Myth: You need Java to do business applications.
Reality: You need Java to get a job.

Lisp Myth: Lisp is an interpreted language.
Reality: Lisp is COMPILED DAMMIT COMPILED! IT'S IN THE FUCKING STANDARD!!!

Pascal Myth: Pascal is a toy.
Reality: Oh, wait, that is not a myth, it is true ...

Perl Myth: Perl is impossible to read.
Reality: You are not taking enough psychedelics.

Python Myth: Python's only problem is the whitespace thing.
Reality: Python's only problem is that it is fucking slow.

--
Karl A. Krueger <kk******@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped. s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
Jul 18 '05 #234

In article <bm*********@newsreader2.netcologne.de>, Pascal Costanza
<co******@web.de> wrote:
Christopher C. Stacy wrote:
JAVA has multi-methods with non-congruent arglists.
(It doesn't have multiple inheritance, but that doesn't
matter to dynamic type dispatching, except maybe in how
you implement searching your handler tables.) In JAVA,
the correspondence of the "signature" of the function call
and the method definition is what's important;
and there are no restricting "generic functions".

public class RandomAccessFile extends Object implements DataOutput {
    public void write(byte[] b) throws IOException;
    public void write(byte[] b, int off, int len) throws IOException;
    ...
}
Maybe this is just a terminological issue, but in my book these are not
multi-methods. In Java, methods with the same name but different
signatures are selected at compile time,


When the selection is done is immaterial. The point is that it's done at
all. There's no reason why the same algorithm that is used to select
methods at compile time can't also be used to select methods at run time.
The claim that congruent argument lists are necessary for multi-method
dispatch is clearly false.
and this is rather like having
the parameter types as part of the method name.
No. It's the "system" keeping track of the types, not the user. That's
the key. The fact that C++ and Java just happen to do it at compile time
rather than at run time is a red herring.

This can lead to subtle bugs. Here is an example in Java:


These "subtle bugs" are a reflection of the limitation of compile-time
type inference in Java, not a limitation of multi-method dispatch with
non-congruent argument lists.

E.
Jul 18 '05 #235
Dave Benjamin wrote:
Dave Benjamin wrote:
>>> print mylist.map({ |x| return x + 2 }, range(5))

0, 2, 4, 6, 8


Duh... sorry, that should read:
print range(5).map({ |x| return x + 2 })


In either case it will be [2, 3, 4, 5, 6] :)

Instead of lambda use list comprehensions:

print [x+2 for x in range(5)]

Unnamed code blocks considered evil :), use named instead (functions).

With nested scopes you can do amazing things. I myself was used to code
blocks due to my Perl background, but the Python idioms are no worse, to
say the least.
For example:

class C(object):
    ...
    def aprop():
        def reader(self): return 0
        def writer(self, newval): pass
        return reader, writer
    aprop = property(aprop())
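As posted, `property(aprop())` hands the whole (reader, writer) tuple to `property` as its first argument (fget); the trick works once the pair is unpacked. A self-contained sketch of the corrected idiom:

```python
class C(object):
    def aprop():
        # A plain function at class-creation time; it just builds
        # and returns the accessor pair.
        def reader(self):
            return 0
        def writer(self, newval):
            pass
        return reader, writer
    # property() takes fget and fset as separate arguments, so the
    # * unpacking matters here.
    aprop = property(*aprop())
```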
Mike


Jul 18 '05 #236
Karl A. Krueger wrote:
This seems like a good juncture to post my list of common myths and
misconceptions about popular programming languages. Contributions are
welcome; flames only if they're funny. Anyone who needs to see :) on
things to know they're meant in jest should stop reading now.


Haha... that's really funny... except the last one. Not that I'm a
Python purist (a big fan, yes, but not a purist), but I rarely complain
about its slowness. Java is too easy of a target for that one... =)

Python Myth: Python's only problem is the whitespace thing.
Reality: Python's only problem is that it is not Common Lisp.

How about that?

Dave

Jul 18 '05 #237
On Wed, 08 Oct 2003 23:05:28 GMT, Dave Benjamin <da**@3dex.com> wrote:
Python Myth: Python's only problem is the whitespace thing.
Reality: Python's only problem is that it is not Common Lisp.

How about that?


A good one... :)

Edi.
Jul 18 '05 #238
"Andrew Dalke" <ad****@mindspring.com> writes:
Marco Antoniotti:
Come on. Haskell has a nice type system. Python is an application of
Greenspun's Tenth Rule of programming.


Oh? Where's the "bug-ridden" part? :)

That assertion also ignores the influence of user studies (yes,
research into how the syntax of a language affects its readabilty
and understandability) on Python's development; a topic which
is for the most part ignored in Lisp.


For some reason people who don't like fully parenthesized polish
notation seem to think that Lisp hackers don't know any better,
haven't seen anything else, and that if we were only shown the light,
we'd switch in an instant.

Allow me to point out that `standard algebraic notation' has been
available in Lisp for 40 years. McCarthy designed M-expressions for
Lisp in 1962 (if not earlier) and the Lisp 1.5 manual is written using
them.

Vaughn Pratt's CGOL (which I mentioned before) was written in 1977.

Dylan adopted an algebraic syntax sometime in the late 80's.

In every one of these cases, algebraic syntax never quite caught on.

So either the syntax doesn't make a whole hell of a lot of difference
in readability, or readability doesn't make a whole hell of a lot of
difference in utility.

Jul 18 '05 #239
In article <h6**********************@news1.tin.it>, Alex Martelli
<al***@aleax.it> writes
......
So, I hope the cultural difference is sharply clear. To us, consensus
is culturally important, we are keen to ensure we all keep using the
same language; *you* would be happier if you could use a language that
is different from those of others, thanks to syntax extensions you
write yourself. Since I consider programming to be mainly a group
activity, and the ability to flow smoothly between several groups to
be quite an important one, I'm hardly likely to appreciate the divergence
in dialects encouraged by such possibilities, am I?
.......
I'm fairly sure I agree with the central point. Average programming
should be understandable by a large population, i.e. consensus.

However, I don't think that the comprehensibility argument is reasonable
against the far corners of a language.

I know that that that that that boy said is OK. There are deep corners
even in English.

Perhaps a truly expressive programming language should be allowed to
express truths about itself.

This statement is false. This language is inconsistent. Let's do Geo.
Bool and have no far corners. No sir!
Alex


--
Robin Becker
Jul 18 '05 #240
"Andrew Dalke" <ad****@mindspring.com> writes:
In general, there are very few methodological studies on the
effectiveness of different languages on general purpose
programming problems when used by sets of people with
roughly comparable experience. The only other one I have
is Prechelt's "An empirical comparison ..."
http://www.ipd.uka.de/~prechelt/Biblio/
with the followup of Java v. Lisp at
http://www.flownet.com/gat/papers/lisp-java.pdf
That data also suggests that Tcl/Perl/Python/Lisp development time
is comparable.


I'd be interested in a comparison of maintenance time. Perl feels
like a write-only language.
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Jul 18 '05 #241


Pascal Bourguignon wrote:
"Andrew Dalke" <ad****@mindspring.com> writes:
And here's Table 31-2

Sorted by statement per function point:

Statements per
Language Level Function Point
-------- ----- --------------
spreadsheets ~50 6


OK, this where Cells would fall, they are spreadsheets for objects.

:)

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Jul 18 '05 #242
Erann Gat wrote:
In article <bm*********@newsreader2.netcologne.de>, Pascal Costanza
<co******@web.de> wrote:

Christopher C. Stacy wrote:

JAVA has multi-methods with non-congruent arglists.
(It doesn't have multiple inheritance, but that doesn't
matter to dynamic type dispatching, except maybe in how
you implement searching your handler tables.) In JAVA,
the correspondence of the "signature" of the function call
and the method definition is what's important;
and there are no restricting "generic functions".

public class RandomAccessFile extends Object implements DataOutput {
    public void write(byte[] b) throws IOException;
    public void write(byte[] b, int off, int len) throws IOException;
    ...
}


Maybe this is just a terminological issue, but in my book these are not
multi-methods. In Java, methods with the same name but different
signatures are selected at compile time,

When the selection is done is immaterial. The point is that it's done at
all. There's no reason why the same algorithm that is used to select
methods at compile time can't also be used to select methods at run time.
The claim that congruent argument lists are necessary for multi-method
dispatch is clearly false.

and this is rather like having
the parameter types as part of the method name.

No. It's the "system" keeping track of the types, not the user. That's
the key. The fact that C++ and Java just happen to do it at compile time
rather than at run time is a red herring.

This can lead to subtle bugs. Here is an example in Java:

These "subtle bugs" are a reflection of the limitation of compile-time
type inference in Java, not a limitation of multi-method dispatch with
non-congruent argument lists.


OK, you're right. It seems to me that the common terminology uses
"multiple dispatch" for method selection at runtime and "method
overloading" for method selection at compile time. (for example, see
http://www.wikipedia.org/wiki/Multiple_dispatch )

However, maybe it would be better to instead talk about static and
dynamic dispatch, which also reflects the analogy to static and dynamic
typing.

According to this proposed terminology, Java would be a language that
uses single dynamic dispatch and multiple static dispatch. C++ would be
a language that generally uses multiple static dispatch but allows for
single dynamic dispatch. Smalltalk uses single dynamic dispatch and no
multiple dispatch whatsoever.

Sounds good. ;)
Pascal

Jul 18 '05 #243
Mike Rovner wrote:
>>> print mylist.map({ |x| return x + 2 }, range(5))
0, 2, 4, 6, 8
Duh... sorry, that should read:
print range(5).map({ |x| return x + 2 })


In either case it will be [2, 3, 4, 5, 6] :)


Yeah yeah yeah, I really should read what I write before I post. But you
know what I mean, damnit! =)
Instead of lambda use list comprehensions:

print [x+2 for x in range(5)]

Unnamed code blocks considered evil :), use named instead (functions).
Why are they evil? Does being anonymous automatically make you evil?

For instance, I always thought this was a cooler alternative to the
try/finally block to ensure that a file gets closed (I'll try not to
mess up this time... ;) :

open('input.txt', { |f|
    do_something_with(f)
    do_something_else_with(f)
})

Rather than:

f = open('input.txt')
try:
    do_something_with(f)
    do_something_else_with(f)
finally:
    f.close()

Now, I suppose you could always do:

def with_open_file(filename, func):
    f = open(filename)
    try:
        func(f)
    finally:
        f.close()

# ...

def thing_doer(f):
    do_something_with(f)
    do_something_else_with(f)

with_open_file('input.txt', thing_doer)

But the anonymous version still looks more concise to me.
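As an aside, the named-callback pattern composes with nested scopes: the callback can simply close over surrounding locals, which is most of what an anonymous block would buy. A runnable sketch reusing the with_open_file helper above (the file path and contents are made up for illustration):

```python
import os
import tempfile

def with_open_file(filename, func):
    # Call func with the open file, guaranteeing the file is closed.
    f = open(filename)
    try:
        func(f)
    finally:
        f.close()

# Hypothetical input file, created just so the sketch runs.
path = os.path.join(tempfile.mkdtemp(), 'input.txt')
open(path, 'w').write('hello\nworld\n')

lines = []
def collect(f):
    # 'lines' is captured from the enclosing scope -- no anonymous
    # block syntax required to share state with the caller.
    for line in f:
        lines.append(line.strip())

with_open_file(path, collect)
print(lines)
```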
With nested scopes you can do amazing things. I myself was used to code
blocks due to my Perl background, but Python idioms are no worse, to say
the least.
For example:

class C(object):
    ...
    def aprop():
        def reader(self): return 0
        def writer(self, newval): pass
        return reader, writer
    aprop = property(aprop())


Yeah, wasn't something like that up on ASPN? That's an interesting
trick... are you sure it's not supposed to be "property(*aprop())"
though? (who's being pedantic now? =)
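For what it's worth, the "*" correction is right: property() takes fget and fset as separate positional arguments, so the (reader, writer) tuple has to be unpacked. A slightly fleshed-out sketch (the `_val` attribute is my own addition so the property actually stores something):

```python
class C(object):
    def aprop():
        def reader(self):
            # Default to 0 until the setter has been used.
            return getattr(self, '_val', 0)
        def writer(self, newval):
            self._val = newval
        # property(fget, fset) wants two separate arguments,
        # hence the unpacking at the call site below.
        return reader, writer
    aprop = property(*aprop())

c = C()
print(c.aprop)   # the default
c.aprop = 3
print(c.aprop)
```

Without the `*`, property() would receive the whole tuple as its fget argument, and reading the attribute would fail because a tuple is not callable.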

Dave

Jul 18 '05 #244
Pascal Bourguignon <sp**@thalassa.informatimago.com> writes:
Alexander Schmolck <a.********@gmx.net> writes:
I was criticising the graphical aspect on a discrimination stand-point.

For example, the difference between the latin glyphs for "ka" and "ga"
is bigger than that between the corresponding kana glyphs.


Hear, hear: a lisper arguing for trading off simplicity, extensibility[1] and
regularity for discriminability :)

Maybe someone more proficient in Japanese might want to correct me, but I
really suspect it isn't worth it, actually. I don't think the heightened
discriminability is needed that much and in fact would often even be
detrimental, because a certain semantic compound often changes from
unvoiced to voiced (or from "big" to "little" TSU) in a fairly regular manner
in compound words (e.g. TETSU
(iron); 鉄;てつ and HAN 板;はん (plate, inter alia) ->
TEPPAN, 鉄板; てっぱん 'iron plate/teppan cooking';
there is no TETSUHAN to confuse it with, AFAIK, so the fact that the
similarity between compound and parts is retained in the kana (unlike the
romaji transliteration) is likely to be rather desirable).

'as

[hmm, never tried embedding japanese characters in usenet posting, hope it
works :|]

[1] yep, I mean it: katakana have actually been straightforwardly extended
relatively recently to provide better transliterations for (mainly
English) loan words.
Jul 18 '05 #245


Edi Weitz wrote:
On Wed, 08 Oct 2003 23:05:28 GMT, Dave Benjamin <da**@3dex.com> wrote:

Python Myth: Python's only problem is the whitespace thing.
Reality: Python's only problem is that it is not Common Lisp.

How about that?

A good one... :)


I think Python's problem is its success. Whenever something is
successful, the first thing people want is more features. Hell, that is
how you know it is a success. The BDFL still talks about simplicity, but
that is history. GvR, IMHO, should have chased wish-listers away with "use
Lisp" and kept his gem small and simple.
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Jul 18 '05 #246
Doug Tolton:
Yes, and I have repeatedly stated that I disagree with it. I simply do
not buy that allowing expressiveness via high level constructs detracts
from the effectiveness of the group. That argument is plainly
ridiculous; if it were true then Python would be worse than Java,
because Python is *far* more expressive.
I disagree with your summary. Compare:

The argument is that expressive power for a single developer can, for
a group of developers and especially those comprised of people with
different skill sets and mixed expertise, reduce the overall effectiveness
of the group.

Notice the "can". Now your summary is:

...allowing expressiveness via high level constructs detracts
from the effectiveness of the group

That implies that at least I assert that *all* high level constructs
detract from group effectiveness, when clearly I am not saying
that.

If this is indeed the crux, then any justification which says "my brain"
and "I" is suspect, because that explicitly ignores the argument.

Apparently you can't read very well. I simply stated that I believe our
point of contention to be that issue, I never stated I believe that
because it's some vague theory inside my head.


Nor can you, because I did not say that. I said that the arguments you
use to justify your assertions could be stronger if you were to include
cases in your history and experience which show that you understand
the impacts of a language feature on both improving and detracting from
a group effort. Since you do have that experience, bring it up. But
since your arguments are usually along the lines of "taking tools out
of your hands", they carry less weight for this topic.

(Ambiguity clarification: "your hands" is meant as 2nd person singular
possessive and not 2nd person plural. :)
No, what I was referring to wasn't estimation. Rather I was referring
to the study that found that programmers on average write the same
number of lines of code per year regardless of the language they write
in.
McConnell's book has the same study, with outliers for assembly
and APL. Indeed, I mentioned this in my reply:
... and that LOC has a
good correlation to development time, excluding extremes like APL
and assembly.

Therefore the only way to increase productivity is to write
software in a language that uses less lines to accomplish something
productive. See Paul Grahams site for a discussion.
I assume you refer to "Succinctness is Power" at
http://www.paulgraham.com/power.html

It does not make as strong a case as you state here. It argues
that "succintness == power" but doesn't make any statement
about how much more succinct Lisp is over Python. He doesn't
like Paul Prescod's statement, but there's nothing to say that
Python can't be both easier to read and more succinct. (I am
not making that claim, only pointing out that that essay is pure
commentary.)

Note also that it says nothing about group productivity.
If it takes me 5% longer to write a program in language X
then language Y, but where I can more easily use code and
libraries developed by others then it might be a good choice
for me to use a slightly less succinct language.

Why don't people use APL/J/K with its succinctness?

I also disagree with Graham's statement: the most accurate measure of the relative power of
programming languages might be the percentage of
people who know the language who will take any job
where they get to use that language, regardless of the
application domain.
I develop software for computational life sciences. I would
do so in Perl, C++, Java, even Javascript because I find
the domain to be very interesting. I would need to be very
low on money to work in, say, accounting software, even if
I had the choice of using Python.

You are saying that Python and Perl are similarly compact?!?
You have got to be kidding right?
Perl is *far* more compact than Python is. That is just ludicrous.
Yes. In this I have a large body of expertise by which to compare
things. Perl dominates bioinformatics software development, and the
equivalent Python code is quite comparable in size -- I argue that
Python is easier to understand, but it's still about the same size.
It's always nice just to chuck some arbitrary table into the
conversation which conveniently backs some point you were trying to
make, and also conveniently can't be located for anyone to check the
methodology.
"Can't be located"!?!?! I gave a full reference to the secondary material,
included the full quote (with no trimming to bias the table more my way),
gave the context to describe the headings, and gave you a reference
to the primary source! And I made every reasonable effort to find both
sources online.

Since you can't be suggesting that I tracked down and destroyed
every copy of McConnell's book and of the primary literature (to make
it truly unlocatable) then what's your real complaint? That things exist
in the world which aren't accessible via the web? And how is that my
fault?
If you want some real world numbers on program length check here:
http://www.bagley.org/~doug/shootout/
If I want some real world numbers on program length, I do it myself:
http://pleac.sourceforge.net/
I wrote most of the Python code there

Still, since you insist, I went to the scorecard page and changed
the weights to give LOC a multipler of 1 and the others a multiplier
of 0. This is your definition of succinctness, yes? This table
is sorted (I think) by least LOC to most.

SCORES
Language       Implementation   Score   Missing
Ocaml          ocaml              584      0
Ocaml          ocamlb             584      0
Ruby           ruby               582      0
Scheme         guile              578      0
Python         python             559      0
Pike           pike               556      0
Perl           perl               556      0
Common Lisp    cmucl              514      0
Scheme         bigloo             506      1
Lua            lua                492      2
Tcl            tcl                478      3
Java           java               468      0
Awk            mawk               457      6
Awk            gawk               457      6
Forth          gforth             449      2
Icon           icon               437      7
C++            g++                435      0
Lisp           rep                427      3
Haskell        ghc                413      5
Javascript     njs                396      5
Erlang         erlang             369      8
PHP            php                347      9
Emacs Lisp     xemacs             331      9
C              gcc                315      0
SML            mlton              284      0
Mercury        mercury            273      8
Bash           bash               264     14
Forth          bigforth           264     10
SML            smlnj              256      0
Eiffel         se                 193      4
Scheme         stalin             131     17

So:
- Why aren't you using Ocaml?
- Why is Scheme at the top *and* bottom of the list?
- Python is right up there with the Lisp/Scheme languages
- ... and with Perl.

Isn't that conclusion in contradiction to your statements
that 1) "Perl is *far* more compact than Python is" and 2)
the implicit one that Lisp is significantly more succinct than
Python? (As you say, these are small projects .. but you did
point out this site so implied it had some relevance.)
I just don't buy these numbers or the chart from McConnell on faith. I
would have to see his methodology, and understand what his motivation in
conducting the test was.
I invite you to dig up the original paper (which wasn't McConnell)
and enlighten us. Until then, I am as free to agree with McConnell --
more so because his book is quite good and comprehensive with
sound arguments comparing and contrasting the different
approaches and with no strong hidden agenda that I can detect.
It still wasn't relevant to Macros. However, because neither of you
understand Macros, you of course think it is relevant.


My lack of knowledge not withstanding, the question I pose to
you is, in three parts:
- is it possible for a language feature to make a single programmer
more expressive/powerful while hindering group projects?
- can you list three examples of situations where that's occured?
- can you list one example where the increased flexibility was, in
general, a bad idea? That is, was there a language which would
have been better without a language feature.

Note that I did not at all make reference to macros. Your statements
to date suggest that your answer to the first is "no."

Andrew
da***@dalkescientific.com
Jul 18 '05 #247
Dave Benjamin <da**@3dex.com> wrote previously:
|return { |x, y|
| print x
| print y
|}
|It's unambiguous because no dictionary literal would ever start with
|'{|', it looks almost identical to a certain other language <g>

Btw. I think Dave is thinking of Ruby as that "certain other language."
But Clipper/xBase used the same syntax for the same thing before Ruby
was a glimmer in Matz' eye. I'm not sure if that's where he got it
though... it might be from somewhere older I don't know about.

Yours, Lulu...

--
---[ to our friends at TLAs (spread the word) ]--------------------------
Echelon North Korea Nazi cracking spy smuggle Columbia fissionable Stego
White Water strategic Clinton Delta Force militia TEMPEST Libya Mossad
---[ Postmodern Enterprises <me***@gnosis.cx> ]--------------------------
Jul 18 '05 #248
pr***********@comcast.net:
So either the syntax doesn't make a whole hell of a lot of difference
in readability, or readability doesn't make a whole hell of a lot of
difference in utility.


Or the people who prefer the awesome power that is Lisp and
Scheme don't find the limited syntax to be a problem.

Andrew
da***@dalkescientific.com
Jul 18 '05 #249
Andrew Dalke wrote:
Doug Tolton:

Therefore the only way to increase productivity is to write
software in a language that uses less lines to accomplish something
productive. See Paul Grahams site for a discussion.

I assume you refer to "Succinctness is Power" at
http://www.paulgraham.com/power.html

It does not make as strong a case as you state here. It argues
that "succintness == power" but doesn't make any statement
about how much more succinct Lisp is over Python.


He provides more information at http://www.paulgraham.com/icad.html
Pascal

Jul 18 '05 #250
