Bytes | Developer Community
Python syntax in Lisp and Scheme

I think everyone who used Python will agree that its syntax is
the best thing going for it. It is very readable and easy
for everyone to learn. But, Python does not have very good
macro capabilities, unfortunately. I'd like to know if it may
be possible to add a powerful macro system to Python, while
keeping its amazing syntax, and if it could be possible to
add Pythonistic syntax to Lisp or Scheme, while keeping all
of the functionality and convenience. If the answer is yes,
would many Python programmers switch to Lisp or Scheme if
they were offered indentation-based syntax?
Jul 18 '05
699 posts | 28961 views
On Thu, 09 Oct 2003 23:15:17 GMT, "Andrew Dalke" <ad****@mindspring.com> wrote:
My primary machine is Mac OS X. I always got frustrated getting
fonts, sound, and movies working under the various Linux-based
distributions, and I suspect there are the same problems with
BSD-based distributions.
Although I've been using Linux and/or FreeBSD since about 1999 I
understand what you mean. It's still much too hard to set up a machine
if you just want to get your work done and don't like fiddling with
the OS internals.

But there are a couple of free Common Lisps available for Mac OS X -
OpenMCL, CLISP, SBCL, ECL, maybe more. (Plus the trial versions of
MCL, AllegroCL and LispWorks the latter of which has a fantastic
cross-platform GUI toolkit.) What's missing is someone who integrates
these Lisps and the available packages with something like Fink to
make OS X as convenient as Debian as far as Lisp is concerned. I think
this wouldn't be too hard - AFAIK Fink uses the same package format -
but somebody's gotta do the work. (If someone's willing to donate an
iBook or PowerBook to me I'll do it... :)

Just to show you that the situation isn't as bleak as you might think
here's the list of Debian packages maintained by Kevin Rosenberg:

<http://qa.debian.org/developer.php/developer.php?gpg_key=C4A3823E>

That's more than 110 and almost all of them are for Common Lisp. Plus,
there are Common Lisp packages maintained by other people, of
course. Still far less than what can be found on CPAN but a very good
base to start with. (And who needs Lingua::Romana::Perligata anyway?
Pardon me that all my examples are for Perl but I'm much more familiar
with Perl than with Python.)
No one has told me they would hire me for contract work "if only you
were a Lisp programmer."
No one has told me either. But I've had a lot of contract work in
Common Lisp since I started with it in 2000. If I were always waiting
for someone to tell me which language to use I'd probably be stuck
with Java, PHP, and Visual Basic forever. Yuck!
3. Run around complaining that you can't use Lisp because a certain
combination of features is not available for free. We have far too
many of these guys on c.l.l.


Technically I'm cross-posting from c.l.py. And I actually complain
for other reasons. ;)


Yes, I wasn't talking about you in particular.
So far I've made ... 4(?) half-hearted attempts at learning Lisp.
Next time try it wholeheartedly... :)
*sigh*. Another package that doesn't (err, won't) work on my Mac.


Same answer as above. The CD is based on SBCL so it should be possible
to do something similar for OS X (should even be easier in theory
because you don't need the Knoppix stuff for HW detection). It's just
that someone has to do it...

Cheers,
Edi.
Jul 18 '05 #401
On Thu, 09 Oct 2003 23:31:46 GMT, "Andrew Dalke" <ad****@mindspring.com> wrote:
Conjecture: Is it that the commercial versions of Lisp pull away
some of the people who would otherwise help raise the default
functionality of the free version? I don't think so... but then
why?


I'm pretty sure this is the case. If you lurk in c.l.l for a while
you'll see that a large part of the regular contributors aren't what
you'd call Open Source zealots. Maybe that's for historical reasons, I
don't know. But of course that's different from languages which have
always been "free" like Perl or Python.

To give one example: One of the oldest dynamic HTTP servers out there
is CL-HTTP.[1] I think it is very impressive but it has a somewhat
dubious license which doesn't allow for commercial deployment - it's
not "free." Maybe, I dunno, it would have a much higher market share
if it had been licensed like Apache when it was released. I'm sure
there's much more high-quality Lisp software out there that hasn't
even been released.

Edi.

[1] <http://www.ai.mit.edu/projects/iiip/doc/cl-http/home-page.html>

Don't be fooled by the "Last updated" line. There's still active
development - see

<ftp://ftp.ai.mit.edu/pub/users/jcma/cl-http/devo>.
Jul 18 '05 #402
On Fri, 10 Oct 2003 00:34:10 -0400, Lulu of the Lotus-Eaters wrote:
Kenny Tilton <kt*****@nyc.rr.com> wrote previously:
|> It's only been out, what, twenty years? And another twenty before that
|> for other lisps... How much time do you think you need?

|Hey, we've been dead for thirty years, give us a break.
|The bad news for other languages is that the evolution of programming
|languages, like baseball, is a game played without a clock.

I would think Lisp is more like cricket: wickets bracket both ends, no
one can actually understand the rules, but at least the players wear
white.


Oh, come on! Anyone can understand cricket! There are two teams.
The team that's in sits out, except for two batsmen, and the other
team come out and try to get the men that are in out. When a man goes
out, he goes in and another man comes out. When the team that's in
are all out, except for the one who's not out, the other team goes in,
until they're all out, too; and then a second innings is played.
That's more or less all there is to it!

--
Cogito ergo I'm right and you're wrong. -- Blair Houghton

(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))
Jul 18 '05 #403
In article <bm************@ID-169208.news.uni-berlin.de>, Greg Ewing (using news.cis.dfn.de) wrote:
That's true, although you don't really need macros for that,
just something like Smalltalk-style code blocks -- if anyone
can come up with a way of grafting them into the Python
syntax...


Well, you'll have to do a better job than I did, because my proposal made
a distinct "thud" when I dropped it here. ;)

Dave

--
..:[ dave benjamin (ramenboy) -:- www.ramenfest.com -:- www.3dex.com ]:.
: d r i n k i n g l i f e o u t o f t h e c o n t a i n e r :
Jul 18 '05 #404
On Friday 10 October 2003 05:52 am, Daniel Berlin wrote:
On Oct 9, 2003, at 5:33 PM, Alex Martelli wrote:
Rainer Deyke wrote:
Pascal Costanza wrote:
Pick the one Common Lisp implementation that provides the stuff you
need. If no Common Lisp implementation provides all the stuff you
need, write your own libraries or pick a different language. It's as
simple as that.

Coming from a C/C++ background, I'm surprised by this attitude. Is
portability of code across different language implementations not a
priority for LISP programmers?
Libraries distributed as binaries are not portable across different C++
implementations on the same machine (as a rule).


This isn't true anymore (i.e., for newer compilers).


Wow, that IS great news! Does it apply to 32-bit Intel-oid machines (the
most widespread architecture) and the newest releases of MS VC++ (7.1)
and gcc, the most widespread compilers for it? I can't find any docs on what
switches or whatever I should give the two compilers to get seamless interop.

Specifically, the standard Python on Windows has long been built with MSVC++
and this has given problems to C-coded extension writers who don't own that
product -- it IS possible to use other compilers to build the extensions, but
only with much pain and some limitations (e.g on FILE* arguments). If this
has now gone away there would be much rejoicing -- with proper docs on the
Python side of things and/or use of whatever switches are needed to enable
this, if any, when we do the standard Python build on Windows.

Mangling, exception handling, etc, is all covered by the ABI.

IBM's XLC 6.0 for OSX also follows this C++ ABI, and is thus compatible
with G++ 3.x on OSX.


I'm not very familiar with Python on the Mac but I think it uses another
commercial compiler (perhaps Metrowerks?), so I suspect the same question
may apply here. It's not as crucial on other architectures where Python is
more normally built with free compilers, but it sure WOULD still be nice to
think of possible use of costly commercial compilers with hypothetically
great optimizations for the distribution of some "hotspot" object files, if
that sped the interpreter up without giving any interoperability problems.
Alex
Jul 18 '05 #405
james anderson:
i'm trying to understand how best to approach unicode representations.
i am told the pep-0261 is the standard for python.
PEP 261 is the standard for the 4 byte wide implementation. It was
implemented after 2 byte Unicode which was documented after-the-fact in PEP
100.
it was not clear what mechanism it entails for access to os-level text
management facilities on the order of osx's "apple type services for unicode imaging"[0].
ATSUI is a text rendering library. Core Python doesn't include
text-rendering, leaving this up to GUI toolkits. Python does ship with Tk,
which has Unicode text support.
i looked through the mac extensions, but did not discern anything
relevant. can anyone point me to code in wide and narrow builds which uses
such os-level facilities. i was given a reference which appeared to concern
windows' file names, but that, as is the case with direct stream codecs, is
primarily a static situation.
Static as opposed to what? A fixed API that is explicitly wrapped versus
a dynamically wrapped system call convention as is done on Windows by
PythonCOM or ctypes?
i would also be interested to hear if there have been any data collected on preponderance of wide builds, and on the consequences in those installations for storage and algorithm efficiency.


Red Hat Linux 9.0 ships with a 4 byte wide build of Python and that is
quite widely distributed. On Windows, I would expect 4 byte to be very rare
as 2 byte matches the system conventions and the binary downloads available
from python.org are 2 byte builds.
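(For anyone wanting to check which kind of build they're running, `sys.maxunicode` is the standard tell. A minimal sketch; note that from Python 3.3 on the narrow/wide split is gone and every build reports the wide value.)

```python
import sys

# Narrow (2-byte) builds report 0xFFFF; wide (4-byte) builds report
# 0x10FFFF. Python 3.3+ abolished the distinction and always reports
# the wide value.
if sys.maxunicode == 0xFFFF:
    print("narrow (2-byte) Unicode build")
else:
    print("wide (4-byte) Unicode build")
```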

Neil
Jul 18 '05 #406
Kenny Tilton <kt*****@nyc.rr.com> writes:
Paul Rubin wrote:
Kenny Tilton <kt*****@nyc.rr.com> writes:
I think Python's problem is its success. Whenever something is
successful, the first thing people want is more features. Hell, that is
how you know it is a success. The BDFL still talks about simplicity,
but that is history. GvR, IMHO, should have chased wish-listers away with
"use Lisp" and kept his gem small and simple.

That's silly. Something being successful means people want to use it
to get things done in the real world. At that point they start
needing the tools that other languages provide for dealing with the
real world. The real world is not a small and simple place, and small
simple systems are not always enough to cope with it. If GVR had kept
his gem small and simple, it would have remained an academic toy, and
I think he had wider-reaching ambitions than that.


I agree with everything you said except that last bit, and I only
disagree with that because of what I have heard from Pythonistas, so
maybe I missed something. I did not think Python (or GVR or both) had
aspirations of being a full-blown language vs just being a powerful
scripting language.

Do they ever plan to do a compiler for it?


Python always compiles to byte-code, saved in a ".pyc" file, apart
(possibly) from the main file. Things that get "imported" will be
compiled and saved out and re-compiled if the source-file is newer than
the dumped compiled code.
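That byte-compile-and-cache step can be triggered by hand with the standard `py_compile` module. A minimal sketch (the throwaway module name is made up for illustration):

```python
import os
import py_compile
import tempfile

# Write a throwaway module, compile it explicitly, and confirm a
# byte-code file landed on disk -- the same work 'import' does lazily.
src = os.path.join(tempfile.mkdtemp(), "demo.py")
with open(src, "w") as f:
    f.write("ANSWER = 42\n")

pyc = py_compile.compile(src)  # returns the path of the byte-code file
print(os.path.exists(pyc))     # True
```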

//Ingvar
--
(defun p(i d)(cond((not i)(terpri))((car i)(let((l(cadr i))(d(nthcdr(car i)d
)))(princ(elt(string(car d))l))(p(cddr i)d)))(t(princ #\space)(p(cdr i)d))))
(p'(76 2 1 3 1 4 1 6()0 5()16 10 0 7 0 8 0 9()2 6 0 0 12 4 23 4 1 4 8 8)(sort
(loop for x being the external-symbols in :cl collect (string x)) #'string<))
Jul 18 '05 #407
Andrew Dalke wrote:
Pascal Costanza:
Furthermore note that this is not an implementation difference. The ANSI
standard defines the function COMPILE.

Is the implementation free to *not* compile the code when the
COMPILE function is called? That is, to leave it as is? How
would a user tell the difference without running a timing test?


...by looking at the documentation. ;)
Seriously, ANSI CL defines "minimal compilation". See
http://www.lispworks.com/reference/H...ody/03_bbb.htm

Most CL implementations do more than that.
Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #408
Andrew Dalke wrote:
Pascal Costanza:
Furthermore note that this is not an implementation difference. The ANSI
standard defines the function COMPILE.

Is the implementation free to *not* compile the code when the
COMPILE function is called? That is, to leave it as is? How
would a user tell the difference without running a timing test?


In CL, COMPILE (and COMPILE-FILE) are not required by the CL spec
to convert lisp code to machine language. Many implementations do,
however, and it tends to be one of the major defining characteristics
of a CL implementation.

A conforming implementation must perform certain minimal steps
at compilation time. See section 3.2.2.2 of the CL spec. It states:

Minimal compilation is defined as follows:

* All compiler macro calls appearing in the source code being
compiled are expanded, if at all, at compile time; they will
not be expanded at run time.

* All macro and symbol macro calls appearing in the source code
being compiled are expanded at compile time in such a way that
they will not be expanded again at run time. macrolet and
symbol-macrolet are effectively replaced by forms corresponding
to their bodies in which calls to macros are replaced by their expansions.

* The first argument in a load-time-value form in source code
processed by compile is evaluated at compile time; in source code
processed by compile-file, the compiler arranges for it to be
evaluated at load time. In either case, the result of the evaluation
is remembered and used later as the value of the load-time-value
form at execution time.

Other than this, a conforming implementation is allowed to leave the
source code alone, compile it to byte codes, compile it to machine
language, or any other correctness-preserving transformation.

It would be conforming to do the minimal compilation, produce byte codes, then
dynamically convert the byte codes to machine language at run time as in Psyco.
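For comparison, CPython sits at one fixed point on that spectrum: source is always reduced to byte codes (a code object), never machine language, and the `compile` built-in exposes exactly that step. A minimal sketch:

```python
import dis

# compile() performs CPython's one and only compilation step:
# source text -> code object holding byte codes.
code = compile("x + 1", "<demo>", "eval")
print(eval(code, {"x": 41}))  # 42
dis.dis(code)                 # prints the byte-code instructions
```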

Paul

Jul 18 '05 #409
"Andrew Dalke" <ad****@mindspring.com> writes:
Björn Lindberg:
Apart from the usual problems with micro benchmarks, there are a few
things to consider regarding the LOC counts on that site:
I wasn't the one who pointed out those micro benchmarks. Kenny
Tilton pushed the idea that more concise code is better and that Lisp
gives the most concise code, and that Perl is much more compact
than Python. He suggested I look at some comparisons, so I followed
his suggestion and found that 1) the Lisp code there was not more
succinct than Python and 2) neither was the Perl code.
* Declarations. Common Lisp gives the programmer the ability to
optimize a program by adding declarations to it.


While OCaml, which has the smallest size, does type inferencing....


The good Lisp compilers do type inferencing too AFAIK, but since Lisp
is fully dynamic it is not always possible for the compiler to do full
type inference at compile time. Declarations help the compiler in this
respect.

<snip>
* In many languages, any program can be written on a single
line. This goes for Lisp, but also for C and other languages.


Absolutely correct. Both Alex Martelli and I tried to dissuade
Kenny Tilton that LOC was the best measure of succinctness and
appropriateness, and he objected.


I think in general there *is* a correlation between LOC and
succinctness, eg LOC(assembler) > LOC(C) > LOC(awk). It is probably
not a very strong correlation though, and it would probably be more
accurate for larger programs than small code snippets.
* I don't think the LOC-saving qualities of Lisp are done justice in
micro benchmarks. The reason Lisp code is so much shorter than the
equivalent code in other languages is because of the abstractive
powers of Lisp, which means that the difference will be more
visible the larger the program is.


Agreed. I pointed out elsewhere that there has been no systematic
study to show that Lisp code is indeed "so much shorter than the
equivalent code in other languages" where "other languages" include
Python, Perl, or Ruby.


It would be interesting to see such studies made.
The closest is
http://www.ipd.uka.de/~prechelt/Biblio/
where the example program, which was non-trivial in size
took about 100LOC in Tcl/Rexx/python/perl and about 250LOC
in Java/C/C++.
That is an interesting study, although there are some possible flaws
(eg no controlled selection of participants). The programming problem
in that study is far too small to meaningfully test any abstraction
capabilities in the language on the level of macros, OO or HOF
though.

<snip>
In any case, it implies you need to get to some serious sized
programs (1000 LOC? 10000LOC? A million?) before
the advantages of Lisp appear to be significant.


I think that goes for any advantages due to abstraction capabilities
of macros, OO or HOF. The small program in the study above seems to
capture the scripting languages' higher level compared to the
close-to-the-machine languages C & C++. (I have not read all of it
though.) To show advantages of the abstraction facilities we have been
discussing in this thread, I believe much larger programs are needed.
Björn
Jul 18 '05 #410
Pascal Bourguignon wrote:
...
The question being whether it's better to be needing several different
languages to solve a set of problems because none of the languages is
powerful enough, or if it's better to have one good and powerful
language that helps you solve all your problems?


A reasonably good way to highlight the key difference between the "horses
for courses" and "one ring to bind them all" schools of thought.

Would I rather have just one means of transportation "powerful enough"
to help me solve all my "going from A to B" problems? Nope! I want a
bicycle for going short and middle distances on sunny days, a seat on
a reasonably fast jet plane for much longer trips, and several things
in-between. A single ``thing'' able to cater for such hugely disparate
needs would be way too complicated to be an optimal solution to any
single one of them.

Would I rather have just one woodworking tool "powerful enough" to help
me solve all my "working wood" problems? No way! A suitably large
and complicated "Swiss Army Knife" may be handy in emergencies, but in
my little woodworking shop I want several separate, optimized tools, each
optimized for its own range of tasks. A single "multi-blade" tool able
to cater for all of the disparate needs that arise in working wood would
be way too complicated and unwieldy to be an optimal solution to any
single one of them.

Do I want a single writing tool, or separate pencils, pens, markers,
highlighters...? Do I want a single notation to write down ANYthing
on paper, or separate ones for music, algebraic formulas, shopping
lists, accounting, ...? Do I want a single font to be used for ANY
writing, from books to billboards to shops' signs to handwriting...?

More generally, are there ANY situations in which "one size fits all"
is a GOOD, OPTIMAL solution, rather than a make-do approach? Maybe
some can be found, but it seems to me that in most cases a range of
tools / solutions / approaches tailored to different classes of
problems may well be preferable. So, it should come as no surprise
that I think this applies to computer languages. In fact I am
sometimes amazed at how wide a range of problems I can solve quite
well with Python -- but I still want C for e.g. device drivers,
spreadsheets to "program" simple what-if scenarios and play around
interactively with parameters, make (or one of its successors, such
as SCons), bash for interactively typed one-liners, HTML / SGML etc
for documents, XML for data interchange with alien apps, SQL to
access relational databases, etc, etc -- and no doubt more besides.

See http://www.strakt.com/sol_capsao_7.html for example -- not
very detailed, but a generic description of a specialized declarative
language for Entity - Relationship descriptions, with embedded
actions in procedural languages [currently, Python only], that we're
developing as part of the CAPS framework (no macros were used in the
production of that language, just traditional boring parsers &c in
Python -- of course, it IS quite possible that _the specialized
language itself_ might benefit from having macros, that's a separate
issue from the one of _implementing_ that "BLM language").
Alex

Jul 18 '05 #411
David Eppstein wrote:
In article <bm**********@newsreader2.netcologne.de>,
Pascal Costanza <co******@web.de> wrote:

It's probably just because the Common Lisp community is still relatively
small at the moment. But this situation has already started to improve a
lot.

It's only been out, what, twenty years? And another twenty before that
for other lisps... How much time do you think you need?


AFAIK, Lisp was very popular in the 70's and 80's, but not so in the
90's. At the moment, Common Lisp is attracting a new generation of
programmers.

The basic idea of Lisp (programs = data) was developed in the 50's (see
http://www.paulgraham.com/rootsoflisp.html ). This idea is fundamentally
different and much more powerful than the approach taken by almost all
other languages.

You can't argue that. You can argue whether you want that power or not,
but Lisp is definitely more powerful than other languages in this
regard. As Eric Raymond put it, "Lisp is worth learning for the profound
enlightenment experience you will have when you finally get it; that
experience will make you a better programmer for the rest of your days,
even if you never actually use Lisp itself a lot."

And, as Paul Graham put it, if you take a language and "add that final
increment of power, you can no longer claim to have invented a new
language, but only to have designed a new dialect of Lisp". (see
http://www.paulgraham.com/diff.html )

These are the essential reasons why it is just a matter of time that
Lisp will be reinvented and/or rediscovered again and again, and will
continue to attract new followers. It is a consequential idea once you
have got it.

"What was once thought can never be unthought." - Friedrich Dürrenmatt

Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #412
Andrew Dalke <ad****@mindspring.com> wrote:
+---------------
| (and yes, I know about the lawsuit against disk drive manufacturors
| and their strange definition of "gigabyte"... )
+---------------

Oh, you mean the fact that they use the *STANDARD* international
scientific/engineering notation for powers of 10 instead of the
broken, never-quite-right-except-in-a-few-cases pseudo-binary
powers of 10?!?!? [Hmmm... Guess you can tell which side of *that*
debate I'm on, eh?] The "when I write powers of 10 which are 3*N
just *assume* that I meant powers of 2 which are 10*N" hack simply
fails to work correctly when *some* of the "powers of 10" are *really*
powers of 10. It also fails to work correctly with things that aren't
intrinsically quantized in powers of 2 at all.

Examples: I've had to grab people by the scruff of the neck and push
their faces into the applicable reference texts before they believe me
when I say that gigabit Ethernet really, really *is* 1000000000.0 bits
per second [peak payload, not encoded rate], not 1073741824, and that
64 kb/s DS0 telephone circuits really *are* 64,000.0 bits/sec, not 65536.
[And, yes, 56 kb/s circuits are 56000 bits/sec, not 57344.]

Solution: *Always* use the internationally-recognized binary prefixes
<URL:http://physics.nist.gov/cuu/Units/binary.html> when that's really
what you mean, and leave the old scientific/engineering notation alone,
as pure powers of 10. [Note: The historical notes on that page are well
worth reading.]
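The size of the mismatch is easy to quantify -- a minimal sketch comparing decimal (SI) and binary (IEC) prefixes, where the gap grows from about 2.4% at kilo/kibi to about 7.4% at giga/gibi:

```python
# Each step up the prefix ladder widens the gap between 10**(3n)
# and 2**(10n).
for name, dec_exp, bin_exp in [("kilo/kibi", 3, 10),
                               ("mega/mebi", 6, 20),
                               ("giga/gibi", 9, 30)]:
    ratio = 2 ** bin_exp / 10 ** dec_exp
    print(f"{name}: binary unit is {ratio - 1:+.1%} larger")
```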
-Rob

p.s. If you're hot to file a lawsuit, go after the Infiniband Trade
Association for its repeated claims that 4x IB is "10 Gb/s". It isn't,
it's 8 Gb/s [peak user payload rate, not encoded rate]. Go read the
IBA spec if you don't believe me; it's right there.

-----
Rob Warnock <rp**@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Jul 18 '05 #413
Pascal Costanza wrote:
David Eppstein wrote:

It's only been out, what, twenty years? And another twenty before that
for other lisps... How much time do you think you need?

AFAIK, Lisp was very popular in the 70's and 80's, but not so in the
90's. At the moment, Common Lisp is attracting a new generation of
programmers.


Early lisp usage was driven by government money and the AI bubble.
That declined in the late 1980s or in the 1990s. Individuals could
not afford adequate hardware to run lisp until sometime in the 1990s.

But now, hardware has more than caught up with the demands of
lisp and individuals are carrying it forward. This is something
that drives the newer languages also. The cycle time for
improving the languages or their implementations goes down
as the hardware gets faster.

Paul
Jul 18 '05 #414
"Andrew Dalke" <ad****@mindspring.com> writes:
No one has told me they would hire me for contract work "if only
you were a Lisp programmer."


Next time I'm hiring, I'll be sure to let you know.
Jul 18 '05 #415
Björn Lindberg wrote:
...
Agreed. I pointed out elsewhere that there has been no systematic
study to show that Lisp code is indeed "so much shorter than the
equivalent code in other languages" where "other languages" include
Python, Perl, or Ruby.


It would be interesting to see such studies made.


Absolutely! But funding such studies would seem hard. Unless some
company or group of volunteers had their own reasons to take some
existing large app coded in Lisp/Python/Perl/Ruby, and recode it in
one of the other languages with essentially unchanged functionality,
which doesn't seem all that likely. And if it happened, whatever
group felt disappointed in the results would easily find a zillion
methodological flaws to prove that the results they dislike should
be ignored, nay, reversed.

In practice, such a re-coding would likely involve significant
changes in functionality, making direct comparisons iffy, I fear.

I know (mostly by hearsay) of some C++/Java conversions done
within companies (C++ -> Java for portability, Java -> C++ for
performance) with strong constraints on functionality being "just
the same" between the two versions (and while that's far from
a "scientific result", a curious pattern seems to emerge: going
from C++ to Java seems to produce the same LOC's, apparently a
disappointment for some; going from Java to C++ seems to expand
LOC's by 10%/20%, ditto -- but how's one to say if the C++ code
had properly exploited the full macro-like power of templates,
for example...?). But I don't even have hearsay about any such
efforts between different higher-level languages (nothing beyond
e.g. a paltry few thousand lines of Perl being recoded to Python
and resulting in basically the same LOC's; or PHP->Python similarly,
if PHP can count as such a language, perhaps in a restricted context).

In any case, it implies you need to get to some serious sized
programs (1000 LOC? 10000LOC? A million?) before
the advantages of Lisp appear to be significant.


I think that goes for any advantages due to abstraction capabilities
of macros, OO or HOF. The small program in the study above seems to
capture the scripting languages' higher level compared to the
close-to-the-machine languages C & C++. (I have not read all of it
though.) To show advantages of the abstraction facilities we have been
discussing in this thread, I believe much larger programs are needed.


Yes, and perhaps to show advantages of one such abstraction facility
(say macros) wrt another (say HOFs) would require yet another jump up
in application size, if it could be done at all. Unless some great
benefactors with a few megabucks to wast^H^H^H^H invest for the general
benefit of humanity really feel like spending them in funding such
studies, I strongly suspect they're never really going to happen:-(.
Alex

Jul 18 '05 #416
j-*******@rcn.com (Jon S. Anthony) writes:
If your problems are trivial, I suppose the presumed lower startup
costs of Python may mark it as a good solution medium.


I find no significant difference in startup time between python and
mzscheme.
Jul 18 '05 #417
Pascal Costanza <co******@web.de> writes:
AFAIK, Lisp was very popular in the 70's and 80's, but not so in the
90's.


`Popular' in this case being somewhat relative, much like
the way pustular psoriasis is more popular than leprosy.
Jul 18 '05 #418

On Oct 10, 2003, at 5:12 AM, Alex Martelli wrote:
On Friday 10 October 2003 05:52 am, Daniel Berlin wrote:
On Oct 9, 2003, at 5:33 PM, Alex Martelli wrote:
Rainer Deyke wrote:
Pascal Costanza wrote:
> Pick the one Common Lisp implementation that provides the stuff you
> need. If no Common Lisp implementation provides all the stuff you
> need, write your own libraries or pick a different language. It's
> as
> simple as that.

Coming from a C/C++ background, I'm surprised by this attitude. Is
portability of code across different language implementations not a
priority for LISP programmers?

Libraries distributed as binaries are not portable across different C++
implementations on the same machine (as a rule).
This isn't true anymore (IE for newer compilers).


Wow, that IS great news! Does it apply to 32-bit Intel-oid machines (the
most widespread architecture)


Yes, but not windows.

and the newest releases of MS VC++ (7.1)
and gcc, the most widespread compilers for it?
GCC, yes.

MS is not participating in the ABI (take that to mean what you will),
AFAIK.

http://codesourcery.com/cxx-abi
(it's not really draft anymore since compilers are shipping using it,
but it is updated for bug fixes occasionally)

"This document was developed jointly by an informal industry coalition
consisting of (in alphabetical order) CodeSourcery, Compaq, EDG, HP,
Intel, Red Hat, IBM and SGI. Additional contributions were provided by
a variety of individuals."

I can't find any docs on what switches or whatever I should give the two
compilers to get seamless interop.

Specifically, the standard Python on Windows has long been built with MSVC++
and this has given problems to C-coded extension writers who don't own that
product -- it IS possible to use other compilers to build the extensions, but
only with much pain and some limitations (e.g. on FILE* arguments). If this
has now gone away there would be much rejoicing -- with proper docs on the
Python side of things and/or use of whatever switches are needed to enable
this, if any, when we do the standard Python build on Windows.

Mangling, exception handling, etc, is all covered by the ABI.

IBM's XLC 6.0 for OSX also follows this C++ ABI, and is thus compatible
with G++ 3.x on OSX.
I'm not very familiar with Python on the Mac but I think it uses another
commercial compiler (perhaps Metrowerks?), so I suspect the same question
may apply here.


It depends. I've built it with both.

It's not as crucial on other architectures where Python is more normally
built with free compilers, but it sure WOULD still be nice to think of
possible use of costly commercial compilers with hypothetically great
optimizations for the distribution of some "hotspot" object files, if
that sped the interpreter up without giving any interoperability problems.


At least on Mac, Apple's gcc -fast is better than any other compiler
around, according to recent benchmarks.

Unsurprising to me, but i'm a gcc hacker, so i might be biased a bit. :P

Most, if not all, optimizations that commercial compilers implement are
or are being implemented in gcc for 3.5/3.6.

--Dan
Jul 18 '05 #419


Björn Lindberg wrote:
"Andrew Dalke" <ad****@mindspring.com> writes:

Björn Lindberg:
Apart from the usual problems with micro benchmarks, there are a few
things to consider regarding the LOC counts on that site:


I wasn't the one who pointed out those micro benchmarks. Kenny
Tilton pushed the idea that more concise code is better and that Lisp
gives the most concise code, and that Perl is much more compact
than Python. He suggested I look at some comparisons, so I followed
his suggestion and found that 1) the Lisp code there was not more
succinct than Python and 2) neither was the Perl code.
You might be thinking of someone else. I remember a recent discussion
focused on this, but IIRC all other things were equal.

All else being equal, shorter is better. But then right away things can
get longer, since cryptic languages like APL, K, and (I gather) Perl are
not equal in value to nice long function and data names.

As for that ridiculous study, it includes VB. VB suffers from The 4GL
Problem. It reduces LOC by making decisions for you. But no general tool
can successfully get the decision right for all the people all the time.
And 4GL tools are not meant to be tailored to individual requirements.
Where hooks even exist, one ends up in the dread situation of Fighting
the Tool.

So leave me out of this. :)
Absolutely correct. Both Alex Martelli and I tried to dissuade
Kenny Tilton from the idea that LOC was the best measure of succinctness
and appropriateness, and he objected.


All things being equal.
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Jul 18 '05 #420
Paul Foley <se*@below.invalid> writes:
On Fri, 10 Oct 2003 00:34:10 -0400, Lulu of the Lotus-Eaters wrote:
Kenny Tilton <kt*****@nyc.rr.com> wrote previously:
|> It's only been out, what, twenty years? And another twenty before that
|> for other lisps... How much time do you think you need?

|Hey, we've been dead for thirty years, give us a break.
|The bad news for other languages is that the evolution of programming
|languages, like baseball, is a game played without a clock.

I would think Lisp is more like cricket: wickets bracket both ends, no
one can actually understand the rules, but at least the players wear
white.


Oh, come on! Anyone can understand cricket! There are two teams.
The team that's in sits out, except for two batsmen, and the other
team come out and try to get the men that are in out. When a man goes
out, he goes in and another man comes out. When the team that's in
are all out, except for the one who's not out, the other team goes in,
until they're all out, too; and then a second innings is played.
That's more or less all there is to it!


In other words, the man that's in may be out or in. If he's in, he
can go back out, but if he's out, then he can't go back in. Once
everyone is out, everyone goes out, then once the in team is out
again, the out team goes in again and everyone in can go out again.

Thanks for straightening that out!
Jul 18 '05 #421
Bruce Lewis <br*****@yahoo.com> writes:
j-*******@rcn.com (Jon S. Anthony) writes:
If your problems are trivial, I suppose the presumed lower startup
costs of Python may mark it as a good solution medium.


I find no significant difference in startup time between python and
mzscheme.


My preliminary results in this very important benchmark indicate that
Python performs about as well as the two benchmarked Common Lisps:

200 bjorn@nex:~> time for ((i=0; i<100; i++)); do lisp -noinit -eval '(quit)'; done

real 0m2,24s
user 0m1,36s
sys 0m0,83s
201 bjorn@nex:~> time for ((i=0; i<100; i++)); do lisp -noinit -eval '(quit)'; done

real 0m2,24s
user 0m1,39s
sys 0m0,82s
202 bjorn@nex:~> time for ((i=0; i<100; i++)); do clisp -q -x '(quit)'; done

real 0m2,83s
user 0m1,74s
sys 0m1,03s
203 bjorn@nex:~> time for ((i=0; i<100; i++)); do clisp -q -x '(quit)'; done

real 0m2,79s
user 0m1,67s
sys 0m1,09s
204 bjorn@nex:~> time for ((i=0; i<100; i++)); do python -c exit; done

real 0m2,41s
user 0m1,85s
sys 0m0,52s
205 bjorn@nex:~> time for ((i=0; i<100; i++)); do python -c exit; done

real 0m2,41s
user 0m1,89s
sys 0m0,52s

</sarcasm>
Björn
Jul 18 '05 #422


Rainer Deyke wrote:
Pascal Costanza wrote:
Pick the one Common Lisp implementation that provides the stuff you
need. If no Common Lisp implementation provides all the stuff you
need, write your own libraries or pick a different language. It's as
simple as that.

Coming from a C/C++ background, I'm surprised by this attitude. Is
portability of code across different language implementations not a priority
for LISP programmers?


It is. However, history has run against Lisp in this respect. First of
all, there are more than 1.84 implementations of Lisp (4 commercial
ones) and the vendors do not have much incentive in making something
completely portable. OTOH *there are* cross-platform compatibility
layers for many of the things you need. But the problem facing any
Common Lisp library writer is to decide how much to go in terms of cross
implementation and cross platform portability.

Having said that, let's note, however, that the actual footprint of CL is
large enough to allow you to write nice portable programs in a much
easier way than e.g. in Scheme or in pre- (and, to some extent post-)
STL C++.

Cheers
--
Marco

Jul 18 '05 #423


Andrew Dalke wrote:
Edi Weitz ....
So, here are your choices:

1. Buy a commercial Lisp. I've done that and I think it was a good
decision.

I'm already making my living from doing Python, so I've got an
incentive to stay with it. ;)

In the scientific conferences I attend, no one I've seen uses Lisp
for their work, excepting those old enough that they started before
there were other high-quality high-level languages.


Maybe those of us "old enough" know that some high-level high-quality
languages are better than others :)


No one has told me they would hire me for contract work "if only
you were a Lisp programmer."

If the barrier to entry to do what are common-place tasks requires
I buy a commercial Lisp then it's much less likely others will use
my code. I like having others use my code.

(Then why do I use Python? It's a tradeoff, since writing Java/C++
is just too tedious. And I like the people in Python (Hi Laura!).
And I'm picky about the domain -- I like doing computational life sciences.)
Well, I am doing that too. Do you know what the core of
Biocyc/Ecocyc/Metacyc is written in?
2. Try to improve the situation of the free CL implementations by
writing libraries or helping with the infrastructure. That's how
this "Open Source" thingy is supposed to work. I'm also doing this.

And I'm doing it for Python. For my domain, it seems like a much
better language choice, for reasons I've mentioned here several times.


Your reasons seem to boil down to the "I do not know Lisp enough" thingy
you hear over and over. I know I sound trite, but that is exactly the
point. Meanwhile, CL languishes because people don't understand
Greenspun's Tenth :)

I know I am whining :) I *am* an old geezer :)


3. Run around complaining that you can't use Lisp because a certain
combination of features is not available for free. We have far too
many of these guys on c.l.l.

Technically I'm cross-posting from c.l.py. And I actually complain
for other reasons. ;)

4. Just don't use it. That's fine with me.

So far I've made ... 4(?) half-hearted attempts at learning Lisp.
And 1 at learning Haskell. And 0.1 at learning OCaml.

It currently looks like the number of people choosing #2 is
increasing. Looks promising. You are invited to take part - it's a
great language and a nice little community... :)

"A rising tide lifts all boats". The same is true in Python, in
Java, in Ruby, in ...


With the main difference that Greenspun's Tenth Rule of Programming does
not apply in only one case :)

Cheers
--
marco

Jul 18 '05 #424


Andrew Dalke wrote:
Thomas F. Burdick:
With Lisp, you're not at the mercy of your vendor; if
you know damn well that some readable code A can be transformed into
equivalent, but efficient code B, you can cause it to happen!

Not at the mercy of your vendor unless you want to use something
which isn't in the standard, like unicode (esp "wide" unicode, >16bit),


Given that there are more than 1.84 implementations of Common Lisp, yes,
you are at the mercy of the implementor to have access to a good UNICODE
implementation. (Now, when is the last time I really really really
needed to write error messages in Tamil script? >:| )
regular expressions (esp. regexps of unicode),
There are several completely portable regexp libraries. For UNICODE
see above.
sockets,
There are at least two completely portable sockets libraries for CL.
or ffi?
Last I checked UFFI did pretty much the right thing.

But that's just flaming -- ignore me. ;)


I am a fireman :)

Cheers
--
Marco

Jul 18 '05 #425
Kenny Tilton wrote:
...
Doug Tolton <do**@nospam.com> wrote previously:
|Yes, this discussion is frustrating. It's deeply frustrating to hear
|someone without extensive experience with Macros arguing why they are
|so destructive.
... Hey! No pulling rank! :) Actually, I think we heard Mr. Martelli say
something along these lines at one point, tho I grok that he does know
his stuff. As for having "a better understanding", hmmm, check your
lotus, I think you'll find something in there about Beginner's Mind.
Doug was not talking about understanding, but experience. I've had
such experience -- perhaps not "extensive" enough for him? -- and I
have personally experienced and suffered the problems.

Alex reports his experience of The Divergence Problem and blames macros.
Hell, I worked in Tall Buildings for years. I saw Divergence,
Convergence, /and/ the dread Regurgitation Problems everywhere I went.
No macros, tho, just groups of programmers.

So I think the groups are the problem.


I never claimed that without macros there would be no possible problems
whatsoever. Human biology guarantees that such sociological problems
remain possible. However, technology aspects, as well as cultural ones,
can either ameliorate or exacerbate the issues. Python's overall culture
of consensus and simplicity matters: at a BOF at OSCON, somebody was
asking "...so I don't know if I should just do the simple thing here or
rather do something clever..." and the audience immediately gave their
feedback -- if you think of it as "something clever", _don't do it_.
(Not in production code, not in code that you AND more relevantly many
others will have to maintain, enhance, and change for years to come).

Are you familiar with the "JAPH" idea, those bazillion clever ways that
Perl programmers have dreamed up to emit the string "just another perl
hacker" in delightfully contorted ways? Well, somebody once asked how
a Pythonista would go about it -- and the answer was unanimous:
print "Just another Python hacker"
Sure, this will get you no kudos for cleverness, but the point is that
cleverness does NOT garner kudos among Pythonistas. Simplicity, clarity,
explicitness, directness -- these are the community's core values. Do
we all always live by them? No way -- we're not saints, just practical
guys trying to get our jobs done -- and perhaps make the world a little
better along the way.

Technological aspects interplay with the cultural ones. Again speaking
in terms of ideals and targets, I quote Antoine de Saint-Exupery:
"La perfection est atteinte non quand il ne reste rien ā ajouter, mais
quand il ne reste rien ā enlever." (perfection is achieved not when
nothing is left to add, but when nothing is left to take away). Now,
since "practicality beats purity", one doesn't (e.g.) remove 'if' just
because it can reasonably be substituted by 'while' -- we're not talking,
in fact, about truly minimalist practice. But the only apparently
irreducible use of macros would appear to be in (some form of) code
that "reasons about itself" (not just by simple reflection and
introspection, quite easy to achieve without macros, but in that
"with-condition-maintained" example which was apparently [or allegedly]
able to analyze and modify reactor-control code to affect the reactor's
temperature limits). Do I need or want such esoterica in the kind of
code that I am interested in writing, and helping others write? No way:
such "creative", deep merging of discourse and meta-discourse does not
appear to be at all necessary in these endeavours -- and if it is not
strictly necessary, I would MUCH rather use a simpler, lighter-weight
tool that does not support it. When I need to modify (e.g.) some
class's constructor at runtime, I can do it by such blazingly obvious
code as
someclass.__new__ = staticmethod(constructor_implementation)
though the Lisp'er originally proposing such needs, in an attempt to
show how complicated Python's approach was, used a hugely messy call to
type.__setattr__ with a weirdly and unjustifiably long lambda in it.
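For readers following along in Python, here is a minimal, runnable sketch of the kind of runtime constructor rebinding described above. The class and function names are invented purely for illustration; they are not from the thread:

```python
class Greeter:
    def __init__(self, name):
        self.name = name

calls = []

# A hypothetical replacement initializer, rebound at runtime.
def traced_init(self, name):
    calls.append(name)        # record that the new constructor ran
    self.name = name

# One obvious assignment -- no type.__setattr__, no long lambda needed.
Greeter.__init__ = traced_init

g = Greeter("world")
assert g.name == "world" and calls == ["world"]
```

The point is exactly the one made above: the "blazingly obvious" plain assignment suffices.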

Won't fly, friends. We're simple, down-to-earth folks, doing simple,
down-to-earth things. I suspect some kind of 10/90 rules apply: our
simple tools may be barely 10% of what CL has got, but they cover the
needs arising in 90% of applications. (Maybe it's 15/85 or whatever:
no, I don't have scientific studies in the matter to quote so you
can exert your ingenuity at shooting them down:-). When we need to
process some specialized language we may use roughly-traditional parser
technology, keeping language and meta-language separated, rather than
embedding and entwining them inside each other.

My perhaps-not-extensive-enough experience with macros showed them
being used to merge language and meta-language -- in widely different
ways in different labs or even within a given lab -- while at the
same time other firms were using languages without macros (APL and
variants thereof) and processing them with different and separate
metalanguages AND thereby managing to achieve better (intra-firm, at
least) cooperation. As "adventures in programming", those glorious
lisp-dialects-cum-hardware-description-languages-inside-them were,
no doubt, a hoot. For somebody looking for a more productive way to
design chips, and particularly to foster cooperation in this design
task, they looked (and in retrospect still look) sub-optimal to me.

The macros ended up being used to bend and stretch the general
purpose language to express specialized issues (about hardware design,
in that case) which it was not optimally suited to express -- and
since it WAS a case of bending and stretching, it was inevitable that
each different lab and faction would stretch and bend in divergent
directions. The computer-scientists in question were no doubt happy
as larks with their toys and in some cases their shiny new lisp
machines (I think TI ended up making a LM of their own a bit later,
but that was after my time); us engineers weren't _quite_ as happy,
though. And the chips didn't get designed as well, nor as fast...
Alex

Jul 18 '05 #426


Alex Martelli wrote:
Kenny Tilton wrote:
.....
The very 'feature' that was touted by Erann Gat as macros' killer advantage
in the WITH-CONDITION-MAINTAINED example he posted is the crucial
difference: functions (HO or not) and classes only group some existing code
and data; macros can generate new code based on examining, and presumably to
some level *understanding*, a LOT of very deep things about the code
arguments they're given. If all you do with your macros is what you could
do with HOF's, it's silly to have macros in addition to HOF's -- just
MTOWTDItis encouraging multiple different approaches to solve any given
problem -- this, of course, in turn breeds divergence when compared to a
situation in which just one approach is encouraged. If you do use the
potential implied in that example from Gat, to do things that functions and
classes just couldn't _begin_ to, it's worse -- then you're really
designing your own private divergent language (which most posters from
the Lisp camp appear to assert is an unalloyed good, although admittedly
far from all). This is far from the first time I'm explaining this, btw.
I am extremely careful to design new macros for my "extensions". And
when I do so I do it in my specialized packages. Moreover, I am
personally against blindly importing names when you do not actually need to.

This may or may not cause language divergence. It is a social issue
that is rather independent. For example, people forget Greenspun's
Tenth Rule of programming every other day and continue to diverge :)


Oh, and if you're one of those who disapprove of Gat's example feel free
to say so, but until I hear a substantial majority denouncing it as idiotic
(and I haven't seen anywhere near this intensity of disapproval for it from
your camp) I'm quite justified in taking it as THE canonical example of a
macro doing something that is clearly outside the purview of normal tools
such as functions and classes. As I recall there was a lot of that going
on in TI labs, too -- instead of writing and using compilers for hardware
description languages, circuit simulators, etc, based on appropriate and
specialized languages processed with the help of general-purpose ones,
the specialized languages (divergent and half-baked) were embedded in
programs coded in the general-purpose languages (Lisp variants, including
Scheme; that was in 1980) using macros that were supposed to do
everything but serve you coffee while you were waiting -- of course when
the snippets you passed (to represent hardware operation) were correct
from the GP language viewpoint but outside the limited parts thereof that
the macros could in fact process significantly down to circuit design &c,
the error messages you got (if you were lucky enough to get error
messages rather than just weird behavior) were QUITE interesting.
What people were doing not too long ago (1998) in a major electronic CAD
company was to develop special intermediate languages to represent some
design modules (we are talking about a not-so-cheap application here).
Guess what. They were using a tabbed format. Going from version 1.0 of
the product to version 2.0 involved writing a complex "migration" tool,
as the previous format would break (not to mention the commonplace "cut
and paste" errors).

How would you do that today? You would write an XML DTD (or Schema, if
you are so inclined) to achieve the same goal. Now, given that XML is
S-exprs in drag, Greenspun's Tenth applies again.

This has nothing to do with HOF vs Macros etc etc, but it shows that you
are always using some "language design" thingy while you program. After
all, Stroustrup correctly said that "library design" is "language
design". Jumping back to the topic, the bottom line is that you want
both macros and HOFs. If you do not want both you are just reconfirming
Greenspun's Tenth Rule of Programming :)

One popular macro is WITH-OUTPUT-TO-FILE. Vital to my budding RoboCup
starter kit was a WITH-STD-ATTEMPT macro. Oh god, no! I need to see the ANSI

Do they do things a HOF or class could do? If so why bother using such
an over-powered tool as macros instead of HOFs or classes? If not, how do
they specially process and understand code snippets they're passed?


Because you have them and because they are easier to use than a HOF. If
you have both you can make the best of both. If you miss either, you
have one less tool in your belt. As for the previous examples, you do
not necessarily need to understand the code snippets that are passed to
the macros. Most of the time macros are used as code transformations.
If you use them carefully, then your (Common Lisp) programs get more
succinct and more readable (and, incidentally more efficient, as Common
Lisp can use macros to shortcut the road to the *NATIVE CODE* compiler).
You cannot achieve this effect if you do not have both.
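As a point of comparison, the job a macro like WITH-OUTPUT-TO-FILE does can be sketched in Python as a plain higher-order function: the "body" becomes a callable argument instead of code wrapped by a macro. This is an illustrative sketch, not code from the thread:

```python
import os
import tempfile

def with_output_to_file(path, body):
    """Open path for writing, run body(stream), and always close the stream."""
    f = open(path, "w")
    try:
        body(f)
    finally:
        f.close()

# Usage: the "body" is an ordinary function object, here a lambda.
path = os.path.join(tempfile.mkdtemp(), "out.txt")
with_output_to_file(path, lambda f: f.write("hello"))
assert open(path).read() == "hello"
```

What the HOF version cannot do, of course, is transform the body's source code at compile time -- which is the distinction the posters above are arguing over.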

Cheers
--
marco


Jul 18 '05 #427
Bengt Richter wrote:
...
This way lambda would only be needed for backwards compatibility, and
since "def(" is a syntax error now, IWT it could be introduced cleanly.


In theory, yes, I think it could (and wrt my similar idea with 'do' has
the advantage of not requiring a new keyword). In practice, trying to
hack the syntax to allow it seems a little nightmare. Wanna try your
hand at it? I'm thinking of Grammar/Grammar and Modules/parsermodule.c ...
Alex

Jul 18 '05 #428

Ok. At this point I feel the need to apologize to everybody for my
rants and I promise I will do my best to end this thread.

I therefore utter the H-word and hopefully cause this thread to stop.

Cheers
--
marco

Jul 18 '05 #429
Lulu of the Lotus-Eaters wrote:
...
Python never had an "aspiration" of being "just a scripting language",
Hmmm, at the start it sure seemed like that. Check out

http://www.python.org/search/hyperma...1992/0001.html

actually from late '91. By 1991 Guido was already calling Python
"a prototyping language" and "a programming language" (so, the
"just a scripting language" was perhaps only accurate in 1990), but
in late '91 he still wrote:

"""
The one thing that Python definitely does not want to be is a GENERAL
purpose programming language. Its lack of declarations and general laziness
about compile-time checking is definitely aimed at small-to-medium-sized
programs.
"""

Apparently, it took us (collectively speaking) quite a while to realize
that the lack of declarations and compile-time checks aren't really a
handicap for writing larger programs (admittedly, Lispers already knew
it then -- so did I personally, thanks also to experiences with e.g.
Rexx -- but I didn't know of Python then). So, it _is_ historically
interesting to ascertain when the issue of large programs first arose.
nor WAS it ever such a thing. From its release, Python was obviously a
language very well suited to large scale application development (as


Well, clearly that was anything but obvious to Guido, from the above
quote. Or maybe you mean by "release" the 1.0.0 one, in 1994? At
that time, your contention becomes quite defensible (though I can't
find a Guido quote to support it, maybe I'm just not looking hard
enough), e.g. http://www.python.org/search/hyperma...94q1/0050.html
where Bennett Todd muses
"""
I think Python will outstrip every other language out there, and Python
(extended where necessary in C) will be the next revolutionary programming
tool ... Perl seems (in my experience) to be weak for implementing large
systems, and having them run efficiently and be clear and easy to maintain.
I hope Python will do better.
"""
So, here, the idea or hope that Python "will do better" (at least wrt
Perl) "for implementing large systems" seems already in evidence, though
far from a community consensus yet.
I do find it fascinating that such primary sources are freely available
on the net -- a ball for us amateur historians...!-)

Alex

Jul 18 '05 #430
In article <Wp**********************@news1.tin.it>, al***@aleax.it wrote:
Björn Lindberg wrote:
...
Agreed. I pointed out elsewhere that there has been no systematic
study to show that Lisp code is indeed "so much shorter than the
equivalent code in other languages" where "other languages" include
Python, Perl, or Ruby.


It would be interesting to see such studies made.


Absolutely!


Lutz Prechelt has done a number (at least two that I know of) of such
studies. I did one too: http://www.flownet.com/gat/lisp-java.pdf

E.
Jul 18 '05 #431
Pascal Costanza wrote:
Matthias wrote:
Why the smiley? Many hours of discussions could be spared if there
were real, scientific, solid studies on the benefit of certain
language features or languages in certain domains or for certain types
of programmers.


This presumes that language features can be judged in isolation. I think
it's rather more likely that good programming languages are holistic
systems, in the sense that the whole language is more than the sum of
its features.


...and/or less, if N features are just offering N different ways to
perform essentially the same tasks, of course. Still, be the whole
more or less than "the sum of the parts", one still can't rule out
(as no "hard-scientific studies" are ever likely to exist) such
non-linearities and complications. This, of course, points out that
programming languages are NOT "mathematics", as some claim -- they
are engineering designs, and interact with human minds, sociology
of groups, cultural and educational features, at least as much as
they interact with the architecture and capabilities of computers.
Alex

Jul 18 '05 #432
Edi Weitz wrote:
...
> > I think it's about a single namespace (Scheme, Python, Haskell,
> > ...) vs CLisp's dual namespaces. People get used pretty fast
> > to having every object (whether callable or not) "first-class"
> > -- e.g. sendable as an argument without any need for stropping
... He's talking about NAMESPACES. "namespace" occurs twice in his
paragraph, while "function" occurs only once; that should have given
you a hint.
Thanks, I think my reading comprehension is quite good. What you said
doesn't change the fact that Mr. Martelli's wording insinuates that in
Scheme and Python functions are first-class objects and in Common Lisp


I put "first class" in quotes and immediately explained what I meant.
they're not. For the sake of c.l.p readers who might not know CL I
think this should be corrected.

* (let ((fn (lambda (x) (* x x))))
(mapcar fn (list 1 2 3 4 5)))

(1 4 9 16 25)

There you have it. I can create a function, assign it to a variable
and pass it to another function like any other object, that's what I'd
call a "first-class object."


Yes, but:
Namely, he's saying that people used to write: (mapcar cadr '((a 1)
(b 2))) don't like having to write: (mapcar #'cadr '((a 1) (b 2)))
in Common-Lisp.


This old namespace debate only makes me yawn, sorry.


If so then why jump on assertions related exactly just to that --
namespaces? Re-read my quote above, o you of self-proclaimed "quite
good" reading comprehension: I was trying to explain to Doug Tolton
why many think that Haskell, Scheme or Python "do HOFs better",
while he was claiming that the use of #' is "far clearner" (sic)
because "in lisp with #' it's immediately obvious that you are
receiving or sending a HOF that will potentially alter how the
call operates". I.e., it IS strictly a namespace debate from the
word go. Whether it SHOULD be emphasized with horns and bells
that "warning, HOF coming!!!" -- as Doug claimed -- or not.

If you're bored by debating namespaces, don't jump into a debate
on namespaces -- seems simple common sense (as opposed to
common lisp?)...
Alex

Jul 18 '05 #433
Bruce Lewis <br*****@yahoo.com> writes:
j-*******@rcn.com (Jon S. Anthony) writes:
If your problems are trivial, I suppose the presumed lower startup
costs of Python may mark it as a good solution medium.


I find no significant difference in startup time between python and
mzscheme.


Category error. The context (I would have thought) clearly indicated
that "startup costs" concerned the effort needed to use the language!

/Jon
Jul 18 '05 #434
Alex Martelli wrote:
Pascal Costanza wrote:

Matthias wrote:

Why the smiley? Many hours of discussions could be spared if there
were real, scientific, solid studies on the benefit of certain
language features or languages in certain domains or for certain types
of programmers.


This presumes that language features can be judged in isolation. I think
it's rather more likely that good programming languages are holistic
systems, in the sense that the whole language is more than the sum of
its features.

...and/or less, if N features are just offering N different ways to
perform essentially the same tasks, of course. Still, be the whole
more or less than "the sum of the parts", one still can't rule out
(as no "hard-scientific studies" are ever likely to exist) such
non-linearities and complications. This, of course, points out that
programming languages are NOT "mathematics", as some claim -- they
are engineering designs, and interact with human minds, sociology
of groups, cultural and educational features, at least as much as
they interact with the architecture and capabilities of computers.


I definitely agree. Computer science is more a sociological science than
a natural science IMHO.

Pascal

Jul 18 '05 #435
Kenny Tilton wrote:
...
But methinks a number of folks using Emacs Elisp and Autocad's embedded
Lisp are non-professionals.


Methinks there are a great many more people using the VBA
interface to AutoCAD than its Lisp interface. In fact, my friends
(ex-Autodesk) told me that's the case.


Sheesh, who hasn't been exposed to Basic? From my generation, that is.
:) But no matter, the point is anyone can handle parens if they try for
more than an hour.


Yes, but will that make them most happy or productive? The Autocad
case appears to be relevant, though obviously only Autodesk knows
for sure. When I was working in the mechanical CAD field, I had
occasion to speak with many Autocad users -- typically mechanical
drafters, or mechanical or civil engineers, by training and main working
experience -- who HAD painfully (by their tales) learned to "handle
parens", because their work required them occasionally to write Autocad
macros and once upon a time Autolisp was the only practical way to do it --
BUT had jumped ship gleefully to the VBA interface, happily ditching
years of Autolisp experience, just as soon as they possibly could (or
earlier, i.e. when the VBA thingy was very new and still creaky in its
integration with the rest of Autocad -- they'd rather brave the bugs
of the creaky new VBA thingy than stay with the Autolisp devil they
knew). I don't know if syntax was the main determinant. I do know
that quite a few of those people had NOT had any previous exposure to
any kind of Basic -- we're talking about mechanics-junkies, more likely
to spend their spare time hot-rodding their cars at home (Bologna is,
after all, about 20 Km from Ferrari, 20 Km on the other side from
Minardi, while the Ducati motorcycle factory is right here in town,
etc -- *serious* mechanics-freaks country!), rather than playing with
the early home computers, or program for fun.

So, I think Autocad does prove that non-professional programmers
(mechanical designers needing to get their designs done faster) CAN
learn to handle lisp if no alternatives are available -- and also
that they'd rather not do so, if any alternatives are offered. (I
don't know how good a lisp Autolisp is, anyway -- so, as I mentioned,
there may well be NON-syntactical reasons for those guys' dislike
of it despite years of necessarily using it as the only tool with
which they could get their real job done -- but I have no data that
could lead me to rule out syntax as a factor, at least for users
who were OCCASIONAL users anyway, as programming never was their
REAL, MAIN job, just means to an end).

You (Alex?) also worry about groups of programmers and whether what is
good for the gurus will be good for the lesser lights.


If you ever hear me call anyone who is not an expert programmer
a "lesser light" then I give you -- or anyone else here -- permission
to smack me cross-side the head.


Boy, you sure can read a lot into a casually chosen cliche. But can we
clear up once and for all whether these genius scientists are or are not
as good a programmer as you? I thought I heard Python being recommended
as better for non-professional programmers.


Dunno 'bout Andrew, but -- if the scientists (or their employers) are
paying Andrew for programming consultancy, training, and advice, would
it not seem likely that they consider that he's better at those tasks
than they are...? Otherwise why would they bother? Most likely the
scientists are better than him at _other_ intellectual pursuits -- be
it for reasons of nature, nurture, or whatever, need not interest us
here, but it IS a fact that some people are better at some tasks.
There is too much programming to be done, to let ONLY professional
programmers do it -- just like there's too much driving to be done, to
let only professional drivers do it -- still, the professionals can be
expected to be better at their tasks of specialistic expertise.
Alex

Jul 18 '05 #436
rp**@rpw3.org (Rob Warnock) writes:
Andrew Dalke <ad****@mindspring.com> wrote:
+---------------
| (and yes, I know about the lawsuit against disk drive manufacturors
| and their strange definition of "gigabyte"... )
+---------------

Oh, you mean the fact that they use the *STANDARD* international
scientific/engineering notation for powers of 10 instead of the
broken, never-quite-right-except-in-a-few-cases pseudo-binary
powers of 10?!?!?

No, we mean the fact that they surreptitiously switched from the
industry standard of defining giga as 2^30 to the scientific standard
of defining giga as 10^9, which allowed them to display bigger size
while in fact they did not have bigger hard drives. That was a pure
marketing trick. Happily, after these lawsuits, they now write the
exact number of bytes storable on their devices. But be assured that
they would have never switched if 2^30 had been smaller than 10^9.

[Hmmm... Guess you can tell which side of *that*
debate I'm on, eh?] The "when I write powers of 10 which are 3*N
just *assume* that I meant powers of 2 which are 10*N" hack simply
fails to work correctly when *some* of the "powers of 10" are *really*
powers of 10. It also fails to work correctly with things that aren't
intrinsically quantized in powers of 2 at all.

Examples: I've had to grab people by the scruff of the neck and push
their faces into the applicable reference texts before they believe me
when I say that gigabit Ethernet really, really *is* 1000000000.0 bits
per second [peak payload, not encoded rate], not 1073741824, and that
64 kb/s DS0 telephone circuits really *are* 64,000.0 bits/sec, not 65536.
[And, yes, 56 kb/s circuits are 56000 bits/sec, not 57344.]
Yes, that's because telecoms are not computers. In particular,
telecoms were invented long before computers and binary base became
interesting.

On the other hand, hard drives are purely computer stuff...

Solution: *Always* use the internationally-recognized binary prefixes
<URL:http://physics.nist.gov/cuu/Units/binary.html> when that's really
what you mean, and leave the old scientific/engineering notation alone,
as pure powers of 10. [Note: The historical notes on that page are well
worth reading.]


Perhaps we should seriously start to use the kibi (Ki), mebi (Mi),
gibi (Gi), tebi (Ti), etc., that have been proposed.
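Just to make the gap between the two readings concrete, here is a small sketch (assuming the values from the NIST binary-prefix page cited above) showing how far apart the decimal and binary interpretations of each prefix really are:

```python
# Decimal (SI) prefixes vs. the NIST binary prefixes: how much larger
# the binary reading is at each step.  The divergence grows with size,
# which is exactly why "giga" on a hard drive box is worth a lawsuit.
pairs = [
    ("kilo", 10**3,  "kibi", 2**10),
    ("mega", 10**6,  "mebi", 2**20),
    ("giga", 10**9,  "gibi", 2**30),
    ("tera", 10**12, "tebi", 2**40),
]
for dec_name, dec, bin_name, binv in pairs:
    pct = 100.0 * (binv - dec) / dec
    print("%s=%d  %s=%d  (binary reading is %.1f%% larger)"
          % (dec_name, dec, bin_name, binv, pct))
```

Note that a "gibibyte" is over 7% larger than a gigabyte -- no wonder both sides cared which definition applied.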
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Jul 18 '05 #437
Andrew Dalke wrote:
...[quoting me indirectly]...
> If Python's syntax defined
> other forms of suites, e.g. hypothetically:
>
> with <object>:
>     <suite>
>
> meaning to call the object (or some given method[s] in it, whatever)
> with the suite as its argument, it would be just as explicit as, e.g.:
>
> for <name> in <object>:
>     <suite>


A reasonable point. However, inside the 'with' statement it's hard
to know if

    print x


Sorry, I was NOT using 'with' in a Pascal/Basic sense, but rather
to mean, and I quote: "meaning to call ... with the suite" (others
have proposed 'using' etc for this construct in python-dev). I
was using 'with' only because so many macros quoted on the xposted
thread appear to start with "WITH-..." ...!-)
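To make the intended semantics concrete -- the suite becomes a callable handed to the object -- here is a rough sketch of how that construct could be emulated today. The names run_with and Retrier are invented purely for illustration, not proposed APIs:

```python
# Emulating the hypothetical "with <object>: <suite>" by passing the
# suite as a function to the object.  run_with and Retrier are made-up
# illustrative names, not part of Python or of any actual proposal.
def run_with(obj, suite):
    # call the object with the suite as its argument, as the
    # hypothetical syntax would
    return obj(suite)

class Retrier:
    # a toy "object" that runs its suite up to `times` times,
    # re-raising only if every attempt fails
    def __init__(self, times):
        self.times = times
    def __call__(self, suite):
        for attempt in range(self.times):
            try:
                return suite()
            except Exception:
                if attempt == self.times - 1:
                    raise

print(run_with(Retrier(3), lambda: 6 * 7))  # → 42
```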
Alex

Jul 18 '05 #438
|Lulu of the Lotus-Eaters wrote:
|> I would think Lisp is more like cricket: wickets bracket both ends, no
|> one can actually understand the rules, but at least the players wear
|> white.

Paul Foley <se*@below.invalid> wrote previously:
|Oh, come on! Anyone can understand cricket! There are two teams.
|The team that's in sits out, except for two batsmen...

I apologize, I overstated it. I meant "No American can understand..."

FWIW. I very much enjoyed watching part of an amateur cricket match on
my vacation to Vancouver a few weeks back. But exactly what they were
doing was as perplexing as the Lisp code in this thread :-).

Yours, Lulu...

P.S. It's odd that I hadn't KNOWN about my Dutch ancestry... but Python
fits my brain, so there must be some.

--
mertz@ _/_/_/_/_/_/_/ THIS MESSAGE WAS BROUGHT TO YOU BY:_/_/_/_/ v i
gnosis _/_/ Postmodern Enterprises _/_/ s r
..cx _/_/ MAKERS OF CHAOS.... _/_/ i u
_/_/_/_/_/ LOOK FOR IT IN A NEIGHBORHOOD NEAR YOU_/_/_/_/_/ g s
Jul 18 '05 #439
<mi*****@ziplip.com> wrote in message
news:FT**************************************@ziplip.com...
I think everyone who used Python will agree that its syntax is
the best thing going for it. It is very readable and easy
for everyone to learn. But, Python does not a have very good
macro capabilities, unfortunately. I'd like to know if it may
be possible to add a powerful macro system to Python, while
keeping its amazing syntax, and if it could be possible to
add Pythonistic syntax to Lisp or Scheme, while keeping all
of the functionality and convenience. If the answer is yes,
would many Python programmers switch to Lisp or Scheme if
they were offered indentation-based syntax?


What about an editor that simply hides outer parenthesis and displays them
as
tabs, for Scheme for example. Then you could edit in any program, or use an
editor designed for it. Kind of like editing raw HTML or using an HTML
editor.

I might just adapt this idea for my pet language which uses indentation for
blocks. I like code to flow like an outline, with as few extraneous symbols
and junk as possible.

Mike
Jul 18 '05 #440
"Lulu of the Lotus-Eaters" <me***@gnosis.cx> wrote in message
news:ma**********************************@python.org...
"Vis Mike" <visionary25@_nospam_hotmail.com> wrote previously:
|Something like this seems more logical to me:
|for line in file('input.txt').lines:
|    do_something_with(line)
|for byte in file('input.txt').bytes:
|    do_something_with(byte)

Well, it's spelled slightly differently in Python:

for line in file('input.txt').readlines():
    do_something_with(line)

for byte in file('input.txt').read():
    do_something_with(byte)

Of course, both of those slurp in the whole thing at once. Lazy lines
are 'fp.xreadlines()', but there is no standard lazy bytes.
xreadlines()? What kind of naming convention is that? :)

what about 'eachline()'?
A method 'fp.xread()' might be useful, actually. And taking a good idea
from Dave Benjamin up-thread, so might 'fp.xreadwords()'. Of course, if
you were happy to write your own class 'File' that provided the extra
iterations, you'd only need to capitalize one letter to get these extra
options.
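The 'fp.xread()' idea above could look something like the following sketch -- xread is the poster's proposed name, not an existing file method, and StringIO stands in for a real file so the example is self-contained:

```python
# A lazy, byte-at-a-time reader in the spirit of the proposed
# fp.xread(): read in modest chunks, yield one character at a time.
# xread is a hypothetical name, not a real file-object method.
try:
    from io import StringIO            # Python 3
except ImportError:
    from StringIO import StringIO      # Python 2

def xread(fp, bufsize=8192):
    while 1:
        chunk = fp.read(bufsize)
        if not chunk:
            break
        for ch in chunk:
            yield ch

print(list(xread(StringIO(u"spam"))))  # → ['s', 'p', 'a', 'm']
```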


Mike
Jul 18 '05 #441
Kenny Tilton wrote:
...
As for non-professional programmers, the next question is whether a good
language for them will ever be anything more than a language for them.
Being a professional programmer, I find Python makes me very productive
at my job -- and yet I know from experience it's also an excellent language
for non-professional programmers. So, the experiential answer is clear to
me.
Perhaps Python should just stay with the subset of capabilities that
made it a huge success--it might not be able to scale to new
sophistication without destroying the base simplicity.
Interestingly enough, you and I probably agree on that. I don't _want_
Python to grow in any way that would "destroy the base simplicity"; indeed,
I'm looking forward to 3.0 (even though it's probably 3 years off or so)
exactly because it may get simplified again then, shedding some accumulated
legacy baggage (there is no reason, except backwards compatibility, to
have e.g "classic classes", range, xrange, etc etc). Any proposed new
feature, in my opinion, must be judged strictly on the criterion of: how
much better than today's set of features would it let us do our jobs? If
it provides another roughly equivalent way to perform some of the same
tasks, that's a substantial minus (as it encourages divergence of language
dialects) and must be compensated by really big advantages elsewhere.

Today's Python has the features we need to productively build ambitious
frameworks for asynchronous network clients and servers (Twisted), spam
filters that apparently work better than Graham's (spambayes), search
engines (Verity Ultraseek, nee Infoseek, as well as Google), ship-design
optimization apps (Tribon Vitesse), commercial games (Freedom Force, EVE
Online, Star Trek Bridge Commander...), collaborative enterprise app
frameworks (CAPS), scientific visualization tools such as MayaVi, business
logic for factory and tool control (IBM/Philips Fishkill plant)... oh,
you're perfectly capable of reading the various "Python success stories"
sites and booklets yourself, I don't want to bore you TOO much:-). The
point is, it MIGHT, as you point out, be unfeasible to "scale to new
sophistication" -- beyond the few 100,000s function points MAXIMUM of
any of these Python successes (roughly equivalent to, say, a few 10's
of millions of lines of C code), to e.g. many millions of function points.

I think the world needs LOTS AND LOTS of application programs in
the 1000-100,000 function points range -- and Python's current set of
features has proven amply sufficient to provide those without damage
to "the base simplicity" which you mention. If many applications of
many millions FP's (roughly equivalent to a billion lines of C, or so)
are needed, I don't know -- but, if so, I share your doubts about it making
any sense to destroy Python's simplicity in an attempt to tackle THOSE
monsters, "scaling to new sophistication".

You (Alex?) also worry about groups of programmers and whether what is
good for the gurus will be good for the lesser lights. What you are
saying is that the guru will dazzle the dorks with incomprehensible
gobbledygook. That does happen, but those kinds of gurus should be fired.
That's only part of the problem, of course -- programmers who are not
quite as good as they think they are and inflict their "enhancements" to
the language to everybody else are another issue. Maybe they should
be fired, but such turnover in the team would decrease productivity
anyway.
On a sufficiently large project (well, there's another point: with Lisp
one does not hire ten people (unless one is doing three projects)) the
Lisp isn't able to let 10 people work together productively on the
same project? Oh my -- now THAT would be a huge problem;-).
team should be divided into those who will be building the
domain-specific embedded language and those who will be using it.
Ideally the latter could be not just non-professional programmers, but
even non-programmers.


If they're programming (in whatever language) they can't be
non-programmers, by definition. But anyway, this misses the key
issue: who are the *real* experts of the application domain that
your "domain-specific" language is supposed to address so well?
E.g., the *real* experts on turbo-compressor design, on optimization
of ship designs, on the business logic of tool control, on logical
and physical design of integrated circuits, etc, etc? Answer: they
are likely to be non-professional programmers. Are THEY designing
the domain-specific language -- or is it going to be designed by
computer scientists who don't really grasp the intricacies of turbo
compressors, ships, etc, etc?

The Agile Programming (Extreme Programming, in particular) answer
to this enormously important issue is that the whole team, customer
included, *works together* and *collectively owns* the whole body of code.

The computer scientist learns enough about turbo compressors, and
the turbo compressor expert enough about programming, by the
incredibly productive social process of *pair-programming* -- sitting
side by side and working together at testing and building code (in
this order -- but I won't bore you with test-driven-design paeans...;-).

This holistic approach is incompatible with your favourite "the gurus
build the domain-specific language, the peons just use it" approach.
It works particularly well when the language is as simple, as good at
"getting out of your way", as Python (or, admittedly, Ruby -- I have
enormous respect for Ruby! -- though I have some issues on quite
another plane that make me keep preferring Python for this specific,
and very important to me, kind of tasks).

It surely can feel "cooler" for the Guru (uppercase G mandatory) not
to have to mingle with such low-lifes as the lusers (a Guru's favourite
spelling of "users", apparently) -- to just sit in their ivory tower
spitting out domain-specific embedded languages for domains they
_aren't_ as expert at as the peons. But I've seen that approach at
work (for hardware design with various lisp variants and dialects) and
don't like the results I have observed, particularly in application domains
where the intended users ARE quite expert in their field (which is the
case for many interesting apps -- not just those targeting the people
which our society acknowledges as "respected professionals", mind you:
a good secretary knows FAR more than I ever will on how to make an
office run, a shopkeeper on how things work in a shop, etc, etc).
Alex

Jul 18 '05 #442
>>>>> "james" == james anderson <ja************@setf.de> writes:

james> are there examples where these little beasties are used in production?

If you mean Oleg's little beasties, then the answer is yes, and in
mission-critical, million-dollar-stake applications at that.

--
Cheers =8-} Mike
Friede, Völkerverständigung und überhaupt blabla
Jul 18 '05 #443
Alex Martelli <al***@aleax.it> writes:
Pascal Costanza wrote:
Matthias wrote:
Why the smiley? Many hours of discussions could be spared if there
were real, scientific, solid studies on the benefit of certain
language features or languages in certain domains or for certain types
of programmers.


This presumes that language features can be judged in isolation. I think
it's rather more likely that good programming languages are holistic
systems, in the sense that the whole language is more than the sum of
its features.


...and/or less, if N features are just offering N different ways to
perform essentially the same tasks, of course. Still, be the whole
more or less than "the sum of the parts", one still can't rule out
(as no "hard-scientific studies" are ever likely to exist) such
non-linearities and complications. This, of course, points out that
programming languages are NOT "mathematics", as some claim -- they
are engineering designs, and interact with human minds, sociology
of groups, cultural and educational features, at least as much as
they interact with the architecture and capabilities of computers.


You are right, of course.

But that it is a complicated matter to study does not mean that it's
not worthwhile: An example where human minds, sociology, culture,
etc. intervene in a complicated way is education. In Germany we've
had /ages/ of hot debate on how to educate children (at elementary and
highschool, mainly). Then scientists came and did some tests. They
defined some educational goals ("children at a certain age should be
able to read this text and solve such kind of math problem") and
looked which factors influence how well students met the goals
previously defined. The results were quite surprising to our
education experts: Certain factors which were previously believed to
matter a great deal (like teacher/student ratio) were found to be of
almost no importance. Other factors (like parental income and ethnic
origin) had an alarmingly high influence. Now our education experts
have, for the first time, real data as input, and they can start to
work on the real problems.

In the context of programming languages I find studies from Lutz
Prechelt <http://www.ipd.uka.de/~prechelt/Biblio/> or Erann Gat's
Lisp/Java paper interesting. Doing such studies on a larger scale and
with non-self selected participants should be possible. In "Patterns
of Software" Peter Gabriel reports (p. 128) that a group of advanced
Lisp developers experienced a 30% drop in productivity one year after
switching to C++. This is merely an anecdote, but if you have a
reasonable measure of programmers' productivity (I know, that's hard)
and examine how language-switchers in industry perform after 1, 2, 3
years you might find other interesting results. One could also try to
compare small software companies which do well (e.g., financially)
with those that do not so well.

In all these cases defining acceptable performance measures and/or
getting enough data is hard, and no single study would reveal "the
truth". But scientifically examining the act of producing software
should be possible (within limits) if one tries and has enough
funding. ;-)
Jul 18 '05 #444


Michael Sperber wrote:
>> "james" == james anderson <ja************@setf.de> writes:

james> are there examples where these little beasties are used in production?

If you mean Oleg's little beasties, then the answer is yes, and in
mission-critical, million-dollar-stake applications at that.


is one at liberty to give anything more than a rhetorical answer?

i would be interested to observe how they express themselves in the large.

--
Cheers =8-} Mike
Friede, Völkerverständigung und überhaupt blabla

Jul 18 '05 #445
Matthias wrote:
...
In the context of programming languages I find studies from Lutz
Prechelt <http://www.ipd.uka.de/~prechelt/Biblio/> or Erann Gat's
Yes, Lutz has been quoted several times on this thread -- and, of course,
his studies have been impugned just as many times as they've been
quoted, anytime somebody did not like any of their implications.
truth". But scientifically examining the act of producing software
should be possible (within limits) if one tries and has enough
funding. ;-)


"shud" is a 4-letter word;-). As this huge thread makes abundantly
clear, social and political considerations, not technical ones, dominate
most discussions of this ilk. It's just like with, say, recreational drugs:
a study appears to show ecstasy can damage the brain, prohibitionists
jump on it with glee and proclaim it the most crucial scientific result of
all times; months later the authors shamefacedly retract the study,
after they and many others had uselessly tried to reproduce its findings, as
they discovered their drug samples had been mis-labeled so they had in
fact been studying a _different_ substance by mistake -- and the
prohibitionists poo-poo the study's retraction as changing nothing of any
importance whatsoever. Who'll try to reproduce the findings of such long
and expensive studies of "the act of producing software" -- and will they
make any real difference, or just be used as argument fodder for people who
already know what they _want_ to believe? Remember the famous
Microsoft-financed benchmarks of Linux vs NT, for example...?-)

I may feel a bit pessimistic at this point, but after the huge amount of
time devoted to this thread and the tiny ROI, I think that's justified!-)
Alex

Jul 18 '05 #446
Vis Mike wrote:
"Lulu of the Lotus-Eaters" <me***@gnosis.cx> wrote in message

...
for line in file('input.txt').readlines():
    do_something_with(line)

for byte in file('input.txt').read():
    do_something_with(byte)

Of course, both of those slurp in the whole thing at once. Lazy lines
are 'fp.xreadlines()', but there is no standard lazy bytes.


xreadlines()? What kind of naming convention is that? :)


An obsolete one (to go with 'xrange'). For about 3 years now, the
correct Python spelling has been just "for line in file("input.txt"):" .

A method 'fp.xread()' might be useful, actually. And taking a good idea
from Dave Benjamin up-thread, so might 'fp.xreadwords()'. Of course, if


I think that using methods for such things is not a particularly good idea.

A generator that takes a sequence (typically an iterator) of strings and
returns as the items the single bytes or words is more general:

def eachbyte(seq):
    for s in seq:
        for c in s:
            yield c

def eachword(seq):
    for s in seq:
        for w in s.split():
            yield w

and now you can loop "for b in eachbyte(file("input.txt")):" etc -- AND you
have also gained the ability to loop per-byte or per-word on any other
sequence of strings. Actually eachbyte is much more general than its
name suggests -- feed it e.g. a list of files, and it will return the lines
of each file -- one after the other -- as a single sequence.

OTOH, eachbyte is NOT particularly good for arbitrary binary files -- if
there happen to be no \n bytes at convenient point it may suck in much
more memory than needed. Besides, typical need on arbitrary binary
files is to loop on block of N bytes for some N -- N==1 is a rather special
case. So one might prefer:

def eachblock(afile, N):
    while 1:
        block = afile.read(N)
        if not block: break
        yield block

or variations thereon.
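For instance, a minimal self-contained check of the eachblock idea, substituting StringIO for a real file so the sketch runs as-is:

```python
# Standalone demo of eachblock: yield successive blocks of (at most)
# N bytes until EOF.  StringIO stands in for a real binary file here.
try:
    from io import StringIO            # Python 3
except ImportError:
    from StringIO import StringIO      # Python 2

def eachblock(afile, N):
    while 1:
        block = afile.read(N)
        if not block:
            break
        yield block

blocks = list(eachblock(StringIO(u"abcdefgh"), 3))
print(blocks)  # → three blocks: 'abc', 'def', 'gh'
```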
Alex

Jul 18 '05 #447
Pascal Costanza wrote:
...
programming languages are NOT "mathematics", as some claim -- they
are engineering designs, and interact with human minds, sociology
of groups, cultural and educational features, at least as much as
they interact with the architecture and capabilities of computers.


I definitely agree. Computer science is more a sociological science than
a natural science IMHO.


Amen, hallelujah. So, since I've decided to limit my participation in this
thread to c.l.python, would you kindly set right the guys (such as your
namesake) who (on c.l.lisp with copy to my mailbox but not to here) are
currently attacking me because, and I quote,
"""
Software is a department of mathematics.
"""
...?
Alex

Jul 18 '05 #448
Erann Gat wrote:
In article <Wp**********************@news1.tin.it>, al***@aleax.it wrote:
Björn Lindberg wrote:
...
>> Agreed. I pointed out elsewhere that there has been no systematic
>> study to show that Lisp code is indeed "so much shorter than the
>> equivalent code in other languages" where "other languages" include
>> Python, Perl, or Ruby.
>
> It would be interesting to see such studies made.


Absolutely!


Lutz Prechelt has done a number (at least two that I know of) of such
studies. I did one too: http://www.flownet.com/gat/lisp-java.pdf


Yes, Lutz's studies have been quoted repeatedly on this thread -- and
dissed (by Lispers) for not being good enough (task too simple thus
programs too short, etc etc).
Alex

Jul 18 '05 #449
In article <qC**********************@news2.tin.it>, Alex Martelli wrote:

I think that using methods for such things is not a particularly good idea.

A generator that takes a sequence (typically an iterator) of strings and
returns as the items the single bytes or words is more general:

def eachbyte(seq):
    for s in seq:
        for c in s:
            yield c

def eachword(seq):
    for s in seq:
        for w in s.split():
            yield w

and now you can loop "for b in eachbyte(file("input.txt")):" etc -- AND you
have also gained the ability to loop per-byte or per-word on any other
sequence of strings. Actually eachbyte is much more general than its
name suggests -- feed it e.g. a list of files, and it will return the lines
of each file -- one after the other -- as a single sequence.


eachbyte is in fact so general, I'd be tempted to give it the name
"iflatten", though I can never decide whether a shallow flatten or a
recursive flatten is worthy of the name "flatten". Here's another way to
loop through words lazily, this time using itertools:

import string
from itertools import imap

def iflatten(seq):
    for subseq in seq:
        for item in subseq:
            yield item

for word in iflatten(imap(string.split, file('input.txt'))):
    print word
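For a quick standalone check of iflatten itself -- substituting an in-memory list of lines for the file, and a generator expression for the imap call, so it runs as-is on any Python:

```python
# Shallow, lazy flatten: yield each item of each subsequence in turn.
def iflatten(seq):
    for subseq in seq:
        for item in subseq:
            yield item

# a stand-in for file('input.txt'): two "lines" of text
lines = ["two words", "three more words"]
words = list(iflatten(line.split() for line in lines))
print(words)  # → ['two', 'words', 'three', 'more', 'words']
```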

--
..:[ dave benjamin (ramenboy) -:- www.ramenfest.com -:- www.3dex.com ]:.
: d r i n k i n g l i f e o u t o f t h e c o n t a i n e r :
Jul 18 '05 #450