Python's "only one way to do it" philosophy isn't good?

I've just read the article "Building Robust Systems" by Gerald Jay
Sussman. The article is here:
http://swiss.csail.mit.edu/classes/s...st-systems.pdf

In it there is a footnote which says:
"Indeed, one often hears arguments against building flexibility into an
engineered system. For example, in the philosophy of the computer
language Python it is claimed: 'There should be one -- and preferably
only one -- obvious way to do it.'[25] Science does not usually proceed
this way: In classical mechanics, for example, one can construct
equations of motion using Newtonian vectorial mechanics, or using a
Lagrangian or Hamiltonian variational formulation.[30] In the cases
where all three approaches are applicable they are equivalent, but each
has its advantages in particular contexts."

I'm not sure how reasonable this statement is and personally I like
Python's simplicity, power and elegance. So I put it here and hope to
see some inspiring comments.

Jun 9 '07
On Jun 26, 10:03 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
Map doesn't work on generators or iterators because they're not part
of the common lisp spec, but if someone implemented them as a library,
said library could easily include a map that handled them as well.

Right, more scattered special purpose kludges instead of a powerful
uniform interface.
Huh? The interface could continue to be (map ...).

Python's for statement relies on the fact that python is mostly object
oriented and many of the predefined types have an iterator interface.
Lisp lists and vectors currently aren't objects and very few of the
predefined types have an iterator interface.
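
[For reference, the protocol that Python's for statement relies on is tiny; here is a minimal Python 2 sketch, with the Countdown class purely illustrative:]

class Countdown(object):
    """Anything with __iter__ and next works with the for statement."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self
    def next(self):
        # Return the current value, counting down; signal exhaustion
        # with StopIteration, as the protocol requires.
        if self.n == 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

for i in Countdown(3):
    print i    # prints 3, then 2, then 1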

It's easy enough to get around the lack of objectness and add the
equivalent of an iterator interface, in either language. The fact that
lisp folks haven't bothered suggests that this isn't a big enough
issue.

The difference is that lisp users can easily define python-like for
while python folks have to wait for the implementation.

Syntax matters.
Jun 27 '07 #151
On 6/27/07, Andy Freeman <an****@earthlink.net> wrote:
On Jun 26, 10:03 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
Map doesn't work on generators or iterators because they're not part
of the common lisp spec, but if someone implemented them as a library,
said library could easily include a map that handled them as well.
Right, more scattered special purpose kludges instead of a powerful
uniform interface.

Huh? The interface could continue to be (map ...).

Python's for statement relies on the fact that python is mostly object
oriented and many of the predefined types have an iterator interface.
Lisp lists and vectors currently aren't objects and very few of the
predefined types have an iterator interface.

It's easy enough to get around the lack of objectness and add the
equivalent of an iterator interface, in either language. The fact that
lisp folks haven't bothered suggests that this isn't a big enough
issue.
Is this where I get to call Lispers Blub programmers, because they
can't see the clear benefit to a generic iteration interface?
The difference is that lisp users can easily define python-like for
while python folks have to wait for the implementation.
Yes, but Python already has it (so the wait time is 0), and the Lisp
user doesn't.
Jun 27 '07 #152
On Jun 27, 10:51 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
I personally use Emacs Lisp every day and I think Hedgehog Lisp (a
tiny functional Lisp dialect intended for embedded platforms like cell
phones--the runtime is just 20 kbytes) is a very cool piece of code.
But using CL for new, large system development just seems crazy today.
It seems that many of the hardcore Lisp developers are busy developing
the core of new airline system software (pricing, reservation, ...)
in Common Lisp. It has already replaced some mainframes...
Kind of crazy. I guess that counts as very large systems development.

There is surely also lots of Python involved, IIRC.
Jun 27 '07 #153
"Chris Mellon" <ar*****@gmail.comwrites:
Is this where I get to call Lispers Blub programmers, because they
can't see the clear benefit to a generic iteration interface?
I think you overstate your case. Lispers understand iteration
interfaces perfectly well, but tend to prefer mapping functions to
iteration because mapping functions are both easier to code (they are
basically equivalent to coding generators) and efficient (like
non-generator-implemented iterators). The downside is that they are
not quite as flexible as iterators (which can be hard to code) and
generators, which are slow.
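
[To make the contrast concrete, here is a hedged Python sketch of the two styles; the tree-flattening functions are illustrative, not anything from the thread:]

# Internal iteration, Lisp mapping-function style: the function owns
# the loop and calls you back with each leaf.
def map_leaves(fn, tree):
    for item in tree:
        if isinstance(item, list):
            map_leaves(fn, item)
        else:
            fn(item)

# External iteration: a generator suspends at each leaf and hands
# control back to the caller.
def iter_leaves(tree):
    for item in tree:
        if isinstance(item, list):
            for leaf in iter_leaves(item):
                yield leaf
        else:
            yield item

acc = []
map_leaves(acc.append, [1, [2, [3]], 4])
assert acc == list(iter_leaves([1, [2, [3]], 4]))  # both give [1, 2, 3, 4]

[The mapping function is the easier of the two to write in a language without yield; the generator is the easier one to consume, since the caller keeps control -- which is what the next paragraph's converters are about.]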

Lispers have long since understood how to write mapping function to
iterator converters using stack groups or continuations, but Common
Lisp never mandated stack groups or continuations for conforming
implementations. Scheme, of course, has continuations, and there are
implementations of Common Lisp with stack groups.
>The difference is that lisp users can easily define python-like for
while python folks have to wait for the implementation.
Yes, but Python already has it (so the wait time is 0), and the Lisp
user doesn't.
So do Lispers, provided that they use an implementation of Lisp that
has the aforementioned extensions to the standard. If they don't,
they are the unfortunately prisoners of the standardizing committees.

And, I guarantee you, that if Python were specified by a standardizing
committee, it would suffer this very same fate.

Regarding there being way too many good but incompatible
implementations of Lisp -- I understand. The very same thing has
caused Ruby to incredibly rapidly close the lead that Python has
traditionally had over Ruby. The reason for this is that there are
too many good but incompatible Python web dev frameworks, and only one
good one for Ruby. So, we see that while Lisp suffers from too much
of a good thing, so does Python, and that may be the death of it if
Ruby on Rails keeps barreling down on Python like a runaway train.

|>oug
Jun 27 '07 #154
On Jun 27, 8:09 am, "Chris Mellon" <arka...@gmail.com> wrote:
On 6/27/07, Andy Freeman <ana...@earthlink.net> wrote:
It's easy enough to get around the lack of objectness and add the
equivalent of an iterator interface, in either language. The fact that
lisp folks haven't bothered suggests that this isn't a big enough
issue.

Is this where I get to call Lispers Blub programmers, because they
can't see the clear benefit to a generic iteration interface?
The "Blub" argument relies on inability to implement comparable
functionality in "blub". (For example, C programmers don't get to
call Pythonists Blub programmers because Python doesn't use {} and
Pythonistas don't get to say the same about C programmers because C
doesn't use whitespace.) Generic iterators can be implemented by lisp
programmers and some have. Others haven't had the need.
The difference is that lisp users can easily define python-like for
while python folks have to wait for the implementation.

Yes, but Python already has it (so the wait time is 0), and the Lisp
user doesn't.
"for" isn't the last useful bit of syntax. Python programmers got to
wait until 2.5 to get "with". Python 2.6 will probably have syntax
that wasn't in Python 2.5.

Lisp programmers with a syntax itch don't wait anywhere near that long.

Jun 27 '07 #155
Douglas Alan wrote:
>
Lispers have long since understood how to write mapping function to
iterator converters using stack groups or continuations, but Common
Lisp never mandated stack groups or continuations for conforming
implementations. Scheme, of course, has continuations, and there are
implementations of Common Lisp with stack groups.
Those stack groups

http://common-lisp.net/project/bknr/...mman/fd-sg.xml

remind me of Python greenlets

http://cheeseshop.python.org/pypi/greenlet .
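
[The correspondence is close enough that greenlets can play the stack-group role directly. A hedged sketch, assuming the third-party greenlet package linked above, and assuming the generator is consumed from a single greenlet:]

from greenlet import greenlet

def invert(mapping_fn, collection):
    """Turn an internal-iteration mapping function into a generator."""
    consumer = greenlet.getcurrent()

    def producer():
        # Each callback switches to the consumer, carrying one value.
        mapping_fn(consumer.switch, collection)
        consumer.switch(StopIteration)  # sentinel: no more values

    prod = greenlet(producer)
    while True:
        value = prod.switch()  # resume the producer until it emits
        if value is StopIteration:
            break
        yield value

# The builtin map works as a mapping function here:
# tuple(invert(map, [1, 2, 3])) == (1, 2, 3)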
---
Lenard Lindstrom
<le***@telus.net>
Jun 28 '07 #156
"Chris Mellon" <ar*****@gmail.comwrites:
On 6/27/07, Douglas Alan <do**@alum.mit.eduwrote:
>The C++ folks feel so strongly about this, that they refuse to provide
"finally", and insist instead that you use destructors and RAII to do
resource deallocation. Personally, I think that's taking things a bit
too far, but I'd rather it be that way than lose the usefulness of
destructors and have to use "when" or "finally" to explicitly
deallocate resources.
This totally misrepresents the case. The with statement and the
context manager is a superset of the RAII functionality.
No, it isn't. C++ allows you to define smart pointers (one of many
RAII techniques), which can use refcounting or other tracking
techniques. Refcounting smart pointers are part of Boost and have
made it into TR1, which means they're on track to be included in the
next standard library. One need not have waited for Boost, as they can
be implemented in about a page of code.

The standard library also has auto_ptr, which is a different sort of
smart pointer, which allows for somewhat fancier RAII than
scope-based.
It doesn't overload object lifetimes, rather it makes the intent
(code execution upon entrance and exit of a block) explicit.
But I don't typically wish for this sort of intent to be made
explicit. TMI! I used "with" for *many* years in Lisp, since this is
how non-memory resource deallocation has been dealt with in Lisp since
the dawn of time. I can tell you from many years of experience that
relying on Python's refcounter is superior.

Shouldn't you be happy that there's something I like more about Python
than Lisp?
Nobody in their right mind has ever tried to get rid of explicit
resource management - explicit resource management is exactly what you
do every time you create an object, or you use RAII, or you open a
file.
This just isn't true. For many years I have not had to explicitly
close files in Python. Nor have I had to do so in C++. They have
been closed for me implicitly. "With" is not implicit -- or at least
not nearly as implicit as was previous practice in Python, or as is
current practice in C++.
*Manual* memory management, where the tracking of references and
scopes is placed upon the programmer, is what people are trying to
get rid of and the with statement contributes to that goal, it
doesn't detract from it.
As far as I am concerned, memory is just one resource amongst many,
and the programmer's life should be made easier in dealing with all
such resources.
Before the with statement, you could do the same thing but you
needed nested try/finally blocks
No, you didn't -- you could just encapsulate the resource acquisition
into an object and allow the destructor to deallocate the resource.
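
[The pattern being described, as a hedged CPython-specific sketch -- it leans on refcounting, which is exactly the point in dispute, and the names are illustrative:]

class Resource(object):
    """Acquire in __init__, release in __del__."""
    def __init__(self, path):
        self.f = open(path)    # acquisition is initialization
    def __del__(self):
        self.f.close()         # runs when the last reference drops

def first_line(path):
    r = Resource(path)
    return r.f.readline()      # r's refcount hits zero on return,
                               # so in CPython the file closes here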
RAII is a good technique, but don't get caught up on the
implementation details.
I'm not -- I'm caught up in the loss of power and elegance that will
be caused by deprecating the use of destructors for resource
deallocation.
The with statement does exactly the same thing, but is actually
superior because

a) It doesn't tie the resource managment to object creation. This
means you can use, for example, with lock: instead of the C++ style
Locker(lock)
I know all about "with". As I mentioned above, Lisp has had it since
the dawn of time. And I have nothing against it, since it is at times
quite useful. I'm just dismayed at the idea of deprecating reliance
on destructors in favor of "with" for the majority of cases when the
destructor usage works well and is more elegant.
b) You can tell whether you exited with an exception, and what that
exception is, so you can take different actions based on error
conditions vs expected exit. This is a significant benefit, it
allows the application of context managers to cases where RAII is
weak. For example, controlling transactions.
Yes, for the case where you might want to do fancy handling of
exceptions raised during resource deallocation, then "when" is
superior, which is why it is good to have in addition to the
traditional Python mechanism, not as a replacement for it.
>Right, but that doesn't mean that 99.9% of the time, the programmer
can't immediately tell that cycles aren't going to be an issue.
They can occur in the most bizarre and unexpected places. To the point
where I suspect that the reality is simply that you never noticed your
cycles, not that they didn't exist.
Purify tells me that I know more about the behavior of my code than
you do: I've *never* had any memory leaks in large C++ programs that
used refcounted smart pointers that were caused by cycles in my data
structures that I didn't know about.
And if you think you won't need it because python will get "real" GC
you're very confused about what GC does and how.
Ummm, I know all about real GC, and I'm quite aware that Python has
had it for quite some time now. (Though the implementation is rather
different last I checked than it would be for a language that didn't
also have refcounted GC.)
A generic threadsafe smart pointer, in fact, is very nearly a GIL.
And how's that? I should think that modern architectures would have
an efficient way of adding and subtracting from an int atomically. If
they don't, I have a hard time seeing how *any* multi-threaded
applications are going to be able to make good use of multiple
processors.
Get cracking then. You're hardly the first person to say this.
However, of the people who say it, hardly anyone actually produces
any code and the only person I know of who did dropped it when
performance went through the floor. Maybe you can do better.
I really have no desire to code in C, thank you. I'd rather be coding
in Python. (Hence my [idle] desire for macros in Python, so that I
could do even more of my work in Python.)
There's no particular reason why Lisp is any better for AI research
than anything.
Yes, there is. It's a very flexible language that can adapt to the
needs of projects that need to push the boundaries of what computer
programmers typically do.
I'm not familiar with the TIOBE metric, but I can pretty much
guarantee that regardless of what it says there is far more COBOL
code in the wild, being actively maintained (or at least babysat)
than there is lisp code.
I agree that there is certainly much more Cobol code being
maintained than there is Lisp code, but that doesn't mean that there
are more Cobol programmers writing new code than there are Lisp
programmers writing new code. A project would have to be run by a
madman to begin a new project in Cobol.
>Re Lisp, though, there used to be a joke (which turned out to be
false), which went, "I don't know what the most popular programming
language will be in 20 years, but it will be called 'Fortran'". In
reality, I don't know what the most popular language will be called 20
years from now, but it will *be* Lisp.
And everyone who still uses the language actually called Lisp will
continue to explain how it isn't a "real" lisp for a laundry list of
reasons that nobody who gets work done actually cares about.
And where are you getting this from? I don't know anyone who claims
that any commonly used dialect of Lisp isn't *really* Lisp.

|>oug
Jun 28 '07 #157
Douglas Woodrow <ne********@nospam.demon.co.uk> writes:
On Wed, 27 Jun 2007 01:45:44, Douglas Alan <do**@alum.mit.edu> wrote
>>A chaque son gout
I apologise for this irrelevant interruption to the conversation, but
this isn't the first time you've written that.
The word "chaque" is not a pronoun.
A chacun son epellation.

|>oug
Jun 28 '07 #158
Dennis Lee Bieber wrote:
But if these "macros" are supposed to allow one to sort of extend
Python syntax, are you really going to code things like

macrolib1.keyword

everywhere?
I don't see why that *shouldn't* work. Or "from macrolib1 import
keyword as foo". And to be truly Pythonic the keywords would have to
be scoped like normal Python variables. One problem is that such a
system wouldn't be able to redefine existing keywords.

Let's wait for a concrete proposal before delving into this rats'
cauldron any further.
Graham

Jun 28 '07 #159
Dennis Lee Bieber <wl*****@ix.netcom.com> writes:
But if these "macros" are supposed to allow one to sort of extend
Python syntax, are you really going to code things like

macrolib1.keyword
everywhere?
No -- I would expect that macros (if done the way that I would like
them to be done) would work something like so:

from setMacro import macro set, macro let
let x = 1
set x += 1

The macros "let" and "set" (like all macro invocations) would have to
be the first tokens on a line. They would be passed either the
strings "x = 1" and "x += 1", or some tokenized version thereof.
There would be parsing libraries to help them from there.

For macros that need to continue over more than one line, e.g.,
perhaps something like

let x = 1
y = 2
z = 3
set x = y + z
y = x + z
z = x + y
print x, y, z

the macro would parse up to when the indentation returns to the previous
level.

For macros that need to return values, a new bracketing syntax would
be needed. Perhaps something like:

while $(let x = foo()):
print x

|>oug
Jun 28 '07 #160
Douglas Alan <do**@alum.mit.edu> writes:
Before the with statement, you could do the same thing but you
needed nested try/finally blocks

No, you didn't -- you could just encapsulate the resource acquisition
into an object and allow the destructor to deallocate the resource.
But without the try/finally blocks, if there is an unhandled
exception, it passes a traceback object to higher levels of the
program, and the traceback contains a pointer to the resource, so you
can't be sure the resource will ever be freed. That was part of the
motivation for the with statement.
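
[A small sketch of that failure mode; the file name is illustrative:]

import sys

def parse():
    f = open('data.txt')
    raise ValueError('bad data')   # the traceback now references
                                   # parse's frame, and hence f

try:
    parse()
except ValueError:
    tb = sys.exc_info()[2]         # while tb is referenced, so is f
del tb                             # only now can refcounting close f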
And how's that? I should think that modern architectures would have
an efficient way of adding and subtracting from an int atomically.
I'm not sure. In STM implementations it's usually done with a
compare-and-swap instruction (CMPXCHG on the x86) so you read the old
integer, increment a local copy, and CMPXCHG the copy into the object,
checking the swapped-out value to make sure that nobody else changed
the object between the copy and the swap (rollback and try again if
someone has). It might be interesting to wrap Python refcounts that
way, but really, Python should move to a compacting GC of some kind,
so the heap doesn't get all fragmented. Cache misses are a lot more
expensive now than they were in the era when CPython was first
written.
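
[For concreteness, the read/modify/compare-and-swap retry loop described above looks roughly like this; the lock-based compare_and_swap is only a stand-in for what would be a single CMPXCHG instruction:]

import threading

_cas_lock = threading.Lock()   # stand-in; real CAS is one instruction

def compare_and_swap(obj, attr, expected, new):
    _cas_lock.acquire()
    try:
        if getattr(obj, attr) == expected:
            setattr(obj, attr, new)
            return True
        return False
    finally:
        _cas_lock.release()

def atomic_incref(obj):
    while True:                # retry if another thread raced us
        old = obj.refcount
        if compare_and_swap(obj, 'refcount', old, old + 1):
            return

class Obj(object):
    refcount = 0

obj = Obj()
atomic_incref(obj)             # obj.refcount == 1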
If they don't, I have a hard time seeing how *any* multi-threaded
applications are going to be able to make good use of multiple processors.
They carefully manage the number of mutable objects shared between
threads is how. A concept that doesn't mix with CPython's use of
reference counts.
Yes, there is. [Lisp] it's a very flexible language that can adapt
to the needs of projects that need to push the boundaries of what
computer programmers typically do.
Really, if they used better languages they'd be able to operate within
boundaries instead of pushing them.
Jun 28 '07 #161
On 2007-06-23, Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> wrote:
On Fri, 22 Jun 2007 13:21:14 -0400, Douglas Alan wrote:
>I.e., I could write a new object system for Lisp faster than I could
even begin to fathom the internal of CPython. Not only that, I have
absolutely no desire to spend my valuable free time writing C code.
I'd much rather be hacking in Python, thank you very much.

Which is very valuable... IF you care about writing a new object system. I
don't, and I think most developers don't, which is why Lisp-like macros
haven't taken off.
I find this a rather sad kind of argument. It seems to imply that
python is only for problems that are rather common or similar to
those. If most people don't care about the kind of problem you
are working on, it seems from this kind of argument that python
is not the language you should be looking at.

--
Antoon Pardon
Jun 28 '07 #162
On 6/27/07, Douglas Alan <do**@alum.mit.edu> wrote:
"Chris Mellon" <ar*****@gmail.com> writes:
On 6/27/07, Douglas Alan <do**@alum.mit.edu> wrote:
The C++ folks feel so strongly about this, that they refuse to provide
"finally", and insist instead that you use destructors and RAII to do
resource deallocation. Personally, I think that's taking things a bit
too far, but I'd rather it be that way than lose the usefulness of
destructors and have to use "when" or "finally" to explicitly
deallocate resources.
This totally misrepresents the case. The with statement and the
context manager is a superset of the RAII functionality.

No, it isn't. C++ allows you to define smart pointers (one of many
RAII techniques), which can use refcounting or other tracking
techniques. Refcounting smart pointers are part of Boost and have
made it into TR1, which means they're on track to be included in the
next standard library. One need not have waited for Boost, as they can
be implemented in about a page of code.

The standard library also has auto_ptr, which is a different sort of
smart pointer, which allows for somewhat fancier RAII than
scope-based.
Obviously. But there's nothing about the with statement that's
different from using smart pointers in this regard. I take it back,
there's one case: when you need only one scope in a function, with
requires an extra block while C++-style RAII lets you use the
function's own scope.
It doesn't overload object lifetimes, rather it makes the intent
(code execution upon entrance and exit of a block) explicit.

But I don't typically wish for this sort of intent to be made
explicit. TMI! I used "with" for *many* years in Lisp, since this is
how non-memory resource deallocation has been dealt with in Lisp since
the dawn of time. I can tell you from many years of experience that
relying on Python's refcounter is superior.
I question the relevance of your experience, then. Refcounting is fine
for memory, but as you mention below, memory is only one kind of
resource and refcounting is not necessarily the best technique for all
resources. Java has the same problem, where you've got GC so you don't
have to worry about memory, but no tools for managing non-memory
resources.
Shouldn't you be happy that there's something I like more about Python
than Lisp?
I honestly don't care if anyone prefers Python over Lisp or vice
versa. If you like Lisp, you know where it is.
Nobody in their right mind has ever tried to get rid of explicit
resource management - explicit resource management is exactly what you
do every time you create an object, or you use RAII, or you open a
file.

This just isn't true. For many years I have not had to explicitly
close files in Python. Nor have I had to do so in C++. They have
been closed for me implicitly. "With" is not implicit -- or at least
not nearly as implicit as was previous practice in Python, or as is
current practice in C++.
You still don't have to manually close files. But you cannot, and
never could, rely on them being closed at a given time unless you did
so. If you need a file to be closed in a deterministic manner, then
you must close it explicitly. The with statement is not implicit and
never has been. Implicit resource management is *insufficient* for
the general resource management case. It works fine for memory, it's
okay for files (until it isn't), it's terrible for thread locks and
network connections and database transactions. Those things require
*explicit* resource management.
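
[For example -- Python 2.5, hence the __future__ import; update_shared_state is an illustrative stand-in:]

from __future__ import with_statement
import threading

lock = threading.Lock()
counter = [0]

def update_shared_state():
    counter[0] += 1

def work():
    with lock:                   # acquired here ...
        update_shared_state()    # ... released at block exit,
                                 # even if this raises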
*Manual* memory management, where the tracking of references and
scopes is placed upon the programmer, is what people are trying to
get rid of and the with statement contributes to that goal, it
doesn't detract from it.

As far as I am concerned, memory is just one resource amongst many,
and the programmer's life should be made easier in dealing with all
such resources.
Which is exactly what the with statement is for.
Before the with statement, you could do the same thing but you
needed nested try/finally blocks

No, you didn't -- you could just encapsulate the resource acquisition
into an object and allow the destructor to deallocate the resource.
If you did this in Python, your code was wrong. You were coding C++ in
Python. Don't do it.
RAII is a good technique, but don't get caught up on the
implementation details.

I'm not -- I'm caught up in the loss of power and elegance that will
be caused by deprecating the use of destructors for resource
deallocation.
Python has *never had this*. This never worked. It could seem to work
if you carefully, manually, inspected your code and managed your
object lifetimes. This is much more work than the with statement.

To the extent that your code ever worked when you relied on this
detail, it will continue to work. There are no plans to replace
pythons refcounting with fancier GC schemes that I am aware of.
The with statement does exactly the same thing, but is actually
superior because

a) It doesn't tie the resource managment to object creation. This
means you can use, for example, with lock: instead of the C++ style
Locker(lock)

I know all about "with". As I mentioned above, Lisp has had it since
the dawn of time. And I have nothing against it, since it is at times
quite useful. I'm just dismayed at the idea of deprecating reliance
on destructors in favor of "with" for the majority of cases when the
destructor usage works well and is more elegant.
Nothing about Pythons memory management has changed. I know I'm
repeating myself here, but you just don't seem to grasp this concept.
Python has *never* had deterministic destruction of objects. It was
never guaranteed, and code that seemed like it benefited from it was
fragile.
b) You can tell whether you exited with an exception, and what that
exception is, so you can take different actions based on error
conditions vs expected exit. This is a significant benefit, it
allows the application of context managers to cases where RAII is
weak. For example, controlling transactions.

Yes, for the case where you might want to do fancy handling of
exceptions raised during resource deallocation, then "when" is
superior, which is why it is good to have in addition to the
traditional Python mechanism, not as a replacement for it.
"with". And it's not replacing anything.
Right, but that doesn't mean that 99.9% of the time, the programmer
can't immediately tell that cycles aren't going to be an issue.
They can occur in the most bizarre and unexpected places. To the point
where I suspect that the reality is simply that you never noticed your
cycles, not that they didn't exist.

Purify tells me that I know more about the behavior of my code than
you do: I've *never* had any memory leaks in large C++ programs that
used refcounted smart pointers that were caused by cycles in my data
structures that I didn't know about.
I'm talking about Python refcounts. For example, a subtle resource
leak that has caught me before is that tracebacks hold references to
locals in the unwound stack. If you relied on refcounting to clean up
a resource, and you needed exception handling, the resource wasn't
released until *after* the exception unwound, which could be a
problem. Also holding onto tracebacks for later processing (not
uncommon in event based programs) would artificially extend the
lifetime of the resource. If the resource you were managing was a
thread lock this could be a real problem.
And if you think you won't need it because python will get "real" GC
you're very confused about what GC does and how.

Ummm, I know all about real GC, and I'm quite aware that Python has
had it for quite some time now. (Though the implementation is rather
different last I checked than it would be for a language that didn't
also have refcounted GC.)
A generic threadsafe smart pointer, in fact, is very nearly a GIL.

And how's that? I should think that modern architectures would have
an efficient way of adding and subtracting from an int atomically. If
they don't, I have a hard time seeing how *any* multi-threaded
applications are going to be able to make good use of multiple
processors.
Get cracking then. You're hardly the first person to say this.
However, of the people who say it, hardly anyone actually produces
any code and the only person I know of who did dropped it when
performance went through the floor. Maybe you can do better.

I really have no desire to code in C, thank you. I'd rather be coding
in Python. (Hence my [idle] desire for macros in Python, so that I
could do even more of my work in Python.)
In this particular conversation, I really don't think that theres much
to say beyond put up or shut up. The experts in the field have said
that it's not practical. If you think they're wrong, you're going to
need to prove it with code, not by waving your hand.
There's no particular reason why Lisp is any better for AI research
than anything.

Yes, there is. It's a very flexible language that can adapt to the
needs of projects that need to push the boundaries of what computer
programmers typically do.
That doesn't make Lisp any better at AI programming than it is for
writing databases or spreadsheets or anything else.
I'm not familiar with the TIOBE metric, but I can pretty much
guarantee that regardless of what it says there is far more COBOL
code in the wild, being actively maintained (or at least babysat)
than there is lisp code.

I agree that there is certainly much more Cobol code being
maintained than there is Lisp code, but that doesn't mean that there
are more Cobol programmers writing new code than there are Lisp
programmers writing new code. A project would have to be run by a
madman to begin a new project in Cobol.
More than you'd think, sadly. Although depending on your definition of
"new project", it may not count. There's a great deal of new code
being written in COBOL to run on top of old COBOL systems.
Re Lisp, though, there used to be a joke (which turned out to be
false), which went, "I don't know what the most popular programming
language will be in 20 years, but it will be called 'Fortran'". In
reality, I don't know what the most popular language will be called 20
years from now, but it will *be* Lisp.
And everyone who still uses the language actually called Lisp will
continue to explain how it isn't a "real" lisp for a laundry list of
reasons that nobody who gets work done actually cares about.

And where are you getting this from? I don't know anyone who claims
that any commonly used dialect of Lisp isn't *really* Lisp.
The language of the future will not be Common Lisp, and it won't be a
well known dialect of Lisp. It will have many Lisp like features, and
"true" Lispers will still claim it doesn't count, just as they do
about Ruby and Python today.
Jun 28 '07 #163
On Jun 27, 11:41 pm, John Nagle <n...@animats.com> wrote:
One right answer would be a pure reference counted system where
loops are outright errors, and you must use weak pointers for backpointers.
... The general
idea is that pointers toward the leaves of trees should be strong
pointers, and pointers toward the root should be weak pointers.
While I agree that weak pointers are good and can not be an
afterthought, I've written code where "back" changed dynamically, and
I'm pretty sure that Nagle has as well.

Many programs with circular lists have an outside pointer to the
current element, but the current element changes. All of the links
implementing the list have to be strong enough to keep all of the list
alive.

Yes, one can implement a circular list as a vector with a current
index, but that has space and/or time consequences. It's unclear that
that approach generalizes for more complicated structures. (You can't
just pull all of the links out into such lists.)

In short, while disallowing loops with strong pointers is "a" right
answer, it isn't always a right answer, so it can't be the only
answer.
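
[For the cases where it is a right answer, Nagle's rule is easy to follow with the standard weakref module; a sketch, with the Node class illustrative:]

import weakref

class Node(object):
    def __init__(self, value, parent=None):
        self.value = value
        self.children = []     # strong references: toward the leaves
        # weak reference: toward the root, so no cycle is created
        self.parent = weakref.ref(parent) if parent is not None else None

root = Node('root')
leaf = Node('leaf', root)
root.children.append(leaf)
# leaf.parent() recovers the root without keeping it alive; once the
# last strong reference to the root dies, leaf.parent() returns None.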

-andy

Jun 28 '07 #164
Does anyone have Python code for writing Targa (TGA) image files?
Ideally, with options for bit-depth/alpha channel, and RLE compression,
but I'm probably reaching there.

PIL is read-only for TGAs, unfortunately, and so far I'm striking out on
Google.
Thanks.

--
Adam Pletcher
Technical Art Director
www.volition-inc.com
Jun 28 '07 #165
Douglas Woodrow wrote:
On Wed, 27 Jun 2007 01:45:44, Douglas Alan <do**@alum.mit.edu> wrote
>A chaque son gout

I apologise for this irrelevant interruption to the conversation, but
this isn't the first time you've written that.

The word "chaque" is not a pronoun.

http://grammaire.reverso.net/index_a...s/Fiche220.htm
Right, he probably means "Chaqu'un son gout" (roughly, each to his own
taste).

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden

Jun 29 '07 #166
Steve Holden <st***@holdenweb.com> writes:
Douglas Woodrow wrote:
>On Wed, 27 Jun 2007 01:45:44, Douglas Alan <do**@alum.mit.edu> wrote
>>A chaque son gout
>I apologise for this irrelevant interruption to the conversation,
but this isn't the first time you've written that. The word
"chaque" is not a pronoun.
>http://grammaire.reverso.net/index_a...s/Fiche220.htm
Right, he probably means "Chaqu'un son gout" (roughly, each to his
own taste).
Actually, it's "chacun". And the "" may precede the "chacun".

|>oug
Jun 29 '07 #167
Steve Holden <st***@holdenweb.com> writes:
>Actually, it's "chacun". And the "à" may precede the "chacun".
>|>oug
"chacun" is an elision of the two words "Chaque" (each) and "un"
(one), and use of those two words is at least equally correct, though
where it stands in modern usage I must confess I have no idea.
Google can answer that: 158,000 hits for "chaqu'un", 57 million for
"chacun".

|>oug
Jun 29 '07 #168
On Jun 29, 6:44 am, Douglas Alan <d...@alum.mit.edu> wrote:
>
I've written plenty of Python code that relied on destructors to
deallocate resources, and the code always worked.
You have been lucky:

$ cat deallocating.py
import logging

class C(object):
    def __init__(self):
        logging.warn('Allocating resource ...')

    def __del__(self):
        logging.warn('De-allocating resource ...')
        print 'THIS IS NEVER REACHED!'

if __name__ == '__main__':
    c = C()

$ python deallocating.py
WARNING:root:Allocating resource ...
Exception exceptions.AttributeError: "'NoneType' object has no
attribute 'warn'" in <bound method C.__del__ of <__main__.C object at
0xb7b9436c>> ignored

Just because your experience has been positive, you should not
dismiss the opinion of people who clearly have more experience than
you on the subtleties of Python.

Michele

Jun 29 '07 #169
"Adam Pletcher" <ad**@volition-inc.comwrites:
Does anyone have Python code for writing Targa (TGA) image files?
Please post your question as a new message, instead of a reply to an
existing thread that has nothing to do with the question you're
asking. Otherwise your message will be obscured among the many other
messages in the same thread.

--
\ "I have never made but one prayer to God, a very short one: 'O |
`\ Lord, make my enemies ridiculous!' And God granted it." -- |
_o__) Voltaire |
Ben Finney
Jun 29 '07 #170
Ben Finney wrote:
"Adam Pletcher" <ad**@volition-inc.comwrites:
>Does anyone have Python code for writing Targa (TGA) image files?

Please post your question as a new message, instead of a reply to an
existing thread that has nothing to do with the question you're
asking. Otherwise your message will be obscured among the many other
messages in the same thread.
This Blender script writes a TGA using pure-Python functions.

http://projects.blender.org/plugins/...er&view=markup

--
Campbell J Barton (ideasman42)
Jun 29 '07 #171
Douglas Alan <do**@alum.mit.edu> writes:
I think you overstate your case. Lispers understand iteration
interfaces perfectly well, but tend to prefer mapping functions to
iteration because mapping functions are both easier to code (they
are basically equivalent to coding generators) and efficient (like
non-generator-implemented iterators). The downside is that they are
not quite as flexible as iterators (which can be hard to code) and
generators, which are slow.
Why do you think generators are any slower than hand-coded iterators?
Consider a trivial sequence iterator:

$ python -m timeit -s 'l=[1] * 100
class foo(object):
def __init__(self, l):
self.l = l
self.i = 0
def __iter__(self):
return self
def next(self):
self.i += 1
try:
return self.l[self.i - 1]
except IndexError:
raise StopIteration
' 'tuple(foo(l))'
10000 loops, best of 3: 173 usec per loop

The equivalent generator is not only easier to write, but also
considerably faster:

$ python -m timeit -s 'l=[1] * 100
def foo(l):
i = 0
while 1:
try:
yield l[i]
except IndexError:
break
i += 1
' 'tuple(foo(l))'
10000 loops, best of 3: 46 usec per loop
Jun 29 '07 #172
On 6/28/07, Douglas Alan <do**@alum.mit.edu> wrote:
"Chris Mellon" <ar*****@gmail.com> writes:
Obviously. But there's nothing about the with statement that's
different from using smart pointers in this regard.

Sure there is -- smart pointers handle many sorts of situations, while
"with" only handles the case where the lifetime of the object
corresponds to the scope.
The entire point of RAII is that you use objects who's lifetime
corresponds with a scope. Smart pointers are an RAII technique to
manage refcounts, not a resource management technique in and of
themselves.

To the extent that your code ever worked when you relied on this
detail, it will continue to work.

I've written plenty of Python code that relied on destructors to
deallocate resources, and the code always worked.
This is roughly equivalent to someone saying that they don't bother
initializing pointers to 0 in C, because it's always worked for them.
The fact that it works in certain cases (in the C case, when you're
working in the debug mode of certain compilers or standard libs) does
not mean that code that relies on it working is correct.
There are no plans to replace pythons refcounting with fancier GC
schemes that I am aware of.

This is counter to what other people have been saying. They have been
worrying me by saying that the refcounter may go away and so you may
not be able to rely on predictable object lifetimes in the future.
Well, the official language implementation explicitly warns against
relying on the behavior you've been relying on. And of course, for the
purposes you've been using it it'll continue to work even if python
did eliminate refcounting - "soon enough" deallocation of non-time
sensitive resources. So I don't know what you're hollering about.

You're arguing in 2 directions here. You don't want refcounting to go
away, because you rely on it to close things exactly when there are no
more references. On the other hand, you're claiming that implicit
management and its pitfalls are fine because most of the time you
don't need the resource to be closed in a deterministic manner.

If you're relying on refcounting for timely, guaranteed,
deterministic resource managment then your code is *wrong* already,
for the same reason that someone who assumes that uninitialized
pointers in C will be 0 is wrong.

If you're relying on refcounting for "soon enough" resource management
then it'll continue to work no matter what GC scheme python may or may
not move to.
Nothing about Pythons memory management has changed. I know I'm
repeating myself here, but you just don't seem to grasp this
concept. Python has *never* had deterministic destruction of
objects. It was never guaranteed, and code that seemed like it
benefited from it was fragile.

It was not fragile in my experience. If a resource *positively*,
*absolutely* needed to be deallocated at a certain point in the code
(and occasionally that was the case), then I would code that way. But
that has been far from the typical case for me.
Your experience was wrong, then. It's fragile because it's easy for
external callers to grab refcounts to your objects, and it's easy for
code modifications to cause resources to live longer. If you don't
*care* about that, then by all means, don't control the resource
explicitly. You can continue to do this no matter what - people work
with files like this in Java all the time, for the same reason they do
it in Python. Memory and files are not the end all of resources.

You're arguing against explicit resource management with the argument
that you don't need to manage resources. Can you not see how
ridiculously circular this is?
Jun 29 '07 #173
Hrvoje Niksic <hn*****@xemacs.org> writes:
Douglas Alan <do**@alum.mit.edu> writes:
>I think you overstate your case. Lispers understand iteration
interfaces perfectly well, but tend to prefer mapping functions to
iteration because mapping functions are both easier to code (they
are basically equivalent to coding generators) and efficient (like
non-generator-implemented iterators). The downside is that they are
not quite as flexible as iterators (which can be hard to code) and
generators, which are slow.
Why do you think generators are any slower than hand-coded iterators?
Generators aren't slower than hand-coded iterators in *Python*, but
that's because Python is a slow language. In a fast language, such as
a Lisp, generators are like 100 times slower than mapping functions.
(At least they were on Lisp Machines, where generators were
implemented using a more general coroutining mechanism [i.e., stack
groups]. *Perhaps* there would be some opportunities for more
optimization if they had used a less general mechanism.)

CLU, which I believe is the language that invented generators, limited
them to the power of mapping functions (i.e., you couldn't have
multiple generators instantiated in parallel), making them really
syntactic sugar for mapping functions. The reason for this limitation
was performance. CLU was a fast language.
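
[The difference is easy to see from Python, where nothing prevents two generator instances from being active at once -- something that CLU-style sugar for a mapping function could not express:]

def countdown(n):
    while n > 0:
        yield n
        n -= 1

# Two independent instantiations, advanced in lockstep:
print zip(countdown(3), countdown(2))   # [(3, 2), (2, 1)]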

|>oug
Jun 29 '07 #174
Michele Simionato <mi***************@gmail.com> writes:
>I've written plenty of Python code that relied on destructors to
deallocate resources, and the code always worked.
You have been lucky:
No I haven't been lucky -- I just know what I'm doing.
$ cat deallocating.py
import logging

class C(object):
def __init__(self):
logging.warn('Allocating resource ...')

def __del__(self):
logging.warn('De-allocating resource ...')
print 'THIS IS NEVER REACHED!'

if __name__ == '__main__':
c = C()

$ python deallocating.py
WARNING:root:Allocating resource ...
Exception exceptions.AttributeError: "'NoneType' object has no
attribute 'warn'" in <bound method C.__del__ of <__main__.C object at
0xb7b9436c>ignored
Right. So? I understand this issue completely and I code
accordingly.
Just because your experience has been positive, you should not
dismiss the opinion of people who clearly have more experience than
you on the subtleties of Python.
I don't dismiss their opinion at all. All I've stated is that for my
purposes I find that the refcounting semantics of Python to be useful,
expressive, and dependable, and that I wouldn't like it one bit if
they were removed from Python.

Those who claim that the refcounting semantics are not useful are the
ones who are dismissing my experience. (And the experience of
zillions of other Python programmers who have happily been relying on
them.)

|>oug
Jun 29 '07 #175
Dennis Lee Bieber <wl*****@ix.netcom.com> writes:
LISP and FORTH are cousins...
Not really. Their only real similarity (other than the similarities
shared by most programming languages) is that they both use a form of
Polish notation.

|>oug
Jun 29 '07 #176
"Chris Mellon" <ar*****@gmail.comwrites:
You're arguing against explicit resource management with the argument
that you don't need to manage resources. Can you not see how
ridiculously circular this is?
No. It is insane to leave files unclosed in Java (unless you know for
sure that your program is not going to be opening many files) because
you don't even know that the garbage collector will ever even run, and
you could easily run out of file descriptors, and hog system
resources.

On the other hand, in Python, you can be 100% sure that your files
will be closed in a timely manner without explicitly closing them, as
long as you are safe in making certain assumptions about how your code
will be used. Such assumptions are called "preconditions", which are
an understood notion in software engineering and by me when I write
software.

|>oug
Jun 29 '07 #177
On 6/29/07, Douglas Alan <do**@alum.mit.edu> wrote:
"Chris Mellon" <ar*****@gmail.com> writes:
You're arguing against explicit resource management with the argument
that you don't need to manage resources. Can you not see how
ridiculously circular this is?

No. It is insane to leave files unclosed in Java (unless you know for
sure that your program is not going to be opening many files) because
you don't even know that the garbage collector will ever even run, and
you could easily run out of file descriptors, and hog system
resources.

On the other hand, in Python, you can be 100% sure that your files
will be closed in a timely manner without explicitly closing them, as
long as you are safe in making certain assumptions about how your code
will be used. Such assumptions are called "preconditions", which are
an understood notion in software engineering and by me when I write
software.
Next time theres one of those "software development isn't really
engineering" debates going on I'm sure that we'll be able to settle
the argument by pointing out that relying on *explicitly* unreliable
implementation details is defined as "engineering" by some people.
Jun 29 '07 #178
"Chris Mellon" <ar*****@gmail.comwrites:
>On the other hand, in Python, you can be 100% sure that your files
will be closed in a timely manner without explicitly closing them, as
long as you are safe in making certain assumptions about how your code
will be used. Such assumptions are called "preconditions", which are
an understood notion in software engineering and by me when I write
software.
Next time theres one of those "software development isn't really
engineering" debates going on I'm sure that we'll be able to settle
the argument by pointing out that relying on *explicitly* unreliable
implementation details is defined as "engineering" by some people.
The proof of the pudding is in its eating. I've worked on very large
programs that exhibited very few bugs, and ran flawlessly for many
years. One managed the memory remotely of a space telescope, and the
code was pretty tricky. I was sure when writing the code that there
would be a number of obscure bugs that I would end up having to pull
my hair out debugging, but it's been running flawlessly for more than
a decade now, with hardly any debugging required at all.

Engineering to a large degree is knowing where to dedicate your
efforts. If you dedicate them to where they are not needed, then you
have less time to dedicate them to where they truly are.

|>oug
Jun 29 '07 #179
Douglas Alan <do**@alum.mit.edu> writes:
>> The downside is that they are not quite as flexible as iterators
(which can be hard to code) and generators, which are slow.
>Why do you think generators are any slower than hand-coded iterators?

Generators aren't slower than hand-coded iterators in *Python*, but
that's because Python is a slow language.
But then it should be slow for both generators and iterators.
*Perhaps* there would be some opportunities for more optimization if
they had used a less general mechanism.)
Or if the generators were built into the language and directly
supported by the compiler. In some cases implementing a feature is
*not* a simple case of writing a macro, even in Lisp. Generators may
well be one such case.
Jun 29 '07 #180
Douglas Alan wrote:
Michele Simionato <mi***************@gmail.com> writes:
>>I've written plenty of Python code that relied on destructors to
deallocate resources, and the code always worked.
>You have been lucky:

No I haven't been lucky -- I just know what I'm doing.
>$ cat deallocating.py
import logging

class C(object):
    def __init__(self):
        logging.warn('Allocating resource ...')

    def __del__(self):
        logging.warn('De-allocating resource ...')
        print 'THIS IS NEVER REACHED!'

if __name__ == '__main__':
    c = C()

$ python deallocating.py
WARNING:root:Allocating resource ...
Exception exceptions.AttributeError: "'NoneType' object has no
attribute 'warn'" in <bound method C.__del__ of <__main__.C object at
0xb7b9436c>> ignored

Right. So? I understand this issue completely and I code
accordingly.
>Just because your experience has been positive, you should not
dismiss the opinion of people who clearly have more experience than
you on the subtleties of Python.

I don't dismiss their opinion at all. All I've stated is that for my
purposes I find that the refcounting semantics of Python to be useful,
expressive, and dependable, and that I wouldn't like it one bit if
they were removed from Python.

Those who claim that the refcounting semantics are not useful are the
ones who are dismissing my experience. (And the experience of
zillions of other Python programmers who have happily been relying on
them.)

|>oug
"Python" doesn't *have* any refcounting semantics. If you rely on the
behavior of CPython's memory allocation and garbage collection you run
the risk of producing programs that won't port to Jython, or IronPython,
or PyPy, or ...

This is a trade-off that many users *are* willing to make.

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden

Jun 29 '07 #181
Hrvoje Niksic <hn*****@xemacs.org> writes:
>Generators aren't slower than hand-coded iterators in *Python*, but
that's because Python is a slow language.
But then it should be slow for both generators and iterators.
Python *is* slow for both generators and iterators. It's slow for
*everything*, except for cases when you can have most of the work done
within C-coded functions or operations that perform a lot of work
within a single call. (Or, of course, cases where you are i/o
limited, or whatever.)
>*Perhaps* there would be some opportunities for more optimization if
they had used a less general mechanism.)
Or if the generators were built into the language and directly
supported by the compiler. In some cases implementing a feature is
*not* a simple case of writing a macro, even in Lisp. Generators may
well be one such case.
You can't implement generators in Lisp (with or without macros)
without support for generators within the Lisp implementation. This
support was provided as "stack groups" on Lisp Machines and as
continuations in Scheme. Both stack groups and continuations are
slow. I strongly suspect that if they had provided direct support for
generators, rather than indirectly via stack groups and continuations,
that that support would have been slow as well.

|>oug
Jun 29 '07 #182
Douglas Alan <do**@alum.mit.edu> writes:
But that's a library issue, not a language issue. The technology
exists completely within Lisp to accomplish these things, and most
Lisp programmers even know how to do this, as application frameworks
in Lisp often do this kind of thing. The problem is getting anything
put into the standard. Standardizing committees just suck.
Lisp is just moribund, is all. Haskell has a standardizing committee
and yet there are lots of implementations taking the language in new
and interesting directions all the time. The most useful extensions
become de facto standards and then they make it into the real
standard.
Jun 30 '07 #183
On Jun 29, 3:42 pm, Douglas Alan <d...@alum.mit.edu> wrote:
Michele Simionato <michele.simion...@gmail.com> writes:
I've written plenty of Python code that relied on destructors to
deallocate resources, and the code always worked.
You have been lucky:

No I haven't been lucky -- I just know what I'm doing.
$ cat deallocating.py
import logging
class C(object):
    def __init__(self):
        logging.warn('Allocating resource ...')
    def __del__(self):
        logging.warn('De-allocating resource ...')
        print 'THIS IS NEVER REACHED!'
if __name__ == '__main__':
    c = C()
$ python deallocating.py
WARNING:root:Allocating resource ...
Exception exceptions.AttributeError: "'NoneType' object has no
attribute 'warn'" in <bound method C.__del__ of <__main__.C object at
0xb7b9436c>> ignored

Right. So? I understand this issue completely and I code
accordingly.
What does it mean, you 'code accordingly'? IMO the only clean way out
of this issue is to NOT rely on the garbage collector and to manage
resource deallocation explicitly, not implicitly. Actually I wrote a
recipe to help with this a couple of months ago, and this discussion
prompted me to publish it:
http://aspn.activestate.com/ASPN/Coo.../Recipe/523007
But how would you solve the issue using destructors only? I am just
curious; I would be happy if there was a simple and *reliable*
solution, but I sort of doubt it. Hoping to be proven wrong,
Michele Simionato

Jun 30 '07 #184
Paul Rubin <http://ph****@NOSPAM.invalid> writes:
Douglas Alan <do**@alum.mit.edu> writes:
>But that's a library issue, not a language issue. The technology
exists completely within Lisp to accomplish these things, and most
Lisp programmers even know how to do this, as application frameworks
in Lisp often do this kind of thing. The problem is getting anything
put into the standard. Standardizing committees just suck.
Lisp is just moribund, is all. Haskell has a standardizing committee
and yet there are lots of implementations taking the language in new
and interesting directions all the time. The most useful extensions
become de facto standards and then they make it into the real
standard.
You only say this because you are not aware of all the cool dialects
of Lisp that are invented. The problem is that they rarely leave the
tiny community that uses them, because each community comes up with
its own different cool dialect of Lisp. So, clearly the issue is not
one of any lack of motivation or people working on Lisp innovations --
it's getting them to sit down together and agree on a standard.

This, of course is a serious problem. One that is very similar to the
problem with Python vs. Ruby on Rails. It's not the problem that you are
ascribing to Lisp, however.

|>oug

P.S. Besides, Haskell is basically a refinement of ML, which is a
dialect of Lisp.

P.P.S. I doubt that any day soon any purely (or even mostly)
functional language is going to gain any sort of popularity outside of
academia. Maybe 20 years from now, they will, but I wouldn't bet on
it.
Jun 30 '07 #185
Douglas Alan <do**@alum.mit.edu> writes:
P.S. Besides, Haskell is basically a refinement of ML, which is a
dialect of Lisp.
I'd say Haskell and ML are descended from Lisp, just like mammals are
descended from fish.
Jun 30 '07 #186
Michele Simionato <mi***************@gmail.com> writes:
>Right. So? I understand this issue completely and I code
accordingly.
What does it mean you 'code accordingly'? IMO the only clean way out
of this issue is to NOT rely on the garbage collector and to manage
resource deallocation explicitely, not implicitely.
(1) I don't rely on the refcounter for resources that ABSOLUTELY,
POSITIVELY must be freed before the scope is left. In the code that
I've worked on, only a small fraction of resources would fall into
this category. Open files, for instance, rarely do. For open files,
in fact, I actually want access to them in the traceback for debugging
purposes, so closing them using "with" would be the opposite of what I
want.

(2) I don't squirrel away references to tracebacks.

(3) If a procedure catches an exception but isn't going to return
quickly, I clear the exception.
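
[Rule (3) in Python 2 terms, as a hedged sketch; risky and log_failure are illustrative stand-ins:]

import sys

def risky():
    open('/nonexistent')    # raises IOError

def log_failure():
    sys.stderr.write('operation failed\n')

def handler():
    try:
        risky()
    except IOError:
        log_failure()
        sys.exc_clear()     # drop the held traceback so its frames,
                            # and any resources they reference, can be
                            # reclaimed before this routine returns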

|>oug
Jun 30 '07 #187
Paul Rubin <http://ph****@NOSPAM.invalid> writes:
Douglas Alan <do**@alum.mit.edu> writes:
>P.S. Besides Haskell is basically a refinement of ML, which is a
dialect of Lisp.
I'd say Haskell and ML are descended from Lisp, just like mammals are
descended from fish.
Hardly -- they all want to share the elegance of lambda calculus,
n'est-ce pas? Also, ML was originally implemented in Lisp and, IIRC,
at least in early versions, shared much of Lisp's syntax.

Also, Scheme has a purely functional core (few people stick to it, of
course), and there are purely functional dialects of Lisp.

|>oug
Jun 30 '07 #188
Douglas Alan <do**@alum.mit.edu> writes:
I'd say Haskell and ML are descended from Lisp, just like mammals are
descended from fish.

Hardly -- they all want to share the elegance of lambda calculus,
n'est-ce pas? Also, ML was originally implemented in Lisp and, IIRC,
at least in early versions, shared much of Lisp's syntax.
Haskell and ML are both based on typed lambda calculus, unlike Lisp,
which is based on untyped lambda calculus. Certainly the most
familiar features of Lisp (dynamic typing, S-expression syntax,
programs as data (Lisp's macro system results from this)) are absent
from Haskell and ML. Haskell's type system lets it do stuff that
Lisp can't approach. I'm reserving judgement about whether Haskell is
really practical for application development, but it can do stuff that
no traditional Lisp can (e.g. its concurrency and parallelism stuff,
with correctness enforced by the type system). It makes it pretty
clear that Lisp has become Blub.

ML's original implementation language is completely irrelevant; after
all Python is still implemented in C.
Also, Scheme has a purely functional core (few people stick to it, of
course), and there are purely functional dialects of Lisp.
Scheme has never been purely functional. It has had mutation since
the beginning.

Hedgehog Lisp (purely functional, doesn't have setq etc.) is really
cute. I almost used it in an embedded project but that got cancelled
too early. It seems to me more like a poor man's Erlang, though, than
anything resembling ML.
Jun 30 '07 #189
Paul Rubin <http://ph****@NOSPAM.invalid> writes:
Haskell and ML are both based on typed lambda calculus, unlike Lisp,
which is based on untyped lambda calculus. Certainly the most
familiar features of Lisp (dynamic typing, S-expression syntax,
programs as data (Lisp's macro system results from this)) are absent
from Haskell and ML.
And that is supposed to make them better and more flexible??? The
ideal language of the future will have *optional* manifest typing
along with type-inference, and will have some sort of pragma to turn
on warnings when variables are forced to become dynamic due to there
not being enough type information to infer the type. But it will
still allow programming with dynamic typing when that is necessary.

The last time I looked at Haskell, it was still in the stage of being
a language that only an academic could love. Though, it was certainly
interesting.
Haskell's type system lets it do stuff that Lisp can't approach.
What kind of stuff? Compile-time polymorphism is cool for efficiency
and type safety, but doesn't actually provide you with any extra
functionality that I'm aware of.
I'm reserving judgement about whether Haskell is really practical
for application development, but it can do stuff that no traditional
Lisp can (e.g. its concurrency and parallelism stuff, with
correctness enforced by the type system). It makes it pretty clear
that Lisp has become Blub.
Where do you get this idea that the Lisp world does not get such
things as parallelism? StarLisp was designed for the Connection
Machine by Thinking Machines themselves. The Connection Machine was
one of the most parallel machines ever designed. Alas, it was ahead of
its time.

Also, I know a research scientist at CSAIL at MIT who has designed and
implemented a version of Lisp for doing audio and video art. It was
designed from the ground-up to deal with realtime audio and video
streams as first class objects. It's actually pretty incredible -- in
just a few lines of code, you can set up a program that displays the
same video multiplied and tiled into a large grid of little video
tiles, but where a different filter or delay is applied to each tile.
This allows for some stunningly strange and interesting video output.
Similar things can be done in the language with music (though if you
did that particular experiment it would probably just sound
cacophonous).

Does that sound like an understanding of concurrency to you? Yes, I
thought so.

Also, Dylan has optional manifest types and type inference, so the
Lisp community understands some of the benefits of static typing.
(Even MacLisp had optional manifest types, but they weren't there for
safety, but rather for performance. Using them, you could get
Fortran-level performance out of Lisp, which was quite a feat at the time.)
ML's original implementation language is completely irrelevant;
after all Python is still implemented in C.
Except that in the case of ML, it was mostly just a thin veneer on
Lisp that added a typing system and type inference.
>Also, Scheme has a purely functional core (few people stick to it, of
course), and there are purely functional dialects of Lisp.
Scheme has never been purely functional. It has had mutation since
the beginning.
I never said that it was purely functional -- I said that it has a purely
functional core. I.e., all the functions that have side effects have
an "!" on their ends (or at least they did when I learned the
language), and there are styles of programming in Scheme that
discourage using any of those functions.

|>oug

P.S. The last time I took a language class (about five or six years
ago), the most interesting languages, I thought, were descended from
Self, not any functional language. (And Self, of course is descended
from Smalltalk, which is descended from Lisp.)
Jun 30 '07 #190
Lenard Lindstrom <le***@telus.net> writes:
Explicitly clear the exception? With sys.exc_clear?
Yes. Is there a problem with that?

|>oug
Jun 30 '07 #191
I wrote:
P.S. The last time I took a language class (about five or six years
ago), the most interesting languages, I thought, were descended from
Self, not any functional language. (And Self, of course is descended
from Smalltalk, which is descended from Lisp.)
I think that Cecil is the particular language that I was most thinking
of:

http://en.wikipedia.org/wiki/Cecil_programming_language

|>oug
Jun 30 '07 #192
Douglas Alan <do**@alum.mit.edu> writes:
Haskell and ML are both based on typed lambda calculus, unlike Lisp,
which is based on untyped lambda calculus. Certainly the most
familiar features of Lisp (dynamic typing, S-expression syntax,
programs as data (Lisp's macro system results from this)) are absent
from Haskell and ML.

And that is supposed to make them better and more flexible???
Well no, by itself the absence of those Lisp characteristics mainly
means it's a pretty far stretch to say that Haskell and ML are Lisp
dialects.
The ideal language of the future will have *optional* manifest
typing along with type-inference, and will have some sort of pragma
to turn on warnings when variables are forced to become dynamic due
to there not being enough type information to infer the type. But
it will still allow programming with dynamic typing when that is
necessary.
If I understand correctly, in Haskell these are called existential types:

http://haskell.org/hawiki/ExistentialTypes
The last time I looked at Haskell, it was still in the stage of being
a language that only an academic could love.
I used to hear the same thing said about Lisp.
Haskell's type system lets it do stuff that Lisp can't approach.

What kind of stuff? Compile-time polymorphism is cool for efficiency
and type safety, but doesn't actually provide you with any extra
functionality that I'm aware of.
For example, it can guarantee referential transparency of functions
that don't live in certain monads. E.g. if a function takes an
integer arg and returns an integer (f :: Integer -> Integer), the type
system guarantees that computing f has no side effects (it doesn't
mutate arrays, doesn't open network sockets, doesn't print messages,
etc). That is very helpful for concurrency, see the paper "Composable
Memory Transactions" linked from here:

http://research.microsoft.com/Users/.../stm/index.htm

other stuff there is interesting too.
Where do you get this idea that the Lisp world does not get such
things as parallelism? StarLisp was designed for the Connection
Machine...
Many parallel programs have been written in Lisp and *Lisp, and
similarly in C, C++, Java, and Python, through careful use of manually
placed synchronization primitives, just as many programs using dynamic
memory allocation have been written in C with manual use of malloc and
free. This presentation shows some stuff happening in Haskell that
sounds almost as cool as bringing garbage collection to the
malloc/free era:

http://research.microsoft.com/~simon.../NdpSlides.pdf

As for where languages are going, I think I already mentioned Tim
Sweeney's presentation on "The Next Mainstream Programming Language":

http://www.st.cs.uni-sb.de/edu/semin...ocs/sweeny.pdf

It's not Haskell, but its type system is even more advanced than Haskell's.
Jul 1 '07 #193
Douglas Alan wrote:
Lenard Lindstrom <le***@telus.net> writes:
>Explicitly clear the exception? With sys.exc_clear?

Yes. Is there a problem with that?
As long as nothing tries to re-raise the exception I doubt it breaks
anything:
>>> import sys
>>> try:
...     raise StandardError("Hello")
... except StandardError:
...     sys.exc_clear()
...     raise
...
Traceback (most recent call last):
  File "<pyshell#6>", line 5, in <module>
    raise
TypeError: exceptions must be classes, instances, or strings
(deprecated), not NoneType
But it is like calling the garbage collector. You are tuning the program
to ensure some resource isn't exhausted. It relies on
implementation-specific behavior to be provably reliable*. If this is indeed the most
obvious way to do things in your particular use case then Python, and
many other languages, is missing something. If the particular problem is
isolated, formalized, and a general solution found, then a PEP can be
submitted. If accepted, this would ensure future and cross-platform
compatibility.
* reference counting is an integral part of the CPython C API so cannot
be changed without breaking a lot of extension modules. It will remain
as long as CPython is implemented in C.
---
Lenard Lindstrom
<le***@telus.net>
Jul 1 '07 #194
Lenard Lindstrom <le***@telus.net> writes:
>>Explicitly clear the exception? With sys.exc_clear?
>Yes. Is there a problem with that?
As long as nothing tries to re-raise the exception I doubt it breaks
anything:
>>> import sys
>>> try:
...     raise StandardError("Hello")
... except StandardError:
...     sys.exc_clear()
...     raise
...
Traceback (most recent call last):
  File "<pyshell#6>", line 5, in <module>
    raise
TypeError: exceptions must be classes, instances, or strings
(deprecated), not NoneType
I guess I don't really see that as a problem. Exceptions should
normally only be re-raised where they are caught. If a piece of code
has decided to handle an exception, and considers it dealt with, there
is no reason for it not to clear the exception, and good reason for it
to do so. Also, any caught exception is automatically cleared when
the catching procedure returns anyway, so it's not like Python has
ever considered a caught exception to be precious information that
ought to be preserved long past the point where it is handled.
But it is like calling the garbage collector. You are tuning the
program to ensure some resource isn't exhausted.
I'm not sure I see the analogy: Calling the GC can be expensive,
clearing an exception is not. The exception is going to be cleared
anyway when the procedure returns, the GC wouldn't likely be.

It's much more like explicitly assigning None to a variable that
contains a large data structure when you no longer need the contents
of the variable. Doing this sort of thing can be a wise thing to do
in certain situations.
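E.g., something like this (a minimal sketch; load_rows, summarize, and
do_other_lengthy_work are hypothetical names):

def report(path):
    rows = load_rows(path)       # hypothetical: builds a big list
    total = summarize(rows)      # hypothetical: reduces it to a number
    rows = None                  # drop the only reference; CPython's
                                 # refcounting frees the list right here
    do_other_lengthy_work()      # hypothetical long-running phase
    return total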
>It relies on implementation-specific behavior to be provably
reliable*.
As Python is not a formally standardized language, and one typically
relies on the fact that CPython itself is ported to just about every
platform known to Man, I don't find this to be a particular worry.
If this is indeed the most obvious way to do things in your
particular use case then Python, and many other languages, is
missing something. If the particular problem is isolated,
formalized, and a general solution found, then a PEP can be
submitted. If accepted, this would ensure future and cross-platform
compatibility.
Well, I think that the refcounting semantics of CPython are useful,
and allow one to often write simpler, easier-to-read and maintain
code. I think that Jython and IronPython, etc., should adopt these
semantics, but I imagine they might not for performance reasons. I
don't generally use Python for it's speediness, however, but rather
for it's pleasant syntax and semantics and large, effective library.

|>oug
Jul 2 '07 #195
Lenard Lindstrom <le***@telus.net> writes:
>You don't necessarily want a function that raises an exception to
deallocate all of its resources before raising the exception, since
you may want access to these resources for debugging, or what have
you.
No problem:

[...]
>>> class MyFile(file):
...     def __exit__(self, exc_type, exc_val, exc_tb):
...         if exc_type is not None:
...             self.my_last_posn = self.tell()
...         return file.__exit__(self, exc_type, exc_val, exc_tb)
I'm not sure I understand you here. You're saying that I should have
the foresight to wrap all my file opens in a special class to
facilitate debugging?

If so, (1) I don't have that much foresight and don't want to have
to. (2) I debug code that other people have written, and they often
have less foresight than me. (3) It would make my code less clear to
have every file open wrapped in some special class.

Or are you suggesting that early in __main__.main(), when I wish to
debug something, I do something like:

__builtins__.open = __builtins__.file = MyFile

?
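
Fleshed out, that would look something like this (a sketch, assuming
Python 2, with MyFile as defined above; DEBUG and run_application are
hypothetical names):

import __builtin__

def main():
    if DEBUG:   # hypothetical flag set while hunting a bug
        # Make every subsequent open() hand back the debugging wrapper.
        __builtin__.open = __builtin__.file = MyFile
    run_application()

(Assigning through the __builtin__ module is the reliable spelling in
CPython 2; __builtins__ happens to be the module itself in __main__,
but is only a dict inside other modules.)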

I suppose that would work. I'd still prefer to clear exceptions,
though, in those few cases in which a function has caught an exception
and isn't going to be returning soon and have the resources generally
kept alive in the traceback. To me, that's the more elegant and
general solution.

|>oug
Jul 2 '07 #196
Douglas Alan wrote:
Lenard Lindstrom <le***@telus.net> writes:
>>>Explicitly clear the exception? With sys.exc_clear?
>>Yes. Is there a problem with that?
>As long as nothing tries to re-raise the exception I doubt it breaks
anything:
>>> import sys
>>> try:
...     raise StandardError("Hello")
... except StandardError:
...     sys.exc_clear()
...     raise
...
Traceback (most recent call last):
  File "<pyshell#6>", line 5, in <module>
    raise
TypeError: exceptions must be classes, instances, or strings
(deprecated), not NoneType

I guess I don't really see that as a problem. Exceptions should
normally only be re-raised where they are caught. If a piece of code
has decided to handle an exception, and considers it dealt with, there
is no reason for it not to clear the exception, and good reason for it
to do so.
It is only a problem if refactoring the code could mean the exception is
re-raised instead of handled at that point. Should the call to exc_clear
be overlooked, the newly added raise will not work.
Also, any caught exception is automatically cleared when
the catching procedure returns anyway, so it's not like Python has
ever considered a caught exception to be precious information that
ought to be preserved long past the point where it is handled.
That's the point. Python takes care of clearing the traceback. Calls to
exc_clear are rarely seen. If they are simply a performance tweak then
it's not an issue *. I was just concerned that the calls were necessary
to keep resources from being exhausted.
>But it is like calling the garbage collector. You are tuning the
program to ensure some resource isn't exhausted.

I'm not sure I see the analogy: Calling the GC can be expensive,
clearing an exception is not. The exception is going to be cleared
anyway when the procedure returns, the GC wouldn't likely be.
The intent of a high level language is to free the programmer from such
concerns as memory management. So a call to the GC is out-of-place in a
production program. Anyone encountering such a call would wonder what is
so critical about that particular point in the execution. So
encountering an exc_clear would make me wonder why it is so important to
free that traceback. I would hope the comments would explain it.
It's much more like explicitly assigning None to a variable that
contains a large data structure when you no longer need the contents
of the variable. Doing this sort of thing can be a wise thing to do
in certain situations.
I just delete the name myself. But this is different. Removing a name
from the namespace, or setting it to None, prevents an accidental access
later. A caught traceback is invisible.
>It relies on implementation-specific behavior to be provably
reliable*.

As Python is not a formally standardized language, and one typically
relies on the fact that CPython itself is ported to just about every
platform known to Man, I don't find this to be a particular worry.
But some things will make it into ISO Python. Registered exit handlers
will be called at program termination. A context manager's __exit__
method will be called when leaving a with statement. But garbage
collection will be "implementation-defined" **.
>If this is indeed the most obvious way to do things in your
particular use case then Python, and many other languages, is
missing something. If the particular problem is isolated,
formalized, and a general solution found, then a PEP can be
submitted. If accepted, this would ensure future and cross-platform
compatibility.

Well, I think that the refcounting semantics of CPython are useful,
and allow one to often write simpler, easier-to-read and maintain
code.
Just as long as you have weighed the benefits against a future move to a
JIT-accelerated, continuation-supporting PyPy interpreter that might not
use reference counting.
I think that Jython and IronPython, etc., should adopt these
semantics, but I imagine they might not for performance reasons. I
don't generally use Python for its speediness, however, but rather
for its pleasant syntax and semantics and large, effective library.
Yet improved performance appeared to be a priority in Python 2.4
development, and Python's speed continues to be a concern.
* I see in section 26.1 of the Python 2.5 /Python Library Reference/ as
regards exc_clear: "This function can also be used to try to free
resources and trigger object finalization, though no guarantee is made
as to what objects will be freed, if any." So using exc_clear is not so
much frowned upon as questioned.

** A term that crops up a lot in the C standard /ISO/IEC 9899:1999 (E)/. :-)

--
Lenard Lindstrom
<le***@telus.net>

Jul 3 '07 #197
Lenard Lindstrom <le***@telus.net> writes:
>Also, any caught exception is automatically cleared when
the catching procedure returns anyway, so it's not like Python has
ever considered a caught exception to be precious information that
ought to be preserved long past the point where it is handled.
That's the point. Python takes care of clearing the traceback. Calls
to exc_clear are rarely seen.
But that's probably because it's very rare to catch an exception and
then not return quickly. Typically, the only place this would happen
is in main(), or one of its helpers.
If they are simply a performance tweak then it's not an issue *. I
was just concerned that the calls were necessary to keep resources
from being exhausted.
Well, if you catch an exception and don't return quickly, you have to
consider not only the possibility that there could be some open files
left in the traceback, but also that there could be large and
now-useless data structures stored in the traceback.

Some people here have been arguing that all code should use "with" to
ensure that the files are closed. But this still wouldn't solve the
problem of the large data structures being left around for an
arbitrary amount of time.
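
To make that concrete (a sketch, again assuming Python 2, where
sys.exc_clear() is available; long_running_recovery is a hypothetical
name):

import sys

def crunch():
    big = [0] * (10 ** 7)      # a large temporary list
    raise ValueError("boom")   # 'big' is still a local of this frame

try:
    crunch()
except ValueError:
    # The traceback for the caught exception references crunch()'s
    # frame, which references 'big', so tens of megabytes stay pinned
    # for as long as the exception state is held onto.
    sys.exc_clear()            # release the traceback, and 'big' with it
    long_running_recovery()    # hypothetical; runs with that memory freed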
But some things will make it into ISO Python.
Is there a movement afoot of which I'm unaware to make an ISO standard
for Python?
Just as long as you have weighed the benefits against a future move
to a JIT-accelerated, continuation-supporting PyPy interpreter that
might not use reference counting.
I'll worry about that day when it happens, since many of my calls to
the standard library will probably break anyway at that point. Not to
mention that I don't stay within the confines of Python 2.2, which is
where Jython currently is. (E.g., Jython does not have generators.)
Etc.
>I think that Jython and IronPython, etc., should adopt these
semantics, but I imagine they might not for performance reasons. I
don't generally use Python for its speediness, however, but rather
for its pleasant syntax and semantics and large, effective
library.
Yet improved performance appeared to be a priority in Python 2.4
development, and Python's speed continues to be a concern.
I don't think the refcounting semantics should slow Python down much
considering that it never has aimed for C-level performance anyway.
(Some people claim it's a drag on supporting threads. I'm skeptical,
though.) I can see it being a drag on something like Jython, though,
were you are going through a number of different layers to get from
Jython code to the hardware.

Also, I imagine that no one wants to put in the work in Jython to have
a refcounter when the garbage collector comes with the JVM for free.

|>oug
Jul 3 '07 #198
Douglas Alan wrote:
Lenard Lindstrom <le***@telus.net> writes:
>>Or are you suggesting that early in __main__.main(), when I wish to
debug something, I do something like:
__builtins__.open = __builtins__.file = MyFile
?
I suppose that would work.
>No, I would never suggest replacing a builtin like that. Even
replacing a definite hook like __import__ is risky, should more than
one package try and do it in a program.

That misinterpretation of your idea would only be reasonable while
actually debugging, not for standard execution. Standard rules of
coding elegance don't apply while debugging, so I think the
misinterpretation might be a reasonable alternative. Still I think
I'd just prefer to stick to the status quo in this regard.
I totally missed the "when I wish to debug something". Skimming when I
should be reading.

---
Lenard Lindstrom
<le***@telus.net>
Jul 3 '07 #199
On 7/2/07, Douglas Alan <do**@alum.mit.edu> wrote:
Lenard Lindstrom <le***@telus.net> writes:
If they are simply a performance tweak then it's not an issue *. I
was just concerned that the calls were necessary to keep resources
from being exhausted.

Well, if you catch an exception and don't return quickly, you have to
consider not only the possibility that there could be some open files
left in the traceback, but also that there could be large and
now-useless data structures stored in the traceback.

Some people here have been arguing that all code should use "with" to
ensure that the files are closed. But this still wouldn't solve the
problem of the large data structures being left around for an
arbitrary amount of time.
I don't think anyone has suggested that. Let me be clear about *my*
position: When you need to ensure that a file has been closed by a
certain time, you need to be explicit about it. When you don't care,
just that it will be closed "soonish", then relying on normal object
lifetimes is sufficient. This is true regardless of whether
object lifetimes are handled via refcount or via "true" garbage
collection. Relying on the specific semantics of refcounting to give
certain lifetimes is a logic error.

For example:

f = some_file() #maybe it's the file store for a database implementation
f.write('a bunch of stuff')
del f
#insert code that assumes f is closed.

This is the sort of code that I warn against writing.

f = some_file()
with f:
f.write("a bunch of stuff")
#insert code that assumes f is closed, but correctly this time

is better.
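
Equivalently, the name can be bound by the with statement itself,
which reads a little better (this assumes Python 2.5, where the with
statement needs a __future__ import; some_file is the same
hypothetical as above):

from __future__ import with_statement

with some_file() as f:
    f.write("a bunch of stuff")
#insert code that assumes f is closed, and correctly so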

On the other hand,
f = some_file()
f.write("a bunch of stuff")
#insert code that doesn't care about the state of f

is also fine. It *remains* fine no matter what kind of object lifetime
policy we have. The very worst case is that the file will never be
closed. However, this is exactly the sort of guarantee that GC can't
make, just as it can't ensure that you won't run out of memory. That's
a general case argument about refcounting semantics vs GC semantics,
and there are benefits and disadvantages to both sides.

What I am arguing against are explicit assumptions based on implicit
behaviors. Those are always fragile, and doubly so when the implicit
behavior isn't guaranteed (and, in fact, is explicitly *not*
guaranteed, as with refcounting semantics).
Jul 5 '07 #200
