Bytes | Developer Community
BIG successes of Lisp (was ...)

In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
successes.

a. orbitz.com web site uses Lisp for algorithms, etc.
b. Yahoo store was originally written in Lisp.
c. Emacs

The issues with these will probably come up, so I might as well
mention them myself (which will also make this a more balanced
post)

a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while)

b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)

c. Emacs has a reputation for being slow and bloated. But then
it's not written in Common Lisp.

Are ViaWeb and Orbitz bigger successes than LaTeX? Do they
have more users? It depends. Does viewing a PDF file made
with LaTeX make you a user of LaTeX? Does visiting Yahoo
store make you a user of ViaWeb?

For the sake of being balanced: there were also some *big*
failures, such as Lisp Machines. They failed because
they could not compete with UNIX (SUN, SGI) in a time when
performance, multi-userism and uptime were of prime importance.
(Older LispM's just leaked memory until they were shut down,
newer versions overcame that problem but others remained)

Another big failure that is often _attributed_ to Lisp is AI,
of course. But I don't think one should blame a language
for AI not happening. Marvin Minsky, for example,
blames Robotics and Neural Networks for that.
Jul 18 '05
Bruce Hoult <br***@hoult.org> wrote previously:
|~bruce$ ls -l `which emacs`
|-rwxr-xr-x 1 root wheel 4596224 Sep 24 04:29 /usr/bin/emacs
|~bruce$ ls -l /Applications/Microsoft\ Office\ X/Microsoft\ Word
|-rwxr-xr-x 1 bruce admin 10568066 Sep 26 2002
|/Applications/Microsoft Office X/Microsoft Word

Not even Windows users use MS-Word to edit program code; this is a
completely irrelevant comparison.

For more realistic ones (from my OS/2 machine that I'm sitting at, and
the editors I actually use):

D:\editors % ls boxer\b2.exe jEdit4.2pre1\jedit.jar fte\fte.exe
3-24-95 7:00a 317127 0 b2.exe
5-06-03 1:36a 2797098 219 jedit.jar
1-04-00 9:19p 585235 0 fte.exe

On a Linux box where my website is hosted (and emacs isn't installed
even if I wanted to use it):

~$ ls -l /usr/bin/joe /usr/bin/vim
-r-xr-xr-x 1 root root 166160 Feb 28 2001 /usr/bin/joe*
-rwxr-xr-x 1 root root 1172464 Dec 5 2001 /usr/bin/vim*

On my Macs I use jEdit too, so the size is basically the same (although
I have a bit more up-to-date version there). On my FreeBSD box maybe
I'll use kEdit or kDevel, or jed or vim; but I'm not sure of file sizes
from here.

IOW: Emacs is BLOATED.

Yours, David...

--
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons. Intellectual
property is to the 21st century what the slave trade was to the 16th.

Jul 18 '05 #151

me***@gnosis.cx (David Mertz) writes:

[examples snipped]
IOW: Emacs is BLOATED.


I have 512 MB of memory on my $300 PC. Mozilla at the time of posting
eats 36M, emacs 15 M. I reckon that BLOAT will be approaching a real
problem when I run, like, twenty or thirty emacses or so, at once.

In short, a non-issue for the ordinary computer user.

Best,
Thomas
--
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin

Jul 18 '05 #152
me***@gnosis.cx (David Mertz) writes:
|-rwxr-xr-x 1 root wheel 4596224 Sep 24 04:29 /usr/bin/emacs
|~bruce$ ls -l /Applications/Microsoft\ Office\ X/Microsoft\ Word
|-rwxr-xr-x 1 bruce admin 10568066 Sep 26 2002
|/Applications/Microsoft Office X/Microsoft Word

Not even Windows users use MS-Word to edit program code; this is a
completely irrelevant comparison.
OK, compare it to Visual Studio (not counting the compilers) instead.
IOW: Emacs is BLOATED.


Or whatever else you're using is underpowered...
Jul 18 '05 #153
Alex Martelli <al***@aleax.it> wrote in message news:<rD***********************@news2.tin.it>...
Pascal Costanza wrote:
Exactly. Lisp-style Macros make these things a breeze. The other
alternatives you suggest are considerably more complicated. As I have
What's "considerably more complicated" in, say,
my_frobazzer = frobaz_compiler('''
oh my pretty, my beauteous,
my own all my own special unique frambunctious *LANGUAGE*!!!
''')
and later on call my_frobazzer(bim, bum, bam) at need?


The problem is that

program = compiler(character-string)

is too much of a closed design. A more flexible design resembles:

abstract-syntax-tree = reader(character-string)

target-syntax-tree = translator(abstract-syntax-tree)

program = compiler(target-syntax-tree)

The input to a compiler should not be a character string, but
structured data.
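The three-stage design above can be sketched in Python for a toy prefix-arithmetic language (every name here is illustrative, not from any real library):

```python
# Toy illustration of the reader -> translator -> compiler stages for a
# hypothetical prefix-arithmetic mini-language.

def reader(text):
    """Character string -> abstract syntax tree (nested lists)."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    def parse(pos):
        if tokens[pos] == "(":
            node, pos = [], pos + 1
            while tokens[pos] != ")":
                child, pos = parse(pos)
                node.append(child)
            return node, pos + 1
        tok = tokens[pos]
        return (int(tok) if tok.isdigit() else tok), pos + 1
    tree, _ = parse(0)
    return tree

def translator(ast):
    """Abstract syntax tree -> target syntax tree (here: Python source)."""
    if isinstance(ast, list):
        op, *args = ast
        return "(" + (" %s " % op).join(translator(a) for a in args) + ")"
    return str(ast)

def compiler(target):
    """Target syntax tree -> runnable program."""
    return eval("lambda: " + target)

program = compiler(translator(reader("(+ 1 (* 2 3))")))
print(program())  # -> 7
```

The point survives the toy scale: only the translator step changes when the language's semantics change; the reader and compiler stay fixed.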
The complexity
of the frobaz_compiler factory callable depends exclusively on the
complexity of the language you want it to parse and compile, and you
In Lisp, the complexity of the compiler is constant; it's a language
builtin. The complexity of the reader depends on the complexity of the
lexical properties of the language, and the complexity of the
translator depends on the semantic complexity of the language.
The vast majority of applications has absolutely no need to tinker
with the language's syntax and fundamental semantics.
This is a common fallacy: namely that the difficulty of doing
something, and the consequent infrequency of doing it, constitute
evidence for a lack of need of doing it. In fact this is nothing more
than rationalization: ``I know I have inadequate tools, but I really
don't need them''. It's not unlike ``I can't reach those grapes, but I
know they are sour anyway''.

In Lisp, tinkering with the syntax and semantics is done even in
trivial programs.

By the way, your use of ``fundamental'' suggests a misunderstanding:
namely that some kind of destructive manipulation of the language is
going on to change the foundations, so that existing programs are no
longer understood or change in meaning. This is not so: rather, users
extend the language to understand new constructs. The fundamental
syntax and semantics stay what they are; they are the stable target
language for the new constructs.
When you need
a language with different syntax, this is best treated as a serious
task and devoted all the respect it deserves, NOT treated as "a
breeze".
This is a common viewpoint in computer science. Let me state it like
this: ``language design is a Hard Problem that requires you to whip
out lexical analyzers, parser constructors, complex data structures
for symbol table management, intermediate code generation, target code
generation, instruction selection, optimization, etc.'' But when you
have this:

abstract-syntax-tree = reader(character-string)

target-syntax-tree = translator(abstract-syntax-tree)

program = compiler(target-syntax-tree)

you can do most of the language design work in the second step:
translation of syntax trees into other trees. This is where macro
programming is done, and there is a large amount of clever
infrastructure in Lisp which makes it easy to work at this level. Lisp
contains a domain language for language construction.
PARTICULARLY for domain-specific languages, the language's
designers NEED access to domain-specific competence, which typically
they won't have enough of for any domain that doesn't happen to be
what they've spent many years of their life actually DOING (just


Bingo! This is where Lisp comes in; it gives the domain experts the
power to express what they want, without requiring them to become
compiler construction experts. This is why Lisp is used by some
artificial intelligence researchers, biologists, linguists, musicians,
etc.
Jul 18 '05 #154
me***@gnosis.cx (David Mertz) wrote in message news:<ma*************************************@python.org>...
IOW: Emacs is BLOATED.


In an era of 250 GB hard drives, and GB RAM modules, who cares that
emacs is about 5MB in size, and uses 4 (real) to 10 (virtual) MB of
RAM.

Moore's law is lisp's friend. ;^)
Jul 18 '05 #155
Kaz Kylheku:
The problem is that

program = compiler(character-string)

is too much of a closed design. A more flexible design resembles:

abstract-syntax-tree = reader(character-string)

target-syntax-tree = translator(abstract-syntax-tree)

program = compiler(target-syntax-tree)
But you didn't see the implementation of Alex's 'compiler' function.
It looks like

def compiler(character_string):
    return compile_tree(translator(reader(character_string)))

and those intermediates are part of the publicly usable
API.
The input to a compiler should not be a character string, but
structured data.
I think you're arguing naming preferences here. That's fine;
Alex's point remains unchanged. In a dynamic language like
Python, a domain-specific language can be parsed at run-time,
converted to a Python parse tree, and compiled to Python
byte codes, just like what you want and in the form you want it.
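A minimal modern sketch of that claim (the `ast` module shown here postdates this thread; the 2003-era `compiler` module has since been removed from Python):

```python
import ast

# A string is parsed at run time into a Python AST and compiled down
# to byte code, then executed in a fresh namespace.
source = "result = sum(x * x for x in range(4))"
tree = ast.parse(source)                     # string -> Python AST
code = compile(tree, "<generated>", "exec")  # AST -> byte code
namespace = {}
exec(code, namespace)
print(namespace["result"])  # -> 14
```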
In Lisp, the complexity of the compiler is constant; it's a language
builtin. The complexity of the reader depends on the complexity of the
lexical properties of the language, and the complexity of the
translator depends on the semantic complexity of the language.
Replace 'Lisp' with 'Python' and the result is still true. Ditto
for 'C'. So I'm afraid I don't understand your point.
The vast majority of applications has absolutely no need to tinker
with the language's syntax and fundamental semantics.


This is a common fallacy: namely that the difficulty of doing
something, and the consequent infrequency of doing it, constitute
evidence for a lack of need of doing it. In fact this is nothing more
than rationalization: ``I know I have inadequate tools, but I really
don't need them''. It's not unlike ``I can't reach those grapes, but I
know they are sour anyway''.


That's not Alex's argument. Python has the ability to do exactly
what you're saying (domain language -> AST -> Python code or AST ->
compiler). It's rarely needed (I've used it twice now in my six years
or so of Python), so why should a language cater to make that
easy at the expense of making frequent things harder?
In Lisp, tinkering with the syntax and semantics is done even in
trivial programs.
And that's a good thing? That means that everyone looking at
new Lisp code needs to understand the modifications to the syntax
and semantics. That may be appropriate for an insular organization,
but otherwise it makes it harder for others to understand any code.
By the way, your use of ``fundamental'' suggests a misunderstanding:
namely that some kind of destructive manipulation of the language is
going on to change the foundations, so that existing programs are no
longer understood or change in meaning. This is not so: rather, users
extend the language to understand new constructs. The fundamental
syntax and semantics stay what they are; they are the stable target
language for the new constructs.
Alex and others have responded to this argument many times. The
summary is that 1) in practice those who extend the language are most
often not the domain experts so the result doesn't correctly capture
the domain, 2) because it's easy to do, different groups end up with
different, incompatible domain-specific modifications, 3) rarely
is that approach better than using OOP, HOF and other approaches,
where better is defined as more flexible, easier to read, more
succinct, etc.
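Point 3 can be illustrated with a plain higher-order function that enforces the open/close protocol without any macro; the helper name is made up for this sketch:

```python
import os
import tempfile

# A higher-order function enforcing the open/close protocol -- one of
# the macro-free alternatives alluded to above.  The name is illustrative.
def with_open_file(path, mode, body):
    f = open(path, mode)
    try:
        return body(f)       # caller's code runs with the open file
    finally:
        f.close()            # guaranteed cleanup, as a macro would ensure

path = os.path.join(tempfile.gettempdir(), "hof_demo.txt")
with_open_file(path, "w", lambda f: f.write("hello"))
print(with_open_file(path, "r", lambda f: f.read()))  # -> hello
```

The cost, as the last post in this thread notes for the Lisp version, is that the body is a separate function, lexically isolated from the call site.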
This is a common viewpoint in computer science. Let me state it like
this: ``language design is a Hard Problem that requires you to whip
out lexical analyzers, parser constructors, complex data structures
for symbol table management, intermediate code generation, target code
generation, instruction selection, optimization, etc.''
Actually, language design doesn't require any of those. They
are needed to implement a language.

Let me add that implementing a language *was* a Hard Problem,
but effectively solved in the 1970s. The solutions are now well
known and there are a huge number of tools to simplify all
the steps in that process, books on the subject, and people with
experience in doing it.

There are still hard problems, but they are hard engineering
problems, not hard scientific ones where the theory is not
well understood.
But when you have this:

abstract-syntax-tree = reader(character-string)

target-syntax-tree = translator(abstract-syntax-tree)

program = compiler(target-syntax-tree)

you can do most of the language design work in the second step:
translation of syntax trees into other trees. This is where macro
programming is done, and there is a large amount of clever
infrastructure in Lisp which makes it easy to work at this level. Lisp
contains a domain language for language construction.


Python has all that, including the ability to turn a string into
a Python AST, manipulate that tree, and compile that tree.

It's not particularly clever; there's no real need for that. In
general, the preference is to be clear and understandable over
being clever. (The New Jersey approach, perhaps?)

(Though the compiler module is pretty clumsy to use.)
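A sketch of "manipulate that tree" using the later `ast` module (which eventually replaced the clumsy `compiler` module mentioned above): a transformer that rewrites additions into multiplications before compiling.

```python
import ast

# ast.NodeTransformer walks a parsed tree and returns a rewritten one;
# here every Add node becomes a Mult node.
class AddToMul(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)          # rewrite children first
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

tree = ast.parse("3 + 4", mode="eval")
tree = ast.fix_missing_locations(AddToMul().visit(tree))
print(eval(compile(tree, "<ast>", "eval")))  # -> 12
```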
PARTICULARLY for domain-specific languages, the language's
designers NEED access to domain-specific competence, which typically
they won't have enough of for any domain that doesn't happen to be
what they've spent many years of their life actually DOING (just


Bingo! This is where Lisp comes in; it gives the domain experts the
power to express what they want, without requiring them to become
compiler construction experts. This is why Lisp is used by some
artificial intelligence researchers, biologists, linguists, musicians,
etc.


Speaking as a representative from the biology community, the
Lisp programmers are a minority and far behind C, Fortran, Perl,
and still behind Python, Tcl, and even Ruby.

*If* the domain expert is also an expert Lisp programmer then
what you say is true. It's been my experience that most domain
experts are not programmers -- most domains aren't programming.
Even in what I do, computational life sciences, most chemists
and biologists can do but a smattering of programming. That's
why they hire people like me. And I've found that objects and
functions are good enough to solve the problems in that domain;
from a CS point of view, it's usually pretty trivial.

Andrew
da***@dalkescientific.com
Jul 18 '05 #156
"Andrew Dalke" <ad****@mindspring.com> writes:
It's not particularly clever; there's no real need for that. In
general, the preference is to be clear and understandable over
being clever. (The New Jersey approach, perhaps?)


Some people (academics) are paid for being clever. Others (engineers)
are paid for creating systems that work (in the wide meaning of the
word), in a timeframe that the company/client can afford.

In the for-fun area, by analogy, some people get the kick from
creating systems that work (be it a Linux distribution or a network
programming framework), and some from creating an uber-3133t hacks in
order to impress their friends.

Macros provide billions of different ways to be "clever", so obviously
Lisp gives greater opportunity of billable hours for people who can
bill for clever stuff. I'm studying Graham's "On Lisp" as bad-time
reading ATM, and can also sympathize w/ people who use Lisp just for
the kicks.

Lisp might have a good future ahead of it if it was only competing
against C++, Java and others. Unfortunately for Lisp, other dynamic
languages exist at the moment, and they yield greater
productivity. Most bosses are more impressed with getting stuff done
fast than getting it done slowly, using gimmicks that would have given
you an A+ if it was a CS research project.

--
Ville Vainio http://www.students.tut.fi/~vainio24
Jul 18 '05 #157
Ville Vainio <vi********************@spamtut.fi> writes:

bill for clever stuff. I'm studying Grahams "On Lisp" as bad-time

^^^

Typo, s/bad/bed, obviously :).

--
Ville Vainio http://www.students.tut.fi/~vainio24
Jul 18 '05 #158
me***@gnosis.cx (David Mertz) writes:
Not even Windows users use MS-Word to edit program code; this is a
completely irrelevant comparison.


I'm not sure you're right. Using Word's little brother WordPad is just
as weird, and we just had proof of that.
--
(espen)
Jul 18 '05 #159
Andrew Dalke wrote:
That's not Alex's argument. Python has the ability to do exactly
what you're saying (domain language -> AST -> Python code or AST ->
compiler). It's rarely needed (I've used it twice now in my six years
or so of Python), so why should a language cater to make that
easy at the expense of making frequent things harder?
Maybe you have only rarely used it because it is hard, and therefore
just think that you rarely need it. At least, this is my assessment of
what I have thought to be true before I switched to Lisp.
In Lisp, tinkering with the syntax and semantics is done even in
trivial programs.


And that's a good thing? That means that everyone looking at
new Lisp code needs to understand the modifications to the syntax
and semantics. That may be appropriate for an insular organization,
but otherwise it makes it harder for others to understand any code.


That's true for any language. In any language you build new data
structures, classes, methods/functions/procedures, and everyone looking
at new code in any language needs to understand these new definitions.
There is no difference here _whatsoever_.

Modifying syntax and creating new language abstractions only _sounds_
scary, but these things are like any other activity during programming
that take care, as soon as the language you use supports them well.
By the way, your use of ``fundamental'' suggests a misunderstanding:
namely that some kind of destructive manipulation of the language is
going on to change the foundations, so that existing programs are no
longer understood or change in meaning. This is not so: rather, users
extend the language to understand new constructs. The fundamental
syntax and semantics stay what they are; they are the stable target
language for the new constructs.

Alex and others have responded to this argument many times. The
summary is that 1) in practice those who extend the language are most
often not the domain experts so the result doesn't correctly capture
the domain,


Are these non-experts any better off with just data structures and
functions?
2) because it's easy to do, different groups end up with
different, incompatible domain-specific modifications,
Do they also come up with different APIs? How is divergence of APIs
solved in practice? Can't you use the same solutions for macro libraries?
3) rarely
is that approach better than using OOP, HOF and other approaches,
where better is defined as more flexible, easier to read, more
succinct, etc.


Are you guessing, or is this based on actual experience?

BTW, macros are definitely more flexible and more succinct. The only
claim that I recall being made by Pythonistas is that macros make code
harder to read. The Python argument is that uniformity eases
readability; the Lisp argument is that a better match to the problem
domain eases readability. I think that...

(with-open-file (f "...")
  ...
  (read f)
  ...)

...is much clearer than...

try:
    f = open('...')
    ...
    f.read()
    ...
finally:
    f.close()

Why do I need to say "finally" here? Why should I even care about
calling close? What does this have to do with the problem I am trying to
solve? Do you really think it does not distract from the problem when
you first encounter that code and try to see the forest from the trees?
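For what it's worth, Python later conceded this exact point: the with-statement (added in Python 2.5, well after this thread) hides the try/finally protocol much as with-open-file does.

```python
import os
import tempfile

# The with-statement runs the close protocol automatically: no visible
# try/finally, no explicit f.close().
path = os.path.join(tempfile.gettempdir(), "with_demo.txt")
with open(path, "w") as f:
    f.write("some text")
# f is guaranteed closed here, even if the body had raised.

with open(path) as f:
    print(f.read())  # -> some text
print(f.closed)  # -> True
```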

BTW, this is one of the typical uses for macros: When designing APIs,
you usually want to make sure that certain protocols are followed. For
the typical uses of your library you can provide high-level macros that
hide the details of your protocol.

Here is an arbitrary example that I have just picked from the
documentation for a Common Lisp library I have never used before:

(with-transaction
  (insert-record :into [emp]
               :attributes '(x y z)
                 :values '(a b c))
  (update-records [emp]
                  :attributes [dept]
                  :values 50
                  :where [= [dept] 40])
  (delete-records :from [emp]
                  :where [> [salary] 300000]))

(see
http://www.lispworks.com/reference/l...m#pgfId-889797
)

What do you think the with-transaction macro does? Do you need any more
information than that to understand the code?

BTW, note that the third line of this example is badly indented. Does
this make reading the code more difficult?

Here is why I think that Python is successful: it's because it favors
dynamic approaches over static approaches (wrt type system, and so on).
I think this is why languages like Ruby, Perl and PHP are also
successful. Languages like Java, C and C++ are very static, and I am
convinced that static approaches create more problems than they solve.

It's clear that Python is a very successful language, but I think this
fact is sometimes attributed to the wrong reasons. I don't think its
success is based on prettier syntax or uniformity. Neither give you an
objectively measurable advantage.
Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #160
Ville Vainio wrote:
Lisp might have a good future ahead of it if it was only competing
against C++, Java and others. Unfortunately for Lisp, other dynamic
languages exist at the moment, and they yield greater
productivity.
This is true for the things that are currently en vogue.
Most bosses are more impressed with getting stuff done
fast than getting it done slowly, using gimmicks that would have given
you an A+ if it was a CS research project.


I have implemented an AOP extension for Lisp that took about a weekend
to implement. The implementation is one page of Lisp code and is rather
efficient (wrt usability and performance) because of macros.

I have heard rumors that the development of an AOP extension for Python
would take considerably longer.
Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #161
Ville Vainio <vi********************@spamtut.fi> writes:
"Andrew Dalke" <ad****@mindspring.com> writes:
It's not particularly clever; there's no real need for that. In
general, the preference is to be clear and understandable over
being clever. (The New Jersey approach, perhaps?)
Some people (academics) are paid for being clever. Others
(engineers) are paid for creating systems that work (in the wide
meaning of the word), in a timeframe that the company/client can
afford.


And we all know that nothing made by academics actually works.
Conversely there is no need to be clever if you are an engineer.
Macros provide billions of different ways to be "clever", so obviously
Lisp gives greater opportunity of billable hours for people who can
bill for clever stuff. I'm studying Graham's "On Lisp" as bad-time
reading ATM, and can also sympathize w/ people who use Lisp just for
the kicks.
Shhh! Don't alert the PHB's to how we pad our hours!
Lisp might have a good future ahead of it if it was only competing
against C++, Java and others. Unfortunately for Lisp, other dynamic
languages exist at the moment, and they yield greater productivity.
For instance, a productivity study done by Erann Gat showed ... no
wait. Where was that productivity study that showed how far behind
Lisp was?
Most bosses are more impressed with getting stuff done fast than
getting it done slowly, using gimmicks that would have given you an
A+ if it was a CS research project.


Which is *precisely* the reason that bosses have adopted C++ over C.
Jul 18 '05 #162
Alex Martelli <al***@aleax.it> writes:
Pascal Costanza wrote:
...
In the case of Python, couldn't you rightfully regard it as driven by a
one-man committee? ;-)
Ah, what a wonderfully meaningful view that is.
Specifically: when you want to ALTER SYNTAX...


If it were only about making small alterations to the syntax, I wouldn't


I didn't say SMALL. Small or large, it's about alteration to the
syntax. Other lispers have posted (on several of this unending
multitude of threads, many but not all of which I've killfiled)
stating outright that there is no semantic you can only implement
with macros: that macros are ONLY to "make things pretty" for
given semantics. If you disagree with them, I suggest pistols at
ten paces, but it's up to you lispers of course -- as long as
you guys with your huge collective experience of macros stop saying
a million completely contradictory things about them and chastising
me because (due, you all keep claiming, to my lack of experience)
I don't agree with all of them, I'll be glad to debate this again.


Why is this so surprising? Maybe different lispers use macros for
different things, or see different advantages to them? What all have
in common though, is that they all consider macros a valuable and
important part of a programming language. What I have seen in this
thread are your (Alex) lengthy posts where you reiterate the same
uninformed views of macros over and over again. You seem to have made
up your mind already, even though you do not seem to have fully
understood Common Lisp macros yet. Am I wrong?
Till then, this is yet another thread that get killfiled.
Why do you bother to post if you are not even going to read the
responses?

<snip>
But, until then -- bye. And now, to killfile this thread too....


What is the point in initiating a subthread by an almost 300-line,
very opinionated post just to immediately killfile it?
Björn
Jul 18 '05 #163
Pascal Costanza wrote:
BTW, macros are definitely more flexible and more succinct. The only
claim that I recall being made by Pythonistas is that macros make code
harder to read. The Python argument is that uniformity eases
readability; the Lisp argument is that a better match to the problem
domain eases readability. I think that...

(with-open-file (f "...")
  ...
  (read f)
  ...)

...is much clearer than...

try:
    f = open('...')
    ...
    f.read()
    ...
finally:
    f.close()

Why do I need to say "finally" here? Why should I even care about
calling close? What does this have to do with the problem I am trying to
solve? Do you really think it does not distract from the problem when
you first encounter that code and try to see the forest from the trees?

Your two examples do completely different things, and the second is
written rather poorly, as f might not exist in the finally block.
It certainly won't if the file doesn't exist.

A better comparison would (*might*, I haven't used lisp in a while) be:

(with-open-file (f filename :direction :output :if-exists :supersede)
  (format f "Here are a couple~%of test data lines~%")) => NIL

if os.path.exists(filename):
    f = open(filename, 'w')
    print >> f, "Here are a couple of test data lines"

I think that the latter is easier to read and I don't have to worry
about f.close(), but that is just me. I don't know what with-open-file
does if filename actually can't be opened but you haven't specified what
to do in this case with your example so I won't dig deeper.
BTW, this is one of the typical uses for macros: When designing APIs,
you usually want to make sure that certain protocols are followed. For
the typical uses of your library you can provide high-level macros that
hide the details of your protocol.

I tend to use a model in this case. For example, if I want to always
retrieve a writable stream even if a file isn't openable, I just supply a
model function or method.

model.safeOpen(filename)
"""filename -> return a writable file if the file can be opened or a
StringIO buffer object otherwise"""

In this case what happens is explicit as it would be with a macro. Note
that I could have overloaded the built-in "open" function but I don't
really feel comfortable doing that. So far, I haven't been supplied an
urgent need to use macros.
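A hypothetical sketch of the model.safeOpen idea described above (the name and fallback behavior come from the docstring; the implementation is guessed):

```python
import io
import os

# Hypothetical implementation of the safeOpen model function: return a
# writable file when the path can be opened, else a StringIO buffer.
def safe_open(filename):
    """filename -> writable file if it can be opened, else StringIO."""
    try:
        return open(filename, "w")
    except OSError:
        return io.StringIO()

# A path inside a directory that doesn't exist -> fall back to a buffer.
stream = safe_open(os.path.join("no", "such", "dir", "x.txt"))
stream.write("never lost")          # works either way
print(type(stream).__name__)  # -> StringIO
```

As the post says, what happens is explicit at the call site, with no macro needed.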
Here is an arbitrary example that I have just picked from the
documentation for a Common Lisp library I have never used before:

(with-transaction
  (insert-record :into [emp]
               :attributes '(x y z)
                 :values '(a b c))
  (update-records [emp]
                  :attributes [dept]
                  :values 50
                  :where [= [dept] 40])
  (delete-records :from [emp]
                  :where [> [salary] 300000]))

(see
http://www.lispworks.com/reference/l...m#pgfId-889797
)

What do you think the with-transaction macro does? Do you need any more
information than that to understand the code?
Yep. Where's the database? I have to look at the specification you
provided to realize that it is provided by *default-database*. I also
am not aware from this macro whether the transaction is actually
committed or not and whether it rolls back if anything goes wrong. I
can *assume* this at my own peril but I, of course, had to look at the
documentation to be sure. Now this might be lisp-centric but I, being a
lowly scientist, couldn't use this macro without that piece of
knowledge. Of course, now that I know what the macro does I am free to
use it in the future. And yet, if I actually want to *deal* with errors
that occur in the macro besides just rolling back the transaction I
still need to catch the exceptions ( or write another macro :) )

The macros that you have supplied seem to deal with creating a standard
API for dealing with specific exceptions.

Again, I could create an explicit database model

model.transaction(commands)
"""(commands) Execute a list of sql commands in a transaction.
The transaction is rolled back if any of the commands fail
and the corresponding failed exception is raised"""
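The model.transaction described here can be sketched as a context manager over sqlite3 (both `contextlib` and the with-statement postdate this thread; the commit/rollback policy follows the docstring above):

```python
import sqlite3
from contextlib import contextmanager

# Commit on success, roll back and re-raise on any failure -- the
# behavior the docstring above makes explicit.
@contextmanager
def transaction(conn):
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (dept INTEGER, salary INTEGER)")
with transaction(conn):
    conn.execute("INSERT INTO emp VALUES (40, 1000)")
    conn.execute("UPDATE emp SET dept = 50 WHERE dept = 40")
print(conn.execute("SELECT dept FROM emp").fetchone()[0])  # -> 50
```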

BTW, note that the third line of this example is badly indented. Does
this make reading the code more difficult?

This is a red herring. Someone had to format this code to make it
suitably readable. I could add a red herring of my own by reformatting
this into an unreadable blob, but that would be rather childish on my
part and completely irrelevant.
Here is why I think that Python is successful: it's because it favors
dynamic approaches over static approaches (wrt type system, and so on).
I think this is why languages like Ruby, Perl and PHP are also
successful. Languages like Java, C and C++ are very static, and I am
convinced that static approaches create more problems than they solve.

We are in agreement here.
It's clear that Python is a very successful language, but I think this
fact is sometimes attributed to the wrong reasons. I don't think its
success is based on prettier syntax or uniformity. Neither give you an
objectively measurable advantage.

It all depends on your perspective. I think that I have limited brain
power for remembering certain operations. A case in point, I was
re-writing some documentation yesterday for the embedded metakit
database. Python uses a slice notation for list operations:

list[lo:hi] -> returns a list from index lo to index hi-1

The database had a function call select:

view.select(lo, hi) -> returns a list from index lo to index hi

While it seems minor, this caused me major grief in usage and I wish it
had been uniform with the way python selects ranges. Now I have two
things to remember. I can objectively measure the difference in this
case. The difference is two hours of debugging because of a lack of
uniformity. Now, I brought this on myself by not reading the
documentation closely enough and missing the word "(inclusive)", so I
can't gripe too much. I will just say that the documentation now clearly
shows this lack of uniformity from the standard pythonic way. Of course
we could talk about the "should indexed arrays start with 0 or 1?" but I
respect that there are different desired levels of uniformity. Mine is
probably a little higher than yours :)
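For what it's worth, the half-open convention is easy to demonstrate. A small sketch follows, where `select` is a hypothetical stand-in for the inclusive metakit call, not the actual metakit API:

```python
# Python slices are half-open: index lo is included, index hi is excluded.
data = [10, 20, 30, 40, 50]
assert data[1:3] == [20, 30]          # indices 1 and 2, but not 3

# A hypothetical inclusive select, modeled on view.select(lo, hi),
# returns one more element than the slice with the same arguments.
def select(seq, lo, hi):
    return seq[lo:hi + 1]

assert select(data, 1, 3) == [20, 30, 40]  # indices 1 through 3, inclusive
```

The two calls take identical arguments and differ by exactly one element, which is precisely the off-by-one that costs debugging time.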

Note, that I have had similar experiences in lisp where macros that I
expected to work a certain way, as they were based on common CLHS
macros, didn't. For example, you wouldn't expect (with-open-file ...)
to behave fundamentally differently from (with-open-stream ...) and would
probably be annoyed if they did.

In case anyone made it this far, I'm not dissing lisp or trying to
promote python. Both languages are remarkably similar. Macros are one
of the constructs that make lisp lisp, and indentation is one of the
things that make python python. Macros could be extremely useful in
python, and perhaps to someone who uses them regularly, their omission is
a huge wart. Having used macros in the past, all I can say is that for
*MY* programming style, I can't say that I miss them that much and have
given a couple of examples of why not.
Pascal

Brian Kelley
Whitehead institute for Biomedical Research
Jul 18 '05 #164
Pascal Costanza <co******@web.de> writes:
Andrew Dalke wrote:

(with-open-file (f "...")
  ...
  (read f)
  ...)

...is much clearer than...

try:
    f=open('...')
    ...
    f.read()
    ...
finally:
    f.close()

Why do I need to say "finally" here? Why should I even care about
calling close? What does this have to do with the problem I am trying
to solve? Do you really think it does not distract from the problem
when you first encounter that code and try to see the forest for the
trees?
snip


Can you implement with-open-file as a function? If you could how would
it compare to the macro version? It would look something like:

(defun with-open-file (the-func &rest open-args)
  (let ((stream (apply #'open open-args)))
    (unwind-protect
        (funcall the-func stream)
      (close stream))))

(defun my-func (stream)
  ... operate on stream ...)

(defun do-stuff ()
  (with-open-file #'my-func "somefile" :direction :input))

One of the important differences is that MY-FUNC is lexically isolated
from the environment where WITH-OPEN-FILE appears. The macro version
does not suffer this; and it is often convenient for the code block
in the WITH-OPEN-FILE to access that environment.

Jul 18 '05 #165
Ville Vainio <vi********************@spamtut.fi> wrote in message news:<du*************@mozart.cc.tut.fi>...
Some people (academics) are paid for being clever. Others (engineers)
are paid for creating systems that work (in the wide meaning of the
word), in a timeframe that the company/client can afford.
[ snip ]
--
Ville Vainio http://www.students.tut.fi/~vainio24

^^^^^^^^

Tee hee! :)
Jul 18 '05 #166
"Andrew Dalke" <ad****@mindspring.com> wrote in message news:<V5***************@newsread4.news.pas.earthlink.net>...
Kaz Kylheku:
The input to a compiler should not be a character string, but
structured data.


I think you're arguing naming preferences here. That's fine;
Alex's point remains unchanged. In a dynamic language like
Python, parsing a domain specific language can be done at
run-time, parsed, converted to a Python parse tree, and
compiled to Python byte codes, just like what you want and
in the form you want it.


Ah, but in Lisp, this is commonly done at *compile* time. Moreover,
two or more domain-specific languages can be mixed together, nested in
the same lexical scope, even if they were developed in complete
isolation by different programmers. Everything is translated and
compiled together. Some expression in macro language B can appear in
an utterance of macro language A. Lexical references across these
nestings are transparent:

(language-a
  ... establish some local variable foo ...
  (language-b
    ... reference to local variable foo set up in language-a!
    ))

The Lisp parse tree is actually just normal Lisp code. There is no
special target language for the compiler; it understands normal Lisp,
and that Lisp is very conveniently manipulated by the large library of
list processing gadgets. No special representation or API is required.

Do people write any significant amount of code in the Python parse
tree syntax? Can you use that syntax in a Python source file and have
it processed together with normal code?

What is Python's equivalent to the backquote syntax, if I want to put
some variant pieces into a parse tree template?
Jul 18 '05 #167
On Wed, Oct 22, 2003 at 12:30:01PM -0400, Brian Kelley wrote:
Your two examples do completely different things and the second is
written rather poorly as f might not exist in the finally block.
It certainly won't if the file doesn't exist.

A better comparison would (*might*, I haven't used lisp in a while) be:

(with-open-file (f filename :direction :output :if-exists :supersede)
  (format f "Here are a couple~%of test data lines~%")) => NIL

if os.path.exists(filename):
    f = open(filename, 'w')
    print >> f, "Here are a couple of test data lines"


How are these in any way equivalent? Pascal posted his example with
try...finally and f.close() for a specific reason. In your Python
example, the file is not closed until presumably the GC collects the
descriptor and runs some finalizer (if that is even the case). This is
much different from the Lisp example which guarantees the closing of the
file when the body of the WITH-OPEN-FILE is exited. In addition:

``When control leaves the body, either normally or abnormally (such as
by use of throw), the file is automatically closed. If a new output file
is being written, and control leaves abnormally, the file is aborted and
the file system is left, so far as possible, as if the file had never
been opened.''

http://www.lispworks.com/reference/H...with-open-file

So in fact, you pointed out a bug in Pascal's Python example, and one
that is easy to make. All this error-prone code is abstracted away by
WITH-OPEN-FILE in Lisp.
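For the record, the standard correction (an editorial sketch, not something posted in the thread) is to open the file *before* entering the try, so the finally clause never runs against an unbound f:

```python
import os
import tempfile

# Set up a small test file.
fd, path = tempfile.mkstemp()
os.write(fd, b"line 1\nline 2\n")
os.close(fd)

f = open(path)          # open *outside* the try: if open() raises,
try:                    # f is never half-bound for finally to trip over
    data = f.read()
finally:
    f.close()           # runs whether or not the body raised

assert f.closed
assert data == "line 1\nline 2\n"
os.remove(path)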

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #168
On Wed, Oct 22, 2003 at 09:52:42AM -0700, Jock Cooper wrote:
One of the important differences is that MY-FUNC is lexically isolated
from the environment where WITH-OPEN-FILE appears. The macro version
does not suffer this; and it is often convenient for the code block
in the WITH-OPEN-FILE to access that environment.


(call-with-open-file
  #'(lambda (stream)
      ...)
  "somefile"
  :direction :input)

WITH-OPEN-FILE happens to be one of those macros which doesn't require
compile-time computation, but rather provides a convenient interface to
the same functionality as above.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #169
Kaz Kylheku:
Ah, but in Lisp, this is commonly done at *compile* time.
Compile vs. runtime is an implementation issue. Doesn't
change expressive power, only performance. Type inferencing
suggests that there are other ways to get speed-ups from
dynamic languages.
Moreover,
two or more domain-specific languages can be mixed together, nested in
the same lexical scope, even if they were developed in complete
isolation by different programmers.
We have decidedly different definitions of what a "domain-specific
language" means. To you it means the semantics expressed as
an s-exp. To me it means the syntax is also domain specific. Eg,
Python is a domain specific language where the domain is
"languages where people complain about scope defined by
whitespace." ;)

Yes, one can support Python in Lisp as a reader macro -- but
it isn't done because Lispers would just write the Python out
as an S-exp. But then it wouldn't be Python, because the domain
language *includes*domain*syntax*.

In other words, writing the domain language as an S-exp
is a short cut to make it easier on the programmer, and not
on the domain specialist. Unless the domain is programming.
And you know, very few of the examples of writing a domain
specific language in Lisp have been for tasks other than
programming.
Do people write any significant amount of code in the
Python parse tree syntax?
No. First, it isn't handled as a syntax, it's handled
as operations on a tree data structure. Second -- and
this point has been made several times -- that style of
programming isn't often needed, so there of course isn't
a "significant amount."
Can you use that syntax in a Python source file and have
it processed together with normal code?
Did you look at my example doing just that? I built
an AST for Python and converted it into a normal function.
What is Python's equivalent to the backquote syntax, if I
want to put some variant pieces into a parse tree template?


There isn't. But then there isn't a need. The question isn't
"how do I do this construct that I expect in Lisp?" it's "how
do I solve this problem?" There are other ways to solve
that problem than creating a "parse tree template" and to
date there have been few cases where the alternatives were
significantly worse -- even in the case of translating a domain
language into local syntax, which is a Lisp specialty, it's only
about twice as long for Python as for Lisp and definitely
not "impossible" like you claimed. Python is definitely worse
for doing research in new programming styles, but then
again that's a small part of what most programmers need,
and an even smaller part of what most non-professional
programmers need. (Eg, my science work, from a computer
science viewpoint, is dead boring.)

There's very little evidence that Lisp is significantly better
than Python (or vice versa) for solving most problems.
It's close enough that it's a judgement call to decide which
is more appropriate.

But that's a Pythonic answer, which acknowledges that
various languages are better for a given domain and that it's
relatively easy to use C/Java bindings, shared memory, sockets,
etc to make them work together, and not a Lispish answer,
which insists that Lisps are the best and only languages
people should consider. (Broad brush, I know.)

Andrew
da***@dalkescientific.com
Jul 18 '05 #170
"Andrew Dalke" <ad****@mindspring.com> writes:
In other words, writing the domain language as an S-exp
is a short cut to make it easier on the programmer, and not
on the domain specialist. Unless the domain is programming.
And you know, very few of the examples of writing a domain
specific language in Lisp have been for tasks other than
programming.
Actually in my experience that hasn't been a problem. For example, I
wrote a program that crunched EDI documents. There were hundreds of
different document types each with its own rudimentary syntax. We had
a big 3-ring binder containing a printed copy of the ANSI standard
that had a semi-formal English description of the syntax of each of these
documents. My program had an embedded Lisp interpreter and worked by
letting you give it the syntax of each document type as a Lisp
S-expression. The stuff in the S-expression followed the document
description in the printed EDI standard pretty closely. I typed in
the first few syntax specs and then was able to hand off the rest to a
non-programmer, who was able to see pretty quickly how the
S-expressions worked and code the rest of them. I think that the
version of the system we actually shipped to customers still had the
S-expression syntax specs buried in its guts somewhere, but my memory
about that is hazy.

The instructions about how to process specific documents were also
entered as Lisp programs at first. That let us very quickly determine
the semantic features we wanted in processing scripts, even though we
knew that our customers wouldn't tolerate Lisp. Once we had the
semantics figured out, we were able to design a language that
superficially looked like an unholy marriage of Basic and Cobol. We
wrote a Yacc script that parsed that language and built up
S-expressions in memory, and then eval'd them with the Lisp
interpreter.
Do people write any significant amount of code in the Python parse
tree syntax?


Peter Norvig talks about this some in his Python/Lisp comparison page:

http://www.norvig.com/python-lisp.html

Basically the Python AST structure is awful, but you arrange your life
so you don't have to deal with it very much.
There's very little evidence that Lisp is significantly better
than Python (or vice versa) for solving most problems.
It's close enough that it's a judgement call to decide which
is more appropriate.


There's one area where Lisp absolutely rules, which is providing a way
to write down complicated data structures without much fuss. These
days, XML is used as a Bizarro cousin of Lisp S-expressions in all
kinds of applications for similar purposes. The EDI program I
mentioned earlier was not originally intended to have an embedded
interpreter. I typed in some EDI syntax specs as S-expressions just
to have something to work with. I then wrote something like a Lisp
reader to read the S-expressions. I found myself then writing a Lisp
printer to debug the Lisp reader. Having a reader and printer it was
then entirely natural to add an eval and gc. The result became a
fairly important product in the EDI world for a time.
Jul 18 '05 #171
Some people (academics) are paid for being clever. Others (engineers)
Ville Vainio http://www.students.tut.fi/~vainio24

^^^^^^^^

Tee hee! :)


Yes, I have an account at a university. I prefer to use it instead of
that of my ISP (because ISP's come and go) or my work account (to
avoid associating my company with any of my opinions, which I think
should be a standard policy.. also, they don't provide web publishing
space for obvious reasons).

--
Ville Vainio http://www.students.tut.fi/~vainio24
Jul 18 '05 #172
Paul Rubin:
Actually in my experience that hasn't been a problem. For example, I
wrote a program that crunched EDI documents.
Ahh, you're right. I tend to omit business-specific languages
when I think about programming.

I conjecture it would have be about the same amount of work to
do it in Python, based solely on your description, but I defer to
you on it.
superficially looked like an unholy marriage of Basic and Cobol.
heh-heh :)
Do people write any significant amount of code in the Python parse
tree syntax?
Peter Norvig talks about this some in his Python/Lisp comparison page:

http://www.norvig.com/python-lisp.html

Basically the Python AST structure is awful, but you arrange your life
so you don't have to deal with it very much.


His description starts

] Python does not have macros. Python does have access to the
] abstract syntax tree of programs, but this is not for the faint of
] heart. On the plus side, the modules are easy to understand,
] and with five minutes and five lines of code I was able to get this:
] >>> parse("2 + 2")
] ['eval_input', ['testlist', ['test', ['and_test', ['not_test', ['comparison',
] ['expr', ['xor_expr', ['and_expr', ['shift_expr', ['arith_expr', ['term',
] ['factor', ['power', ['atom', [2, '2']]]]], [14, '+'], ['term', ['factor',
] ['power', ['atom', [2, '2']]]]]]]]]]]]]]], [4, ''], [0, '']]

I completely agree. Manipulating Python AST and the parse tree
are not for the faint of heart. However, he's not working with
the AST:

>>> import compiler
>>> compiler.parse("2+2")
Module(None, Stmt([Discard(Add((Const(2), Const(2))))]))


The code I wrote uses the AST and, while clunky, isn't as bad
as his example suggests.
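As an editorial aside: the `compiler` module used above was later removed in Python 3, and the standard `ast` module now plays the same role with a somewhat cleaner tree. A minimal sketch:

```python
import ast

# Parse an expression into an abstract syntax tree.
tree = ast.parse("2 + 2", mode="eval")

# The dump is roughly: Expression(body=BinOp(left=Constant(value=2),
# op=Add(), right=Constant(value=2))) -- compact, like compiler.parse.
print(ast.dump(tree))

# The tree can be compiled and evaluated like any code object.
assert eval(compile(tree, "<ast>", "eval")) == 4
```

This is the same parse-manipulate-compile round trip being discussed, just through the modern module.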
There's one area where Lisp absolutely rules, which is providing a way
to write down complicated data structures without much fuss. These
days, XML is used as a Bizarro cousin of Lisp S-expressions in all
kinds of applications for similar purposes.


That's an old debate. Here's a counter-response
http://www.prescod.net/xml/sexprs.html

Andrew
da***@dalkescientific.com
Jul 18 '05 #173

one of the ironic, telling things about that exposition, is that the
syntax-error contraposition is not quite right.

Andrew Dalke wrote:


There's one area where Lisp absolutely rules, which is providing a way
to write down complicated data structures without much fuss. These
days, XML is used as a Bizarro cousin of Lisp S-expressions in all
kinds of applications for similar purposes.


That's an old debate. Here's a counter-response
http://www.prescod.net/xml/sexprs.html


it also misses the point, that "abstract" is not "concrete", and the surface
syntax in the input stream is not everything. an issue which "xml" will still
be fighting with long after the last so-encoded bits have long faded into the
aether.

analogies apply.

....
Jul 18 '05 #174


Andrew Dalke wrote:

Kaz Kylheku:
Ah, but in Lisp, this is commonly done at *compile* time.
Compile vs. runtime is an implementation issue. Doesn't
change expressive power, only performance. Type inferencing
suggests that there are other ways to get speed-ups from
dynamic languages.
Moreover,
two or more domain-specific languages can be mixed together, nested in
the same lexical scope, even if they were developed in complete
isolation by different programmers.


We have decidedly different definitions of what a "domain-specific
language" means. To you it means the semantics expressed as
an s-exp. To me it means the syntax is also domain specific. Eg,
Python is a domain specific language where the domain is
"languages where people complain about scope defined by
whitespace." ;)


that is an inaccurate projection of what "domain-specific" means in a
programming environment like lisp. perhaps it says more about what it would
mean in a programming environment like python? if the author would take the
example of one of the recent discussions which flew by here, e.weitz's
cl-interpol, it would be interesting to read how
Yes, one can support Python in Lisp as a reader macro -- but
it isn't done because Lispers would just write the Python out
as an S-exp. But then it wouldn't be Python, because the domain
language *includes*domain*syntax*.
it exemplifies "writing the [ domain-specific language ] out as an s-exp."

In other words, writing the domain language as an S-exp
is a short cut to make it easier on the programmer, and not
on the domain specialist. Unless the domain is programming.
And you know, very few of the examples of writing a domain
specific language in Lisp have been for tasks other than
programming.


the more likely approach to "python-in-lisp" would be a
reader-macro/tokenizer/parser/translater which compiled the
"python-domain-specific-language" into s-expressions.

....
Jul 18 '05 #175
"Andrew Dalke" <ad****@mindspring.com> writes:
Kaz Kylheku:
Moreover,
two or more domain-specific languages can be mixed together, nested in
the same lexical scope, even if they were developed in complete
isolation by different programmers.


We have decidedly different definitions of what a "domain-specific
language" means.


Probably not; more likely it is just a different emphasis.
To you it means the semantics expressed as
an s-exp. To me it means the syntax is also domain specific. Eg,
Python is a domain specific language where the domain is
"languages where people complain about scope defined by
whitespace." ;)


Your whole article leans heavily toward raising the importance of
syntax. Lispers tend to see it differently. For Common Lispers,
and many other lispers who tend to minimize syntax, if the domain-
specific language already has or is able to have similar syntax
as Lisp, then parsing is an already-solved problem, and one can
just use the CL parser (i.e. read) and move on to other more
important problems in the domain language. But if the syntax
doesn't match, then it really still isn't a big deal; a parser
is needed just as it is in any other language, and one must
(and can) solve that problem as well as the rest of the
domain-specifics. The real question is how quickly you can
finally leave the issue of syntax behind you in your new problem
domain and move on to the problem to solve.

In the early '80s, I did an experiment, using Franz Lisp and Naomi
Sager's (NYU) English Grammar parser from her early 1981 book
"Natural Language Information Processing". I wrote a parser
for English out of her BNF and Franz Lisp's character macros and
the lisp reader. When I joined Franz Inc, I was able to port the
parser to Common Lisp with a little cheating (CL doesn't define
infix-macros like Franz Lisp does, so I had to redefine read-list
in order to make the ' macro work well with the lexicon).

[Unfortunately, I can't release this parser, because in my
correspondences with her, Dr. Sager made it clear that although
the sentences and descriptions in the book are not copyrighted,
the BNF and restrictions are. So although you can see the BNF
nodes represented in a tree below, I won't define the terms;
you'll have to get the book for that...]

The whole point of my experiment was not to write a parser or
to try to parse English, but to show how powerful Lisp's parser
already is. With an input of

"John's going to Mary's house."

for example, with neither John nor Mary being present in the
lexicon, the parser is able to provide the following analysis
(note that the parser was written as a stand-alone program used
in a fashion similar to a scripting language, with no actual
interactive input from the user except via pipe from stdin):

Franz Lisp, Opus 38.89+ plus English Grammar Parser version 0
-> nil
-> nil
-> t
-> t
-> sentence =
(|John's| going to |Mary's| house |.|)
form = <sentence>
revised sentence =
(|John's| going to Mary is house |.|)
form = <sentence>
revised sentence =
(|John's| going to Mary has house |.|)
form = <sentence>
revised sentence =
(John is going to |Mary's| house |.|)
form = <sentence>
found parse 1
1 1 <sentence>
2 2 <introducer>
4 2 <center>
5 3 <assertion>
6 4 <sa>
8 4 <subject>
9 5 <nstg>
10 6 <lnr>
11 7 <ln>
12 8 <tpos>
18 8 <qpos>
20 8 <apos>
22 8 <nspos>
24 8 <npos>
26 7 <nvar>
27 8 <namestg>
28 9 <lnamer>
29 10 <lname>
34 10 n --------------> John
35 10 <rname>
37 7 <rn>
39 4 <sa>
41 4 <tense>
42 5 <null>
43 4 <sa>
45 4 <verb>
47 5 <lv>
49 5 <vvar>
50 6 tv --------------> is
53 4 <sa>
55 4 <object>
56 5 <objectbe>
57 6 <vingo>
58 7 <lvsa>
60 7 <lvingr>
61 8 <lv>
63 8 ving --------------> going
64 8 <rv>
66 9 <rv1>
67 10 <pn>
69 11 <lp>
71 11 p --------------> to
72 11 <nstgo>
73 12 <nstg>
74 13 <lnr>
75 14 <ln>
76 15 <tpos>
77 16 <lnamesr>
78 17 <lname>
83 17 ns --------------> |Mary's|
84 15 <qpos>
86 15 <apos>
88 15 <nspos>
90 15 <npos>
92 14 <nvar>
93 15 n --------------> house
94 14 <rn>
98 7 <sa>
100 7 <object>
101 8 <nullobj>
102 7 <rv>
104 7 <sa>
106 4 <rv>
108 4 <sa>
110 2 <endmark>
111 3 |-.-| --------------> |.|
revised sentence =
(John is going to Mary is house |.|)
form = <sentence>
revised sentence =
(John is going to Mary has house |.|)
form = <sentence>
revised sentence =
(John has going to |Mary's| house |.|)
form = <sentence>
revised sentence =
(John has going to Mary is house |.|)
form = <sentence>
revised sentence =
(John has going to Mary has house |.|)
form = <sentence>
(no more parses count= 1)
->
A more complex structured sentence, which Sager gave in her book,
was "the force with which an isolated heart beats depends on
the concentration of calcium in the medium which surrounds it."
which also was parsed correctly, though I won't show the parse
tree for it here because it is long. I did have trouble with
conjunctions, because they were not covered fully in her book
and involve splicing copies of parts of the grammar together,
and there are a number of "restrictions" (pruning and
well-formedness tests specific to some of the BNF nodes that help
to find the correct parse) which she did not describe in her book.

Again, the point is that syntax and parsing are already-solved
problems, and even problems that don't on the surface look like
problems naturally solved with the lisp reader can come fairly
close to being solved with very little effort. Perhaps we can
thus move a little deeper into the problem space a little faster.

--
Duane Rettig du***@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
Jul 18 '05 #176
Matthew Danish wrote:
On Wed, Oct 22, 2003 at 12:30:01PM -0400, Brian Kelley wrote:
Your two examples do completely different things and the second is
written rather poorly as f might not exist in the finally block.
It certainly won't if the file doesn't exist.

A better comparison would (*might*, I haven't used lisp in a while) be:

(with-open-file (f filename :direction :output :if-exists :supersede)
  (format f "Here are a couple~%of test data lines~%")) => NIL

if os.path.exists(filename):
    f = open(filename, 'w')
    print >> f, "Here are a couple of test data lines"

How are these in any way equivalent? Pascal posted his example with
try...finally and f.close() for a specific reason. In your Python
example, the file is not closed until presumably the GC collects the
descriptor and runs some finalizer (if that is even the case).


The file is closed when the reference count goes to zero, in this case
when it goes out of scope. This has nothing to do with the garbage
collector, just the reference counter. At least, that's the way I
understand, and I have been wrong before(tm). The upshot is that it has
worked in my experience 100% of the time and my code is structured to
use (abuse?) this. How is this more difficult?
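The reliance on the reference counter can be demonstrated directly. The sketch below is CPython-specific; other implementations, such as Jython, use tracing collectors and make no such promptness guarantee:

```python
import os
import tempfile
import weakref

fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path)
wr = weakref.ref(f)     # watch the file object without keeping it alive
del f                   # the last strong reference is gone here...

# ...and in CPython the refcount hits zero, so the object is reclaimed
# (and the underlying file closed) immediately, not at some later GC pass.
assert wr() is None
os.remove(path)
```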

The difference here, as I see it, is that if an exception happens then
the system has to wait for the garbage collector to close the file. In
both examples there was no exception handling after the fact (after the
file is closed). The macro, then, allows execution to continue with the
closed file while the python version stops execution in which case the
file is closed anyway. (unless it is in a another thread of execution
in which the file is closed when it goes out of scope)

In either case one still needs to write handling code to support the
failure as this is most likely application specific. Using macros as a
default handler seems very appropriate in lisp. In python, as I
mentioned in the model-centric view I would create a new file object to
support better handling of file-i/o and failures and hence abstract away
the "error-prone" styles you mentioned.

Now, macros really shine when they also use the local scope. However,
unless a macro is actually doing this I see no real difference between
creating a wrapper for an object and a macro:

f = FileSafeWrapper(open(...))

(with-file-open (f ...)

Except that the macros you are describing are part of the common
distribution. I have to write my own FileSafeWrapper for now...
http://www.lispworks.com/reference/H...with-open-file

So in fact, you pointed out a bug in Pascal's Python example, and one
that is easy to make. All this error-prone code is abstracted away by
WITH-OPEN-FILE in Lisp.


This is a good thing, naturally. But the examples you have given are
completely do-able (in one form or another) in python as it currently
stands, either by creating a wrapper around a file object that can
properly close down on errors or what not. In fact, this might be a
better abstraction in some cases. Consider:

(with-file-that-also-outputs-to-gui ... )
(with-file-that-also-outputs-to-console ...)

to

(with-open-file (f ...

in this case, f is supplied by some constructor that wraps the file to
output to the gui or standard i/o and is passed around to various
functions. Which is the better solution?

I'm not saying that lisp can't do this, it obviously can, but macros
might not be the appropriate solution to this problem ( they certainly
aren't in python ;) )

p.s. I really do enjoy programming in lisp, it was my second programming
language after fortran 77.

Jul 18 '05 #177
On Thu, Oct 23, 2003 at 10:53:21AM -0400, Brian Kelley wrote:
The file is closed when the reference count goes to zero, in this case
when it goes out of scope. This has nothing to do with the garbage
collector, just the reference counter. At least, that's the way I
understand, and I have been wrong before(tm). The upshot is that it has
worked in my experience 100% of the time and my code is structured to
use (abuse?) this. How is this more difficult?
I would see this as a dependence on an implementation artifact. This
may not be regarded as an issue in the Python world, though. (Are you
sure refcounting is used in all Python implementations? It is not that
great of a memory management technique except in certain situations).
One of the arguments against using finalizers to deallocate resources is
that it is unpredictable: a stray reference can keep the resource (and
maybe lock) open indefinitely.

This is not to say that Python couldn't achieve a similar solution to
the Lisp one. In fact, it could get quite nearly there with a
functional solution, though I understand that it is not quite the same
since variable bindings caught in closures are immutable. And it would
probably be most awkward considering Python's lambda.
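Such a functional solution might be sketched as follows. `call_with_open_file` is a hypothetical helper, not part of Python's standard library, and note that the callback closes over the caller's locals, keeping the lexical access that an isolated top-level function would lose:

```python
import os
import tempfile

def call_with_open_file(func, path, mode="r"):
    """Apply func to an open file, closing it even if func raises."""
    f = open(path, mode)
    try:
        return func(f)
    finally:
        f.close()

fd, path = tempfile.mkstemp()
os.write(fd, b"alpha\nbeta\n")
os.close(fd)

lines = []                                    # local state in the caller...
call_with_open_file(lambda f: lines.extend(f), path)
assert lines == ["alpha\n", "beta\n"]         # ...mutated through the closure
os.remove(path)
```

Mutating a captured list this way sidesteps the immutable-binding limitation mentioned above, though it is exactly the kind of workaround that makes the approach feel awkward next to the Lisp macro.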
The difference here, as I see it, is that if an exception happens then
the system has to wait for the garbage collector to close the file. In
both examples there was no exception handling after the fact (after the
file is closed). The macro, then, allows execution to continue with the
closed file while the python version stops execution in which case the
file is closed anyway. (unless it is in a another thread of execution
in which the file is closed when it goes out of scope)
I'm not sure I understand this paragraph. The macro only executes the
body code if the file is successfully opened. The failure mode can be
specified by a keyword argument. The usual HANDLER-CASE and
HANDLER-BIND can be used to handle conditions with or without unwinding
the stack (you could fix and continue from a disk full error, for
example). If the stack is unwound out of the macro, by a condition
(abnormally), then there is an attempt to restore the state of the
filesystem (method probably dependent on the :if-exists parameter). If
control exits normally, the file is closed normally.
In either case one still needs to write handling code to support the
failure as this is most likely application specific. Using macros as a
default handler seems very appropriate in lisp.
Not sure what `using macros as a default handler' means.
This is a good thing, naturally. But the examples you have given are
completely do-able (in one form or another) in python as it currently
stands, either by creating a wrapper around a file object that can
properly close down on errors or what not. In fact, this might be a
better abstraction in some cases. Consider:

(with-file-that-also-outputs-to-gui ... )
(with-file-that-also-outputs-to-console ...)

to

(with-open-file (f ...

in this case, f is supplied by some constructor that wraps the file to
output to the gui or standard i/o and is passed around to various
functions. Which is the better solution?

I'm not saying that lisp can't do this, it obviously can, but macros
might not be the appropriate solution to this problem ( they certainly
aren't in python ;) )


The WITH-OPEN-FILE macro is not really an example of a macro that
performs something unique. It is, I find, simply a handy syntactic
abstraction around something that is more complicated than it appears at
first. And, in fact, I find myself creating similar macros all the time
which guide the use of lower-level functions. However, that doesn't
mean, for example, that I would try to defeat polymorphism with macros
(WITH-OPEN-FILE happens to be a very often used special case). I would
write the macro to take advantage of that situation.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #178
Matthew Danish <md*****@andrew.cmu.edu> writes:
On Wed, Oct 22, 2003 at 09:52:42AM -0700, Jock Cooper wrote:
One of the important differences is that MY-FUNC is lexically isolated
from the environment where WITH-OPEN-FILE appears. The macro version
does not suffer this; and it is often convenient for the code block
in the WITH-OPEN-FILE to access that environment.


(call-with-open-file
#'(lambda (stream)
...)
"somefile"
:direction :input)

WITH-OPEN-FILE happens to be one of those macros which doesn't require
compile-time computation, but rather provides a convenient interface to
the same functionality as above.

--


Right, if the function is specified as a lambda inside the call then it
*would* have access to the surrounding lexical space.

The recent thread on macros has got me thinking more carefully about
when I use macros, and if I could use functions instead.
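For comparison, here is a rough Python rendering of the CALL-WITH-OPEN-FILE pattern quoted above: a higher-order function rather than a macro. The function name and file path are invented for the sketch.

```python
import os
import tempfile

def call_with_open_file(func, filename, mode="r"):
    # Open the file, hand the stream to func, and close the stream
    # whether func returns normally or raises.
    stream = open(filename, mode)
    try:
        return func(stream)
    finally:
        stream.close()

# The lambda passed in still sees the caller's bindings (lexical
# scope), just as noted for the Lisp version.
path = os.path.join(tempfile.mkdtemp(), "somefile.txt")
call_with_open_file(lambda f: f.write("one line\n"), path, "w")
prefix = ">> "
lines = call_with_open_file(lambda f: [prefix + l.rstrip() for l in f], path)
```

The price, relative to the macro, is that the body must be packaged as a function rather than written inline.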

Jul 18 '05 #179
Brian Kelley <bk*****@wi.mit.edu> wrote in message news:<3f*********************@senator-bedfellow.mit.edu>...
The file is closed when the reference count goes to zero, in this case
when it goes out of scope. This has nothing to do with the garbage
collector, just the reference counter. At least, that's the way I
understand, and I have been wrong before(tm). The upshot is that it has
worked in my experience 100% of the time and my code is structured to
use (abuse?) this. How is this more difficult?


As I understand, you're arguing that it's ok to let Python's
refcounter automagically close the file for you.

Please read this:
http://groups.google.com/groups?hl=e...%40alcyone.com
Erik Max Francis explains that expecting the system to close files
leads to brittle code. It's not safe or guaranteed.

After learning Python, people write this bug for months, until they
see some article or usenet post with the try/finally idiom. This
idiom isn't obvious from the docs; the tutorial doesn't say how
important closing a file is. (I was lucky enough to already know how
exceptions could bust out of code.)

Now, I love Python, but this really is a case where lots of people
write lots of potentially hard-to-reproduce bugs because the language
suddenly stopped holding their hands. This is where it really hurts
Python to make the tradeoff against having macros. The tradeoff may
be worth it, but ouch!
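For concreteness, this is the try/finally idiom the post alludes to (the file path is invented for the sketch):

```python
import os
import tempfile

# Set up a file to read.
path = os.path.join(tempfile.mkdtemp(), "data.txt")
setup = open(path, "w")
setup.write("payload")
setup.close()

# The idiom: close() runs even if the processing code raises.
f = open(path)
try:
    data = f.read()
finally:
    f.close()
```

Nothing about it is hard; the problem is that nothing in the language forces you to write it.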
Jul 18 '05 #180
james anderson <ja************@setf.de> writes:
the more likely approach to "python-in-lisp" would be a
reader-macro/tokenizer/parser/translator which compiled the
"python-domain-specific-language" into s-expressions.


That's not really feasible because of semantic differences between
Python and Lisp. It should be possible, and may be worthwhile, to do
that with modified Python semantics.
Jul 18 '05 #181
Brian Kelley <bk*****@wi.mit.edu> writes:
The file is closed when the reference count goes to zero, in this case
when it goes out of scope. This has nothing to do with the garbage
collector, just the reference counter.
There is nothing in the Python spec that says that. One particular
implementation (CPython) happens to work that way, but another one
(Jython) doesn't. A Lisp-based implementation might not either.
At least, that's the way I
understand, and I have been wrong before(tm). The upshot is that it
has worked in my experience 100% of the time and my code is structured
to use (abuse?) this. How is this more difficult?


Abuse is the correct term. If your code is relying on stuff being
gc'd as soon as it goes out of scope, it's depending on CPython
implementation details that are over and above what's spelled out in
the Python spec. If you run your code in Jython, it will fail.
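The difference can be made concrete. Both functions below read a file; the first leans on CPython's reference counting to close it, while the second closes explicitly and is correct on any implementation. The names and file are invented for the sketch.

```python
import os
import tempfile

def read_relying_on_refcount(path):
    # The file object becomes garbage when read() returns. CPython
    # closes it immediately; Jython may keep it open until a GC runs.
    return open(path).read()

def read_portably(path):
    f = open(path)
    try:
        return f.read()
    finally:
        f.close()  # guaranteed on every implementation

path = os.path.join(tempfile.mkdtemp(), "f.txt")
w = open(path, "w")
w.write("x")
w.close()
```

Both return the same data; they differ only in when the OS-level file handle is released.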
Jul 18 '05 #182
Matthew Danish wrote:
I would see this as a dependence on an implementation artifact. This
may not be regarded as an issue in the Python world, though.
As people have pointed out, I am abusing the C-implementation quite
roundly. That being said, I tend to write proxies/models (see
FileSafeWrapper) that do the appropriate action on failure modes and
don't leave it up to the garbage collector. Refer to the "Do as I Do, not
as I Say" line of reasoning.
This is not to say that Python couldn't achieve a similar solution to
the Lisp one. In fact, it could get quite nearly there with a
functional solution, though I understand that it is not quite the same
since variable bindings caught in closures are immutable. And it would
probably be most awkward considering Python's lambda.
I don't consider proxies as "functional" solutions, but that might just
be me. They are just another way of generating something other than the
default behavior.

Python's lambda is fairly awkward to start with, it is also slower than
writing a new function. I fully admit that I have often wanted lambda
to be able to look up variables in the calling frame.

foo = lambda x: object.insert(x)
object = OracleDatabase()
foo(x)
object = MySqlDatabase()
foo(x)

But in practice I never write lambda's this way. I always bind them to
a namespace (in this case a class).

class bar:
def some_fun(x):
foo = lambda self=self, x: self.object.insert(x)
foo(x)

Now I could use foo on another object as well.
foo(object, x)
I'm not sure I understand this paragraph. The macro only executes the
body code if the file is successfully opened. The failure mode can be
specified by a keyword argument.
I can explain what I meant with an example: suppose you wanted to tell
the user what file failed to open/write and specify a new file to open.
You will have to write a handler for this and supply it to
(with-open-file ...) or catch the error some other way.
Not sure what `using macros as a default handler' means.
Apologies, I meant to say that writing a macro to handle particular
exceptions in a default-application wide way is a good thing and
appropriate.
The WITH-OPEN-FILE macro is not really an example of a macro that
performs something unique. It is, I find, simply a handy syntactic
abstraction around something that is more complicated than it appears at
first. And, in fact, I find myself creating similar macros all the time
which guide the use of lower-level functions.


Right. I was only trying to point out, rather lamely I might add,
that the macros I have been seeing I would solve in an object-oriented
manner. This might simply be because python doesn't have macros. But I
like the thought of "here is your fail-safe file object, use it however
you like". It is hard for me to say which is 'better' though, I tend to
use the language facilities available (and abuse them as pointedly
stated), in fact it took me a while to realize that (with-open-file) was
indeed a macro, it simply was "the way it was done(TM)" for a while.
Certainly, new (and old) users can forget to use their file I/O in the
appropriate macro as easily as forgetting to use try: finally:

In python I use my FileSafeWrapper(...) that ensures that the file is
properly closed on errors and the like. As I stated, this wasn't handed
to me by default though like I remember as (with-open-file ...) was from
my CHLS days.

So let me ask a lisp question. When is it appropriate to use a macro
and when is it appropriate to use a proxy or polymorphism? Perhaps
understanding this would break my macro stalemate.

p.s. given some of the other posts, I am heartened by the civility of
this particular thread.

Jul 18 '05 #183
Tayss wrote:
http://groups.google.com/groups?hl=e...%40alcyone.com
Erik Max Francis explains that expecting the system to close files
leads to brittle code. It's not safe or guaranteed.

After learning Python, people write this bug for months, until they
see some article or usenet post with the try/finally idiom.
Some more than most. Interestingly, I write proxies that close
resources on failure but tend to let files do what they want.
Now, I love Python, but this really is a case where lots of people
write lots of potentially hard-to-reproduce bugs because the language
suddenly stopped holding their hands. This is where it really hurts
Python to make the tradeoff against having macros. The tradeoff may
be worth it, but ouch!


I chose a bad example of abusing the C-implementation. The main thrust
of my argument is that you don't need macros in this case, i.e. there
can be situations with very little tradeoff.

class SafeFileWrapper:
def __init__(self, f):
self.f = f

def write(self, data):
try: self.f.write(data)
except:
self.f.close()
self.f = None
raise

def close(self):
if self.f:
self.f.close()
...

Now the usage is:

f = SafeFileWrapper(open(...))
print >> f, "A couple of lines"
f.close()

So now I just grep through my code for open and replace it with
SafeFileWrapper(open...) and all is well again.

I still have to explicitly close the file though when I am done with it,
unless I don't care if it is open through the application run. But at
least I am guaranteed that the file is closed either when I tell it to
or on an error.

Brian Kelley

Jul 18 '05 #184
Brian Kelley wrote:
class bar:
def some_fun(x):
foo = lambda self=self, x: self.object.insert(x)
foo(x)

oops, that should be
foo = lambda x, self=self: self.object.insert(x)


Jul 18 '05 #185
On Thu, 23 Oct 2003 14:37:57 -0400, Brian Kelley wrote:
Python's lambda is fairly awkward to start with, it is also slower than
writing a new function. I fully admit that I have often wanted lambda
to be able to look up variables in the calling frame.


They can since Python 2.1 (in 2.1 with from __future__ import nested_scopes,
in 2.2 and later by default). This applies to nested functions as well.

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #186
JCM
In comp.lang.python Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> wrote:
On Thu, 23 Oct 2003 14:37:57 -0400, Brian Kelley wrote:
Python's lambda is fairly awkward to start with, it is also slower than
writing a new function. I fully admit that I have often wanted lambda
to be able to look up variables in the calling frame.

They can since Python 2.1 (in 2.1 with from __future__ import nested_scopes,
in 2.2 and later by default). This applies to nested functions as well.


Variables in lexically enclosing scopes will be visible, but not
variables in calling frames.
Jul 18 '05 #187
Sorry for the long delay. Turns out my solution was to upgrade to
Windows XP, which has better compatibility with Windows 98 stuff than
Windows 2000. So I've had some fun reinstalling everything. On the
plus side, no more dual booting.

Anyway...
On Thu, 16 Oct 2003 18:52:04 GMT, Alex Martelli <al***@aleax.it>
wrote:
Stephen Horne wrote:
...
>no understanding, no semantic modeling.
>no concepts, no abstractions.

Sounds a bit like intuition to me. ...
What is your definition of intuition?
I can accept something like, e.g.:
2. insight without conscious reasoning
3. knowing without knowing how you know


but they require 'insight' or 'knowing', which are neither claimed
nor disclaimed in the above.


You can take 'insight' and 'knowing' to mean more than I (or the
dictionary) intended, but in this context they purely mean having
access to information (the results of the intuition).

Logically, those words simply cannot mean anything more in this
context. If you have some higher level understanding and use this to
supply the answer, then that is 'conscious reasoning' and clearly
shows that you you know something of how you know (ie how that
information was derived). Therefore it is not intuition anymore.

Of course the understanding could be a rationalisation - kind of a
reverse engineered explanation of why the intuition-supplied answer is
correct. That process basically adds the 'understanding' after the
fact, and is IMO an everyday fact of life (as I mentioned in an
earlier post, I believe most people only validate and select
consciously from an unconsciously suggested subset of likely solutions
to many problems). However, this rationalisation (even if backdated in
memory and transparent to the person) *is* after the fact - the
intuition in itself does not imply any 'insight' or 'knowing' at any
higher level than the simple availability of information.
There are many things I know,
without knowing HOW I do know -- did I hear it from some teacher,
did I see it on the web, did I read it in some book? Yet I would
find it ridiculous to claim I have such knowledge "by intuition":
Of course. The phrase I used is taken directly from literature, but
the 'not knowing how you know' is obviously intended to refer to a
lack of awareness of how the solution is derived from available
information. Memory is obviously not intuition, even if the context in
which the memory was laid down has been forgotten. I would even go so
far as to suggest that explicit memory is never a part of intuition.
Heuristics (learned or otherwise) are not explicit memories, and
neither is the kind of procedural memory which I suspect plays a
crucial role in intuition.

One thing that has become clear in neuroscience is that almost all
(perhaps literally all) parts and functions of the brain benefit from
learning. Explicit memory is quite distinct from other memory
processes - it serves the conscious mind in a way that other memory
processes do not.

For instance, when a person lives through a traumatic experience, a
very strong memory of that experience may be stored in explicit memory
- but not always. Whether remembered or not, however, that explicit
memory has virtually nothing to do with the way the person reacts to
cues that are linked to that traumatic experience. The kind of memory
that operates to trigger anxiety, anger etc has very weak links to the
conscious mind (well, actually it has very strong ones, but only so
that it can control the conscious mind - not the other way around). It
is located in the amygdala, it looks for signs of danger in sensory
cues, and when it finds any such cues it triggers the fight-or-flight
stress response.

Freudian repression is a myth. When people experience chronic stress
over a period of years (either due to ongoing traumatic experience or
due to PTSD) the hippocampus (crucial to explicit memory) is damaged.
The amygdala (the location of that stress-response triggering implicit
memory) however is not damaged. The explicit memory can be lost while
the implicit memory remains and continues to drive the PTSD symptoms.

It's no surprise, therefore, that recovered memories so often turn out
to simply be false - but still worth considering how this happens.
There are many levels. For instance, explicit memories seem to be
'lossy compressed' by basically factoring out the kinds of context
that can later be reconstructed from 'general knowledge'. Should your
general knowledge change between times, so does the reconstructed
memory.

At a more extreme level, entire memories can be fabricated. The harder
you search for memories, the more they are filled in by made up stuff.
And as mentioned elsewhere, the brain is quite willing to invent
rationalisations for things where it cannot provide a real reason. Add
a psychiatrist prompting and providing hints as to the expected form
of the 'memory' and hey presto!

So basically, the brain has many types of memory, and explicit memory
is different to the others. IMO intuition uses some subset of implicit
memory and has very little to do with explicit memory.
The third definition will tend to follow from the second (if the
insight didn't come from conscious reasoning, you won't know how you
know the reasoning behind it).


This seems to ignore knowledge that comes, not from insight nor
reasoning, but from outside sources of information (sources which one
may remember, or may have forgotten, without the forgetting justifying
the use of the word "intuition", in my opinion).


Yes, quite right - explicit memory was not the topic I was discussing
as it has nothing to do with intuition.
Basically, the second definition is the core of what I intend and
nothing you said above contradicts what I claimed. Specifically...


I do not claim the characteristics I listed:
no understanding, no semantic modeling.
no concepts, no abstractions.
_contradict_ the possibility of "intuition". I claim they're very
far from _implying_ it.


OK - and in the context of your linking AI to 'how the human brain
works' that makes sense.

But to me, the whole point of 'intuition' (whether in people or, by
extension, in any kind of intelligence) is that the answer is supplied
by some mechanism which is not understood by the individual
experiencing the intuition. Whether that is a built-in algorithm or an
innate neural circuit, or whether it is a the product of an implicit
learning mechanism (whether electronic/algorithmic or
neural/cognitive).
...sounds like "knowing without knowing how you know".


In particular, there is no implication of "knowing" in the above.


Yes there is. An answer was provided. If the program 'understood' what
it was doing to derive that answer, then that wouldn't have been
intuition (unless the 'understanding' was a rationalisation after the
fact, of course).
You can read a good introduction to HMM's at:
http://www.comp.leeds.ac.uk/roger/Hi..._dev/main.html
I haven't read this yet, but your description has got my interest.
The software was not built to be "aware" of anything, right. We did
not care about software to build sophisticated models of what was
going on, but rather about working software giving good recognition
rates.
Evolution is just as much the pragmatist.

Many people seem to have an obsession with a kind of mystic view of
consciousness. Go through the list of things that people raise as
being part of consciousness, and judge it entirely by that list, and
it becomes just another set of cognitive functions - working memory,
primarily - combined with the rather obvious fact that you can't have
a useful understanding of the world unless you have a useful
understanding of your impact on it.

But there is this whole religious thing around consciousness that
really I don't understand, to the point that I sometimes wonder if
maybe Asperger syndrome has damaged that too.

Take, for instance, the whole fuss about mirror tests and the claim
that animals cannot be self-aware as they don't (with one or two
primate exceptions) pass the mirror test - they don't recognise
themselves in a mirror.

There is a particular species that has repeatedly failed the mirror
test that hardly anyone mentions. Homo sapiens sapiens. Humans. When
first presented with mirrors (or photographs of themselves), members
of tribes who have had no contact with modern cultures have
consistently reacted much the same way - they simply don't recognise
themselves in the images. Mirrors are pretty shiny things.
Photographs are colourful patterns, but nothing more.

The reason is simple - these people are not expecting to see images of
themselves and may never have seen clear reflected images of
themselves. It takes a while to pick up on the idea. It has nothing to
do with self-awareness.

To me, consciousness and self-awareness are nothing special. Our
perception of the world is a cognitive model constructed using
evidence from our senses using both innate and learned 'knowledge' of
how the world works. There is no such thing as 'yellow' in the real
world, for instance - 'colour' is just the brains way of labelling
certain combinations of intensities of the three wavebands of light
that our vision is sensitive to.

While that model isn't the real world, however, it is necessarily
linked to the real world. It exists for a purpose - to allow us to
understand and react to the environment around us. And that model
would be virtually useless if it did not include ourselves, because
obviously the goal of much of what we do is to affect the environment
around us.

In my view, a simple chess program has a primitive kind of
self-awareness. It cannot decide its next move without considering how
its opponent will react to its move. It has a (very simple) world
model, and it is aware of its own presence and influence in that world
model.

Of course human self-awareness is a massively more sophisticated
thing. But there is no magic.
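The chess example can be made concrete with a toy two-ply search (the payoff numbers are invented): the program "models" its opponent simply by assuming the opponent will pick the reply that is worst for the program.

```python
def best_move(outcomes):
    # outcomes[i][j]: program's score if it plays move i and the
    # opponent answers with reply j. The min() is the program's
    # minimal "model" of the opponent; the max() is its own choice.
    worst_cases = [min(replies) for replies in outcomes]
    best = max(range(len(outcomes)), key=lambda i: worst_cases[i])
    return best, worst_cases[best]

# Move 0 could score 3, but the opponent can hold it to 1;
# move 1 guarantees 2, so the program prefers it.
move, score = best_move([[3, 1], [2, 2]])
```

The program's choice depends on a prediction of how its own action changes the world: a primitive form of the self-inclusive world model described above.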

Very likely your software was not 'aware' of anything, even in this
non-magical sense of awareness and consciousness. As you say - "We did
not care about software to build sophisticated models of what was
going on".

But that fits exactly my favorite definition of intuition - of knowing
without knowing how you know. If there were sophisticated models, and
particularly if the software had any 'understanding' of what it was
doing, it wouldn't be intuition - it would be conscious reasoning.
People do have internal models of how people understand speech -- not
necessarily accurate ones, but they're there. When somebody has trouble
understanding you, you may repeat your sentences louder and more slowly,
perhaps articulating each word rather than slurring them as usual: this
clearly reflects a model of auditory performance which may have certain
specific problems with noise and speed.
I disagree. To me, this could be one of two things...

1. A habitual, automatic response to not being heard with no
conscious thought at all - for most people, the most common
reasons for not being understood can be countered by speaking more
loudly and slowly.

2. It is possible that a mental model is used for this and the
decision made consciously, though I suspect the mental model comes
in more as the person takes on board the fact that there is a
novel communication barrier and tries to find solutions.

Neither case is relevant to what I meant, though. People don't
consciously work on recognising sounds nor on translating series of
such sounds into words and sentences - that information is provided
unconsciously. Only when understanding becomes difficult such that the
unconscious solutions are likely to be erroneous is there any
conscious analysis.

And the conscious analysis is not a conscious analysis of the process
by which the 'likely solutions subset' is determined. There is no
doubt 'introspection' in the sense that intermediate results in some
form (which phonemes were recognised, for instance) are
passed on to the conscious mind to aid that analysis, and at that
stage a conscious model obviously comes into play, but I don't see
that as particularly important to my original argument.

Of course people can use rational thought to solve communication
problems, at which point a mental model comes into play, but most of
the time our speech recognition is automatic and unconscious.

Even when we have communications difficulties, we are not free to
introspect the whole speech recognition process. Rather, some plausible
solutions and key intermediate results (and a sense of where the
problem lies) are passed to the conscious mind for separate analysis.

The normal speech recognition process is basically a black box. It is
able to provide intermediate results and 'debugging information' in
difficult cases - but there is no conscious understanding of the
processes used to derive any of that. I couldn't tell anything much of
use about the patterns of sound that create each phoneme, for
instance. The awareness that one phoneme sounds rather similar to
another doesn't count, in itself.

BTW - the word I was reaching for is 'phoneme': one of the basic
sound units from which words are built. (My dictionary failed me, and
a web search for 'phenome' turns up genetics instead.)
(as opposed to, e.g., the proverbial "ugly American" whose caricatural
reaction to foreigners having trouble understanding English would be
to repeat exactly the same sentences, but much louder:-).
I believe the English can outdo any American in the loud-and-slow
shouting at foreigners thing ;-)
The precise algorithms for speech recognition used by IBM/Dragon
dictation systems and by the brain are probably different, but to me


Probably.
fussing about that is pure anthropocentricity. Maybe one day we'll


Actually it isn't -- if you're aware of certain drastic differences
in the process of speech understanding in the two cases, this may be
directly useful to your attempts of enhancing communication that is
not working as you desire.


Yes, but I was talking about what can or cannot be considered
intelligent. I was simply stating that in my view, a thing that
provides intelligent results may be considered intelligent even if it
doesn't use the same methods that humans would use to provide those
results.

I talk to my mother in a slightly different way to the way I talk to
my father. This is a practical issue necessitated by their different
conversational styles (and the kind of thing that seriously bugs
cognitive theorists who insist despite the facts that people with
Aspergers can never understand or react to such differences). That
doesn't mean that my mother and father can't both be considered
intelligent.
E.g., if a human being with which you're
very interested in discussing Kant keeps misunderstanding each time
you mention Weltanschauung, it may be worth the trouble to EXPLAIN
to your interlocutor exactly what you mean by it and why the term is
important; but if you have trouble dictating that word to a speech
recognizer you had better realize that there is no "meaning" at all
connected to words in the recognizer -- you may or may not be able
to "teach" spelling and pronunciation of specific new words to the
machine, but "usage in context" (for machines of the kind we've been
discussing) is a lost cause and you might as well save your time.
Of course, that level of intelligence in computer speech recognition
is a very long way off.
But, you keep using "anthropocentric" and its derivatives as if they
were acknowledged "defects" of thought or behavior. They aren't.
Not at all. I am simply refusing to apply an arbitrary restriction on
what can or cannot be considered intelligent. You have repeatedly
stated, in effect, that if it isn't the way that people work then it
isn't intelligent (or at least AI). To me that is an arbitrary
restriction. Especially as evolution is a pragmatist - the way the
human mind actually works is not necessarily the best way for it to
work and almost certainly is not the only way it could have worked. It
seems distinctly odd to me to observe the result of a particular roll
of the dice and say "this is the only result that we can consider
valid".
see Timur Kuran's "Private Truths, Public Lies", IMHO a masterpiece
(but then, I _do_ read economics for fun:-).
I've not read that, though I suspect I'll be looking for it soon.
But of course you'd want _others_ to supply you with information about
_their_ motivations (to refine your model of them) -- and reciprocity
is important -- so you must SEEM to be cooperating in the matter.
(Ridley's "Origins of Virtue" is what I would suggest as background
reading for such issues).
I've read 'origins of virtue'. IMO it spends too much time on the
prisoner's dilemma. I have the impression that either Ridley has little
respect for his readers' intelligence or he had little to say and had
to do some padding. From what Ridley takes a whole book to say, Pinker
covers the key points in a couple of pages.
But if there are many types, the one humans have is surely the most
important to us
From a pragmatic standpoint of getting things done, that is clearly
not true in most cases. For instance, when faced with the problem of
writing a speech recognition program, you and your peers decided to
follow the pragmatic approach and do something different to what the
brain does.
Turing's Test also operationally defines it that
way, in the end, and I'm not alone in considering Turing's paper
THE start and foundation of AI.
Often, the founders of a field have certain ideas in mind which don't
pan out in the long term. When Kanner discovered autism, for instance,
he blamed 'refrigerator' mothers - but that belief is simply false.

Turing was no more omniscient than Kanner. Of course his contribution
to many fields in computing was beyond measure, but that doesn't mean
that AI shouldn't evolve beyond his conception of it.

Evolution is a pragmatist. I see no reason why AI designers shouldn't
also be pragmatists.

If we need a battle of the 'gods', however, then may I refer you to
George Boole who created what he called 'the Laws of Thought'. They
are a lot simpler than passing the Turing Test ;-)
Studying humanity is important. But AI is not (or at least should not
be) a study of people - if it aims to provide practical results then
it is a study of intelligence.


But when we can't agree whether e.g. a termite colony is collectively
"intelligent" or not, how would it be "AI" to accurately model such a
colony's behavior?


When did I claim it would be?
The only occurrences of "intelligence" which a
vast majority of people will accept to be worthy of the term are those
displayed by humans
Of course - we have yet to find another intelligence at this point
that even registers on the same scale as human intelligence. But that
does not mean that such an intelligence cannot exist.
-- because then "model extroflecting", such an
appreciated mechanism, works fairly well; we can model the other
person's behavior by "putting ourselves in his/her place" and feel
its "intelligence" or otherwise indirectly that way.
Speaking as the frequent victim of a breakdown in that (my broken
non-verbal communication and other social difficulties frequently lead
to people jumping to the wrong conclusion - and persisting in that bad
conclusion, often for years, despite clear evidence to the contrary) I
can tell you that there is very little real intelligence involved in
that process. Of course even many quite profound autistics can "put
themselves in his/her place" and people who supposedly have no empathy
can frequently be seen crying about the suffering of others that
neurotypicals have become desensitised to. But my experience of trying
to explain Asperger syndrome to people (which is quite typical of what
many people with AS have experienced) is pretty much proof positive
that most people are too lazy to think about such things - they'd
rather keep on jumping to intuitive-but-wrong conclusions and they'd
rather carry on victimising people in supposed retaliation for
non-existent transgressions as a consequence.

'Intelligent' does not necessarily imply 'human' (though in practice
it does at this point in history), but certainly 'human' does not
imply 'intelligent'.
For non-humans
it only "works" (so to speak) by antroporphisation, and as the well
known saying goes, "you shouldn't antropomorphise computers: they
don't like it one bit when you do".
Of course - but I'm not the one saying that computer intelligence and
human intelligence must be the same thing.

A human -- or anything that can reliably pass as a human -- can surely
be said to exhibit intelligence in certain conditions; for anything
else, you'll get unbounded amount of controversy. "Artificial life",
where non-necessarily-intelligent behavior of various lifeforms is
modeled and simulated, is a separate subject from AI. I'm not dissing
the ability to abstract characteristics _from human "intelligent"
behavior_ to reach a useful operating definition of intelligence that
is not limited by humanity: I and the AAAI appear to agree that the
ability to build, adapt, evolve and generally modify _semantic models_
is a reasonable discriminant to use.
Why should the meaning of the term 'intelligent' be derived from the
meaning of the term 'human' in the first place!

Things never used to be this way. Boole could equate thought with
algebra and no-one batted an eyelid. Only since the human throne of
specialness has been threatened (on the one hand by Darwin's assertion
that we are basically bald apes, and on the other by machines doing
tasks that were once considered impossible for anything but human
minds) did terms like 'intelligence', 'thought' and 'consciousness'
start taking on mystic overtones.

Once upon a time, "computer" was a job title. You would have to be
pretty intelligent to work as a computer. But such people were
replaced by pocket calculators.

People have been told for thousands of years that humanity is special,
created in god's image and similar garbage. Elephants would no doubt be
equally convinced of their superiority, if they thought of such
things. After all, no other animal has such a long and flexible nose,
so useful for spraying water around for instance.

Perhaps such arrogant elephants would find the concept of a hose pipe
quite worrying?

I think what is happening with people is similar. People now insist
that consciousness must be beyond understanding, for example, not
because there is any reason why it should be true but simply because
they need some way to differentiate themselves from machines and apes.
If what you want is to understand intelligence, that's one thing. But
if what you want is a program that takes dictation, or ones that plays
good bridge, then an AI approach -- a semantic model etc -- is not
necessarily going to be the most productive in the short run (and
"in the long run we're all dead" anyway:-).
I fully agree. And so does evolution. Which is why 99% or more of what
your brain does involves no semantic model whatsoever.
Calling programs that use
completely different approaches "AI" is as sterile as similarly naming,
e.g., Microsoft Word because it can do spell-checking for you: you can
then say that ANY program is "AI" and draw the curtains, because the
term has then become totally useless. That's clearly not what the AAAI
may want, and I tend to agree with them on this point.
Then you and they will be very unhappy when they discover just how
'sterile' 99% of the brain is.
What we most need is a model of _others_ that gives better results
in social interactions than a lack of such a model would. If natural
selection has not wiped out Asperger's syndrome (assuming it has some
genetic component, which seems to be an accepted theory these days),
there must be some compensating adaptive advantage to the disadvantages
it may bring (again, I'm sure you're aware of the theories about that).
Much as for, e.g., sickle-cell anemia (better malaria resistance), say.
There are theories of compensating advantages, but I tend to doubt
them. This is basically a misunderstanding of what 'genetic' means.

First off, to the extent that autism involves genetics (current
assessments claim autism is around 80% genetic IIRC) those genetics
are certainly not simple. There is no single autism gene. Several
'risk factor' genes have been identified, but all can occur in
non-autistic people and none is common to even more than a
'significant minority' of autistic people.

Most likely, in my view, there are two key ideas to think of in the
context of autism genetics. The first is recessive genes. The second
is what I call a 'bad mix' of genes. I am more convinced by the latter
(partly because I thought it up independently of others - yes, I know
that's not much of an argument) so I'll describe that in more detail.

In general, you can't just mutate one gene and get a single change in
the resulting organism. Genes interact in complex ways to determine
developmental processes, which in turn determine the end result.

People have recently, in evolutionary terms, evolved for much greater
mental ability. But while a new feature can evolve quite quickly, each
genetic change that contributes to that feature also has a certain
amount of 'fallout'. There are secondary consequences, unwanted
changes, that need to be compensated for - and the cleanup takes much
longer.

Genes are also continuously swapped around, generation by
generation, by recombination. And particular combinations can have
'unintended' side-effects. There can be incompatibilities between
genes. For evolution to progress to the point where there are no
incompatibilities (or immunities to the consequences of those
incompatibilities) can take a very long time, especially as each
problem combination may only occur rarely.

Based on this, I would expect autistic symptoms to suddenly appear in
a family line (when the bad mix genes are brought together by a fluke
of recombination). This could often be made worse by the general
principle that birds of a feather flock together, bringing more
incompatible bad mix genes together. But as reproductive success drops
(many autistics never find partners) some of the lines simply die out,
while other lines simply separate out those bad mix genes, so that
while the genes still exist most children no longer have an
incompatible mix.

Basically, the bad mix comes together by fluke, but after a few
generations that bad mix will be gone again.

Alternatively, people with autism and Asperger syndrome seem to
consistently have slightly overlarge heads, and there is considerable
evidence of an excessive growth in brain size at a very young age.
This growth spurt may well disrupt developmental processes in key
parts of the brain. The point being that this suggests to me that
autistic and AS people are basically pushing the limit in brain size.
We are the consequence of pushing too fast for too much more mental
ability. We have the combination of genes for slightly more brain
growth, and the genes to adapt developmental processes to cope with
that growth - but we don't have the genes to fix the unwanted
consequences of these new mixes of genes.

So basically, autism and AS are either the leading or trailing edge of
brain growth evolution - either we are the ones who suffer the
failings of 'prototype' brain designs so that future generations may
evolve larger non-autistic brains, or else we are the ones who suffer
the failings of bad mix 'fallout' while immunity to the bad gene
combinations gradually evolves.

In neither case do we have a particular compensating advantage, though
a few things have worked out relatively well for at least some people
with AS over the last few centuries. Basically, you get the prize
while I suffer for it. Of course I'm not bitter ;-)
But the point remains that we don't have "innate" mental models
of e.g. the way the mind of a dolphin may work, nor any way to
build such models by effectively extroflecting a mental model of
ourselves as we may do for other humans.


Absolutely true. Though it seems to me that people are far too good at
empathising with their pets for a claim that human innate mental
models are completely distinct from other animals. I figure there is a


Lots of anthropomorphisation and not-necessarily-accurate projection
is obviously going on.


Not necessarily. Most of the empathising I was talking about is pretty
basic. The stress response has a lot in common from one species to
another, for instance. This is about the level that body language
works in AS - we can spot a few extreme and/or stereotyped emotions
such as anger, fear, etc.

Beyond that level, I wouldn't be able to recognise empathising with
pets even if it were happening right in front of me ;-)
My guess is that even then, there would be more dependence on
sophisticated heuristics than on brute force searching - but I suspect
that there is much more brute force searching going on in people's
minds than they are consciously aware of.


I tend to disagree, because it's easy to show that the biases and
widespread errors with which you can easily catch people are ones
that would not occur with brute force searching but would with
heuristics. As you're familiar with the literature in the field
more than I am, I may just suggest the names of a few researchers
who have accumulated plenty of empirical evidence in this field:
Tversky, Gigerenzer, Krueger, Kahneman... I'm only peripherally
familiar with their work, but in the whole it seems quite indicative.


I'm not immediately familiar with those names, but before I go look
them up I'll say one thing...

Heuristics are fallible by definition. They can prevent a search
algorithm from searching a certain line (or more likely, prioritise
other lines) when in fact that line is the real best solution.
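That failure mode is easy to show on a toy two-move game tree (entirely made-up numbers, just a sketch): a greedy one-step heuristic prunes away the branch that actually holds the best leaf.

```python
# Two-move game tree: outer keys are first moves, leaves are payoffs.
tree = {
    'a': {'a1': 5, 'a2': 6},    # looks promising after one move...
    'b': {'b1': 100, 'b2': 1},  # ...but the real best line starts here
}
estimate = {'a': 5, 'b': 1}     # the heuristic's (fallible) one-move evaluation

def greedy(tree, estimate):
    # Prune: follow only the branch the heuristic prefers.
    branch = max(tree, key=estimate.get)
    return max(tree[branch].values())

def exhaustive(tree):
    # Brute force: examine every leaf.
    return max(v for leaves in tree.values() for v in leaves.values())

assert greedy(tree, estimate) == 6       # tunnel vision misses the best line
assert exhaustive(tree) == 100
```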

With human players having learned their heuristics over long
experience, they should have a very different pattern of 'tunnel
vision' in the search to that which a computer has (where the
heuristics are inherently those that could be expressed 'verbally' in
terms of program code or whatever).

In particular, human players should have had more real experience of
having their tunnel vision exploited by other players, and should have
learned more sophisticated heuristics as a result.

I don't believe in pure brute force searching - for any real problem,
that would be an infinite search (and probably not even such a small
infinity as aleph-0). When I say 'brute force' I tend to mean that as
a relative thing - faster searching, less sophisticated heuristics. I
suspect that may not have been clear above.

But anyway, the point is that heuristics are rarely much good at
solving real problems unless there is some kind of search or closure
algorithm or whatever added.

I do remember reading that recognition of rotated shapes shows clear
signs that a search process is going on unconsciously in the mind.
This isn't conscious rotation (the times were IIRC in milliseconds)
but the greater the number of degrees of rotation of the shape, the
longer it takes to recognise - suggesting that subconsciously, the
shape is rotated until it matches the required template.

So searches do seem to happen in the mind. Though you are quite right
to blame heuristics for a lot of the dodgy results. And while I doubt
that 'search loops' in the brain run through thousands of iterations
per second, with good heuristics maybe even a one iteration per second
(or even less) could be sufficient.

The real problem for someone with AS is that so much has to be handled
by the single-tasking conscious mind. The unconscious mind is, of
course, able to handle a number of tasks at once. If only I could
listen to someone's words and figure out their tone of voice and pay
attention to their facial expression at the same time I'd be a very
happy man. After all, I can walk and talk at the same time, so why not
all this other stuff too :-(
It IS interesting how often an effective way to understand how
something works is to examine cases where it stops working or
misfires -- "how it BREAKS" can teach us more about "how it WORKS"
than studying it under normal operating conditions would. Much
like our unit tests should particularly ensure they test all the
boundary conditions of operation...;-).


That is, I believe, one reason why some people are so keen to study
autism and AS. Not so much to help the victims as to find out more
about how social ability works in people who don't have these
problems.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #188
Brian Kelley <bk*****@wi.mit.edu> writes:
I choose a bad example on abusing the C-implementation. The main
thrust of my argument is that you don't need macros in this case,
i.e. there can be situations with very little tradeoff.

class SafeFileWrapper:
    def __init__(self, f):
        self.f = f

    def write(self, data):
        try:
            self.f.write(data)
        except:
            self.f.close()
            self.f = None
            raise

    def close(self):
        if self.f:
            self.f.close()
    ...

Now the usage is:

f = SafeFileWrapper(open(...))
print >> f, "A couple of lines"
f.close()

...
I still have to explicitly close the file though when I am done with


It's just this sort of monotonous (yet important) book keeping (along
with all the exception protection, etc.) that something like
with-open-file ensures for you.
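The bookkeeping with-open-file automates can be approximated in Python as a higher-order function (with_open_file here is a hypothetical helper, not part of the stdlib; a minimal sketch):

```python
def with_open_file(path, mode, body):
    # Open the file, hand it to body, and guarantee the close on every
    # exit path -- normal return or exception alike.
    f = open(path, mode)
    try:
        return body(f)
    finally:
        f.close()
```

Usage: with_open_file('out.txt', 'w', lambda f: f.write("A couple of lines\n")) -- no explicit close needed, and no handle left open if the write raises.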
/Jon
Jul 18 '05 #189
Jon S. Anthony wrote:
Brian Kelley <bk*****@wi.mit.edu> writes:
Now the usage is:

f = SafeFileWrapper(open(...))
print >> f, "A couple of lines"
f.close()

...
I still have to explicitly close the file though when I am done with


It's just this sort of monotonous (yet important) book keeping (along
with all the exception protection, etc.) that something like
with-open-file ensures for you.


Personally I'd prefer guaranteed immediate destructors over with-open-file.
More flexibility, less syntax, and it matches what the CPython
implementation already does.
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #190
Jon S. Anthony wrote:
It's just this sort of monotonous (yet important) book keeping (along
with all the exception protection, etc.) that something like
with-open-file ensures for you.


I can't say that I am completely won over but this is an important
point. Thanks for the discussion.

Brian

Jul 18 '05 #191
"Rainer Deyke" <ra*****@eldwood.com> writes:
Jon S. Anthony wrote:
Brian Kelley <bk*****@wi.mit.edu> writes:
Now the usage is:

f = SafeFileWrapper(open(...))
print >> f, "A couple of lines"
f.close()

...
I still have to explicitly close the file though when I am done with


It's just this sort of monotonous (yet important) book keeping (along
with all the exception protection, etc.) that something like
with-open-file ensures for you.


Personally I'd prefer guaranteed immediate destructors over with-open-file.
More flexibility, less syntax, and it matches what the CPython
implementation already does.


Right... all along until CPython introduces a more elaborate
gc scheme.

Note that reference-counting has problems with cyclic
references; probably not something that will bite you in the case of
open files, but definitely a problem you need to be aware of.

--
Raymond Wiker Mail: Ra***********@fast.no
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
Jul 18 '05 #192
Kaz Kylheku <ka*@ashi.footprints.net> wrote:
+---------------
| Ah, but in Lisp, this is commonly done at *compile* time. Moreover,
| two or more domain-specific languages can be mixed together, nested in
| the same lexical scope, even if they were developed in complete
| isolation by different programmers. Everything is translated and
| compiled together. Some expression in macro language B can appear in
| an utterance of macro language A. Lexical references across these
| nestings are transparent:
|
| (language-a
| ... establish some local variable foo ...
| (language-b
| ... reference to local variable foo set up in language-a!
| ))
+---------------

Exactly so!

A concrete example of this is using a macro package such as Tim
Bradshaw's HTOUT together with (say) one's own application-specific
Lisp code. You can blithely nest macro-generated HTML inside Lisp
code inside macro-generated HTML, etc., and have *direct* access to
anything in an outer lexical scope! Big win. Consider the following
small excerpt from <URL:http://rpw3.org/hacks/lisp/appsrv-demo.lhp>
(which has been reformatted slightly to make the nesting more apparent).
Note that any form *not* starting with a keyword switches from the
HTML-generating language ("language-B") to normal Lisp ("language-A")
and that the MACROLET "htm" switches from normal Lisp ("A") back to
HTML generation ("B"):

(lhp-basic-page () ; Contains a call of WITH-HTML-OUTPUT.
...
(:table ()
(:tr ()
(loop for e from 1 to pows
and h in headings do
(let ((p (cond ((= e 1) ":") ((= e pows) "") (t ","))))
(htm (:th (:nowrap)
(fmt h e) p)))))
(loop for i from 1 to nums do
(htm
(:tr (:align "right")
(loop for e from 1 to pows do
(htm
(:td ()
(princ (expt i e) s))))))))
... )

Note how the innermost reference of "i" [in "(expt i e)"] is nested
inside *four* language shifts from the binding of "i" [in the LOOP form],
that is:

(Lisp
;; "i" is bound here
(HTML
(Lisp
(HTML
(Lisp
;; "i" is used here
)))))

Oh, and by the way, all of that HTML-generating code gets expanded
into Lisp code at macro-expansion time [roughly, at compile time].
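For the Python side of the comparison, the nearest analogue is nested closures/generator expressions, which also see outer bindings directly -- though everything happens at run time rather than at macro-expansion time. A minimal sketch with made-up helper names:

```python
def table(rows):
    return '<table>%s</table>' % ''.join(rows)

def tr(cells):
    return '<tr>%s</tr>' % ''.join(cells)

def td(text):
    return '<td>%s</td>' % text

# "i" is bound in the outer generator and referenced two levels deeper,
# much like the Lisp example -- but with no compile-time expansion.
html = table(
    tr(td(str(i ** e)) for e in range(1, 3))
    for i in range(1, 3)
)
```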
-Rob

-----
Rob Warnock <rp**@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Jul 18 '05 #193
Raymond Wiker <Ra***********@fast.no> wrote in
news:86************@raw.grenland.fast.no:
"Rainer Deyke" <ra*****@eldwood.com> writes:
Jon S. Anthony wrote:
Brian Kelley <bk*****@wi.mit.edu> writes:
Now the usage is:

f = SafeFileWrapper(open(...))
print >> f, "A couple of lines"
f.close()
...
I still have to explicitly close the file though when I am done
with

It's just this sort of monotonous (yet important) book keeping
(along with all the exception protection, etc.) that something like
with-open-file ensures for you.
Personally I'd prefer guaranteed immediate destructors over
with-open-file. More flexibility, less syntax, and it matches what
the CPython implementation already does.


Right... all along until CPython introduces a more elaborate
gc scheme.


.... which is highly unlikely to happen without preservation of reference
counting semantics for files... google if you're _really_ interested :-/
Note that reference-counting has problems with cyclic
references; probably not something that will bite you in the case of
open files, but definitely a problem you need to be aware of.


(a) true (by definition of cycle/ref.counting?)
(b) no relevance to CPython (which uses a generational collector
to reclaim reference cycles [with the regular finalizer problems]).
(c) why are open file objects special (are you saying objects with no
internal references are less likely to be reachable from a cycle,
or that there is something intrinsically special about (open?) files)?

-- bjorn
Jul 18 '05 #194
Bjorn Pettersen <bj*************@comcast.net> writes:
Note that reference-counting has problems with cyclic
references; probably not something that will bite you in the case of
open files, but definitely a problem you need to be aware of.
(a) true (by definition of cycle/ref.counting?)
(b) no relevance to CPython (which uses a generational collector
to reclaim reference cycles [with the regular finalizer problems]).


I didn't know that (obviously :-)
(c) why are open file objects special (are you saying objects with no
internal references are less likely to be reachable from a cycle,
or that there is something intrinsically special about (open?) files)?


I cannot think of any scenarios where open files would be
involved in a cycle, that's all. That doesn't mean that it's impossible.

--
Raymond Wiker Mail: Ra***********@fast.no
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
Jul 18 '05 #195
Raymond Wiker <Ra***********@fast.no> wrote in
news:86************@raw.grenland.fast.no:
Bjorn Pettersen <bj*************@comcast.net> writes:
(c) why are open file objects special (are you saying objects with no
internal references are less likely to be reachable from a cycle,
or that there is something intrinsically special about (open?)
files)?


I cannot think of any scenarios where open files would be
involved in a cycle, that's all. That doesn't mean that it's
impossible.


Ok. Btw., creating such a cycle is easy,

a = [open('foo')]
b = [a]
a.append(b)

(i.e. any file that's reachable from a cycle). IIRC, a commonly used
'markup' library up until recently kept open file handles for all inline
images as part of the internal object state -- although I'd perhaps call
that more of a bug than a scenario? <wink>
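What CPython's collector then does with such a cycle can be sketched as follows (object() stands in for the open file so the snippet is self-contained):

```python
import gc

a = [object()]   # object() standing in for the open file
b = [a]
a.append(b)      # a and b now refer to each other: a reference cycle

del a, b         # refcounts never reach zero, so refcounting alone can't free them
found = gc.collect()   # the generational collector finds and reclaims the cycle

assert found >= 2      # at least the two lists were unreachable
```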

-- bjorn
Jul 18 '05 #196
Bjorn Pettersen <bj*************@comcast.net> writes:
Raymond Wiker <Ra***********@fast.no> wrote in
news:86************@raw.grenland.fast.no:

[ SNIP ]
Note that reference-counting has problems with cyclic
references; probably not something that will bite you in the case of
open files, but definitely a problem you need to be aware of.


(a) true (by definition of cycle/ref.counting?)
(b) no relevance to CPython (which uses a generational collector
to reclaim reference cycles [with the regular finalizer problems]).
(c) why are open file objects special (are you saying objects with no
internal references are less likely to be reachable from a cycle,
or that there is something intrinsically special about (open?) files)?


Open file handles are a "scarce resource". It wasn't uncommon to have
an upper limit of 32, including stdin, stdout and stderr, in older
unix environments. These days, 256 or 1024 seems to be a more common
(default) upper limit. Usually not a problem, but if one does a lot
of "open, write" and leaves it for the GC to clean up, one *may* hit
resource starvation problems one wouldn't have hit, had the file
handles been closed in a timely fashion.
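On Unix, the per-process limit described above can be inspected from Python with the stdlib resource module (Unix-only; a sketch):

```python
import resource

# Soft limit: the current ceiling on open descriptors for this process.
# Hard limit: the most the soft limit may be raised to without privileges.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("up to %s descriptors may be open at once" % soft)
```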

//Ingvar
--
A routing decision is made at every routing point, making local hacks
hard to permeate the network with.
Jul 18 '05 #197
Stephen Horne <st***@ninereeds.fsnet.co.uk> wrote:

<snip>
One thing that has become clear in neuroscience is that almost all
(perhaps literally all) parts and functions of the brain benefit from
learning. Explicit memory is quite distinct from other memory
processes - it serves the conscious mind in a way that other memory
processes do not.
<snip>
At a more extreme level, entire memories can be fabricated. The harder
you search for memories, the more they are filled in by made up stuff.
And as mentioned elsewhere, the brain is quite willing to invent
rationalisations for things where it cannot provide a real reason. Add
a psychiatrist prompting and providing hints as to the expected form
of the 'memory' and hey presto!
<snip>
Many people seem to have an obsession with a kind of mystic view of
consciousness. Go through the list of things that people raise as
being part of consciousness, and judge it entirely by that list, and
it becomes just another set of cognitive functions - working memory,
primarily - combined with the rather obvious fact that you can't have
a useful understanding of the world unless you have a useful
understanding of your impact on it.


Sorry that I had to cut your long (but excellent) post to comment only
on a small subset of it. I want to reply only to the part that
concerns memory and consciousness, because I feel that you are having
a problem with these while the answer is already present in other
parts of your post.

According to Merleau-Ponty there is something to "the whole is more
than the sum of the parts" and other gestalt-like consciousness
mysticism. He explains this as the effect that having a sense of
direction and purpose gives each of the components that are involved a
new meaning such that they all operate coherently.

For example a primate can use a stick as an instrument, but a human
can use an instrument to build other instruments, thereby potentially
transforming the meanings of *all* objects in the environment, and
thereby finally being able to rearrange the environment itself or
gaining the powers to transport to some other place with such
efficient methods that the effect is virtually the same as rearranging
the environment.

My understanding of this point is that this effect is also seen very
prominently in the way we remember things. But contrary to your
position I see this rearranging of the constituent parts of our memory
-in a certain sense the basis of our consciousness- not as something
"fabricated" or "rationalized" (or in some other semi-pejorative
terms).

The way we structure our past is a positive accomplishment and we
should value it as such instead of insisting that there is only one
past and that any deviation from the "standard" (whose standard
anyway?) is showing some tendency to "evade problems" or gain unlawful
advantages over others.

In fact there is no fixed past at all, and the past that we have now
can be changed very easily, without using time-machines even. The
complete reality whether individual or shared is just a figment of our
collective or individual imagination and is therefore subject to
change without warning.

Ok, let me try to back up these bold claims with some trivial examples
of how our individual and collective pasts can be changed or are
changed or have been changed.

First a little "Gedankenexperiment", suppose you have been luckily
married for over twenty years and now I'm showing you a photograph
(supposedly of the kind that can't be forged) of your partner in a
compromising situation with someone else. I can back up this data with
several other facts, accumulated over the years, each of which you
have no prior knowledge of. Does this change your past, or just your
image of the past, so that it now comes closer to the "real" past?

If one looks at this example carefully one might wonder if this newer,
more "real", past is not also a figment of our imagination, and so on.

Another example, suppose someone on clpy whose first name is Alex came
into the possession of some convincing information linking corporate
culture and social factors to the divergence of programming styles and
data formats, in fact proving without a shred of doubt to him that
social factors are for 95% or more responsible for the generation of
divergent cultures of programming (with the accompanying dialects of
programming languages) through the process of "math-envy". This would
lead to some interesting new perspectives on macros being responsible
for the diverging of communities of programmers, making them into
mechanisms rather than sources. This could lead to a new world view
where from now on not macros would be avoided, but rather a certain
way of competition of programmers which would be identified as
unhealthy for the community as a whole. Would this change the past?
Certainly the whole macro-wars history of clpy would gain a new
interpretation.

Next consider the event of the twin tower airplane attack. Suppose it
was finally publicly acknowledged that there were no foreign nations
involved in the attack but that it was just a rather small group of
individuals acting on their own, backed by some very rich capitalist
but still only a small group of people acting in their own selfish
interest. Also no weapons of mass destruction could be found in Iraq,
Hussein had no connection with Osama bin Laden, and so on. This would
make the American attacks on two countries into war crimes and would
lay open speculations about the oil-industry being behind these so
called anti terrorism activities. Taking civil rights away from the
citizens of the united states could be another motivating factor for
certain powerful groups in the American government. Of course this is
all nonsense and pure speculation, but a quick view of what is still
written in Japanese history books about world war two, or an
investigation of what Iraqi school children had to learn about Saddam
Hussein will quickly disperse any doubts about what is possible in the
ways of completely misrepresenting collective memory.

Another example, imagine you had been suffering from a stomach ulcer
that was incurable and had taken drugs for years in order to reduce
the acidity of your stomach content. Someday someone comes along and
mentions that it's all caused by a bacterium called Helicobacter and
you could be cured in two weeks. If you believed this person, would
this change your personal past?

There *is* no fixed past. What is possible is to make a past that is
purposeful, makes one strong and directed and flexible and open to
change. We should start seeing our pasts as an accomplishment and an
asset, and -realizing the moldability of it all- begin healthy
cooperations with other people to synchronize our pasts, incorporating
all the little "facts" of all people into a grand unifying theory in
which all these past episodes have a new meaning, and where everything
can find its place.

If you still think that's not magic, it's probably better to keep it
that way in order not to compromise the wonderful life that is
theoretically possible, given these observations.

Anton

Jul 18 '05 #198
On Fri, 24 Oct 2003 16:00:12 +0200, an***@vredegoor.doge.nl (Anton
Vredegoor) wrote:
There *is* no fixed past.


So you are saying that if a psychiatrist convinces his patient that
she was abused by her parents as a child, even though it never
occurred, to the point that she can remember it - then it must be
true!

I'm not confused on this - you are.

Sorry, but you are not keeping a clear distinction between "the past"
and "knowledge of the past". If memory is altered then it doesn't
change the past, only the (now incorrect) memory.

This sounds a lot like extreme cultural relativism. But I'm afraid
that if you drive your car past a village of people with no knowledge
of cars, while they may believe that your car is propelled by a demon
under the hood, in reality the engine will still be there.

Perception is not reality. It is only a (potentially flawed)
representation of reality. But reality is still real. And perception
*is* tied to reality as well as it can be by the simple pragmatic
principle of evolution - if our ancestors had arbitrary perceptions
which were detached from reality, they could not have survived and had
children.

You'll find many people who claim otherwise, of course. But the limits
of perception are actually already better understood than most people
realise. On the whole, they are pretty much the limits of information
processing.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #199
Raymond Wiker wrote:
"Rainer Deyke" <ra*****@eldwood.com> writes:
Personally I'd prefer guaranteed immediate destructors over
with-open-file. More flexibility, less syntax, and it matches what
the CPython implementation already does.


Right... all along until CPython introduces a more elaborate
gc scheme.

Note that reference-counting has problems with cyclic
references; probably not something that will bite you in the case of
open files, but definitely a problem you need to be aware of.


I'm all for more elaborate gc schemes. In particular, I want one that gives
me guaranteed immediate destructors even in the presence of reference
cycles. And I want the language specification to guarantee it.

Having to explicitly close files is something I'd expect in C or assembly,
but it has no place in high-level languages. And "with-open-file" is too
limiting. Let the computer do the work, not the programmer.
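A sketch of the refcounting behaviour being relied on here (it holds in CPython today, but the language specification does not promise it):

```python
class Tracked:
    def __init__(self, log):
        self.log = log
    def __del__(self):
        # Under CPython's reference counting this runs the instant the
        # last reference goes away; other implementations may defer it.
        self.log.append('destroyed')

events = []
t = Tracked(events)
del t                            # refcount hits zero right here in CPython
assert events == ['destroyed']
```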
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #200
