Bytes IT Community

BIG successes of Lisp (was ...)

In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
successes.

a. orbitz.com web site uses Lisp for algorithms, etc.
b. Yahoo store was originally written in Lisp.
c. Emacs

The issues with these will probably come up, so I might as well
mention them myself (which will also make this a more balanced
post)

a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while)

b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)

c. Emacs has a reputation for being slow and bloated. But then
it's not written in Common Lisp.

Are ViaWeb and Orbitz bigger successes than LaTeX? Do they
have more users? It depends. Does viewing a PDF file made
with LaTeX make you a user of LaTeX? Does visiting Yahoo
Store make you a user of ViaWeb?

For the sake of being balanced: there were also some *big*
failures, such as Lisp Machines. They failed because
they could not compete with UNIX (SUN, SGI) in a time when
performance, multi-userism and uptime were of prime importance.
(Older LispM's just leaked memory until they were shut down,
newer versions overcame that problem but others remained)

Another big failure that is often _attributed_ to Lisp is AI,
of course. But I don't think one should blame a language
for AI not happening. Marvin Minsky, for example,
blames Robotics and Neural Networks for that.
Jul 18 '05 #1
303 Replies


mi*****@ziplip.com writes:
a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while)
I'm skeptical that's the reason for the shutdowns, if they're using a
reasonable Lisp implementation.
b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)
The Yahoo Store software was written by some small company that sold
the business to some other company that didn't want to develop in
Lisp, I thought. I'd be interested to know more.
c. Emacs has a reputation for being slow and bloated. But then
it's not written in Common Lisp.
Actually, Hemlock is much more bloated. However, Emacs's reputation
for bloat came from the 1 mips VAX days, when it was bigger than less
capable editors such as vi. However, compared with the editors people
run all the time on PC's nowadays (viz. Microsoft Word), Emacs is tiny
and fast. In fact if I want to look in a big mail archive for (say)
mentions of Python, it's faster for me to read the file into Emacs and
search for "python" than it is for me to pipe the file through "more"
and use "more"'s search command.
Are ViaWeb and Orbitz bigger successes than LaTeX? Do they
have more users? It depends. Does viewing a PDF file made
with LaTeX make you a user of LaTeX? Does visiting Yahoo
Store make you a user of ViaWeb?
I missed the earlier messages in this thread but LaTeX wasn't written
in Lisp. There were some half-baked attempts to lispify TeX, but
afaik none went anywhere.
For the sake of being balanced: there were also some *big*
failures, such as Lisp Machines. They failed because
they could not compete with UNIX (SUN, SGI) in a time when
performance, multi-userism and uptime were of prime importance.
Well, they were too bloody expensive too.
Another big failure that is often _attributed_ to Lisp is AI,
of course. But I don't think one should blame a language
for AI not happening. Marvin Minsky, for example,
blames Robotics and Neural Networks for that.


Actually, there are many AI success stories, but the AI field doesn't
get credit for them, because as soon as some method developed by AI
researchers becomes useful or practical, it stops being AI. Examples
include neural networks, alpha-beta search, natural language
processing to the extent that it's practical so far, optical character
recognition, and so forth.
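One of those examples, alpha-beta search, is compact enough to sketch in a few lines of Python (the game tree below is a made-up nested list purely for illustration, not code from any real program):

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning over a nested-list game tree.
    Leaves are numbers (static evaluations); interior nodes are lists."""
    if not isinstance(node, list):          # leaf: return its evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent would never allow this
                break                       # branch, so stop searching it
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Classic textbook tree: the root's minimax value is 3.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, True))  # 3
```

The pruning is what makes it "brute force done cleverly": whole subtrees are skipped once they provably cannot affect the result.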
Jul 18 '05 #2

Paul Rubin wrote:
There were some half-baked attempts to lispify TeX, but
afaik none went anywhere.


Tex2page is pretty nice:

http://www.ccs.neu.edu/home/dorai/te...2page-doc.html

Jul 18 '05 #3

On 13 Oct 2003 16:43:57 -0700, Paul Rubin
<http://ph****@NOSPAM.invalid> wrote:
mi*****@ziplip.com writes:

b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)


The Yahoo Store software was written by some small company that sold
the business to some other company that didn't want to develop in
Lisp, I thought. I'd be interested to know more.


In a context like that, my first guess would be "the other company
didn't have any Lisp programmers". The second would be that the
programmers they did have didn't like (their idea of) Lisp.

Assuming there is some truth in that, it would probably have been
rationalised in other terms of course.
Another big failure that is often _attributed_ to Lisp is AI,
of course. But I don't think one should blame a language
for AI not happening. Marvin Minsky, for example,
blames Robotics and Neural Networks for that.


Actually, there are many AI success stories, but the AI field doesn't
get credit for them, because as soon as some method developed by AI
researchers becomes useful or practical, it stops being AI. Examples
include neural networks, alpha-beta search, natural language
processing to the extent that it's practical so far, optical character
recognition, and so forth.


Absolutely true - and many more.

I have come to believe that cognitive psychologists (of whatever field
- there seems to have been a 'cognitive revolution' in most variants
of psychology) should have some experience of programming - or at
least of some field where their conception of what is possible in
terms of information processing would get some hard testing.

Brains are not computers, of course - and even more important, they
were evolved rather than designed. But then a lot of cognitive
psychologists don't seem to take any notice of the underlying
'hardware' at all. In the field I'm most interested in - autism - the
main cognitive 'theories' are so sloppy that it is very difficult to
pick out the 5% of meaningfulness from the 95% of crud.

Take 'theory of mind', for instance - the theory that autistic people
have no conception that other people have minds of their own. This is
nice and simple at 'level 1' - very young autistics seem not to keep
track of other people's states of mind when other children of the same
age do, just as if they don't realise that other people have minds of
their own. But the vast majority of autistics pass level 1 theory of
mind tests, at least after the age of about 5.

The cognitive psychologists' answer to this is to apply level 2 and
higher theory of mind tests. They acknowledge that these tests are
more complex, but they don't acknowledge that they are testing
anything different. But take a close look and it rapidly becomes
apparent that the difference between these tests and the level 1 tests
is simply the amount of working memory that is demanded.

Actually, you don't have to study the tests to realise that - just
read some of the autistic people's reactions to the tests. They
*understand* the test, but once the level is ramped up high enough
they can't keep track of everything they are expected to keep track
of. It's rather like claiming that if you can't simultaneously
remember 5000 distinct numbers you must lack a 'theory of number'!

But when autistic people have described things that suggest they have
'theory of mind' (e.g. Temple Grandin), the experts' response has
typically been to suggest that either the author or her editor is a
liar (e.g. Francesca Happé, 'the autobiographical writings of three
Asperger syndrome adults: problems of interpretation and implications
for theory', section in 'Autism and Asperger Syndrome' edited by Uta
Frith).

It's not even as if the 'theory of mind' idea has any particular
predictive power. The symptoms of autism vary dramatically from one
person to another, and the distinct symptoms vary mostly independently
of one another. Many symptoms of autism (e.g. sensory problems) have
nothing to do with awareness of other people.

The basic problem, of course, is that psychologists often lean more
towards the subjective than the objective, and towards intuition
rather than logic. It is perhaps slightly ironic that part of my own
theory of autism (integrating evolutionary, neurological and cognitive
issues) has as a key component an IMO credible answer to 'what is
intuition, how does it work, and why is (particularly social)
intuition disrupted in autism?'

But despite all this, it is interesting to note that cognitive
psychology and AI have very much the same roots (even down to mainly
the same researchers in the early days) and that if you search hard
enough for the 'good stuff' in psychology, it doesn't take long before
you start finding the same terms and ideas that used to be strongly
associated with AI.

Creating a machine that thinks like a person was never a realistic
goal for AI (and probably won't be for a very long time yet), but that
was always the dreaming and science fiction rather than the fact. Even
so, it is hard to think of an AI technique that hasn't been adopted by
real applications.

I remember the context where I first encountered Bayes theorem. It was
in AI - expert systems, to be precise - along with my first encounter
with information theory. The value of Bayes theorem is that it allows
an assessment of the probability of a hypothesis based only on the
kinds of information that can be reasonably easily assessed in
studies, approximated by a human expert, or even learned by the expert
system in some contexts as it runs. The probability of a hypothesis
given a single fact can be hard to assess, but a good approximation of
the probability of that fact given the hypothesis is usually easy to
assess given a simple set of examples.
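The estimation pattern described above can be sketched in Python (the diagnosis numbers are hypothetical, purely for illustration):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).
# The likelihoods P(E|H) are the quantities that are easy to
# estimate from a simple set of examples; the posterior P(H|E)
# is the hard-to-assess number the expert system actually wants.

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Probability of hypothesis H after observing evidence E."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)
    return p_e_given_h * p_h / p_e

# Hypothetical numbers: a condition with a 1% prior, and a symptom
# seen in 90% of cases but only 5% of non-cases.
print(posterior(0.01, 0.90, 0.05))  # about 0.154
```

Note how unintuitive the result is: even a fairly specific symptom only lifts a rare hypothesis to about 15%, which is exactly why mechanising the arithmetic is worthwhile.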

Funny how a current popular application of this approach (spam
filtering) is not considered to be an expert system, or even to be AI
at all. But AI was never meant to be in your face. Software acts more
intelligently these days, but the methods used to achieve that are
mostly hidden.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #4

In article <7x************@ruckus.brouhaha.com>, Paul Rubin
<http://ph****@NOSPAM.invalid> wrote:
The Yahoo Store software was written by some small company
Viaweb
that sold the business to some other company
Yahoo (obviously).
that didn't want to develop in
Lisp, I thought.


That's right. Yahoo ultimately reimplemented Yahoo Store in C++.

E.
Jul 18 '05 #5

ga*@jpl.nasa.gov (Erann Gat) writes:
In article <7x************@ruckus.brouhaha.com>, Paul Rubin
<http://ph****@NOSPAM.invalid> wrote:
The Yahoo Store software was written by some small company


Viaweb
that sold the business to some other company


Yahoo (obviously).
that didn't want to develop in Lisp, I thought.


That's right. Yahoo ultimately reimplemented Yahoo Store in C++.


Of course to do so, they had to--according to Paul Graham--implement a
Lisp interpreter in C++! And they had to leave out some features that
depended on closures. So folks who are running the old Lisp version
may never "upgrade" to the new version since it would mean a
functional regression. Graham's messages on the topic to the ll1 list
are at:

<http://www.ai.mit.edu/~gregs/ll1-discuss-archive-html/msg02380.html>
<http://www.ai.mit.edu/~gregs/ll1-discuss-archive-html/msg02367.html>
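For reference, a closure is a function that carries its defining environment along with it; a minimal Python illustration (obviously not Viaweb's actual code):

```python
def make_counter():
    count = 0                   # lives in make_counter's environment
    def counter():
        nonlocal count          # captured from the enclosing scope
        count += 1
        return count
    return counter              # the environment travels with the function

c = make_counter()
print(c(), c(), c())            # 1 2 3 -- state survives between calls
```

Emulating that in C++ circa 2003 (before lambdas) meant hand-writing function objects that bundle state, which is plausibly why some closure-dependent features were dropped rather than ported.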

-Peter

--
Peter Seibel pe***@javamonkey.com

Lisp is the red pill. -- John Fraser, comp.lang.lisp
Jul 18 '05 #6



Peter Seibel wrote:
ga*@jpl.nasa.gov (Erann Gat) writes:

In article <7x************@ruckus.brouhaha.com>, Paul Rubin
<http://ph****@NOSPAM.invalid> wrote:

The Yahoo Store software was written by some small company


Viaweb

that sold the business to some other company


Yahoo (obviously).

that didn't want to develop in Lisp, I thought.


That's right. Yahoo ultimately reimplemented Yahoo Store in C++.

Of course to do so, they had to--according to Paul Graham--implement a
Lisp interpreter in C++!


I hope he made another $40m off them in consulting fees helping them
with the port. :)

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Jul 18 '05 #7

mi*****@ziplip.com writes:
In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
successes.

a. orbitz.com web site uses Lisp for algorithms, etc.
b. Yahoo store was originally written in Lisp.
c. Emacs

The issues with these will probably come up, so I might as well
mention them myself (which will also make this a more balanced
post)

a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while)


You are misinformed. Orbitz runs a `server farm' of hundreds of
computers each running the ITA fare engine. Should any of these
machines need to GC, there are hundreds of others waiting to service
users.
Jul 18 '05 #8

Stephen Horne wrote:
...
I remember the context where I first encountered Bayes theorem. It was
in AI - expert systems, to be precise - along with my first encounter
OK, but Reverend Bayes developed it well before "AI" was even conceived,
around the middle of the 18th century; considering Bayes' theorem to be part
of AI makes just about as much sense as considering addition in the
same light, if "expert systems" had been the first context in which
you had ever seen numbers being summed. In the '80s, when at IBM
Research we developed the first large-vocabulary real-time dictation
taking systems, I remember continuous attacks coming from the Artificial
Intelligentsia due to the fact that we were using NO "AI" techniques --
rather, stuff named after Bayes, Markov and Viterbi, all dead white
mathematicians (it sure didn't help that our languages were PL/I, Rexx,
Fortran, and the like -- no, particularly, that our system _worked_,
the most unforgivable of sins:-). I recall T-shirts boldly emblazoned
with "P(A|B) = P(B|A) P(A) / P(B)" worn at computational linguistics
conferences as a deliberately inflammatory gesture, too:-).

Personally, I first met Rev. Bayes in high school, together with the
rest of the foundations of elementary probability theory, but then I
did admittedly go to a particularly good high school; neither of my
kids got decent probability theory in high school, though both of
them met it in their first college year (in totally different fields,
neither of them connected with "AI" -- financial economics for my
son, telecom engineering for my daughter).

Funny how a current popular application of this approach (spam
filtering) is not considered to be an expert system, or even to be AI
at all. But AI was never meant to be in your face. Software acts more


I don't see how using Bayes' Theorem, or any other fundamental tool
of probability and statistics, connects a program to "AI", any more
than using fundamental techniques of arithmetic or geometry would.
Alex

Jul 18 '05 #9

>>>>> "mike420" == mike420 <mi*****@ziplip.com> writes:

mike420> c. Emacs has a reputation for being slow and bloated.

People making that claim most often do not understand what Emacs
really is or how to use it effectively. Try to check out what other
popular software uses up on such people's machines, stuff like KDE or
GNOME or Mozilla or any Java-based application.

This just isn't a very relevant issue on modern equipment.

mike420> For the sake of being balanced: there were also some *big*
mike420> failures, such as Lisp Machines. They failed because
mike420> they could not compete with UNIX (SUN, SGI) in a time when
mike420> performance, multi-userism and uptime were of prime importance.

It is still a question of heated debate what actually killed the lisp
machine industry.

I have so far not seen anybody disputing that they were a marvel of
technical excellence, sporting stuff like colour displays, graphical
user interfaces and laser printers way ahead of anybody else.

In fact the combined bundle of a Symbolics machine is so good that
there still is a viable market for those 20-30 years old machines
(been there, done that, still need to get it to run :-) I challenge
you to get a good price for a Sun 2 with UNIX SYSIII or whatever they
were equipped with at the time.

As far as I know Symbolics was trying to address the price issues, but
the new generation of the CPU was delayed, which greatly contributed to
the original demise and to the subsequent success of what we now know as
stock hardware. Do not forget that when the Sun was introduced it was
by no means obvious who was going to win the war of the graphical
desktop server.
------------------------+-----------------------------------------------------
Christian Lynbech | christian #\@ defun #\. dk
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
- pe*****@hal.com (Michael A. Petonic)
Jul 18 '05 #10

On Tue, 14 Oct 2003 07:45:40 GMT, Alex Martelli <al***@aleax.it>
wrote:
Stephen Horne wrote:
...
I remember the context where I first encountered Bayes theorem. It was
in AI - expert systems, to be precise - along with my first encounter
OK, but Reverend Bayes developed it well before "AI" was even conceived,
around the middle 18th century; considering Bayes' theorem to be part
of AI makes just about as much sense as considering addition in the
same light, if "expert systems" had been the first context in which
you had ever seen numbers being summed.


OK, but you can't say that a system isn't artificial intelligence just
because it uses Bayes theorem or any other method either - it isn't
about who first described the formula or algorithm or whatever.
I recall T-shirts boldly emblazoned
with "P(A|B) = P(B|A) P(A) / P(B)" worn at computational linguistics
conferences as a deliberately inflammatory gesture, too:-).


;-)
Funny how a current popular application of this approach (spam
filtering) is not considered to be an expert system, or even to be AI
at all. But AI was never meant to be in your face. Software acts more


I don't see how using Bayes' Theorem, or any other fundamental tool
of probability and statistics, connects a program to "AI", any more
than using fundamental techniques of arithmetic or geometry would.


Well, I don't see how a neural net can be considered intelligent
whether trained using back propagation, forward propagation or a
genetic algorithm. Or a search algorithm, whether breadth first, depth
first, prioritised, using heuristics, or applying backtracking I don't
care. Or any of the usual parsing algorithms that get applied in
natural language and linguistics (Earley etc). I know how all of these
work so therefore they cannot be considered intelligent ;-)

Certainly the trivial rule-based expert systems consisting of a huge
list of if statements are, IMO, about as unintelligent as you can get.

It's a matter of the problem it is trying to solve rather than simply
saying 'algorithm x is intelligent, algorithm y is not'. An
intelligent, knowledge-based judgement of whether an e-mail is or is
not spam is to me the work of an expert system. The problem being that
once people know how it is done, they stop thinking of it as
intelligent.

Perhaps AI should be defined as 'any means of solving a problem which
the observer does not understand' ;-)

Actually, I remember an article once with the tongue-in-cheek claim
that 'artificial stupidity' and IIRC 'artificial bloody mindedness'
would be the natural successors to AI. And that paperclip thing in
Word did not exist until about 10 years later!
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #11

On Tue, 14 Oct 2003 07:45:40 GMT, Alex Martelli <al***@aleax.it>
wrote:
Personally, I first met Rev. Bayes in high school, together with the
rest of the foundations of elementary probability theory, but then I
did admittedly go to a particularly good high school; neither of my
kids got decent probability theory in high school, though both of
them met it in their first college year (in totally different fields,
neither of them connected with "AI" -- financial economics for my
son, telecom engineering for my daughter).


Sorry - missed this bit on the first read.

I never limited my education to what the school was willing to tell
me, partly because having Asperger syndrome myself meant that the
library was the best refuge from bullies during break times.

I figure I first encountered Bayes in the context of expert systems
when I was about 14 or 15. I imagine that fits roughly into the high
school junior category, but I'm not American so I don't know for sure.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #12

mike420 wrote in message news:<BY**************************************@ziplip.com>...
In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
See:

http://alu.cliki.net/Success%20Stories

a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with


They don't use garbage collection, they do explicit memory allocation
from pools. More details were given in the ILC 2002 talk "ITA Software
and Orbitz: Lisp in the Online Travel World" by Rodney Daughtrey:

http://www.international-lisp-confer...Daughtrey.html

The talk's slides are included in the ILC 2002 proceedings available
from Franz, Inc. As for shutdown for maintenance, the slides seem to
suggest that they use online patching.
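For readers unfamiliar with pool allocation: the idea is to preallocate storage up front and recycle it explicitly, so the collector never has work to do during a query. A minimal sketch in Python (hypothetical code for illustration; ITA's actual implementation is in Lisp and is not public):

```python
class Pool:
    """Fixed-size object pool: allocate everything up front, then
    reuse slots explicitly instead of creating garbage to collect."""

    def __init__(self, factory, size):
        self._free = [factory() for _ in range(size)]  # preallocated slots
        self._in_use = 0

    def acquire(self):
        if not self._free:
            raise MemoryError("pool exhausted")  # fail fast, never allocate
        self._in_use += 1
        return self._free.pop()

    def release(self, obj):
        self._in_use -= 1
        self._free.append(obj)      # slot goes back on the free list

# Hypothetical usage: a pool of scratch dicts reused across requests.
pool = Pool(dict, 1000)
req = pool.acquire()
req["query"] = "fare search"
req.clear()                         # reset before returning the slot
pool.release(req)
```

The trade-off is classic: predictable latency (no GC pauses mid-query) in exchange for the programmer taking back responsibility for object lifetimes.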
Paolo
Jul 18 '05 #13

On Mon, 13 Oct 2003 16:23:46 -0700 (PDT), mi*****@ziplip.com wrote:
a. AFAIK Orbitz frequently has to be shut down for maintenance
Where do you "know" that from? Have you any quotes or numbers to back
up your claims or are you just trying to spread FUD?
(read "full garbage collection" -
Others have debunked this already.
I'm just guessing:


Please leave the guessing to people who are better at it.

Edi.
Jul 18 '05 #14

Stephen Horne wrote:
On Tue, 14 Oct 2003 07:45:40 GMT, Alex Martelli <al***@aleax.it>
wrote:
Personally, I first met Rev. Bayes in high school, together with the
rest of the foundations of elementary probability theory, but then I
did admittedly go to a particularly good high school; neither of my
kids got decent probability theory in high school, though both of
them met it in their first college year (in totally different fields,
neither of them connected with "AI" -- financial economics for my
son, telecom engineering for my daughter).
Sorry - missed this bit on the first read.

I never limited my education to what the school was willing to tell


Heh, me neither, of course.
me, partly because having Asperger syndrome myself meant that the
library was the best refuge from bullies during break times.
Not for me, as it was non-smoking and I started smoking very young;-).

But my house was always cluttered with books, anyway. However,
interestingly enough, I had not met Bayes' theorem _by that name_,
only in the somewhat confusing presentation known as "restricted
choice" in bridge theory -- problem is, Borel et Cheron's "Theorie
Mathematique du Bridge" was out of print for years, until (I think)
Mona Lisa Press finally printed it again (in English translation --
the French original came out again a while later as a part of the
reprint of all of Borel's works, but always was much costlier), so
my high school got there first (when I was 16). My kids' exposure
to probability theory was much earlier of course (since I taught
them bridge when they were toddlers, and Bayes' Theorem pretty
obviously goes with it).

I figure I first encountered Bayes in the context of expert systems
when I was about 14 or 15. I imagine that fits roughly into the high
school junior category, but I'm not American so I don't know for sure.


I'm not American either -- I say "high school" to mean what in Italy
is known as a "Liceo" (roughly the same age range, 14-18).
Alex

Jul 18 '05 #15

Stephen Horne wrote:
...
Perhaps AI should be defined as 'any means of solving a problem which
the observer does not understand' ;-)


Clarke's law...?-)

The AAAI defines AI as:

"the scientific understanding of the mechanisms underlying thought and
intelligent behavior and their embodiment in machines."

But just about any mathematical theory is an abstraction of "mechanisms
underlying thought": unless we want to yell "AI" about any program doing
computation (or, for that matter, memorizing information and fetching it
back again, also simplified versions of "mechanisms underlying thought"),
this had better be a TAD more nuanced. I think a key quote on the AAAI
pages is from Grosz and Davis: "computer systems must have more than
processing power -- they must have intelligence". This should rule out
from their definition of "intelligence" any "brute-force" mechanism
that IS just processing power. Chess playing machines such as Deep
Blue, bridge playing programs such as GIB and Jack (between the two
of them, winners of the world computer bridge championship's last 5 or
6 editions, regularly grinding into the dust programs described by their
creators as "based on artificial intelligence techniques" such as
expert systems), dictation-taking programs such as those made by IBM
and Dragon Systems in the '80s (I don't know if the technology has
changed drastically since I left the field then, though I doubt it),
are based on brute-force techniques, and their excellent performance
comes strictly from processing power. For example, IBM's speech
recognition technology descended directly from the field of signal
processing -- hidden Markov models, Viterbi algorithms, Bayes all over
the battlefield, and so on. No "AI heritage" anywhere in sight...
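The Viterbi algorithm mentioned here finds the most probable hidden-state sequence of a hidden Markov model by dynamic programming; a minimal sketch in Python (toy weather model with invented probabilities, nothing to do with IBM's actual system):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for an observation sequence."""
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor: maximise over paths into state s
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy HMM: hidden weather, observed activities.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p))
```

In speech recognition the hidden states are phonemes and the observations acoustic features, but the recurrence is exactly this: pure probability and dynamic programming, no "AI heritage" required.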
Alex

Jul 18 '05 #16

pr***********@comcast.net wrote:
mi*****@ziplip.com writes:

In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
successes.

a. orbitz.com web site uses Lisp for algorithms, etc.
b. Yahoo store was originally written in Lisp.
c. Emacs

The issues with these will probably come up, so I might as well
mention them myself (which will also make this a more balanced
post)

a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while)

You are misinformed. Orbitz runs a `server farm' of hundreds of
computers each running the ITA faring engine. Should any of these
machines need to GC, there are hundreds of others waiting to service
users.


Besides, when I read the description of Orbitz, I was under the
impression that they preallocated the memory for just that reason: to
remove the need for garbage collection.

--
Ivan Toshkov

email: fi********@last-name.org

Jul 18 '05 #17

Christian Lynbech <ch***************@ericsson.com> writes:
It is still a question of heated debate what actually killed the lisp
machine industry.

I have so far not seen anybody disputing that they were a marvel of
technical excellence, sporting stuff like colour displays, graphical
user interfaces and laser printers way ahead of anybody else.


It's clear to me that LMI killed itself by an attempt to rush the
LMI-Lambda to market before it was reasonably debugged. A lot of LMI
machines were DOA. It's amazing how fast you can lose customers that
way.

As far as Symbolics goes... I *think* they just saturated the market.
Jul 18 '05 #18

Alex Martelli <al***@aleax.it> wrote in message news:<EQ**********************@news2.tin.it>...
Stephen Horne wrote:
...
I remember the context where I first encountered Bayes theorem. It was
in AI - expert systems, to be precise - along with my first encounter
OK, but Reverend Bayes developed it well before "AI" was even conceived,
around the middle 18th century; considering Bayes' theorem to be part
of AI makes just about as much sense as considering addition in the
same light, if "expert systems" had been the first context in which
you had ever seen numbers being summed. In the '80s, when at IBM
Research we developed the first large-vocabulary real-time dictation
taking systems, I remember continuous attacks coming from the Artificial
Intelligentsia due to the fact that we were using NO "AI" techniques --
rather, stuff named after Bayes, Markov and Viterbi, all dead white
mathematicians (it sure didn't help that our languages were PL/I, Rexx,
Fortran, and the like -- no, particularly, that our system _worked_,
the most unforgivable of sins:-). I recall T-shirts boldly emblazoned
with "P(A|B) = P(B|A) P(A) / P(B)" worn at computational linguistics
conferences as a deliberately inflammatory gesture, too:-).

Personally, I first met Rev. Bayes in high school, together with the
rest of the foundations of elementary probability theory, but then I
did admittedly go to a particularly good high school; neither of my
kids got decent probability theory in high school, though both of
them met it in their first college year (in totally different fields,
neither of them connected with "AI" -- financial economics for my
son, telecom engineering for my daughter).

Funny how a current popular application of this approach (spam
filtering) is not considered to be an expert system, or even to be AI
at all. But AI was never meant to be in your face. Software acts more


I don't see how using Bayes' Theorem, or any other fundamental tool
of probability and statistics, connects a program to "AI", any more
than using fundamental techniques of arithmetic or geometry would.


i boldly disagree. back when i first heard about AI (the '70s, i'd say),
the term had a very specific meaning: probabilistic decision making with
feedback. a medical diagnosis system would be the archetypal example.
my recollection of why few ever got made was: feedback collection was not
always easy (did the patient die because the diagnosis was wrong? and
what is the correct diagnosis? and did we get all the symptoms right?, etc),
and humans were unwilling to accept the notion of machine determined
decision making. the machine, like humans before it, would learn from its
mistakes. this was socially unacceptable.

everything else is just rule processing. whether done with declarative
typeless languages like Lisp or Prolog, or the more familiar imperative
typed languages like Java/C++ is a matter of preference. i'm currently
working with a Prolog derivative, and don't find it a better way. fact
is, i find typeless languages (declarative or imperative) a bad thing for
large system building.

robert


Alex

Jul 18 '05 #19

>>>>> "Alex" == Alex Martelli <al***@aleax.it> writes:

Alex> Chess playing machines such as Deep Blue, bridge playing programs
Alex> such as GIB and Jack ..., dictation-taking programs such as those
Alex> made by IBM and Dragon Systems in the '80s (I don't know if the
Alex> technology has changed drastically since I left the field then,
Alex> though I doubt it), are based on brute-force techniques, and their
Alex> excellent performance comes strictly from processing power.

Nearly no program would rely only on non-brute-force techniques. On the
other hand, all the machines that you have named use some non-brute-force
techniques to improve performance. How you can say that they are using
"only" brute-force techniques is something I don't quite understand. But
even then, I can't see why this has anything to do with whether the machines
are intelligent or not. We cannot judge whether a machine is intelligent or
not by just looking at the method used to solve the problem. A computer is
best at number crunching, and it is simply natural for any program to put a
lot more weight than most human beings on number crunching. You can't say a
machine is unintelligent just because much of its power comes from there. Of
course, you might say that the problem does not require a lot of intelligence.

Whether a system is intelligent must be determined by the result. When you
feed a chess configuration to the Deep Blue computer, in which any average
player of chess would make a move that guarantees checkmate, but Deep Blue
gives you a move that leads to stalemate, you know that it is not very
intelligent (it did happen).

Regards,
Isaac.
Jul 18 '05 #20

P: n/a
As to Lisp Successes I would like to add just a few:

Macsyma and Maxima http://maxima.sourceforge.net/

Powercase and Watson http://www.xanalys.com/solutions/powercase.html

AutoCAD http://www.autocad.com

Wade

Jul 18 '05 #21

P: n/a
On Tue, 14 Oct 2003 16:16:52 GMT, Wade Humeniuk <wh******@nospamtelus.net>
wrote:
As to Lisp Successes I would like to add just a few:

Macsyma and Maxima http://maxima.sourceforge.net/

Powercase and Watson http://www.xanalys.com/solutions/powercase.html

AutoCAD http://www.autocad.com

Wade


You all seem to forget www.google.com
One of the most used distributed applications in the world.
Written in Common Lisp (xanalysis)

--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
Jul 18 '05 #22

P: n/a
Isaac To wrote:
>> "Alex" == Alex Martelli <al***@aleax.it> writes:

Alex> Chess playing machines such as Deep Blue, bridge playing programs
Alex> such as GIB and Jack ..., dictation-taking programs such as those
Alex> made by IBM and Dragon Systems in the '80s (I don't know if the
Alex> technology has changed drastically since I left the field then,
Alex> though I doubt it), are based on brute-force techniques, and their
Alex> excellent performance comes strictly from processing power.

> Nearly no program would rely only on non-brute-force techniques.

Yes, the temptation to be clever and cut corners IS always there...;-).

> On the other hand, all the machines that you have named use some
> non-brute-force techniques to improve performance. How you can say that
> they are using "only" brute-force techniques is something I don't quite
> understand.

In the case of Deep Blue, just hearsay. In the case of the IBM speech
recognition (dictation-taking) system, I was in the team and knew the
code quite well; that Dragon was doing essentially the same (on those
of their systems that _worked_, that is:-) was the opinion of people
who had worked in both teams, had friends on the competitors' teams,
etc (a lot of unanimous opinions overall). GIB's approach has been
described by Ginsberg, its author, in detail; Jack's hasn't, but the
behavior of the two programs, seen as players, is so similar that it
seems doubtful to me that their implementation techniques may be
drastically different.

Maybe we're having some terminology issue...? For example, I consider
statistical techniques "brute force"; it's not meant as pejorative --
they're techniques that WORK, as long as you can feed enough good data
to the system for the statistics to bite. A non-brute-force model of
natural language might try to "understand" some part of the real world
that an utterance is about -- build a semantic model, that is -- and
use the model to draw some part of the hypotheses for further prediction
or processing; a purely statistical model just sees sequences of
symbols (words) in (e.g.) a Hidden Markov Model from which it takes
all predictions -- no understanding, no semantic modeling. A non-bf
bridge playing program might have concepts (abstractions) such as "a
finesse", "a squeeze", "drawing trumps", "cross-ruffing", etc, and
match those general patterns to the specific distribution of cards to
guide play; GIB just has a deterministic double-dummy engine and guides
play by Monte Carlo samples of possible distributions for the two
unseen hands -- no concepts, no abstractions.
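A toy illustration of the "purely statistical" approach just described: predict the next word from nothing but counted symbol sequences. The corpus here is invented and the model is a mere bigram table rather than a full Hidden Markov Model, but the point is the same -- no understanding, no semantic modeling:

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real system trains on millions of words.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count bigrams: how often each word follows each other word.
following = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    following[prev][cur] += 1

def predict(word):
    """Most likely next word by raw frequency -- no semantics at all."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("the"))  # "cat": seen twice after "the", vs. once for others
```

Feed it enough data and such a model performs remarkably well, which is precisely why one might call it "brute force" that WORKS.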
> But even then, I can't see why this has anything to do with whether the
> machines are intelligent or not. We cannot judge whether a machine is
> intelligent or not by just looking at the method used to solve the
> problem. A computer is best at number crunching, ...

That's not how I read the AAAI's page's attempt to describe AI.

> ... and it is simply natural for any program to put a lot more weight
> than most human beings on number crunching. You can't say a machine is
> unintelligent just because much of its power comes from there. Of
> course, you might say that the problem does not require a lot of
> intelligence.

Again, that's not AAAI's claim, as I read their page. If a problem
can be solved by brute force, it may still be interesting to "model
the thought processes" of humans solving it and implement _those_ in
a computer program -- _that_ program would be doing AI, even for a
problem not intrinsically "requiring" it -- so, AI is not about what
problem is being solved, but (according to the AAAI as I read them)
also involves considerations about the approach.

> Whether a system is intelligent must be determined by the result. When
> you feed a chess configuration to the Deep Blue computer, in which any
> average player of chess would make a move that guarantees checkmate,
> but Deep Blue gives you a move that leads to stalemate, you know that
> it is not very intelligent (it did happen).


That may be your definition. It seems to me that the definition given
by a body comprising a vast numbers of experts in the field, such as
the AAAI, might be considered to carry more authority than yours.
Alex

Jul 18 '05 #23

P: n/a

"Paul Rubin" <http://ph****@NOSPAM.invalid>
I missed the earlier messages in this thread but Latex wasn't written
in Lisp. There were some half-baked attempts to lispify TeX, but
afaik none went anywhere.


I'm currently working on a Common Lisp typesetting system based on cl-pdf.
One example of what it can already do is here:
http://www.fractalconcept.com/ex.pdf

For now I'm working on the layout and low level stuff. But it's a little bit
soon to think that it will go nowhere, as I hope a lot of people will add TeX
compatibility packages for it ;-)

BTW note that I'm not rewriting TeX. It's another design with other
priorities.

Marc
Jul 18 '05 #24

P: n/a
John Thingstad <jo************@chello.no> writes:
On Tue, 14 Oct 2003 16:16:52 GMT, Wade Humeniuk
<wh******@nospamtelus.net> wrote:
As to Lisp Successes I would like to add just a few:

Macsyma and Maxima http://maxima.sourceforge.net/

Powercase and Watson http://www.xanalys.com/solutions/powercase.html

AutoCAD http://www.autocad.com

Wade


You all seem to forget www.google.com
One of the most used distributed applications in the world.
Written in Common Lisp (xanalysis)


I'm pretty sure that this is absolutely incorrect.

--
Raymond Wiker Mail: Ra***********@fast.no
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
Jul 18 '05 #25

P: n/a
"Christian Lynbech" <ch***************@ericsson.com> wrote in message
news:of************@situla.ted.dk.eu.ericsson.se.. .
>> "mike420" == mike420 <mi*****@ziplip.com> writes:


It is still a question of heated debate what actually killed the lisp
machine industry.

I have so far not seen anybody disputing that they were a marvel of
technical excellence, sporting stuff like colour displays, graphical
user interfaces and laser printers way ahead of anybody else.


I think what helped kill the lisp machine was probably lisp: many people
just don't like lisp, because it is a very different way of thinking that
most are rather unaccustomed to. Procedural, imperative programming is
simply a paradigm that more closely matches the ordinary way of thinking
(ordinary = in non-programming, non-computing spheres of human endeavor) than
functional programming. As such, lisp machines were an oddity and too
different for many to bother, and it was easy for them to come up with
excuses not to bother (so that the 'barrier of interest,' so to speak, was
higher.) Lisp, the language family (or whatever you want to call it), still
has this stigma: lambda calculus is not a natural way of thinking.

This isn't to make a value judgment, but I think it's an important thing
that the whole "functional/declarative v. procedural/OO" debate overlooks.
The same reason why programmers call lisp "mind-expanding" and "the latin of
programming languages" is the very same reason why they are reluctant to
learn it--it's different, and for many also hard to get used to. Likewise,
Americans seem to have some repulsive hatred of learning latin--for people
who are used to english, it's just plain different and harder, even if it's
better. (Ok, that last bit was a value judgement. :)

Python doesn't try (too) hard to change the ordinary manner of thinking,
just to be as transparent as possible. I guess in that sense it encourages a
degree of mental sloth, but the objective is executable pseudocode. Lisp
counters that thinking the lisp way may be harder, but the power it grants
is out of all proportion to the relatively meager investment of mental
energy required--naturally, it's hard to convince someone of that if they
don't actually _use_ it first, and in the end some will probably still think
it isn't worth the trouble. It will take very significant and deep cultural
and intellectual changes before lisp is ever an overwhelmingly dominant
language paradigm. That is, when it becomes more natural to think of
cake-making as

UD: things
Gxyz: x is baked at y degrees for z minutes.
Hx: x is a cake.
Ix: x is batter.

For all x, ( (Ix & Gx(350)(45)) > Hx )

(i.e. "Everything that's a batter and put into a 350 degree oven for 45
minutes is a cake")

...instead of...

1. Heat the oven to 350 degrees.
2. Place batter in oven.
3. Bake 45 minutes
4. Remove cake from oven.

(i.e. "To make a cake, bake batter in a 350 degree oven for 45 minutes")

...then lisp will take over the universe. Never mind that the symbolic
logic version has more precision.
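The two styles can even be put in executable form. This is only a toy sketch with invented names, but it shows the shape of the contrast: the declarative version states what a cake *is*, the imperative version says how to *make* one:

```python
# Declarative: state the rule "anything that is batter and is baked at
# 350 degrees for 45 minutes is a cake", then ask whether it applies.
def is_cake(thing):
    return thing.get("is_batter", False) and thing.get("baked_at") == (350, 45)

# Imperative: spell out the steps, mutating state along the way.
def make_cake(batter):
    batter["baked_at"] = (350, 45)   # heat oven, place batter, bake 45 min
    return batter                     # remove cake from oven

thing = {"is_batter": True}
make_cake(thing)
print(is_cake(thing))  # True
```

Most people find the second function the "natural" one to write first, which is rather the point being made above.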

--
Francis Avila

Jul 18 '05 #26

P: n/a
John Thingstad wrote:

You all seem to forget www.google.com
One of the most used distributed applications in the world.
Written in Common Lisp (xanalysis)

That's new. But it may explain Peter Norvig's role there.
From where do you know that?

Best
AHz
Jul 18 '05 #27

P: n/a
Alex Martelli <al***@aleax.it> writes:
[...]
Whether a system is intelligent must be determined by the result. When
you feed a chess configuration to the Deep Blue computer, in which any
average player of chess would make a move that guarantees checkmate, but
Deep Blue gives you a move that leads to stalemate, you know that it is
not very intelligent (it did happen).


That may be your definition. It seems to me that the definition given
by a body comprising a vast numbers of experts in the field, such as
the AAAI, might be considered to carry more authority than yours.


Argument from authority has rarely been less useful or interesting
than in the case of AI, where both our abilities and understanding are
so poor, and where the embarrassing mystery of conciousness lurks
nearby...
John
Jul 18 '05 #28

P: n/a

"Francis Avila" <fr***********@yahoo.com> wrote in message
news:vo************@corp.supernews.com...
"Christian Lynbech" <ch***************@ericsson.com> wrote in message
It is still a question of heated debate what actually killed the lisp machine industry.


I think what helped kill the lisp machine was probably lisp: many

people just don't like lisp, because it is a very different way of thinking that most are rather unaccustomed to.


My contemporaneous impression, correct or not, as formed from
miscellaneous mentions in the computer press and computer shows, was
that they were expensive, slow, and limited -- limited in the sense of
being specialized to running Lisp, rather than any language I might
want to use. I can understand that a dedicated Lisper would not
consider Lisp-only to be a real limitation, but for the rest of us...

If these impressions are wrong, then either the publicity effort was
inadequate or the computer press misleading. I also never heard of
any 'killer app' like Visicalc was for the Apple II. Even if there had
been, I presume that it could have been ported to other workstations
and to PCs -- or imitated, just as spreadsheets were (which removed
Apple's temporary selling point).

Terry J. Reedy
Jul 18 '05 #29

P: n/a
Terry Reedy wrote:
My contemporaneous impression, correct or not, as formed from
miscellaneous mentions in the computer press and computer shows, was
that they were expensive, slow, and limited -- limited in the sense of
being specialized to running Lisp, rather than any language I might
want to use. I can understand that a dedicated Lisper would not
consider Lisp-only to be a real limitation, but for the rest of us...


Well, it's not true. Symbolics for one supported additional languages,
and I am sure others have pointed out that there are C compilers for
the Lisp Machines.

See

http://kogs-www.informatik.uni-hambu...h-summary.html

Section: Other Languages

It says that Prolog, Fortran and Pascal were available.

Wade

Jul 18 '05 #30

P: n/a
mi*****@ziplip.com wrote in message news:<BY**************************************@ziplip.com>...
b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)


From a new footnote in Paul Graham's ``Beating The Averages'' article:

In January 2003, Yahoo released a new version of the editor written
in C++ and Perl. It's hard to say whether the program is no longer
written in Lisp, though, because to translate this program into C++
they literally had to write a Lisp interpreter: the source files of
all the page-generating templates are still, as far as I know, Lisp
code. (See Greenspun's Tenth Rule.)
Jul 18 '05 #31

P: n/a
In article <9a**************@news1.telusplanet.net>,
Wade Humeniuk <wh******@nospamtelus.net> wrote:
Terry Reedy wrote:
My contemporaneous impression, correct or not, as formed from
miscellaneous mentions in the computer press and computer shows, was
that they were expensive, slow, and limited -- limited in the sense of
being specialized to running Lisp, rather than any language I might
want to use. I can understand that a dedicated Lisper would not
consider Lisp-only to be a real limitation, but for the rest of us...


Well, it's not true. Symbolics for one supported additional languages,
and I am sure others have pointed out that there are C compilers for
the Lisp Machines.

See

http://kogs-www.informatik.uni-hambu...h-summary.html

Section: Other Languages

It says that Prolog, Fortran and Pascal were available.

Wade


ADA also.

Actually, using an incremental C compiler and running C on type- and
bounds-checking hardware - like on the Lisp Machine - is not that bad
an idea. A whole set of problems disappears.
Jul 18 '05 #32

P: n/a
John J. Lee wrote:
Alex Martelli <al***@aleax.it> writes:
[...]
> Whether a system is intelligent must be determined by the result. When
> you feed a chess configuration to the Deep Blue computer, in which any
> average player of chess would make a move that guarantees checkmate,
> but Deep Blue gives you a move that leads to stalemate, you know that
> it is not very intelligent (it did happen).


That may be your definition. It seems to me that the definition given
by a body comprising a vast numbers of experts in the field, such as
the AAAI, might be considered to carry more authority than yours.


Argument from authority has rarely been less useful or interesting
than in the case of AI, where both our abilities and understanding are
so poor, and where the embarrassing mystery of conciousness lurks
nearby...


It's a problem of terminology, no more, no less. We are dissenting on
what it _means_ to say that a system is about AI, or not. This is no
different from discussing what it means to say that a pasta sauce is
carbonara, or not. Arguments from authority and popular usage are
perfectly valid, and indeed among the most valid, when discussing what given
phrases or words "MEAN" -- not "mean to you" or "mean to Joe Blow"
(which I could care less about), but "mean in reasonable discourse".

And until previously-incorrect popular usage has totally blown away by
sheer force of numbers a previously-correct, eventually-pedantic
meaning (as, alas, inevitably happens sometimes, given the nature of
human language), I think it's indeed worth striving a bit to preserve the
"erudite" (correct by authority and history of usage) meaning against
the onslaught of the "popular" (correct by sheer force of numbers) one.
Sometimes (e.g., the jury's still out on "hacker") the "originally correct"
meaning may be preserved or recovered.

One of the best ways to define what term X "means", when available, is
to see what X means _to people who self-identify as [...] X_ (where the
[...] may be changed to "being", or "practising", etc, depending on X's
grammatical and pragmatical role). For example, I think that when X
is "Christian", "Muslim", "Buddhist", "Atheist", etc, it is best to
ascertain the meaning of X with reference to the meaning ascribed to X by
people who practice the respective doctrines: in my view of the world, it
better defines e.g. the term "Buddhist" to see how it's used, received,
meant, by people who _identify as BEING Buddhist, as PRACTISING
Buddhism_, rather than to hear the opinions of others who don't. And,
here, too, Authority counts: the opinion of the Pope on the meaning of
"Catholicism" matter a lot, and so do those of the Dalai Lama on the
meaning of "Buddhism", etc. If _I_ think that a key part of the meaning
of "Catholicism" is eating a lot of radishes, that's a relatively harmless
eccentricity of mine with little influence on the discourse of others; if
_the Pope_ should ever hold such a strange opinion, it will have
inordinately vaster effect.

Not an effect limited to religions, by any means. What Guido van Rossum
thinks about the meaning of the word "Pythonic", for example, matters a lot
more, to ascertain the meaning of that word, than the opinion on the same
subject, if any, of Mr Joe Average of Squedunk Rd, Berkshire, NY. And
similarly for terms related to scientific, professional, and technological
disciplines, such as "econometrics", "surveyor", AND "AI".

The poverty of our abilities and understanding, and assorted lurking
mysteries, have as little to do with the best approach to ascertain the
definition of "AI", as the poverty of our tastebuds, and the risk that the
pasta is overcooked, have to do with the best approach to ascertain the
definition of "sugo alla carbonara". Definitions and meanings are about
*human language*, in either case.
Alex

Jul 18 '05 #33

P: n/a
Alex Martelli <al***@aleax.it> schreef:
But just about any mathematical theory is an abstraction of "mechanisms
underlying thought": unless we want to yell "AI" about any program doing
computation (or, for that matter, memorizing information and fetching it
back again, also simplified versions of "mechanisms underlying thought"),
this had better be a TAD more nuanced. I think a key quote on the AAAI
pages is from Grosz and Davis: "computer systems must have more than
processing power -- they must have intelligence". This should rule out
from their definition of "intelligence" any "brute-force" mechanism
that IS just processing power.


Maybe intelligence IS just processing power, but we don't know (yet) how
all these processes in the human brain[*] work... ;-)
[*] Is there anything else considered "intelligent"?

--
JanC

"Be strict when sending and tolerant when receiving."
RFC 1958 - Architectural Principles of the Internet - section 3.9
Jul 18 '05 #34

P: n/a
On Tue, 14 Oct 2003 13:07:23 -0400, Francis Avila wrote:
"Christian Lynbech" <ch***************@ericsson.com> wrote in message
news:of************@situla.ted.dk.eu.ericsson.se.. .
>>>>> "mike420" == mike420 <mi*****@ziplip.com> writes:
It is still a question of heated debate what actually killed the lisp
machine industry.

I have so far not seen anybody disputing that they were a marvel of
technical excellence, sporting stuff like colour displays, graphical
user interfaces and laser printers way ahead of anybody else.

I think what helped kill the lisp machine was probably lisp: many people
just don't like lisp, because it is a very different way of thinking that
most are rather unaccustomed to. Procedural, imperative programming is
simply a paradigm that more closely matches the ordinary way of thinking
(ordinary = in non-programming, non-computing spheres of human endevor) than
functional programming. As such, lisp machines were an oddity and too
Rubbish. Lisp is *not* a functional programming language; and Lisp
Machines also had C and Ada and Pascal compilers (and maybe others).
different for many to bother, and it was easy for them to come up with
excuses not to bother (so that the 'barrier of interest,' so to speak, was
higher.) Lisp, the language family (or whatever you want to call it), still
has this stigma: lambda calculus is not a natural way of thinking.
Lisp has almost nothing at all to do with the lambda calculus.
[McCarthy said he looked at Church's work, got the name "lambda" from
there, and couldn't understand anything else :-) So that's about all
the relationship there is -- one letter! Python has twice as much in
common with INTERCAL! :-)]
This isn't to make a value judgment, but I think it's an important thing
that the whole "functional/declarative v. procedural/OO" debate overlooks.


How does that have anything to do with Lisp, even if true? Lisp is,
if anything, more in the "procedural/OO" camp than the
"functional/declarative" camp. [Multiple inheritance was invented in
Lisp (for the Lisp Machine window system); ANSI CL was the first
language to have an OO standard; ...]

There's nothing wrong with functional languages, anyway (Lisp just
isn't one; try Haskell!)
[much clueless ranting elided]

--
Cogito ergo I'm right and you're wrong. -- Blair Houghton

(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))
Jul 18 '05 #35

P: n/a
John Thingstad <jo************@chello.no> writes:
You all seem to forget www.google.com
One of the most used distributed applications in the world.
Written in Common Lisp (xanalysis)


Nah, google runs on 10K (yes, ten thousand) computers and is written
in C++. Norvig in his talk at the last Lisp conference explained all
this. He said this gave them the ability to patch and upgrade without
taking the whole system down --- they'd just do it
machine-by-machine.

--
Fred Gilham gi****@csl.sri.com
The density of a textbook must be inversely proportional to the
density of the students using it. --- Dave Stringer-Calvert
Jul 18 '05 #36

P: n/a
JanC wrote:
Alex Martelli <al***@aleax.it> schreef:
But just about any mathematical theory is an abstraction of "mechanisms
underlying thought": unless we want to yell "AI" about any program doing
... Maybe intelligence IS just processing power, but we don't know (yet) how
all these processes in the human brain[*] work... ;-)

[*] Is there anything else considered "intelligent"?


Depends on who you ask -- e.g., each of dolphins, whales, dogs, cats,
chimpanzees, etc, are "considered intelligent" by some but not all people.
Alex

Jul 18 '05 #37

P: n/a
On Tue, 14 Oct 2003 16:58:45 GMT, Alex Martelli <al***@aleax.it>
wrote:
Isaac To wrote:
>>> "Alex" == Alex Martelli <al***@aleax.it> writes:
Alex> Chess playing machines such as Deep Blue, bridge playing programs
Alex> such as GIB and Jack ..., dictation-taking programs such as those
Alex> made by IBM and Dragon Systems in the '80s (I don't know if the
Alex> technology has changed drastically since I left the field then,
Alex> though I doubt it), are based on brute-force techniques, and their
Alex> excellent performance comes strictly from processing power.

> Nearly no program would rely only on non-brute-force techniques.

Yes, the temptation to be clever and cut corners IS always there...;-).


It's an essential part of an intelligent (rather than perfect)
solution.

The only one of the programs above where I know the approach taken is
Deep Blue. A perfect search solution is still not viable for chess,
and unlikely to be for some time. Deep Blue therefore made extensive
use of heuristics to make the search viable - much the same as any
chess program.

A heuristic is by definition fallible, but without heuristics the
search cannot be viable at all.

So far as I can see, this is essentially the approach that human chess
players take - a mixture of learned heuristics and figuring out the
consequences of their actions several moves ahead (ie searching). One
difference is that the balance in humans is much more towards
heuristics. Another is that human heuristics tend to be much more
sophisticated (and typically impossible to state precisely in words)
because of the way the brain works. Finally, there is intuition - but
as I believe that intuition is basically the product of unconscious
learned (and in some contexts innate) heuristics and procedural memory
applied to information processing, it really boils down to the same
thing - a heuristic search.

The human and computer approaches (basically their sets of heuristics)
differ enough that the computer may do things that a human would
consider stupid, yet the computer still beat the human grandmaster
overall.
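That mixture of search plus fallible heuristics is the standard shape of a game-tree searcher. A toy sketch (the game and all names here are invented; Deep Blue's actual evaluation was vastly more elaborate, but the skeleton is the same):

```python
def negamax(state, depth, game):
    """Depth-limited game-tree search: look ahead as far as we can
    afford, then fall back on a fallible heuristic evaluation."""
    moves = game.moves(state)
    if depth == 0 or not moves:
        return game.heuristic(state), None
    best_score, best_move = float("-inf"), None
    for move in moves:
        score, _ = negamax(game.apply(state, move), depth - 1, game)
        score = -score  # what is good for the opponent is bad for us
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

class ToyNim:
    """Take 1 or 2 sticks; whoever takes the last stick wins."""
    def moves(self, n):
        return [take for take in (1, 2) if take <= n]
    def apply(self, n, take):
        return n - take
    def heuristic(self, n):
        # Exact only at the very end of the game; 0 ("no idea") at the
        # search horizon, which is where a real engine's fallible
        # positional judgement would go.
        return -1 if n == 0 else 0

print(negamax(4, 10, ToyNim()))  # (1, 1): take one stick and win
```

Shrink the depth below what the game needs and the program starts making "stupid" moves exactly where its heuristic misjudges the horizon -- which is the failure mode described for Deep Blue above.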

Incidentally, I learned more about heuristics from psychology
(particularly social psychology) than I ever did from AI. You see,
like Deep Blue, we have quite a collection of innate, hard-wired
heuristics. Or at least you lot do - I lost a lot of mine by being
born with Asperger syndrome.

No, it wasn't general ability that I lost by the neural damage that
causes Asperger syndrome. I still have the ability to learn. My IQ is
over 100. But as PET and MRI studies of people with Asperger syndrome
reveal, my social reasoning has to be handled by the general
intelligence area of my brain instead of the specialist social
intelligence area that neurotypical people use. What makes the social
intelligence area distinct in neurotypicals? To me, the only answer
that makes sense is basically that innate social heuristics are held
in the social intelligence area.

If the general intelligence region can still take over the job of
social intelligence when needed, the basic 'algorithms' can't be
different. But domain-specific heuristics aren't so fundamental that
you can't make do without them (as long as you can learn some new
ones, presumably).

What does a neural network do under training? Basically it searches
for an approximation of the needed function - a heuristic. Even the
search process itself uses heuristics - "a better approximation may be
found by combining two existing good solutions" for recombination in
genetic algorithms, for instance.
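The recombination heuristic just mentioned, in miniature. The "OneMax" objective and all names below are invented for illustration:

```python
import random

def fitness(bits):
    # Invented objective ("OneMax"): more 1-bits is better.
    return sum(bits)

def crossover(a, b, rng):
    """One-point recombination: 'a better approximation may be found
    by combining two existing good solutions'."""
    point = rng.randrange(1, len(a))
    return a[:point] + b[point:]

rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(30)]
init_best = max(map(fitness, pop))

for _ in range(300):
    a = max(rng.sample(pop, 3), key=fitness)   # tournament selection
    b = max(rng.sample(pop, 3), key=fitness)
    child = crossover(a, b, rng)
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    if fitness(child) > fitness(pop[worst]):
        pop[worst] = child                     # keep the child if it helps

best = max(pop, key=fitness)
print(init_best, fitness(best))  # the best fitness never decreases
```

There is no guarantee of finding the optimum; the recombination rule is itself just a heuristic about where good solutions are likely to be.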

Now consider the "Baldwin Effect", described in Steven Pinkers "How
the mind works"...

"""
So evolution can guide learning in neural networks. Surprisingly,
learning can guide evolution as well. Remember Darwin's discussion of
"the incipient stages of useful structures" - the
what-good-is-half-an-eye problem. The neural-network theorists
Geoffrey Hinton and Steven Nowlan invented a fiendish example. Imagine
an animal controlled by a neural network with twenty connections, each
either excitatory (on) or neutral (off). But the network is utterly
useless unless all twenty connections are correctly set. Not only is
it no good to have half a network; it is no good to have ninety-five
percent of one. In a population of animals whose connections are
determined by random mutation, a fitter mutant, with all the right
connections, arises only about once every million (2^20) genetically
distinct organisms. Worse, the advantage is immediately lost if the
animal reproduces sexually, because after having finally found the
magic combination of weights, it swaps half of them away. In
simulations of this scenario, no adapted network ever evolved.

But now consider a population of animals whose connections can come in
three forms: innately on, innately off, or settable to on or off by
learning. Mutations determine which of the three possibilities (on,
off, learnable) a given connection has at the animal's birth. In an
average animal in these simulations, about half the connections are
learnable, the other half on or off. Learning works like this. Each
animal, as it lives its life, tries out settings for the learnable
connections at random until it hits upon the magic combination. In
real life this might be figuring out how to catch prey or crack a nut;
whatever it is, the animal senses its good fortune and retains those
settings, ceasing the trial and error. From then on it enjoys a higher
rate of reproduction. The earlier in life the animal acquires the
right settings, the longer it will have to reproduce at the higher
rate.

Now with these evolving learners, or learning evolvers, there is an
advantage to having less than one hundred percent of the correct
network. Take all the animals with ten innate connections. About one
in a thousand (2^10) will have all ten correct. (Remember that only one
in a million nonlearning animals had all twenty of its innate
connections correct.) That well-endowed animal will have some
probability of attaining the completely correct network by learning
the other ten connections; if it has a thousand occasions to learn,
success is fairly likely. The successful animal will reproduce
earlier, hence more often. And among its descendants, there are
advantages to mutations that make more and more of the connections
innately correct, because with more good connections to begin with, it
takes less time to learn the rest, and the chances of going through
life without having learned them get smaller. In Hinton and Nowlan's
simulations, the networks thus evolved more and more innate
connections. The connections never became completely innate, however.
As more and more of the connections were fixed, the selection pressure
to fix the remaining ones tapered off, because with only a few
connections to learn, every organism was guaranteed to learn them
quickly. Learning leads to the evolution of innateness, but not
complete innateness.
"""

A human brain is not so simple, but what this says to me is (1) that
anything that allows a person to learn important stuff (without
damaging flexibility) earlier in life should become innate, at least
to a degree, and (2) that learning should work together with
innateness - there is no hard divide (some aspects of neurological
development suggest this too). So I would expect some fairly fixed
heuristics (or partial heuristics) to be hard wired, and I figure
autism and Asperger syndrome are fairly big clues as to what is
innate. Stuff related to nonverbal communication such as body
language, for instance, and a tendency to play activities that teach
social stuff in childhood.

And given that these connectionist heuristics cannot be stated in
rational terms, they must basically form subconscious processes which
generate intuitions. Meaning that the subconscious provides the most
likely solutions to a problem, with conscious rational thought only
handling the last 5% ish of figuring it out. Which again fits my
experience - when I'm sitting there looking stupid and people say 'but
obviously it's z' and yes, it obviously is - but how did they rule out
a, b, c, d, e, f and so on and so forth so damned quick? Maybe they
didn't. Maybe their subconscious heuristics suggested a few 'obvious'
answers to consider, so they never had to think about a, b, c...
Human intelligence is IMO not so hugely different from Deep Blue, at
least in principle.
no understanding, no semantic modeling.
no concepts, no abstractions.
Sounds a bit like intuition to me. Of course it would be nice if
computers could invent a rationalisation, the way that human brains
do.

What? I hear you say...

When people have had their corpus callosum cut, so the left and right
cerebral cortex cannot communicate directly, you can present an
instruction to one eye ('go get a coffee' for instance) and the
instruction will be obeyed. But ask why to the other eye, and you get
an excuse (such as 'I'm thirsty') with no indication of any awareness
that the request was made.

You can even watch action potentials in the brain and predict people's
actions from them. In fact, if you make a decision in less than a
couple of seconds, you probably never thought about it at all -
whatever you may remember. The rationalisation was made up after the
fact, and 'backdated' in your memory.

Hmmm - how long does it take to reply to someone in a typical
conversation...

Actually, it isn't quite so bad - the upper end of this is about 3
seconds IIRC, but the absolute lower end is something like half a
second. You may still have put some thought into it in advance down to
that timescale (though not much, obviously).

Anyway, why?

Well, the textbooks don't tend to explain why, but IMO there is a
fairly obvious explanation. We need excuses to give to other people to
explain our actions. That's easiest if we believe the excuses
ourselves. But if the actions are suggested by subconscious
'intuition' processes, odds are there simply isn't a rational line of
reasoning that can be put into words. Oh dear - better make one up
then!

Again, that's not AAAI's claim, as I read their page. If a problem
can be solved by brute force, it may still be interesting to "model
the thought processes" of humans solving it and implement _those_ in
a computer program -- _that_ program would be doing AI, even for a
problem not intrinsically "requiring" it -- so, AI is not about what
problem is being solved, but (according to the AAAI as I read them)
also involves considerations about the approach.


1. This suggests that the only intelligence is human
intelligence. A very anthropocentric viewpoint.

2. Read some cognitive neuroscience, some social psychology,
basically whatever you can get your hands on that has cognitive
leanings (decent textbooks - not just pop psychology) - and
then tell me that the human mind doesn't use what you call brute
force methods.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #38

P: n/a
larry wrote:
"Francis Avila" <fr***********@yahoo.com> wrote in message
UD: things
Gxyz: x is baked at y degrees for z minutes.
Hx: x is a cake.
Ix: x is batter.

For all x, ( (Ix & Gx(350)(45)) > Hx )

(i.e. "Everything that's a batter and put into a 350 degree oven for 45
minutes is a cake")

...instead of...

1. Heat the oven to 350 degrees.
2. Place batter in oven.
3. Bake 45 minutes
4. Remove cake from oven.

(i.e. "To make a cake, bake batter in a 350 degree oven for 45 minutes")
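The two formulations above can be put side by side as runnable Python; the function names here are invented for illustration:

```python
# Imperative: prescribe the steps, in a fixed order.
def make_cake():
    return ["heat the oven to 350 degrees",
            "place batter in oven",
            "bake 45 minutes",
            "remove cake from oven"]

# Declarative: state the rule "for all x, (Ix & Gx(350)(45)) -> Hx"
# and let a checker decide whether given facts satisfy it, without
# prescribing any order of operations.
def is_cake(x, degrees, minutes):
    is_batter = (x == "batter")                        # Ix
    baked_right = (degrees == 350 and minutes == 45)   # Gxyz
    return is_batter and baked_right                   # Hx

assert is_cake("batter", 350, 45)
assert not is_cake("bread dough", 350, 45)
assert len(make_cake()) == 4
```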


This is a great opportunity to turn this into a thread where people swap yummy
recipes. :-) In any case, that's more constructive that all that "language X
is better than language Y" drivel. And it tastes better too. :-) So I start
off with my mom's potato salad:

Ingredients:
potatoes (2 kilos)
meat (1 pound; this should be the kind of beef that comes in little threads)
little silver onions
small pickles
pepper, salt, mayonnaise, mustard

Boil potatoes. Cut meat, potatoes, onions and pickles in little pieces, and
mix everything. Add pepper, salt. Mix with mayonnaise. Add a tiny bit of
mustard. Put in freezer for one day.

There are many variants of this; some people add apple, vegetables, etc.

So, what is YOUR favorite recipe? (Parentheses soup doesn't count.)

Hungrily y'rs,

--
Hans (ha**@zephyrfalcon.org)
http://zephyrfalcon.org/

Jul 18 '05 #39

Stephen Horne <$$$$$$$$$$$$$$$$$@$$$$$$$$$$$$$$$$$$$$.co.uk> wrote:

Now consider the "Baldwin Effect", described in Steven Pinker's "How
the Mind Works"...
<SNIP>
A human brain is not so simple, but what this says to me is (1) that
anything that allows a person to learn important stuff (without
damaging flexibility) earlier in life should become innate, at least
to a degree, and (2) that learning should work together with
innateness - there is no hard divide (some aspects of neurological
development suggest this too). So I would expect some fairly fixed
heuristics (or partial heuristics) to be hard wired, and I figure
autism and Asperger syndrome are fairly big clues as to what is
innate. Stuff related to nonverbal communication such as body
language, for instance, and a tendency to play activities that teach
social stuff in childhood.


Sometimes I wonder whether Neandertals (a now extinct human
subspecies) would be more relying on innate knowledge than modern Homo
Sapiens. Maybe individual Neandertals were slightly more intelligent
than modern Sapiens but since they were not so well organized (they
rather made their own fire than using one common fire for the whole
clan) evolutionary pressure gradually favored modern Sapiens.

Looking at it from this perspective the complete modern human
population suffers from Aspergers syndrome on a massive scale, while a
more natural human species is now sadly extinct. A great loss, and
something to grieve about instead of using them in unfavorable
comparisons to modern society, which I find a despicable practice.

However let the Aspergers, and maybe later robots (no innate knowledge
at all?) inherit the Earth, whether programmed with Python's
Neandertal-like bag of unchangeable tricks or with Lisp-like adaptable
Asperger's programming.

Anton

Jul 18 '05 #40

Francis Avila wrote:
"Christian Lynbech" <ch***************@ericsson.com> wrote in message
news:of************@situla.ted.dk.eu.ericsson.se...
>>>"mike420" == mike420 <mi*****@ziplip.com> writes:


It is still a question of heated debate what actually killed the lisp
machine industry.

I have so far not seen anybody dipsuting that they were a marvel of
technical excellence, sporting stuff like colour displays, graphical
user interfaces and laser printers way ahead of anybody else.

I think what helped kill the lisp machine was probably lisp: many people
just don't like lisp, because it is a very different way of thinking that
most are rather unaccustomed to. Procedural, imperative programming is
simply a paradigm that more closely matches the ordinary way of thinking
(ordinary = in non-programming, non-computing spheres of human endeavor) than
functional programming.


Wrong in two ways:

1) Lisp is not a functional programming language.

2) Imperative programming does not match "ordinary" thinking. When you
visit a dentist, do you explain to her each single step she should do,
or do you just say "I would like to have my teeth checked"?
Pascal

Jul 18 '05 #41

On Fri, 17 Oct 2003 12:04:58 +0200, an***@vredegoor.doge.nl (Anton
Vredegoor) wrote:
Stephen Horne <$$$$$$$$$$$$$$$$$@$$$$$$$$$$$$$$$$$$$$.co.uk> wrote:

Now consider the "Baldwin Effect", described in Steven Pinker's "How
the Mind Works"...


<SNIP>
A human brain is not so simple, but what this says to me is (1) that
anything that allows a person to learn important stuff (without
damaging flexibility) earlier in life should become innate, at least
to a degree, and (2) that learning should work together with
innateness - there is no hard divide (some aspects of neurological
development suggest this too). So I would expect some fairly fixed
heuristics (or partial heuristics) to be hard wired, and I figure
autism and Asperger syndrome are fairly big clues as to what is
innate. Stuff related to nonverbal communication such as body
language, for instance, and a tendency to play activities that teach
social stuff in childhood.


Sometimes I wonder whether Neandertals (a now extinct human
subspecies) would be more relying on innate knowledge than modern Homo
Sapiens. Maybe individual Neandertals were slightly more intelligent
than modern Sapiens but since they were not so well organized (they
rather made their own fire than using one common fire for the whole
clan) evolutionary pressure gradually favored modern Sapiens.


As I understand it, Neanderthals had a very stable lifestyle and level
of technology for quite a long time while us cro-magnons were still
evolving somewhere around the North East African coast. In accordance
with the Baldwin effect, I would therefore expect much more of their
behaviour to be innate.

While innate abilities are efficient and available early in life, they
are also relatively inflexible. When modern humans arrived in Europe,
the climate was also changing. Neanderthals had a lifestyle suited for
woodlands, but the trees were rapidly disappearing. Modern humans
preferred open space, but more importantly hadn't had a long period
with a constant lifestyle and were therefore more flexible.

Asperger syndrome inflexibility is a different thing. If you were
constantly overloaded (from lack of intuition about what is going on,
thus having to figure out everything consciously), I expect you would
probably cling too much to what you know as well.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #42


Ever noticed how upgrading a PC tends to lead to disaster?

Upgraded to 768MB yesterday, which I need for Windows 2000, but now
Windows 98 (which I also need, sadly) has died. Even with that
system.ini setting to stop it using the extra memory. I'm pretty sure
it's a driver issue, but I haven't worked out which driver(s) are to
blame.

Anyway, I haven't had a chance to read your post properly yet because
I've been farting around with old backups and reinstalls trying to
figure this out, but I will get around to it in the next couple of
days.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #43

Stephen Horne <$$$$$$$$$$$$$$$$$@$$$$$$$$$$$$$$$$$$$$.co.uk> wrote:
Asperger syndrome inflexibility is a different thing. If you were
constantly overloaded (from lack of intuition about what is going on,
thus having to figure out everything consciously), I expect you would
probably cling too much to what you know as well.


It is tempting to claim that my post was implying Aspergers to be
potentially *more* flexible because of lack of innateness, however
knowing next to nothing about Aspergers and making wild claims,
linking it to anthropology and computer science, may be a bit prone to
making oneself misunderstood and to possibly hurting people
inadvertently in the process of formulating some consistent theory. So
I'd rather apologize for any inconveniences and confusion produced
so far, and humbly ask my post to be ignored.

Anton

Jul 18 '05 #44

Someone wrote:
It is still a question of heated debate what actually killed the lisp machine industry.
The premise of this question is that there actually was a lisp-machine
industry (LMI) to be killed. My memory is that it was stillborn and
that the promoters never presented a *convincing* value proposition to
enough potential customers to really get it off the ground.

and continued: I have so far not seen anybody dipsuting that they were a marvel of technical excellence,
Never having seen one, or read an independent description, I cannot
confirm or 'dipsute' this. But taking this as given, there is the
overlooked matter of price. How many people, Lispers included, are
going to buy, for instance, an advanced, technically excellent,
hydrogen fuel cell car, complete with in-garage hydrogen generator
unit, for, say $200,000.
sporting stuff like colour displays, graphical
user interfaces and laser printers way ahead of anybody else.

I believe these are disputable. The American broadcast industry
switched to color displays in the 50s-60s. Around 1980 there were
game consoles (specialized computers) and small 'general purpose'
computers that piggybacked on color televisions. TV game consoles
thrive today while general PC color computing switched (mid80s) to
computer monitors with the higher resolution needed for text. It was
their use with PCs that brought the price down to where anyone could
buy one.

Did lisp machines really have guis before Xerox and Apple?

Did lisp machine companies make laser printers before other companies
like HP made them for anyone to use? If so, what did they price them
at?
Francis Avila wrote:
I think what helped kill the lisp machine was probably lisp: many
people just don't like lisp, because it is a very different way of thinking that most are rather unaccustomed to.


"Pascal Costanza" <co******@web.de> responded: Wrong in two ways:


In relation to the question about the would-be Lisp machine industry,
this answer, even if correct, is beside the point. People buy on the
basis of what they think. One answer may be that the LMI failed to
enlighten enough people as to the error of their thoughts.

I wonder about another technical issue: intercompatibility. I
strongly believe that media incompatibility helped kill the once
thriving 8080/Z80/CPM industry. (In spite of binary compatibility,
there were perhaps 30 different formats for 5 1/4 floppies.) I
believe the same about the nascent Motorola 68000 desktop Unix
industry of the early 80s. (My work unit has some, and I loved them.)
So I can't help wondering if the LMI made the same blunder.

Did all the LMI companies adopt the same version of Lisp so an outside
Lisper could write one program and sell it to run on all? Or did they
each adopt proprietary versions so they could monopolize what turned
out to be dried-up ponds? Again, did they all adopt uniform formats
for distribution media, such as floppy disks, so that developers could
easily distribute to all? Or did they differentiate to monopolize?

Terry J. Reedy
Jul 18 '05 #45

Stephen Horne wrote:

[snip]

While innate abilities are efficient and available early in life,
they are also relatively inflexible. When modern humans arrived in
Europe, the climate was also changing. Neaderthals had a lifestyle
suited for woodlands, but the trees were rapidly disappearing.
Modern humans preferred open space, but more importantly hadn't
had a long period with a constant lifestyle and were therefore
more flexible.

Asperger syndrome inflexibility is a different thing. If you were
constantly overloaded (from lack of intuition about what is going
on, thus having to figure out everything consciously), I expect
you would probably cling too much to what you know as well.


Asperger's syndrome? -- I did a search and read about it. And,
all this time I thought I was a *programmer*. If I had only known
that I've had Asperger's disorder, I could have saved myself all
those many years of debugging code. It's been fun though,
especially with Python, even if the DSM IV does authoritatively say
that I'm just crazy.

Is there any way that I can pass this off as an efficient
adaptation to my environment (Linux, Python, XML, text processing,
the Web, etc.)? Not likely I suppose.

I look forward to DSM V's definition of PPD (Python programmer's
disorder): persistent and obsessive attention to invisible
artifacts (called variously "whitespace" and "indentation").

Here is a link. Remember, the first step toward recovery is
understanding what you've got:

http://www.udel.edu/bkirby/asperger/aswhatisit.html

From now on, I will not have to worry about how to spell
"programmer". When the form says occupation, I'll just write down
299.80.

From the DSM IV:

=====================================================================
Diagnostic Criteria For 299.80 Asperger's Disorder

A. Qualitative impairment in social interaction, as manifested by at
least two of the following:
1. marked impairments in the use of multiple nonverbal behaviors such
as eye-to-eye gaze, facial expression, body postures, and gestures
to regulate social interaction
2. failure to develop peer relationships appropriate to developmental
level
3. a lack of spontaneous seeking to share enjoyment, interests, or
achievements with other people (e.g. by a lack of showing, bringing,
or pointing out objects of interest to other people)
4. lack of social or emotional reciprocity

B. Restricted repetitive and stereotyped patterns of behavior,
interests, and activities, as manifested by at least one of the
following:
1. encompassing preoccupation with one or more stereotyped and
restricted patterns of interest that is abnormal either in intensity
or focus
2. apparently inflexible adherence to specific, nonfunctional routines
or rituals
3. stereotyped and repetitive motor mannerisms (e.g., hand or finger
flapping or twisting, or complex whole-body movements)
4. persistent preoccupation with parts of objects

C. The disturbance causes clinically significant impairment in social,
occupational, or other important areas of functioning

D. There is no clinically significant general delay in language (e.g.,
single words used by age 2 years, communicative phrases used by age 3
years)

E. There is no clinically significant delay in cognitive development or
in the development of age-appropriate self-help skills, adaptive
behavior (other than social interaction), and curiosity about the
environment in childhood

F. Criteria are not met for another specific Pervasive Developmental
Disorder or Schizophrenia

=====================================================================

Dave

--
Dave Kuhlman
http://www.rexx.com/~dkuhlman
dk******@rexx.com
Jul 18 '05 #46

"Terry Reedy" <tj*****@udel.edu> writes:

[some opinions and questions about Lisp Machines]

I'm going to take the questions out of order. I'm also leaving the
crosspost in because this is an immediate response to a Pythonista.
I wonder about another technical issue: intercompatibility. Did all
the LMI companies adopt the same version of Lisp so an outside
Lisper could write one program and sell it to run on all?
As a first-order approximation, there really was only one Lisp machine
company: Symbolics. Xerox, Lisp Machine Inc. and TI were minor players
in the market.

Nevertheless, the effort to create a common Lisp specification that
would be portable across all lisp implementations produced ....
Common Lisp. This was in the early 80's while the industry was still
growing.
The premise of this question is that there actually was a lisp-machine
industry (LMI) to be killed. My memory is that it was stillborn and
that the promoters never presented a *convincing* value proposition to
enough potential customers to really get it off the ground.
The first principle of marketing is this: the minimum number of
customers needed is one. The Lisp machine customers were typically
large, wealthy companies with a significant investment in research
like petrochemical companies and defense contractors. The Lisp
machine was originally developed as an alternative to the `heavy
metal' of a mainframe, and thus was quite attractive to these
companies. They were quite convinced of the value. The problem was
that they didn't *stay* convinced.
How many people, Lispers included, are going to buy, for instance,
an advanced, technically excellent, hydrogen fuel cell car, complete
with in-garage hydrogen generator unit, for, say $200,000.


Very few. But consider the Ford Motor Company. They have spent
millions of dollars to `buy' exactly that. There are successful
companies whose entire customer base is Ford.

The Lisp industry was small, no doubt about it, but there was (for
a while) enough of an industry to support a company.

Jul 18 '05 #47

[followup set to comp.lang.lisp only]

Terry Reedy writes:
The premise of this question is that there actually was a lisp-machine
industry (LMI) to be killed. My memory is that it was stillborn and
LMI may be a confusing acronym for that. LMI (Lisp Machines, Inc.) was
just one of the vendors, together with Symbolics, Texas Instruments,
Xerox and a few minor ones (e.g. Siemens).

As for your point, you may check the book:

"The Brain Makers - Genius, Ego, and Greed in the Quest for Machines
that Think"
H.P. Newquist
SAMS Publishing, 1994
ISBN 0-672-30412-0

that the promoters never presented a *convincing* value proposition to
enough potential customers to really get it off the ground.
From page 2 of the above mentioned book:

[...] Symbolics [...] had been able to accomplish in those four
years what billion-dollar computer behemoths like IBM, Hewlett
Packard, and Digital Equipment could not: It had brought an
intelligent machine to market.

>> I have so far not seen anybody dipsuting that they were a marvel of technical excellence,


Never having seen one, or read an independent description, I cannot
confirm or 'dipsute' this. But taking this as given, there is the


Here are a few relevant Google queries (I am offline and I don't have
the URLs handy):

lisp machines symbolics ralf moeller museum
minicomputer orphanage [see these PDF documentation sections: mit,
symbolics, ti, xerox]

You may also search for "lisp machine video" at this weblog:

http://lemonodor.com

Did lisp machines really have guis before Xerox and Apple?
Xerox was also a Lisp Machine vendor. If I recall correctly, the first
Lisp Machine was developed at MIT in the mid 1970s, and it had a GUI.

Did all the LMI companies adopt the same version of Lisp so an outside
Lisper could write one program and sell it to run on all? Or did they


At least early products of major Lisp Machines vendors were
descendants of the CADR machine developed at MIT.
Paolo
--
Paolo Amoroso <am*****@mclink.it>
Jul 18 '05 #48

"Terry Reedy" <tj*****@udel.edu> wrote in message news:<DL********************@comcast.com>...
Someone wrote:
>It is still a question of heated debate what actually killed the lisp machine industry.

The premise of this question is that there actually was a lisp-machine
industry (LMI) to be killed. My memory is that it was stillborn and
that the promoters never presented a *convincing* value proposition to
enough potential customers to really get it off the ground.


Revenues of Symbolics in 1986 were in the range of about 100 million
dollars. This was probably the peak time.
Never having seen one, or read an independent description, I cannot
confirm or 'dipsute' this. But taking this as given, there is the
overlooked matter of price. How many people, Lispers included, are
going to buy, for instance, an advanced, technically excellent,
hydrogen fuel cell car, complete with in-garage hydrogen generator
unit, for, say $200,000.
Customers were defence industries, research labs, animation companies,
etc.

A machine usable for 3d animation from Symbolics was well in the
$100,000 range. Each 3d software module might have been around
$25,000 - remember that was in years 1985 - 1990.

From then prices went down. The mainstream workstation business model
switched rapidly (to Unix workstations) and Symbolics could not
adapt (and succeed) fast enough. They tried by:
- selling a half-assed PC-based solution
- selling VME cards for SUNs
- selling NuBUS cards for Mac II
- and finally selling an emulator for their OS running on DEC Alphas
I believe these are disputable. The American broadcast industry
switched to color displays in the 50s-60s. Around 1980 there were
game consoles (specialized computers) and small 'general purpose'
computers that piggybacked on color televisions. TV game consoles
thrive today while general PC color computing switched (mid80s) to
computer monitors with the higher resolution needed for text. It was
their use with PCs that brought the price down to where anyone could
buy one.
Sure, but Symbolics could do 3d animations in full HDTV quality
in 1987 (or earlier?). I've seen animations by Sony or Smarties
done on Lisp machines. Several TV stations did their broadcast
quality logo animations on Lisp machines. The animations
for the groundbreaking TRON movie were done on Lisp machines.
Etc.
Did lisp machines really have guis before Xerox and Apple?
Xerox was producing Lisp machines, too. Of course they
had graphical user interfaces - they were developed at
about the same time as the Smalltalk machines of Xerox.
So, they did not have it before Xerox - they were Xerox. ;-)

MIT Lisp machines had megabit b&w displays with
mouse-driven GUIs before the 80s, IIRC. In the mid-1980s
they switched to a new revolutionary object-oriented graphics
system (Dynamic Windows).
Did lisp machine companies make laser printers before other companies
like HP made them for anyone to use? If so, what did they price them
at?


Symbolics was just reselling laser printers. The Symbolics OS
could output to PostScript sometime in the mid-1980s - the Concordia
system was software for book/manual production and could
produce large-scale hypertext documents (the Symbolics manual set
had almost 10,000 pages) - printing to PostScript.

Xerox had of course connected their Lisp machines to their
laser printers.

More stuff on: http://kogs-www.informatik.uni-hambu...symbolics.html

Remember, most of that is HISTORY.
Jul 18 '05 #49

Joe Marshall wrote:
... The Lisp machine customers were typically
large, wealthy companies with a significant investment in research
like petrochemical companies and defense contractors. The Lisp
machine was originally developed as an alternative to the `heavy
metal' of a mainframe, and thus was quite attractive to these
companies. They were quite convinced of the value. The problem was
that they didn't *stay* convinced.
How many people, Lispers included, are going to buy, for instance,
an advanced, technically excellent, hydrogen fuel cell car, complete
with in-garage hydrogen generator unit, for, say $200,000.


Maybe that was part of the problem: all of the Lisp installed base lived
on an expensive platform (unless compared with big iron). When the AI
projects did not deliver, there was no grass-roots safety net to fall
back on and Lisp disappeared from radar in a wink.

This time Lisp is growing slowly, with an early adopter here and an
early adopter there. And this time Lisp requires /no/ special hardware.
And there is a standard so there is no fragmentation. Except of course
that the first thing anyone does after learning Lisp is start a project
to create a new version of Lisp. :)

--

clinisys, inc
http://www.tilton-technology.com/
---------------------------------------------------------------
"[If anyone really has healing powers,] I would like to call
them about my knees."
-- Tenzin Gyatso, the Fourteenth Dalai Lama
Jul 18 '05 #50

303 Replies
