
BIG successes of Lisp (was ...)

In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
successes.

a. orbitz.com web site uses Lisp for algorithms, etc.
b. Yahoo store was originally written in Lisp.
c. Emacs

The issues with these will probably come up, so I might as well
mention them myself (which will also make this a more balanced
post)

a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while)

b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)

c. Emacs has a reputation for being slow and bloated. But then
it's not written in Common Lisp.

Are ViaWeb and Orbitz bigger successes than LaTeX? Do they
have more users? It depends. Does viewing a PDF file made
with LaTeX make you a user of LaTeX? Does visiting a Yahoo
Store site make you a user of ViaWeb?

For the sake of being balanced: there were also some *big*
failures, such as Lisp Machines. They failed because
they could not compete with UNIX (SUN, SGI) in a time when
performance, multi-userism and uptime were of prime importance.
(Older LispM's just leaked memory until they were shut down,
newer versions overcame that problem but others remained)

Another big failure that is often _attributed_ to Lisp is AI,
of course. But I don't think one should blame a language
for AI not happening. Marvin Minsky, for example,
blames Robotics and Neural Networks for that.
Jul 18 '05
james anderson <ja************@setf.de> writes:
the more likely approach to "python-in-lisp" would be a
reader-macro/tokenizer/parser/translator which compiled the
"python-domain-specific-language" into s-expressions.


That's not really feasible because of semantic differences between
Python and Lisp. It should be possible, and may be worthwhile, to do
that with modified Python semantics.
Jul 18 '05 #181
Brian Kelley <bk*****@wi.mit.edu> writes:
The file is closed when the reference count goes to zero, in this case
when it goes out of scope. This has nothing to do with the garbage
collector, just the reference counter.
There is nothing in the Python spec that says that. One particular
implementation (CPython) happens to work that way, but another one
(Jython) doesn't. A Lisp-based implementation might not either.
At least, that's the way I
understand it, and I have been wrong before(tm). The upshot is that it
has worked in my experience 100% of the time and my code is structured
to use (abuse?) this. How is this more difficult?


Abuse is the correct term. If your code is relying on stuff being
gc'd as soon as it goes out of scope, it's depending on CPython
implementation details that are over and above what's spelled out in
the Python spec. If you run your code in Jython, it will fail.
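
For concreteness, the try/finally idiom being referred to looks roughly
like this (the filename is made up); the close is guaranteed whether or
not the body raises, so nothing depends on CPython's reference counting:

f = open("data.txt", "w")
try:
    f.write("A couple of lines\n")
finally:
    # runs on success and on exceptions alike, in CPython, Jython, etc.
    f.close()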
Jul 18 '05 #182
Matthew Danish wrote:
I would see this as a dependence on an implementation artifact. This
may not be regarded as an issue in the Python world, though.
As people have pointed out, I am abusing the C-implementation quite
roundly. That being said, I tend to write proxies/models (see
FileSafeWrapper) that do the appropriate action on failure modes and
don't leave it up to the garbage collector. Refer to the "Do as I Do, not
as I Say" line of reasoning.
This is not to say that Python couldn't achieve a similar solution to
the Lisp one. In fact, it could get quite nearly there with a
functional solution, though I understand that it is not quite the same
since variable bindings caught in closures are immutable. And it would
probably be most awkward considering Python's lambda.
I don't consider proxies as "functional" solutions, but that might just
be me. They are just another way of generating something other than the
default behavior.

Python's lambda is fairly awkward to start with, it is also slower than
writing a new function. I fully admit that I have often wanted lambda
to be able to look up variables in the calling frame.

foo = lambda x: object.insert(x)
object = OracleDatabase()
foo(x)
object = MySqlDatabase()
foo(x)

But in practice I never write lambdas this way. I always bind them to
a namespace (in this case a class).

class bar:
    def some_fun(x):
        foo = lambda self=self, x: self.object.insert(x)
        foo(x)

Now I could use foo on another object as well.
foo(object, x)
I'm not sure I understand this paragraph. The macro only executes the
body code if the file is successfully opened. The failure mode can be
specified by a keyword argument.
I can explain what I meant with an example: suppose you wanted to tell
the user what file failed to open/write and specify a new file to open.
You will have to write a handler for this and supply it to
(with-open-file ...) or catch the error some other way.
Not sure what `using macros as a default handler' means.
Apologies, I meant to say that writing a macro to handle particular
exceptions in a default-application wide way is a good thing and
appropriate.
The WITH-OPEN-FILE macro is not really an example of a macro that
performs something unique. It is, I find, simply a handy syntactic
abstraction around something that is more complicated than it appears at
first. And, in fact, I find myself creating similar macros all the time
which guide the use of lower-level functions.


Right. I was only trying to point out, rather lamely I might add,
that the macros I have been seeing I would solve in an object-oriented
manner. This might simply be because python doesn't have macros. But I
like the thought of "here is your fail-safe file object, use it however
you like". It is hard for me to say which is 'better' though, I tend to
use the language facilities available (and abuse them as pointedly
stated), in fact it took me a while to realize that (with-open-file) was
indeed a macro, it simply was "the way it was done(TM)" for a while.
Certainly, new (and old) users can forget to use their file I/O in the
appropriate macro as easily as forgetting to use try: finally:

In python I use my FileSafeWrapper(...) that ensures that the file is
properly closed on errors and the like. As I stated, this wasn't handed
to me by default, though, the way I remember (with-open-file ...) was from
my CHLS days.

So let me ask a lisp question. When is it appropriate to use a macro
and when is it appropriate to use a proxy or polymorphism? Perhaps
understanding this would break my macro stalemate.

p.s. given some of the other posts, I am heartened by the civility of
this particular thread.

Jul 18 '05 #183
Tayss wrote:
http://groups.google.com/groups?hl=e...%40alcyone.com
Erik Max Francis explains that expecting the system to close files
leads to brittle code. It's not safe or guaranteed.

After learning Python, people write this bug for months, until they
see some article or usenet post with the try/finally idiom.
Some more than most. Interestingly, I write proxies that close
resources on failure but tend to let files do what they want.
Now, I love Python, but this really is a case where lots of people
write lots of potentially hard-to-reproduce bugs because the language
suddenly stopped holding their hands. This is where it really hurts
Python to make the tradeoff against having macros. The tradeoff may
be worth it, but ouch!


I chose a bad example of abusing the C-implementation. The main thrust
of my argument is that you don't need macros in this case, i.e. there
can be situations with very little tradeoff.

class SafeFileWrapper:
    def __init__(self, f):
        self.f = f

    def write(self, data):
        try:
            self.f.write(data)
        except:
            self.f.close()
            self.f = None
            raise

    def close(self):
        if self.f:
            self.f.close()
    ...

Now the usage is:

f = SafeFileWrapper(open(...))
print >> f, "A couple of lines"
f.close()

So now I just grep through my code for open and replace it with
SafeFileWrapper(open...) and all is well again.

I still have to explicitly close the file though when I am done with it,
unless I don't care if it is open through the application run. But at
least I am guaranteed that the file is closed either when I tell it to
or on an error.
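
As a rough sketch (not from the original posts), the bookkeeping that
WITH-OPEN-FILE automates can also be approximated in Python with a plain
higher-order function rather than a macro; the helper name and arguments
here are invented for illustration:

def with_open_file(path, mode, body):
    # open the file, hand it to the caller's body, and always close it
    f = open(path, mode)
    try:
        return body(f)
    finally:
        f.close()

# hypothetical usage: the file is closed even if the write raises
with_open_file("out.txt", "w", lambda f: f.write("A couple of lines\n"))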

Brian Kelley

Jul 18 '05 #184
Brian Kelley wrote:
class bar:
    def some_fun(x):
        foo = lambda self=self, x: self.object.insert(x)
        foo(x)

oops, that should be
foo = lambda x, self=self: self.object.insert(x)


Jul 18 '05 #185
On Thu, 23 Oct 2003 14:37:57 -0400, Brian Kelley wrote:
Python's lambda is fairly awkward to start with, it is also slower than
writing a new function. I fully admit that I have often wanted lambda
to be able to look up variables in the calling frame.


They can since Python 2.1 (in 2.1 with from __future__ import nested_scopes,
in 2.2 and later by default). This applies to nested functions as well.

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #186
JCM
In comp.lang.python Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> wrote:
On Thu, 23 Oct 2003 14:37:57 -0400, Brian Kelley wrote:
Python's lambda is fairly awkward to start with, it is also slower than
writing a new function. I fully admit that I have often wanted lambda
to be able to look up variables in the calling frame.

They can since Python 2.1 (in 2.1 with from __future__ import nested_scopes,
in 2.2 and later by default). This applies to nested functions as well.


Variables in lexically enclosing scopes will be visible, but not
variables in calling frames.
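
A small illustration of that distinction (all names invented): a lambda
can see names in its lexically enclosing function, but not the locals of
whatever function happens to call it:

def enclosing():
    y = 42
    return lambda: y           # closes over the enclosing scope's 'y'

def make_lambda():
    return lambda: y           # no lexical 'y' here; lookup falls back to globals

def caller():
    y = 99                     # a local of the *calling* frame
    return make_lambda()()     # NameError: the lambda never sees this 'y'

print(enclosing()())           # prints 42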
Jul 18 '05 #187
Sorry for the long delay. Turns out my solution was to upgrade to
Windows XP, which has better compatibility with Windows 98 stuff than
Windows 2000. So I've had some fun reinstalling everything. On the
plus side, no more dual booting.

Anyway...
On Thu, 16 Oct 2003 18:52:04 GMT, Alex Martelli <al***@aleax.it>
wrote:
Stephen Horne wrote:
...
>no understanding, no semantic modeling.
>no concepts, no abstractions.

Sounds a bit like intuition to me. ...
What is your definition of intuition?
I can accept something like, e.g.:
2. insight without conscious reasoning
3. knowing without knowing how you know


but they require 'insight' or 'knowing', which are neither claimed
nor disclaimed in the above.


You can take 'insight' and 'knowing' to mean more than I (or the
dictionary) intended, but in this context they purely mean having
access to information (the results of the intuition).

Logically, those words simply cannot mean anything more in this
context. If you have some higher level understanding and use this to
supply the answer, then that is 'conscious reasoning' and clearly
shows that you know something of how you know (ie how that
information was derived). Therefore it is not intuition anymore.

Of course the understanding could be a rationalisation - kind of a
reverse engineered explanation of why the intuition-supplied answer is
correct. That process basically adds the 'understanding' after the
fact, and is IMO an everyday fact of life (as I mentioned in an
earlier post, I believe most people only validate and select
consciously from an unconsciously suggested subset of likely solutions
to many problems). However, this rationalisation (even if backdated in
memory and transparent to the person) *is* after the fact - the
intuition in itself does not imply any 'insight' or 'knowing' at any
higher level than the simple availability of information.
There are many things I know,
without knowing HOW I do know -- did I hear it from some teacher,
did I see it on the web, did I read it in some book? Yet I would
find it ridiculous to claim I have such knowledge "by intuition":
Of course. The phrase I used is taken directly from literature, but
the 'not knowing how you know' is obviously intended to refer to a
lack of awareness of how the solution is derived from available
information. Memory is obviously not intuition, even if the context in
which the memory was laid down has been forgotten. I would even go so
far as to suggest that explicit memory is never a part of intuition.
Heuristics (learned or otherwise) are not explicit memories, and
neither is the kind of procedural memory which I suspect plays a
crucial role in intuition.

One thing that has become clear in neuroscience is that almost all
(perhaps literally all) parts and functions of the brain benefit from
learning. Explicit memory is quite distinct from other memory
processes - it serves the conscious mind in a way that other memory
processes do not.

For instance, when a person lives through a traumatic experience, a
very strong memory of that experience may be stored in explicit memory
- but not always. Whether remembered or not, however, that explicit
memory has virtually nothing to do with the way the person reacts to
cues that are linked to that traumatic experience. The kind of memory
that operates to trigger anxiety, anger etc has very weak links to the
conscious mind (well, actually it has very strong ones, but only so
that it can control the conscious mind - not the other way around). It
is located in the amygdala, it looks for signs of danger in sensory
cues, and when it finds any such cues it triggers the fight-or-flight
stress response.

Freudian repression is a myth. When people experience chronic stress
over a period of years (either due to ongoing traumatic experience or
due to PTSD) the hippocampus (crucial to explicit memory) is damaged.
The amygdala (the location of that stress-response triggering implicit
memory) however is not damaged. The explicit memory can be lost while
the implicit memory remains and continues to drive the PTSD symptoms.

It's no surprise, therefore, that recovered memories so often turn out
to simply be false - but still worth considering how this happens.
There are many levels. For instance, explicit memories seem to be
'lossy compressed' by basically factoring out the kinds of context
that can later be reconstructed from 'general knowledge'. Should your
general knowledge change between times, so does the reconstructed
memory.

At a more extreme level, entire memories can be fabricated. The harder
you search for memories, the more they are filled in by made up stuff.
And as mentioned elsewhere, the brain is quite willing to invent
rationalisations for things where it cannot provide a real reason. Add
a psychiatrist prompting and providing hints as to the expected form
of the 'memory' and hey presto!

So basically, the brain has many types of memory, and explicit memory
is different to the others. IMO intuition uses some subset of implicit
memory and has very little to do with explicit memory.
The third definition will tend to follow from the second (if the
insight didn't come from conscious reasoning, you won't know how you
know the reasoning behind it).


This seems to ignore knowledge that comes, not from insight nor
reasoning, but from outside sources of information (sources which one
may remember, or may have forgotten, without the forgetting justifying
the use of the word "intuition", in my opinion).


Yes, quite right - explicit memory was not the topic I was discussing
as it has nothing to do with intuition.
Basically, the second definition is the core of what I intend and
nothing you said above contradicts what I claimed. Specifically...


I do not claim the characteristics I listed:
no understanding, no semantic modeling.
no concepts, no abstractions.
_contradict_ the possibility of "intuition". I claim they're very
far from _implying_ it.


OK - and in the context of your linking AI to 'how the human brain
works' that makes sense.

But to me, the whole point of 'intuition' (whether in people or, by
extension, in any kind of intelligence) is that the answer is supplied
by some mechanism which is not understood by the individual
experiencing the intuition. Whether that is a built-in algorithm or an
innate neural circuit, or whether it is the product of an implicit
learning mechanism (whether electronic/algorithmic or
neural/cognitive).
...sounds like "knowing without knowing how you know".


In particular, there is no implication of "knowing" in the above.


Yes there is. An answer was provided. If the program 'understood' what
it was doing to derive that answer, then that wouldn't have been
intuition (unless the 'understanding' was a rationalisation after the
fact, of course).
You can read a good introduction to HMM's at:
http://www.comp.leeds.ac.uk/roger/Hi..._dev/main.html
I haven't read this yet, but your description has got my interest.
The software was not built to be "aware" of anything, right. We did
not care about software to build sophisticated models of what was
going on, but rather about working software giving good recognition
rates.
Evolution is just as much the pragmatist.

Many people seem to have an obsession with a kind of mystic view of
consciousness. Go through the list of things that people raise as
being part of consciousness, and judge it entirely by that list, and
it becomes just another set of cognitive functions - working memory,
primarily - combined with the rather obvious fact that you can't have
a useful understanding of the world unless you have a useful
understanding of your impact on it.

But there is this whole religious thing around consciousness that
really I don't understand, to the point that I sometimes wonder if
maybe Asperger syndrome has damaged that too.

Take, for instance, the whole fuss about mirror tests and the claim
that animals cannot be self-aware as they don't (with one or two
primate exceptions) pass the mirror test - they don't recognise
themselves in a mirror.

There is a particular species that has repeatedly failed the mirror
test that hardly anyone mentions. Homo sapiens sapiens. Humans. When
first presented with mirrors (or photographs of themselves), members
of tribes who have had no contact with modern cultures have
consistently reacted much the same way - they simply don't recognise
themselves in the images. Mirrors are pretty shiny things.
Photographs are colourful patterns, but nothing more.

The reason is simple - these people are not expecting to see images of
themselves and may never have seen clear reflected images of
themselves. It takes a while to pick up on the idea. It has nothing to
do with self-awareness.

To me, consciousness and self-awareness are nothing special. Our
perception of the world is a cognitive model constructed using
evidence from our senses using both innate and learned 'knowledge' of
how the world works. There is no such thing as 'yellow' in the real
world, for instance - 'colour' is just the brain's way of labelling
certain combinations of intensities of the three wavebands of light
that our vision is sensitive to.

While that model isn't the real world, however, it is necessarily
linked to the real world. It exists for a purpose - to allow us to
understand and react to the environment around us. And that model
would be virtually useless if it did not include ourselves, because
obviously the goal of much of what we do is to affect the environment
around us.

In my view, a simple chess program has a primitive kind of
self-awareness. It cannot decide its next move without considering how
its opponent will react to its move. It has a (very simple) world
model, and it is aware of its own presence and influence in that world
model.

Of course human self-awareness is a massively more sophisticated
thing. But there is no magic.

Very likely your software was not 'aware' of anything, even in this
non-magical sense of awareness and consciousness. As you say - "We did
not care about software to build sophisticated models of what was
going on".

But that fits exactly my favorite definition of intuition - of knowing
without knowing how you know. If there were sophisticated models, and
particularly if the software had any 'understanding' of what it was
doing, it wouldn't be intuition - it would be conscious reasoning.
People do have internal models of how people understand speech -- not
necessarily accurate ones, but they're there. When somebody has trouble
understanding you, you may repeat your sentences louder and more slowly,
perhaps articulating each word rather than slurring them as usual: this
clearly reflects a model of auditory performance which may have certain
specific problems with noise and speed.
I disagree. To me, this could be one of two things...

1. A habitual, automatic response to not being heard with no
conscious thought at all - for most people, the most common
reasons for not being understood can be countered by speaking more
loudly and slowly.

2. It is possible that a mental model is used for this and the
decision made consciously, though I suspect the mental model comes
in more as the person takes on board the fact that there is a
novel communication barrier and tries to find solutions.

Neither case is relevant to what I meant, though. People don't
consciously work on recognising sounds nor on translating series of
such sounds into words and sentences - that information is provided
unconsciously. Only when understanding becomes difficult such that the
unconscious solutions are likely to be erroneous is there any
conscious analysis.

And the conscious analysis is not a conscious analysis of the process
by which the 'likely solutions subset' is determined. There is no
doubt 'introspection' in the sense that intermediate results in some
form (which phenomes were recognised, for instance) are then
passed on to the conscious mind to aid that analysis, and at that
stage a conscious model obviously comes into play, but I don't see
that as particularly important to my original argument.

Of course people can use rational thought to solve communication
problems, at which point a mental model comes into play, but most of
the time our speech recognition is automatic and unconscious.

Even when we have communications difficulties, we are not free to
introspect the whole speech recognition process. Rather some plausible
solutions and key intermediate results (and a sense of where the
problem lies) are passed to the conscious mind for separate analysis.

The normal speech recognition process is basically a black box. It is
able to provide intermediate results and 'debugging information' in
difficult cases - but there is no conscious understanding of the
processes used to derive any of that. I couldn't tell anything much of
use about the patterns of sound that create each phenome, for
instance. The awareness that one phenome sounds rather similar to
another doesn't count, in itself.

BTW - I hope 'phenome' is the right word. My dictionary has failed me
and a web search seems to see 'phenome' as something to do with
genetics. It is intended to refer to 'basic' sound components that
build up words, but I think I've got a bit confused.
(as opposed to, e.g., the proverbial "ugly American" whose caricatural
reaction to foreigners having trouble understanding English would be
to repeat exactly the same sentences, but much louder:-).
I believe the English can outdo any American in the loud-and-slow
shouting at foreigners thing ;-)
The precise algorithms for speech recognition used by IBM/Dragon's
dictation systems and by the brain are probably different, but to me


Probably.
fussing about that is pure anthropocentricity. Maybe one day we'll


Actually it isn't -- if you're aware of certain drastic differences
in the process of speech understanding in the two cases, this may be
directly useful to your attempts of enhancing communication that is
not working as you desire.


Yes, but I was talking about what can or cannot be considered
intelligent. I was simply stating that in my view, a thing that
provides intelligent results may be considered intelligent even if it
doesn't use the same methods that humans would use to provide those
results.

I talk to my mother in a slightly different way to the way I talk to
my father. This is a practical issue necessitated by their different
conversational styles (and the kind of thing that seriously bugs
cognitive theorists who insist despite the facts that people with
Aspergers can never understand or react to such differences). That
doesn't mean that my mother and father can't both be considered
intelligent.
E.g., if a human being with which you're
very interested in discussing Kant keeps misunderstanding each time
you mention Weltanschauung, it may be worth the trouble to EXPLAIN
to your interlocutor exactly what you mean by it and why the term is
important; but if you have trouble dictating that word to a speech
recognizer you had better realize that there is no "meaning" at all
connected to words in the recognizer -- you may or may not be able
to "teach" spelling and pronunciation of specific new words to the
machine, but "usage in context" (for machines of the kind we've been
discussing) is a lost cause and you might as well save your time.
Of course, that level of intelligence in computer speech recognition
is a very long way off.
But, you keep using "anthropocentric" and its derivatives as if they
were acknowledged "defects" of thought or behavior. They aren't.
Not at all. I am simply refusing to apply an arbitrary restriction on
what can or cannot be considered intelligent. You have repeatedly
stated, in effect, that if it isn't the way that people work then it
isn't intelligent (or at least AI). To me that is an arbitrary
restriction. Especially as evolution is a pragmatist - the way the
human mind actually works is not necessarily the best way for it to
work and almost certainly is not the only way it could have worked. It
seems distinctly odd to me to observe the result of a particular roll
of the dice and say "this is the only result that we can consider
valid".
see Timur Kuran's "Private Truths, Public Lies", IMHO a masterpiece
(but then, I _do_ read economics for fun:-).
I've not read that, though I suspect I'll be looking for it soon.
But of course you'd want _others_ to supply you with information about
_their_ motivations (to refine your model of them) -- and reciprocity
is important -- so you must SEEM to be cooperating in the matter.
(Ridley's "Origins of Virtue" is what I would suggest as background
reading for such issues).
I've read 'Origins of Virtue'. IMO it spends too much time on the
prisoner's dilemma. I have the impression that either Ridley has little
respect for his readers' intelligence or he had little to say and had
to do some padding. From what Ridley takes a whole book to say, Pinker
covers the key points in a couple of pages.
But if there are many types, the one humans have is surely the most
important to us
From a pragmatic standpoint of getting things done, that is clearly
not true in most cases. For instance, when faced with the problem of
writing a speech recognition program, you and your peers decided to
follow the pragmatic approach and do something different to what the
brain does.
Turing's Test also operationally defines it that
way, in the end, and I'm not alone in considering Turing's paper
THE start and foundation of AI.
Often, the founders of a field have certain ideas in mind which don't
pan out in the long term. When Kanner discovered autism, for instance,
he blamed 'refrigerator' mothers - but that belief is simply false.

Turing was no more omniscient than Kanner. Of course his contribution
to many fields in computing was beyond measure, but that doesn't mean
that AI shouldn't evolve beyond his conception of it.

Evolution is a pragmatist. I see no reason why AI designers shouldn't
also be pragmatists.

If we need a battle of the 'gods', however, then may I refer you to
George Boole who created what he called 'the Laws of Thought'. They
are a lot simpler than passing the Turing Test ;-)
Studying humanity is important. But AI is not (or at least should not
be) a study of people - if it aims to provide practical results then
it is a study of intelligence.


But when we can't agree whether e.g. a termite colony is collectively
"intelligent" or not, how would it be "AI" to accurately model such a
colony's behavior?


When did I claim it would be?
The only occurrences of "intelligence" which a
vast majority of people will accept to be worthy of the term are those
displayed by humans
Of course - we have yet to find another intelligence at this point
that even registers on the same scale as human intelligence. But that
does not mean that such an intelligence cannot exist.
-- because then "model extroflecting", such an
appreciated mechanism, works fairly well; we can model the other
person's behavior by "putting ourselves in his/her place" and feel
its "intelligen ce" or otherwise indirectly that way.
Speaking as the frequent victim of a breakdown in that (my broken
non-verbal communication and other social difficulties frequently lead
to people jumping to the wrong conclusion - and persisting in that bad
conclusion, often for years, despite clear evidence to the contrary) I
can tell you that there is very little real intelligence involved in
that process. Of course even many quite profound autistics can "put
themselves in his/her place" and people who supposedly have no empathy
can frequently be seen crying about the suffering of others that
neurotypicals have become desensitised to. But my experience of trying
to explain Asperger syndrome to people (which is quite typical of what
many people with AS have experienced) is pretty much proof positive
that most people are too lazy to think about such things - they'd
rather keep on jumping to intuitive-but-wrong conclusions and they'd
rather carry on victimising people in supposed retaliation for
non-existent transgressions as a consequence.

'Intelligent' does not necessarily imply 'human' (though in practice
it does at this point in history), but certainly 'human' does not
imply 'intelligent'.
For non-humans
it only "works" (so to speak) by antroporphisati on, and as the well
known saying goes, "you shouldn't antropomorphise computers: they
don't like it one bit when you do".
Of course - but I'm not the one saying that computer intelligence and
human intelligence must be the same thing.

A human -- or anything that can reliably pass as a human -- can surely
be said to exhibit intelligence in certain conditions; for anything
else, you'll get unbounded amount of controversy. "Artificial life",
where non-necessarily-intelligent behavior of various lifeforms is
modeled and simulated, is a separate subject from AI. I'm not dissing
the ability to abstract characteristics _from human "intelligent"
behavior_ to reach a useful operating definition of intelligence that
is not limited by humanity: I and the AAAI appear to agree that the
ability to build, adapt, evolve and generally modify _semantic models_
is a reasonable discriminant to use.
Why should the meaning of the term 'intelligent' be derived from the
meaning of the term 'human' in the first place!

Things never used to be this way. Boole could equate thought with
algebra and no-one batted an eyelid. Only since the human throne of
specialness has been threatened (on the one hand by Darwin's assertion
that we are basically bald apes, and on the other by machines doing
tasks that were once considered impossible for anything but human
minds) did terms like 'intelligence', 'thought' and 'consciousness'
start taking on mystic overtones.

Once upon a time, "computer" was a job title. You would have to be
pretty intelligent to work as a computer. But such people were
replaced by pocket calculators.

People have been told for thousands of years that humanity is special,
created in god's image and similar garbage. Elephants would no doubt be
equally convinced of their superiority, if they thought of such
things. After all, no other animal has such a long and flexible nose,
so useful for spraying water around for instance.

Perhaps such arrogant elephants would find the concept of a hose pipe
quite worrying?

I think what is happening with people is similar. People now insist
that consciousness must be beyond understandability, for example, not
because there is any reason why it should be true but simply because
they need some way to differentiate themselves from machines and apes.
If what you want is to understand intelligence, that's one thing. But
if what you want is a program that takes dictation, or one that plays
good bridge, then an AI approach -- a semantic model etc -- is not
necessarily going to be the most productive in the short run (and
"in the long run we're all dead" anyway:-).
I fully agree. And so does evolution. Which is why 99% or more of what
your brain does involves no semantic model whatsoever.
Calling programs that use
completely different approaches "AI" is as sterile as similarly naming,
e.g., Microsoft Word because it can do spell-checking for you: you can
then say that ANY program is "AI" and draw the curtains, because the
term has then become totally useless. That's clearly not what the AAAI
may want, and I tend to agree with them on this point.
Then you and they will be very unhappy when they discover just how
'sterile' 99% of the brain is.
What we most need is a model of _others_ that gives better results
in social interactions than a lack of such a model would. If natural
selection has not wiped out Asperger's syndrome (assuming it has some
genetic component, which seems to be an accepted theory these days),
there must be some compensating adaptive advantage to the disadvantages
it may bring (again, I'm sure you're aware of the theories about that).
Much as for, e.g., sickle-cell anemia (better malaria resistance), say.
There are theories of compensating advantages, but I tend to doubt
them. This is basically a misunderstanding of what 'genetic' means.

First off, to the extent that autism involves genetics (current
assessments claim autism is around 80% genetic IIRC) those genetics
are certainly not simple. There is no single autism gene. Several
'risk factor' genes have been identified, but all can occur in
non-autistic people and none is common to even more than a
'significant minority' of autistic people.

Most likely, in my view, there are two key ideas to think of in the
context of autism genetics. The first is recessive genes. The second
is what I call a 'bad mix' of genes. I am more convinced by the latter
(partly because I thought it up independently of others - yes, I know
that's not much of an argument) so I'll describe that in more detail.

In general, you can't just mutate one gene and get a single change in
the resulting organism. Genes interact in complex ways to determine
developmental processes, which in turn determine the end result.

People have recently, in evolutionary terms, evolved for much greater
mental ability. But while a new feature can evolve quite quickly, each
genetic change that contributes to that feature also has a certain
amount of 'fallout'. There are secondary consequences, unwanted
changes, that need to be compensated for - and the cleanup takes much
longer.

Genes are also continuously swapped around, generation by
generation, by recombination. And particular combinations can have
'unintended' side-effects. There can be incompatibilities between
genes. For evolution to progress to the point where there are no
incompatibilities (or immunities to the consequences of those
incompatibilities) can take a very long time, especially as each
problem combination may only occur rarely.

Based on this, I would expect autistic symptoms to suddenly appear in
a family line (when the bad mix genes are brought together by a fluke
of recombination). This could often be made worse by the general
principle that birds of a feather flock together, bringing more
incompatible bad mix genes together. But as reproductive success drops
(many autistics never find partners) some of the lines simply die out,
while other lines simply separate out those bad mix genes, so that
while the genes still exist most children no longer have an
incompatible mix.

Basically, the bad mix comes together by fluke, but after a few
generations that bad mix will be gone again.

Alternatively, people with autism and Asperger syndrome seem to
consistently have slightly overlarge heads, and there is considerable
evidence of an excessive growth in brain size at a very young age.
This growth spurt may well disrupt developmental processes in key
parts of the brain. The point being that this suggests to me that
autistic and AS people are basically pushing the limit in brain size.
We are the consequence of pushing too fast for too much more mental
ability. We have the combination of genes for slightly more brain
growth, and the genes to adapt developmental processes to cope with
that growth - but we don't have the genes to fix the unwanted
consequences of these new mixes of genes.

So basically, autism and AS are either the leading or trailing edge of
brain growth evolution - either we are the ones who suffer the
failings of 'prototype' brain designs so that future generations may
evolve larger non-autistic brains, or else we are the ones who suffer
the failings of bad mix 'fallout' while immunity to the bad gene
combinations gradually evolves.

In neither case do we have a particular compensating advantage, though
a few things have worked out relatively well for at least some people
with AS over the last few centuries. Basically, you get the prize
while I suffer for it. Of course I'm not bitter ;-)
But the point remains that we don't have "innate" mental models
of e.g. the way the mind of a dolphin may work, nor any way to
build such models by effectively extroflecting a mental model of
ourselves as we may do for other humans.


Absolutely true. Though it seems to me that people are far too good at
empathising with their pets for a claim that human innate mental
models are completely distinct from other animals. I figure there is a


Lots of anthropomorphisation and not-necessarily-accurate projection
is obviously going on.


Not necessarily. Most of the empathising I was talking about is pretty
basic. The stress response has a lot in common from one species to
another, for instance. This is about the level that body language
works in AS - we can spot a few extreme and/or stereotyped emotions
such as anger, fear, etc.

Beyond that level, I wouldn't be able to recognise empathising with
pets even if it were happening right in front of me ;-)
My guess is that even then, there would be more dependence on
sophisticated heuristics than on brute force searching - but I suspect
that there is much more brute force searching going on in people's
minds than they are consciously aware of.


I tend to disagree, because it's easy to show that the biases and
widespread errors with which you can easily catch people are ones
that would not occur with brute force searching but would with
heuristics. As you're familiar with the literature in the field
more than I am, I may just suggest the names of a few researchers
who have accumulated plenty of empirical evidence in this field:
Tversky, Gigerenzer, Krueger, Kahneman... I'm only peripherally
familiar with their work, but in the whole it seems quite indicative.


I'm not immediately familiar with those names, but before I go look
them up I'll say one thing...

Heuristics are fallible by definition. They can prevent a search
algorithm from searching a certain line (or more likely, prioritise
other lines) when in fact that line is the real best solution.

With human players having learned their heuristics over long
experience, they should have a very different pattern of 'tunnel
vision' in the search to that which a computer has (where the
heuristics are inherently those that could be expressed 'verbally' in
terms of program code or whatever).

In particular, human players should have had more real experience of
having their tunnel vision exploited by other players, and should have
learned more sophisticated heuristics as a result.

I don't believe in pure brute force searching - for any real problem,
that would be an infinite search (and probably not even such a small
infinity as aleph-0). When I say 'brute force' I tend to mean that as
a relative thing - faster searching, less sophisticated heuristics. I
suspect that may not have been clear above.

But anyway, the point is that heuristics are rarely much good at
solving real problems unless there is some kind of search or closure
algorithm or whatever added.

I do remember reading that recognition of rotated shapes shows clear
signs that a search process is going on unconsciously in the mind.
This isn't conscious rotation (the times were IIRC in milliseconds)
but the greater the number of degrees of rotation of the shape, the
longer it takes to recognise - suggesting that subconsciously, the
shape is rotated until it matches the required template.

So searches do seem to happen in the mind. Though you are quite right
to blame heuristics for a lot of the dodgy results. And while I doubt
that 'search loops' in the brain run through thousands of iterations
per second, with good heuristics maybe even a one iteration per second
(or even less) could be sufficient.

The real problem for someone with AS is that so much has to be handled
by the single-tasking conscious mind. The unconscious mind is, of
course, able to handle a number of tasks at once. If only I could
listen to someone's words and figure out their tone of voice and pay
attention to their facial expression at the same time I'd be a very
happy man. After all, I can walk and talk at the same time, so why not
all this other stuff too :-(
It IS interesting how often an effective way to understand how
something works is to examine cases where it stops working or
misfires -- "how it BREAKS" can teach us more about "how it WORKS"
than studying it under normal operating conditions would. Much
like our unit tests should particularly ensure they test all the
boundary conditions of operation...;-).


That is, I believe, one reason why some people are so keen to study
autism and AS. Not so much to help the victims as to find out more
about how social ability works in people who don't have these
problems.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #188
Brian Kelley <bk*****@wi.mit.edu> writes:
I chose a bad example of abusing the C-implementation. The main
thrust of my argument is that you don't need macros in this case,
i.e. there can be situations with very little tradeoff.

class SafeFileWrapper:
    def __init__(self, f):
        self.f = f

    def write(self, data):
        try:
            self.f.write(data)
        except:
            self.f.close()
            self.f = None
            raise

    def close(self):
        if self.f:
            self.f.close()
    ...

Now the usage is:

f = SafeFileWrapper(open(...))
print >> f, "A couple of lines"
f.close()
...
I still have to explicitly close the file though when I am done with


It's just this sort of monotonous (yet important) book keeping (along
with all the exception protection, etc.) that something like
with-open-file ensures for you.
/Jon
Jul 18 '05 #189
Jon S. Anthony wrote:
Brian Kelley <bk*****@wi.mit.edu> writes:
Now the usage is:

f = SafeFileWrapper(open(...))
print >> f, "A couple of lines"
f.close()

...
I still have to explicitly close the file though when I am done with


It's just this sort of monotonous (yet important) book keeping (along
with all the exception protection, etc.) that something like
with-open-file ensures for you.


Personally I'd prefer guaranteed immediate destructors over with-open-file.
More flexibility, less syntax, and it matches what the CPython
implementation already does.
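
A hedged sketch of what that preference amounts to (the class and
function names are invented): cleanup in __del__ runs as soon as the
last reference disappears under CPython's reference counting, but the
language itself makes no such promise, which is exactly the portability
concern raised earlier in the thread:

class AutoClosingFile(object):
    def __init__(self, path, mode):
        self.f = open(path, mode)

    def write(self, data):
        self.f.write(data)

    def __del__(self):
        # finalizer: closes the file when the wrapper is collected
        if self.f:
            self.f.close()

def dump(path, text):
    AutoClosingFile(path, "w").write(text)
    # CPython destroys the temporary (and closes the file) right here;
    # other implementations may delay the close indefinitely.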
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #190
