
BIG successes of Lisp (was ...)

In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
successes.

a. orbitz.com web site uses Lisp for algorithms, etc.
b. Yahoo store was originally written in Lisp.
c. Emacs

The issues with these will probably come up, so I might as well
mention them myself (which will also make this a more balanced
post).

a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while)

b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)

c. Emacs has a reputation for being slow and bloated. But then
it's not written in Common Lisp.

Are ViaWeb and Orbitz bigger successes than LaTeX? Do they
have more users? It depends. Does viewing a PDF file made
with LaTeX make you a user of LaTeX? Does visiting Yahoo
store make you a user of ViaWeb?

For the sake of being balanced: there were also some *big*
failures, such as Lisp Machines. They failed because
they could not compete with UNIX (SUN, SGI) in a time when
performance, multi-userism and uptime were of prime importance.
(Older LispM's just leaked memory until they were shut down,
newer versions overcame that problem but others remained)

Another big failure that is often _attributed_ to Lisp is AI,
of course. But I don't think one should blame a language
for AI not happening. Marvin Minsky, for example,
blames Robotics and Neural Networks for that.
Jul 18 '05
Michele Simionato
Actually it is much *smaller* than that: this is the reason why it is
not significant at all from a physical perspective. ... The number
I find in my Ph.D. thesis ... is 10^227 GeV (!) BTW, it seems
too large now, I don't remember how I got it, but anyway I am sure the
number is much much larger than the Planck scale (10^19 GeV).


1E227.... That's... ummm, a lot.

(((10 ** 227) GeV) / (c * c)) / mass of the sun = 8.96296347 × 10^169

Roughly 7 × 10^22 stars visible with current telescopes
http://www.cnn.com/2003/TECH/space/07/22/stars.survey/

Estimate for dark matter is about 10× visible matter, as I recall,
which means you're talking about

10000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000

times the mass of the universe.
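
For anyone who wants to check that arithmetic, here is a rough
back-of-the-envelope version in Python. The constants are rounded
textbook values, and the 10^227 GeV figure is Michele's:

    # Rough sanity check of the numbers above; constants are approximate.
    GEV_IN_KG = 1.783e-27     # mass equivalent of 1 GeV/c**2, in kg
    SUN_IN_KG = 1.989e30      # one solar mass, in kg
    STARS = 7e22              # stars visible to current telescopes
    DARK_FACTOR = 10          # dark matter ~ 10x visible matter (rough)

    energy_in_suns = 10**227 * GEV_IN_KG / SUN_IN_KG
    universe_in_suns = STARS * (1 + DARK_FACTOR)
    print(energy_in_suns)                     # ~8.96e169 solar masses
    print(energy_in_suns / universe_in_suns)  # ~1e146 universe masses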

Andrew
da***@dalkescientific.com
Jul 18 '05 #301
Stephen Horne <st***@ninereeds.fsnet.co.uk> wrote in message news:<ql********************************@4ax.com>...
I am also quite skeptical about IA claims.

Yes, but AI doesn't have to be all that "I" to have a huge economic and social impact.


I thought IA != AI, though I have to admit I'm not sure what IA stands
for. Instrumentalist something-or-other?


or maybe simply a typo ? ;)

Michele
Jul 18 '05 #302
On 3 Nov 2003 21:02:06 GMT, bo**@oz.net (Bengt Richter) wrote:
On Mon, 03 Nov 2003 01:48:23 +0000, Stephen Horne <st***@ninereeds.fsnet.co.uk> wrote:
Right, continuing off from yesterday...
Well, close again. I was trying to explore the notion of a distinction between
the "thing" as physical shape holder, and something non-physical that could be given
a particular shape as a consequence, like an electric or magnetic field in the
neighborhood. (Or, what if consciousness is a peculiar dynamic thing like lasing,
something that happens under certain conditions, which as evolution would have it,
occurs in brains a lot).
My major thought is that you are effectively invoking the god of the
gaps. There is no evidence of this 'field' - you are free to
hypothesise it, but until some kind of evidence is found I am free to
ignore it.

In comparison, there is a huge body of evidence that the brain is an
information processing machine and that consciousness is a function of
the brain.

Even altered states of consciousness have been both measured and
invoked, for instance, in ways that are perfectly consistent with a
neurological information processing basis for consciousness.
Electrically stimulate the temporal lobe in a way that mimics temporal
lobe epilepsy, for instance, and the end result is normally described
as a feeling of being in the presence of god.

Even moderately high levels of dopamine (well below those needed to
cause schizophrenia) are consistently associated with a more
'spiritualistic' outlook. Of course if I claim that any 'spiritualist'
viewpoint is an aberration caused by excess dopamine, the obvious
retort is that I must be suffering from dopamine deficiency - but this
misses the point. The processes that regulate dopamine in the brain
are pretty well understood. Dopamine is simply an aspect of the
information processing that the brain does - a metaphorical cog in the
machine. The spiritualistic outlook is *generated* by the information
processing in the brain.

And if you look at the social evolution of the human species, it is
not hard to believe that spirituality is an innate feature with an
important social purpose.

And as for our valuing consciousness, our valuing ourselves clearly
has an important evolutionary purpose - despite the myth, even
lemmings aren't suicidal.

The perception of self is pretty central within most people's concept
of consciousness, so it is unsurprising that most people see
consciousness as important and valuable.
The brain really does change during sleep. In a way, you literally are
not the same person when you wake up as when you went to sleep.

The physical state-holder is not identical, but what do you identify with?
The more your body changes as you age, the more you have to recognize that
your body's persistence is more like the persistence of an eddy near a rock
in a stream than the persistence of the rock. What then of your consciousness,
which perhaps is not even associated with the same atoms after some years? IOW,
it looks to me like the important aspect is essentially form, not substance.
And yet consciousness itself changes dramatically, sometimes from
second to second, and we still feel a sense of continuity.

How can we explain this sense of continuity?

One way is to appeal to the 'god of the gaps' - to claim some
currently unknown entity which we cannot yet observe must provide the
real experience of consciousness.

The other way is to look at the pragmatics of social behaviour. A huge
amount of social behaviour depends on that sense of continuity - that
feeling that (me today) == (me tomorrow) == (me yesterday).

Why, for instance, should I return a favor if I believe that I don't
owe anything because the recipient of the favor was, in effect,
someone other than me? How can it be fair to punish the person today
if he is not the same person who yesterday committed a crime? For that
matter, why worry about punishment if the person who will be punished
is not the same person who is committing the crime?

Having a sense of continuity of self is, in other words, pretty
directly linked to the practical information processes involved in
maintaining a reputation and social status.

More worryingly, the continuity of consciousness even when awake is
itself an illusion. Think of the fridge light that turns off when the <snip>

This doesn't in the least clash with e.g. the concept of some kind of field-like
basis for experience. It could flicker on and off and flit around. No prob.
Yes, and so long as you can keep your theory in the gaps and well away
from scientific probing, no-one can prove it wrong, can they?

Whereas the information processing concept is actively being probed as
I type, revealing important clues about how consciousness works on a
regular basis.
Yes, if I understand you correctly. But I am interested in whether we should
be trying to look with sensitive devices for some kind of field/energy processes
that correlate with reported subjective conscious experience. (I.e., the "water"
in the metaphor above) I feel blocked if I have to limit myself to talking about
"information processing capacity of the brain."

While interesting, IMO it is not on the path to discovering the _basis_ for consciousness
in some measurable events/processes/tranformations/relationships/etc. Information processing
is more of a modulator than a medium in my view.
OK, so let's consider the implications of this 'field'...

Let's consider the 'alters' in multiple personality disorder. This has
been studied under a brain scanner. When alters switch, the activity
in the hippocampus changes dramatically. If the 'primary' personality
is active, the hippocampus is more active.

In both of our models, the hippocampus does not generate consciousness
itself. The information processing model tends to primarily implicate
the prefrontal cortex. In your model, obviously you invoke the field.

All the hippocampus provides is a set of memories - explicit memories,
to be precise.

An alter can suddenly emerge after years of not being expressed, and
with a perfectly intact sense of continuity of self - as if that self
had just been transported to the future with a time machine. And the
same thing applies to the primary personality when he/she returns,
perhaps days, weeks or longer later - an intact sense of continuity of
self despite the apparent jump in time.

If the brain contains a single 'field' which persists the whole time,
how come there are two (or more) independent personalities with
separate senses of continuity of consciousness?

If there is a distinct 'field' for each personality, how are the two
kept distinct in the same container? And while you can claim that
there is some awareness of time lost in sleep, it is much harder to
justify an awareness of time lost in this case, so how come the fields
are not aware (in general) of having lost control of the 'host' body,
brain included, for perhaps very long periods of time?

The fields only really make sense if each field arises as the
personality arises, taking the shape set out by the memories that
become active. But think what that means. If the field is created and
shaped by the activity of neurons (and presumably influences the
neurons in turn as a feedback process), but it has *no* independent
existence of its own - then it is nothing more than another cog in the
information processing machine.

Does this really satisfy your need for the mind to be something more
than information processing?
The information processing theory can take MPD in its stride, of
course. If you keep your memories in several self-consistent but
distinct chunks then, even if the same neural machinery handles the
processing, you get a different 'self' by simply switching which sets
of memories are active.

There remain important questions to answer about how and why MPD
occurs (not the traumatic-experience-etc why but more the question of
if there is an evolutionary benefit to maintaining several distinct
'selves', or whether it is simply a kind of breakdown), but there is
no sign as yet of them being unanswerable.
Basically, as I said before, you seem to be appealing to the 'god of
the gaps'. You find the idea of the mind as an information processor
unpalatable, but - as with many people who claim not to be religious
per se - you don't want to appeal to an actual god or soul. So instead
you appeal to an undiscovered type of physics.

But even if this undiscovered type of physics exists, once discovered
and understood it would just be another piece of the science jigsaw.
Just like quantum computing, it would simply be another way of doing
the information processing.
Can I explain why people shouldn't be comfortable with being a part of
the physical world, and of having minds that are explainable in
information processing terms? Obviously there's the sense of agency
and illusion of free will, but while they might explain a
misinterpretation they don't explain a continuing strong conviction,
so what else?

There is a concept called the 'expanding circle' which relates to who
is and who is not considered a 'person'. I put that in quotes because
it is far from literal - humans are often excluded, and (particularly
in modern times) animals etc are very often included.

Basically, it refers to an intuitive sense of who has rights, who you
can empathise with, etc. When you can torture and kill 'the enemy'
without being traumatised yourself (assuming you are not simply a
sociopath) it is a clear sign that 'the enemy' are outside of your
circle, for instance.

This intuitive sense has clear practical evolutionary value - being
willing to kill others in your tribe without batting an eyelid would
obviously not suit a social species, yet it would be hard to carry out
intertribal warfare if you empathised with your enemies. And this is
complicated by the fact that it seems tribes did make both short and
longer term alliances - you couldn't rely on an individual being a
stranger, but needed to make more flexible classifications.

This isn't the only intuitive classification the mind makes, of
course. There appear to be clear intuitive distinctions between
animals, plants, and inanimate objects for instance. These
distinctions seem to be innate and instinctive, though there is
flexibility in them.

If these are general intuitive principles, it is no surprise that when
you introspect you find it hard to accept that your mind is not
distinct from an information processing machine. Your mind naturally
classifies your self in the intuitive category of 'person'.

Basically, it would be surprising if most people didn't resist the
idea of being, in principle, little different to a computer.

As for me, well perhaps those classification systems don't work so
well in people with autism spectrum disorders. Maybe that is why we
are overliteral in applying moral rules as well as other areas - we
don't really have the same intuitive sense that everyone else has of
the differences in how people, animals and inanimate objects should be
treated.

Maybe that is a part of the 'theory of mind' issue.

So when I introspect, it doesn't particularly bother me to see myself
as a (broken) machine designed and built to carry genes from one
generation to the next, and neither does it particularly bother me to
see my mind and consciousness as a product of information processing.
Talking about information processing is like talking about chopping vegetables.
It is only one factor in giving taste to the soup, and the subject is how we experience
the taste of soup. And what we are, that we can have a tasting-soup experience without being soup.
In what way is 'experience' not information processing?

I know, I know. There is all this information being integrated from
senses, working memory, associations from longer term memory,
emotional colouring etc, but 'the experience itself must be something
else' as the claim typically goes.

But of course you are aware of 'experience' - that's just 'higher
order perception'. The brain is perfectly capable of several 'meta's
in front of 'experience', 'thought' or whatever.
To put it another way, when looking at a picture you may feel that you
are aware of the whole picture, but you are not. Big changes can be
made to the picture without you noticing, as long as they don't affect
the precise detail that you are focussing on at that precise moment.
It is called change blindness.

A big part of the sense that something is 'conscious' is actually
illusory in itself - things that can be immediately brought into
consciousness tend to be classified by the mind as currently conscious
because, in the practical sense, they may as well be. So with change
blindness, you 'experience' the whole picture mainly because you
intuitively know that you can focus on any part of it in an instant.

The same applies to the soup, really. At the moment you may only be
focussing on the texture, or the temperature, or for that matter the
fly you just spotted in your bowl - but there is a higher level
perception that you are having the 'whole soup experience' because you
unconsciously know that you can tune into any aspect of it at will.

The 'whole soup experience' is basically an illusion, or maybe a
'collective noun' for the lower level perceptions that you had (or
didn't have) while eating the soup.
BTW - what is this thing about 'without being soup'? If someone
claimed to 'be one with the soup' I would suspect them of having a
rather odd case of altered consciousness. The 'being one with
everything' sense seems to be an exaggerated case of something quite
ordinary, by the way.

Pick up a pencil and run it over a rough surface. Most people will
quite quickly start experiencing the bumps as being at the tip of the
pencil, as if the pencil were an extension of the body.

Phantom limbs also have some significance to this.

Basically, a part of our sense of self is a sense of the boundary
between ourselves and the rest of the world. Normally this matches
pretty well with our bodies, but there is a degree of flexibility to
put the bounds where they are needed, and sometimes things just plain
go wrong.

The experience of tasting soup is certainly more than just the result
of the input from the taste buds, of course. There are other senses
involved, there are emotional associations and associations from
memory - the whole soup-eating experience.
Basically, there is nothing here which I don't see as information
processing.
Now consider the experience of being "convinced" that a theory "is true." What does that mean?


Science suggests that all subjective meaning is linked to either

Who is Science? Some person made some observations and concocted a story about them,
that's my take ;-)


You might like to read some Rita Carter stuff - this is my easiest
reference because, as I mentioned before, I'm currently reading her
book "consciousness".

The specific field of science, in this case, would be neuroscience.
In fact a key language area (Broca's area IIRC) is also strongly linked
to gesture - hence sign language, I suppose.

IMO, language is only limited by your imagination and that of the other in your
communication. That's why we speak of body language, etc. Anything perceptible
can serve, if the two are in tune (~ at the same point in a context-sensitive parse).
The fact that there is generally muscular effort in creating a perceptible signal
should not IMO be taken to mean that our understanding must be limited to things
with associated 'action potentials' ;-)
Body language is quite distinct from the kind of gesture that occurs
in sign language. As I understand it, most body language is not
generated by Broca's area.

Also, the use of muscles isn't what this is about. Even when you hear
a word, or think about the concept, your brain generates the
associated action potentials - but inhibits the actual muscular
actions.
This makes good sense. Evolution always works by modifying and
extending what it already has. The mental 'vocabulary' has always been
in terms of the bodily inputs and outputs, so as the capacity for more
abstract thought evolved it would of course build upon the existing
body-based 'vocabulary' foundations.

"Always" is a way to prune your thought tree. Careful ;-)

It is also very much the right word in this case. Sophisticated useful
abilities in complex organisms do not arise through fluke mutations,
much as broken fragments of china do not suddenly join themselves into
a perfect vase and leap up onto your wobbly shelf. It's called the law
of entropy.

Ditto about "of course."


Evolution doesn't throw away what it already has and start again from
scratch. It seeks local maxima - it cannot anticipate long term
advantages.

Assuming that the initial 'mental vocabulary' was spelled using
body-based 'hieroglyphs', then it is very hard to believe that
evolution would ignore this foundation. And besides, as I already
mentioned, the body-based vocabulary is still alive and well in modern
human minds.
How do you _feel_ when you _feel_ convinced?
I ask the same of you - how do _you_ feel when _you_ feel convinced
that there is something more than information processing?

As I mentioned already, I don't have an emotional reaction to the idea
of my mind being an information processing machine. I mostly see the
science as a means of understanding myself better and trying to solve
some very real problems that I have.

But I have a number of reasons for expecting most people to have quite
a strong emotional reaction against the idea of their minds being
information processors. Reasons that themselves arise out of the
information processing model.

So who is being more objective? The person who looks to the theory
that is currently generating interesting results and new ways to look
at things by the bucket-load? Or the person who looks to an
'undiscovered' idea as a way to avoid accepting ideas he finds
distasteful?
Basically, if a person has to give the concept a new, abstract,
internal symbol instead of using the normal body-language associated
internal symbol, then any innate intuitions relating to that concept
will be lost. The neurological implementation of the intuitions may
exist but never get invoked - and therefore it will tend to atrophy,
even if its normal development doesn't depend on somatosensory
feedback in the first place.

ISTM the brain is fairly adept at selecting metaphor-stand-in players to keep
a show going when there seems to be a need for some new role-player
on the stage.


Of course. That seems to me to be key to the body-based mental
vocabulary idea. How else could abstract terms be represented but as
'metaphors'?

For instance, just because my mental spelling of 'democracy' involves
putting up my hand to vote, doesn't mean I'm not going to put a tick
in the box (or push the button on the voting machine or whatever) when
I'm in that booth.

But consider how you look in a dictionary. It's hard to find the right
word if you don't know how it is spelled.

Now consider what happens if innate concepts are stored in such a
'dictionary', dependent on using the standard 'spellings' for the
mental concept to look them up.
IOW, I suspect that our core ability is not in the linguistics
of stage directions, but in putting on a mental show (sometimes silent mime ;-)
with some useful relation to "reality-out-there."
Of course. I wouldn't expect the mental vocabulary to be spelt in
verbal terms (speaking action potentials, or sound-of-word) except as
a last resort, when it is too abstract to assign a 'metaphor'. And I
never said that the mental 'words' are organised into sentences - that
is almost certainly pushing the metaphor way too far. I meant
'vocabulary' in the sense of a set of symbols manipulated in an
information processing machine.
Perhaps one of the bit players early on created a traumatic disturbance
on the stage, and the theatre owner just decided no more of that, perhaps being too young
and inexperienced to manage a frightening crew incident at the time.


If that's a metaphor for the causes of autistic spectrum disorders, it
is an extremely dated one. Autism isn't caused by traumatic
experience. It is a neurological disorder, with an extremely large
degree of genetic causation. There is a significant environmental
causation, but emotional trauma at least isn't it.

The trauma comes later, when autistic symptoms wrongly convince others
that the person is rejecting them, is being deliberately disruptive,
is untrustworthy etc etc, and when people react in turn by rejecting
the autistic person.

Autistic symptoms can typically be seen when the child is three or
even younger - when the brain is toward the end of its 'initial
development', and particularly when the prefrontal cortex comes fully
'online'.

The trauma doesn't typically start until school age, though
occasionally it can start earlier when doctors jump to conclusions
about abuse and separate the child from the environment and people
that he/she is reliant upon, destroying the sense of predictability
that autistic children need.
Is it pain and pleasure at bottom


Nope - the innate circuitry of our brain brings a lot more than that.

I am not sure what to make of your apparent confidence in asserting truths ;-)


This is simple fact. But I'm tiring of explaining everything etc.

Start with Rita Carter's books 'mapping the mind' and 'consciousness'.

Steven Pinker's books (particularly 'How the Mind Works') are also IMO
very good, with the possible exception of 'words and rules' which on
first reading droned on far too long and didn't seem that relevant to
a wider understanding of the mind anyway - though I may go back and
re-evaluate that idea now.

Joseph LeDoux's books (The Emotional Brain and Synaptic Self) are
simply essential reading if you are serious about understanding how
the brain works.

There is a BBC book-of-a-series called 'human instinct' by Robert
Winston which may be worth reading, but keep your critical faculties
operating - he doesn't seem to be an expert himself, and he
occasionally seems to miss the point of the theories he's describing.
My first-choice textbooks to have around are...

Neuroscience - Exploring the Brain
Mark F. Bear, Barry W. Connors, Michael F. Paradiso
ISBN 0-7817-3944-6 (2nd edition)
ISBN 0-683-00488-3 (1st edition)

Obviously concentrates on the neurology, but nevertheless essential
reading if you ask me - the second edition really is much improved
over the first, too.

Social Psychology
Robert A. Baron, Donn Byrne
ISBN 0-205-34977-3 (10th edition)
ISBN 0-205-31131-8 (9th edition)

Definitely the one I trust most in social psychology, and truly
fascinating. Not a great deal has changed between editions, but I'd
pay far more for far less when it's this good.

Cognitive Neuroscience - The Biology of the Mind
Michael S. Gazzaniga, Richard B. Ivry, George R. Mangun
ISBN 0-393-97219-4 (1st edition)

I'm in two minds about this one. It is certainly very interesting,
but the cognitive and neurology levels are often quite weakly tied
together. Probably an artifact of this being a relatively new field,
and the two source perspectives being less than perfectly welded.

Great for understanding the prefrontal cortex in particular, though.
I found the stuff on executive function particularly enlightening.

Psychology
Richard Gross
ISBN 0-340-64762-0 (3rd edition)

There is some good stuff in here, though there is obviously overlap
(and in general less detail) with the more specialised fields.
Pretty sure this has been updated.

Cognitive Psychology
Michael W. Eysenck, Mark T. Keane
ISBN 0-86377-551-9 (4th edition)

Essential to have around, but not the most interesting. That is
probably unfair, though - I am very sceptical of pure cognitive
theories that have no link back to neurology.

There is (at least) a fifth edition now.

Abnormal Psychology
Ronald J. Comer
ISBN 0-7167-4083-4 (4th edition)
ISBN 0-7167-2494-4 (2nd edition)

The second edition was excellent, and still stands as my first point
of reference for abnormal psychology stuff - though sadly its stuff
on autistic spectrum disorders is pretty poor. The fourth edition is
updated a bit here and there, but the main change seems to be to
remove as much detail as possible.
With social psychology and abnormal psychology in particular, there
are plenty of other choices and it's always good to have more than one
perspective.
Obviously there is a big dose of appealing to authority in this list,
but if you are really interested in what science is revealing about
the mind and you can spare the money and (much more significantly) the
time, there is a lot of good stuff in that list. Obviously the
textbooks would wait until you're *really* taking it seriously.
I'll be very brief with the rest...
'Pain' and 'pleasure' are actually quite high level concepts, things

If you experience pleasure only as a high level concept, you are missing something ;-)


I think you're confusing 'high level' with 'abstract'. Emotional
experience is very high level. As Joseph LeDoux is keen on pointing
out, our experience of emotions is not the same as the processing that
generates them. Most of the processing is relatively low level, in the
limbic system and brain stem, but that is not the same as the
experience of emotion which occurs in the prefrontal cortex.

The prefrontal cortex does seem to be the center of rational thought
and planning, but that is a long way from being its whole job.

You will find it hard to find someone who knows more about neurology
than LeDoux - if your first impression is that his books are pop
science, try counting the references to his papers in the textbooks.

Which is not to say that his books aren't readable, I must emphasise.

Anyway, it is quite possible that I am missing something in
pain/pleasure terms - but how can I know for sure? For all I know,
everyone feels pain the same as me but you're all a bunch of whiners
and crybabies ;-)
'Ugly' and 'dumb' are themselves only subjective perceptions. If you

I disagree with the 'only' part ;-)


OK - I take the point.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #303
Stephen Horne
The brain really does change during sleep. In a way, you literally are
not the same person when you wake up as when you went to sleep.
Some ways are more meaningful than others. I obviously
breath, providing oxygen to burn parts of my body and thus
provide energy to survive. Therefore I 'literally' am not the same
person. Similarly, the me of time T experiences events which
happened at time T-1, which the me of time T-2 does not know,
so the state of my neurons and muscles has changed.

Is that a useful definition of "me"? In some cases, yes. In most cases, no.
And yet consciousness itself changes dramatically, sometimes from
second to second, and we still feel a sense of continuity.

How can we explain this sense of continuity?

One way is to appeal to the 'god of the gaps' - to claim some
currently unknown entity which we cannot yet observe must provide the
real experience of consciousness.

The other way is to look at the pragmatics of social behaviour. A huge
amount of social behaviour depends on that sense of continuity - that
feeling that (me today) == (me tomorrow) == (me yesterday).
Only two ways? Perhaps there just are no gaps? Consider
evolution. We do not have every fossil for every ancestor of,
say, a horse, so there are always gaps in the record (what is
the gap between you and your father?) But the prediction that
there is something in between has fit nicely with both traditional
cladistics and sequence-based phylogenies.

Plus, you are shaking the wrong end of the stick. The opposite
is likely more true; people have a strong sense of continuity so
we have developed social structures which reflect that. If,
for example, personalities were randomly exchanged during
sleep then we would have ended up with a quite different
culture.

(I'm an avid science fiction reader but I can't recall a
story built around this. There are stories where one or a
small number of people change bodies, but not the whole
culture. It would be even harder if the full personality of a
body changed every day to a new one -- how would we,
as personality-centered creatures, identify with a being
which every day had a new personality? The closest I
can think of is Vinge's packs in 'A Fire Upon the Deep',
where the personality can change as pack members join
and leave the pack, or Brin's ... the waxy ring people in
the Uplift stories.. where each ring contributes to the
full personality and, ahem, one ring can rule them all.)
Having a sense of continuity of self is, in other words, pretty
directly linked to the practical information processes involved in
maintaining a reputation and social status.
Again, I'll argue it's the other way around.
Yes, and so long as you can keep your theory in the gaps and well away
from scientific probing, no-one can prove it wrong, can they?
You suggested people fill in the gaps of consciousness.
You said they exist because, for example, electrical stimulation
of part of the brain would cause movement, and asking the person
why that movement occurs would get a post hoc reason along
the lines of "well, I wanted to move my arm."

That doesn't necessarily mean that there was no consciousness,
only that parts of actions are always not under direct conscious
control. Stimulate my eye lids to blink and ask me why I blinked.
I'll say "because my eyes were dry?" That doesn't mean that
there was no sense of consciousness at that time.

Or ask someone to stop tapping a pencil and that person
might respond with "I'm sorry, I didn't realize I was doing
that."

Or when I drove across the country alone you might ask me
what I did during that time and I'll respond "I don't remember;
just zoned out." Was I not conscious during that time or
did I simply decide it wasn't worth remembering?
Lets consider the 'alters' in multiple personality disorder. This has
been studied under a brain scanner. When alters switch, the activity
in the hippocampus changes dramatically. If the 'primary' personality
is active, the hippocampus is more active.
This work is deeply conjectural. I have no training at all in the
subject (err, I do know some of the theory behind brain scanning)
but I just recently read a "Straight Dope" article
http://www.straightdope.com/columns/031003.html
which says

Multiple personality disorder, now officially known as dissociative
identity disorder (DID), remains the object of bitter controversy.
One thing's clear, though--it's not nearly as common as people
thought just a few years ago.
...
The question remains: Are multiple personalities ever real? The
debate still rages. Skeptics claim that alters are invariably
induced by the therapist; the more respectable defenders of
DID agree that many are, but not all. The controversy has been
complicated by disagreement over the nature of personality. The
common understanding of DID is that the alters are independent
of one another and don't share memories and other cognitive
processes, but demonstrating this has proven difficult. Speech and
behavior are under conscious control, so changes can readily be
faked. Even things like brain-wave patterns may vary not because
of a genuine personality switch but because alleged alters cultivate
different emotional states and different ways of acting out.

(Note those last two lines ;)
Does this really satisfy your need for the mind to be something more
than information processing?
One interpretation of Bengt's statements is that this higher-level
structure may be modeled in its own right, like phonons, or
Cooper pairs in superconductors, or boolean logic gates created out
of analog silicon with impurities, or the Great Red Spot on Jupiter.

This doesn't make them 'something more', only something distinct.
And there's no reason these can't be studied with an information
processing model as well.

(However, my reading suggests that his statements are too
vague to know which interpretation is correct, and I'm leaning
towards agreeing that he meant what you thought it to mean.)
The information processing theory can take MPD in its stride, of
course. If you keep your memories in several self-consistent but
distinct chunks then, even if the same neural machinery handles the
processing, you get a different 'self' by simply switching which sets
of memories are active.
But if MPD really is as rare as Uncle Cecil says, then
informational processing theory is not a useful *predictor* of
human behaviour, because it makes no statements about
the likelihood of a given event. Why don't I have acrophobia
today, arachnophobia tomorrow, and agoraphobia the next?
Why doesn't everyone have strong MPD?

And a requirement for being a useful theory is that it be
able to make predictions.

(Off this off-topic thread; in 'Call of Cthulhu' as I recall there
was a Latinate word for 'the fear that gravity would reverse'.
I can't find a reference to it. Any pointers?)
Basically, as I said before, you seem to be appealing to the 'god of
the gaps'. You find the idea of the mind as an information processor
unapalatable, but - as with many people who claim not to be religious
per se - you don't want to appeal to an actual god or soul. So instead
you appeal to an undiscovered type of physics.
Suppose some day there is artificial intelligence which requires
so much computer power that it takes in 1 second of input then
spends 10 minutes processing it. That makes for very large gaps.
From an information processing model, these gaps do not exist
because time does not come into play in those models (that I've
seen). But it does occur and does not require a 'god of the gaps'.

(Again, I suspect that I am arguing a different viewpoint than
Bengt.)
Can I explain why people shouldn't be comfortable with being a part of
the physical world, and of having minds that are explainable in
information processing terms? Obviously there's the sense of agency
and illusion of free will, but while they might explain a
misinterpretation they don't explain a continuing strong conviction,
so what else?
What's information? The definitions I know of come from Shannon,
the definition of entropy in thermodynamics, and the surface area
of the event horizon of a black hole, and as I recall it's philosophically
appropriate to wonder if (or that) they are the same thing.

How then do you turn these definitions into a useful model for
intelligence? I suspect you do so by first assuming a boolean
algebra. It requires a lot of entropy to process one bit (because
of the need for certainty), so you are already using a higher
level approximation of the underlying physics.

And as Michele pointed out, some things can be explained well
with a higher level field equation which does not accurately
model the smaller scale behaviour.
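
To make the Shannon definition concrete, here is a minimal Python
sketch; the coin probabilities are made up, and the last lines quote
the standard Landauer bound of k*T*ln(2) joules per erased bit, just
to put a physical scale on "processing one bit":

    import math

    def shannon_entropy(probs):
        # H = -sum(p * log2(p)), in bits, for a discrete distribution.
        return -sum(p * math.log(p, 2) for p in probs if p > 0)

    print(shannon_entropy([0.5, 0.5]))  # fair coin: 1.0 bit per toss
    print(shannon_entropy([0.9, 0.1]))  # biased coin: ~0.47 bits per toss

    # Landauer's bound: erasing one bit costs at least k*T*ln(2) joules.
    k_B = 1.381e-23                     # Boltzmann constant, J/K
    print(k_B * 300 * math.log(2))      # ~2.9e-21 J per bit at room temp
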
There is a concept called the 'expanding circle' which relates to who
is and who is not considered a 'person'. I put that in quotes because
it is far from literal - humans are often excluded, and (particularly
in modern times) animals etc are very often included.

Basically, it refers to an intuitive sense of who has rights, who you
can empathise with, etc. When you can torture and kill 'the enemy'
without being traumatised yourself (assuming you are not simply a
sociopath) it is a clear sign that 'the enemy' are outside of your
circle, for instance.
It is difficult for me to throw a book away, break its spine, or
otherwise destroy it. I wanted to get rid of a few and ended up
telling a friend of mine to do it for me, because I couldn't do it
myself. These books were given to me by family, but I was
not going to reread them. Does that make the books a 'person'?
Does that extend my relative's personhood into my books?

Suppose someone is told to destroy a work of art which took
that person 10 years of devotion to create. It's likely that
that would cause trauma. Does that make the work of art
a person to that artist?
This intuitive sense has clear practical evolutionary value - being
willing to kill others in your tribe without batting an eyelid would
obviously not suit a social species, yet it would be hard to carry out
intertribal warfare if you empathised with your enemies. And this is
complicated by the fact that it seems tribes did make both short and
longer term alliances - you couldn't rely on an individual being a
stranger, but needed to make more flexible classifications.
You have a naive view of what sociobiology might do,
biased no doubt by being brought up in this culture.

The restriction for 'survival of the fittest' is to increase the chances
that your genes will be propagated. There's no reason that
cannot happen in a social species. Newly dominant gorillas,
as I recall, will kill the infants which aren't theirs. (No references
though; I should reread my Gould books.)

And in Jared Diamond's "Guns, Germs and Steel" he
mentioned a woman he met in New Guinea, from a tribe
only recently out of the hunter/gather stage whose 2nd
husband was killed by his brother, so that he could be
her 3rd husband.

Or consider Shakespeare's Macbeth, where Macbeth killed
the king to become the new king. We don't
immediately and instinctively reject that as a condition which
cannot occur under human relationships, meaning that it
isn't prohibited by a million years of Darwinistic evolution.

Consider the Medici or Borgia families, or for
that matter much of the ruling Europeans. Just how
many of them died at the hands of family rather than from
outsiders? (Or consider Prince Caspian from the
Narnia book of the same name. :)

Wild dogs are social creatures. This page
http://www.szgdocent.org/aa/a-wildog.htm
says that "wounded members have been known to
be killed by their pack" which is in contradiction to
your direct statement that "being willing to kill others
in your tribe without batting an eyelid would obviously
not suit a social species."

In the Bible, Abraham almost sacrificed his son Isaac,
http://en.wikipedia.org/wiki/Near_sacrifice_of_Isaac
and of course Cain killed his brother Abel.

There are plenty of cases where one family member
killed another, making it hard to argue that the
prohibition comes from evolutionary reasons.

Plus, as a male it is evolutionary advantageous to
impregnate non-tribal women instead of killing
them, so you'll need to modify that clause as well.
This isn't the only intuitive classification the mind makes, of
course. There appear to be clear intuitive distinctions between
animals, plants, and inanimate objects for instance. These
distinctions seem to be innate and instinctive, though there is
flexibility in them.
That's very culturally biased. Consider some Native American
languages which have different conjugations for things which
are alive vs. things which are not. (I think Navajo is one such.)
As I recall, clouds are alive.

In any case, that intuition breaks down because some things
are not animals, not plants, and not inanimate. Eg, for living
things we must also include bacteria and archaebacteria;
there are also viruses in the grey area. For animate vs.
inanimate, when does an animate volcano become an
inanimate mountain? Is the Moon animate? What about
a waterfall? A tornado?
If these are general intuitive principles, it is no surprise that when
you introspect you find it hard to accept that your mind is not
distinct from an information processing machine. Your mind naturally
classifies your self in the intuitive category of 'person'.
Again, a culture bias. It's hard for many people to accept
that they are animals, or that humans and other hominids had
a common ancestor. Yet so far there is much more evidence
for evolution than there is that an "information processing
theory" gives a similarly useful model for understanding
the brain.

To expand upon that, as a good reductionist and (ex-)physicist,
I think in principle the human body, including the brain, can
be modeled from first principles using quantum mechanics.
However, that cannot be done in realistic time so we must use
approximations, and those approximations may be extremely
good (as in 99.99%+) because the higher levels have an
appropriate 'field theory'. Eg, weather prediction doesn't
require an atomic theory.

You have postulated such an "information processing theory"
but not stated what that theory means .. and given yourself
an out by saying that it can be expanded to include new
behaviour. Without explanation, it's hard to judge if your
idea is correct or not. Without explanation, I can see that
"information processing theory" is identical to the laws of
thermodynamics (where entropy == information) and then
point out that you are omitting many parts of physics.
Basically, it would be surprising if most people didn't resist the
idea of being, in principle, little different to a computer.
Basically, it would be surprising if most people didn't resist the
idea of being, in principle, little different to another person. Or
little different to a bonobo, or little different to a dog,
or little different to yeast. But under some definitions these
are little enough.
As for me, well perhaps those classification systems don't work so
well in people with autism spectrum disorders. Maybe that is why we
are overliteral in applying moral rules as well as other areas - we
don't really have the same intuitive sense that everyone else has of
the differences in how people, animals and inanimate objects should be
treated.
But those classifications systems are invalid, and the determination
of the boundaries between classes are culturally determined.
(Hence the rejection of the idea of a platypus when it was
first presented in Europe, but the full acceptance of it now)

I suspect it's more a matter of not knowing enough about
animal behaviour and evolutionary biology. You may want to
read some of the Gould books. There are several others I
could suggest, but it's been about 10-15 years since I read
them and I can't recall them now.
Maybe that is a part of the 'theory of mind' issue.
That too, but you are using evolutionary arguments
and need a better background in evolution both theoretical
and as observed. (Eg, insects pull off just about every
evolutionary trick in the book, and bacteria can pull off almost
all of the rest. Humans, though, have learned a few new
ones ;)
So when I introspect, it doesn't particularly bother me to see myself
as a (broken) machine designed and built to carry genes from one
generation to the next, and neither does it particularly bother me to
see my mind and consciousness as a product of information processing.
"A chicken is an egg's way of making more eggs."

Again, the problem with both those views is that they don't
provide much predictive power. I have no problems seeing
myself as a self-organized meta-stable collection of atoms
living in a collapsing wave function, but for just about everything
I do that knowledge doesn't help me much.

Oh, and the definition of "broken" in an evolutionary sense means
your genes aren't passed on -- even if you don't have children,
if you help ensure that two or more of your nieces and nephews
do reproduce then you are an evolutionary success. Plus, you
should drop the "designed" because that implies a designer.
Saying "as a machine to carry genes" would be just fine.
To put it another way, when looking at a picture you may feel that you
are aware of the whole picture, but you are not. Big changes can be
made to the picture without you noticing, as long as they don't affect
the precise detail that you are focussing on at that precise moment.
It is called change blindness.
Here's a neat Java-based demo of change blindness.
http://www.usd.edu/psyc301/ChangeBlindness.htm

One of them (with the sphinx) I got right away. Another, with
the couple eating dinner, I didn't get even though I was
looking at the spot that changed -- I just thought "something
changed... but what?" And with the cemetery one I figured
it out because the scene was wrong.

But why deal with change blindness? Close one eye.
Do you see your blind spot? (I spent hours in high school
practicing to see it. I think I annoy optomitrists when I
say 'and now the pointer has entered my blind spot' ;)
A big part of the sense that something is 'conscious' is actually
illusory in itself - things that can be immediately brought into
consciousness tend to be classified by the mind as currently conscious
because, in the practical sense, they may as well be. So with change
blindness, you 'experience' the whole picture mainly because you
intuitively know that you can focus on any part of it in an instant.
What you say doesn't necessarily follow from that. Consider
this image processing model. The eye does image processing
which is not under direct conscious control. This reduces the
scene into a simplified model, and access the brain's internal
model of how things should be to fill in details, like fill in the
blind spot. We do have some ability to influence things, but
it isn't under full control.

We are conscious of this model, but don't realize that it's
only an approximation to the actual data coming in. Isn't
this just as valid as your description, but leading to a different
conclusion?

(What I did here was pull a Chinese room on the problem,
and redefine that consciousness takes place after image
processing. Personally I'm fine with saying that part of my
consciousness exists at times even outside my head, as with
a dog to wake me up if there are intruders, or even other
people, to provide insight I find hard on my own.)
The 'whole soup experience' is basically an illusion, or maybe a
'collective noun' for the lower level perceptions that you had (or
didn't have) while eating the soup.
A problem I have with your definition is that you can say "illusion"
but no way to define what anything besides an illusion would be. That
makes the word useless in terms of descriptive power; you could
just as easily use the word "interpretation", which doesn't have
the connotations that it's false. Plus, if you push "illusion" too much
you end up in solipsism, which is just plain boring.
Pick up a pencil and run it over a rough surface. Most people will
quite quickly start experiencing the bumps as being at the tip of the
pencil, as if the pencil were an extension of the body.

Phantom limbs also have some significance to this.
The pencil *is* an extension of the body, no? So is putting gloves
on. Or driving a car. I'm having a problem with your use of
the phrase "as if".

In your list of significance, consider also prosthetic limbs,
and limbs which have "fallen asleep."
Basically, there is nothing here which I don't see as information
processing.
What in the universe *don't* you consider as information processing?
Why not?

Eg, if information is entropy then everything in the Universe
is information processing, meaning your model has no more
predictive power than thermodynamics.
You might like to read some Rita Carter stuff - this is my easiest
reference because, as I mentioned before, I'm currently reading her
book "consciousness".
Based on the statements you've made, I distrust what you've
been reading, or at least your interpretation of that data. So I was
biased in my search and found a page critical of one of her books, at
http://human-brain.org/mapping.html
It says of her some of the same things I've complained about here,
like a problem with definitions, and a tendency to give only one
interpretation when many are possible and equally likely. This
may be okay for a general science book, on the justification
that it provides a possible world view, but it doesn't mean that
that's correct.

(Ain't 2nd hand put downs fun? :)
Body language is quite distinct from the kind of gesture that occurs
in sign language. As I understand it, most body language is not
generated by Broca's area.
I personally don't know. I know evolution much better than I
do neural models. Google found
http://cogweb.ucla.edu/ep/GestureLanguage.html
] Brain imaging has shown that a region called Broca's area, which
] is important for speech production, is active not only when we
] speak, but when we wave our hands. Conversely, motor and
] premotor areas are activated by language tasks even when those
] tasks have been robbed of their motor aspect--during silent
] reading, for instance--and particularly by words that have a
] strong gestural component, such as verbs and tool names.
]
] Impairments of language and coordination are closely related,
] too. People with a condition called Broca's aphasia can put
] names to things but have trouble stringing sentences together.
] They show similarly impoverished gestures, indulging less in
] hand movements that relate to the flow or rhythm of the
] accompanying speech than to those that convey its content.
Also, the use of muscles isn't what this is about. Even when you hear
a word, or think about the concept, your brain generates the
associated action potentials - but inhibits the actual muscular
actions.
As far as I can tell, you haven't said what an 'action potential' is.
I assumed it was something related to synapse signalling, which
the brain must do to work. Now it appears to be something else
because some words, like "star" or "python programming
language" or "tastes of anise" have no corresponding muscular
actions.
This makes good sense. Evolution always works by modifying and
extending what it already has. The mental 'vocabulary' has always been
in terms of the bodily inputs and outputs, so as the capacity for more

"Always" is a way to prune your thought tree. Careful ;-)


It is also very much the right word in this case. Sophisticated useful
abilities in complex organisms do not arise through fluke mutations,
much as broken fragments of china do not suddenly join themselves into
a perfect vase and leap up onto your wobbly shelf. It's called the law
of entropy.


There are too many things going on here. First, assume the
Universe is a closed system. Evolution works in the Universe, so
of course it always 'modifies and extends' things in the Universe --
it cannot do otherwise.

Second, 'it' can mean a single organism or species, when things
like gene transfer between bacteria show that evolution doesn't
quite fit that nice categorization.

Third, 'evolution' doesn't modify things, things like errors during
{D,R}NA copying, chemical mutagens, and radiation modify things.
Evolution describes how the local ecological landscape can have
different effects on different mutations, causing speciation
or extinction. This is less personified than your usage.

And finally, YES MOST DEFINITELY "sophisticated useful
abilities in complex organisms DO arise through fluke mutations."
THAT IS A FUNDAMENTAL IDEA OF EVOLUTIONARY
THEORY!

Random mutations + ecological pressures => evolution

The reference to entropy (by which you likely mean the
second law of thermodynamics) is for a closed system
and a TOTALLY INAPPROPRIATE METAPHOR.
Locally we have this thing called the Sun, which makes
the Earth be an open system.

(My apologies for the yelling, but I wanted to get my point
across that what you said was absolutely counter to
evolutionary theory.)
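
A toy illustration of "random mutations + ecological pressures" in
Python - everything here (the genome, the fitness function, the
parameters) is arbitrary, a sketch of the selection argument rather
than of any real biology. Undirected copying errors plus selection
climb steadily to a state that no single fluke would produce:

    import random

    random.seed(0)
    GENOME_LEN, POP_SIZE, MUTATION_RATE = 32, 100, 0.02

    def fitness(genome):
        # Arbitrary 'ecological pressure': more 1-bits is better.
        return sum(genome)

    def mutate(genome):
        # Undirected copying errors: each bit flips with small probability.
        return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

    population = [[0] * GENOME_LEN for _ in range(POP_SIZE)]
    for generation in range(200):
        # Selection: the fitter half leaves two mutated offspring each.
        population.sort(key=fitness, reverse=True)
        population = [mutate(p) for p in population[:POP_SIZE // 2]
                      for _ in range(2)]

    print(max(fitness(p) for p in population))  # near 32, up from 0
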
Evolution doesn't throw away what it already has and start again from
scratch. It seeks local maxima - it cannot anticipate long term
advantages.


Stop anthropomorphising.

Abilities are definitely lost due to evolution, unless you can
indeed breathe underwater?

As to "start again from scratch", I don't know enough give
a good example, but what about dolphins? They, like us, were
derived from fish, but lost many of the seagoing abilities
of a fish, which have now been regained. It cannot be that
these were all done in the same way, because dolphins
are not fish, and it's almost certain that there was parallel
evolution for some of the traits.

(I also seem to recall some legless lizards first losing then
regaining then losing legs, but again I don't know enough
of the details.)

What to you would qualify as "starting from scratch"?
Here's one theoretical model:

the parent has gene X which undergoes a gene duplication
event, so there are now two copies of X.

X is important so one of those genes remains unmodified
while the other is free to mutate into Y.

While once useful, Y is no longer needed, and when a
retrovirus comes in it inserts itself into Y with no ill
effects.

Conditions change again so that the X->Y transition
is helpful and preferred.

During that time, X undergoes another duplication event
and the new X is free to follow evolutionary pressures
to be a new version of Y.

I see no reason this couldn't happen, and it's exactly
the case where a gene was thrown away then recreated.
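
That model is concrete enough to write down as an event log; the
following Python sketch is purely schematic (the gene names and event
labels are illustrative, not biology):

    # Schematic trace of the duplication scenario above.
    events = [
        ("start",               ["X"]),
        ("duplication",         ["X", "X"]),
        ("copy 2 drifts",       ["X", "Y"]),
        ("retroviral insert",   ["X", "Y:disabled"]),
        ("second duplication",  ["X", "Y:disabled", "X"]),
        ("copy 3 re-evolves Y", ["X", "Y:disabled", "Y"]),
    ]
    for label, genome in events:
        print("%-20s %s" % (label, genome))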

Andrew
da***@dalkescientific.com
Jul 18 '05 #304
