Bytes | Developer Community

BIG successes of Lisp (was ...)

In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
successes.

a. orbitz.com web site uses Lisp for algorithms, etc.
b. Yahoo store was originally written in Lisp.
c. Emacs

The issues with these will probably come up, so I might as well
mention them myself (which will also make this a more balanced
post).

a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while)

b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)

c. Emacs has a reputation for being slow and bloated. But then
it's not written in Common Lisp.

Are ViaWeb and Orbitz bigger successes than LaTeX? Do they
have more users? It depends. Does viewing a PDF file made
with LaTeX make you a user of LaTeX? Does visiting Yahoo
Store make you a user of ViaWeb?

For the sake of being balanced: there were also some *big*
failures, such as Lisp Machines. They failed because
they could not compete with UNIX (SUN, SGI) in a time when
performance, multi-userism and uptime were of prime importance.
(Older LispM's just leaked memory until they were shut down,
newer versions overcame that problem but others remained)

Another big failure that is often _attributed_ to Lisp is AI,
of course. But I don't think one should blame a language
for AI not happening. Marvin Minsky, for example,
blames Robotics and Neural Networks for that.
Jul 18 '05
"Rainer Deyke" <ra*****@eldwood.com> writes:
I'm all for more elaborate gc schemes. In particular, I want one
that gives me guaranteed immediate destructors even in the presence
of reference cycles. And I want the language specification to
guarantee it.
It's difficult to even put a requirement for "gc" in a language
_specification_ let alone one that _guarantees_ this sort of thing.

Having to explicitly close files is something I'd expect in C or
assembly, but it has no place in high-level languages. And
"with-open-file" is too limiting. Let the computer do the work, not
the programmer.


What's limiting about it? Or are you saying you want the computer (or
I suppose the language spec) to somehow also open your files for you
as well? What would that even mean? If you don't require that, then
you have to explicitly ask for the file to be opened. Well, that's
what with-open-file is.

/Jon
Jul 18 '05 #201
"Rainer Deyke" <ra*****@eldwood.com> writes:
I'm all for more elaborate gc schemes. In particular, I want one
that gives me guaranteed immediate destructors even in the presence
of reference cycles. And I want the language specification to
guarantee it.

Having to explicitly close files is something I'd expect in C or
assembly, but it has no place in high-level languages. And
"with-open-file" is too limiting. Let the computer do the work, not
the programmer.


This is the problem with languages that are part excellent and part
work in progress (read: crap). You learn to trust the language from
using the excellent parts (which tend to be the basic things you first
encounter), and when you eventually stumble upon the not-so-great
parts you trust those parts, too. It's extremely unfortunate, IMHO.

--
Frode Vatvedt Fjeld
Jul 18 '05 #202
Stephen Horne <st***@ninereeds.fsnet.co.uk> wrote:
Perception is not reality. It is only a (potentially flawed)
representation of reality. But reality is still real. And perception
*is* tied to reality as well as it can be by the simple pragmatic
principle of evolution - if our ancestors had arbitrary perceptions
which were detached from reality, they could not have survived and had
children.


If we had the same perceptions about reality as our ancestors, we
wouldn't reproduce very successfully now, because the world has
changed as a result of their reproductive success.

Anton
Jul 18 '05 #203
Jon S. Anthony wrote:
"Rainer Deyke" <ra*****@eldwood.com> writes:
I'm all for more elaborate gc schemes. In particular, I want one
that gives me guaranteed immediate destructors even in the presence
of reference cycles. And I want the language specification to
guarantee it.


It's difficult to even put a requirement for "gc" in a language
_specification_ let alone one that _guarantees_ this sort of thing.

Having to explicitly close files is something I'd expect in C or
assembly, but it has no place in high-level languages. And
"with-open-file" is too limiting. Let the computer do the work, not
the programmer.


What's limiting about it?


Unless I've completely misunderstood the concept, "with-open-file" requires
me to structure my program such that the file is opened at the beginning and
closed at the end of a block of code. In the absence of coroutines or
threads (both of which would significantly complicate the structure of my
code), this implies that the lifetime of open files is strictly
hierarchical: if file 'a' is opened while file 'b' is open, then file 'b'
must be closed before file 'a' is closed. More generally, it implies that
the file is opened at the beginning of a natural code block (instead of, for
example, on first use) and is closed at the end of the same block (instead
of, for example, when it is no longer needed).
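[Editor's note: Rainer's nesting constraint is easy to see in modern Python, whose `with` statement postdates this thread but expresses exactly the block-scoped lifetime he is describing. A minimal sketch, using throwaway temp files:

```python
import os
import tempfile

# Two scratch files standing in for the files 'a' and 'b' in the post.
d = tempfile.mkdtemp()
path_a = os.path.join(d, "a")
path_b = os.path.join(d, "b")
for p in (path_a, path_b):
    open(p, "w").close()

# Strictly hierarchical lifetimes: the inner file's block must end
# (closing it) before the outer block ends.
with open(path_a) as a:
    with open(path_b) as b:
        assert not a.closed and not b.closed
    # b's block has exited: b is closed, a is still open.
    assert b.closed and not a.closed
assert a.closed
```

The names remain bound after each block exits; only the streams are closed, which is what makes the nesting observable.]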
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #204
Bjorn Pettersen <bj*************@comcast.net> writes:
Raymond Wiker <Ra***********@fast.no> wrote:
"Rainer Deyke" <ra*****@eldwood.com> writes:
Personally I'd prefer guaranteed immediate destructors over
with-open-file. More flexibility, less syntax, and it matches what
the CPython implementation already does.


Right... all along until CPython introduces a more elaborate
gc scheme.


... which is highly unlikely to happen without preservation of reference
counting semantics for files... google if you're _really_ interested :-/


I hope you put assertions in your code so that it won't run under
Jython, because that's the kind of insidious bug that would be
*awful* to find the hard way.

(I really don't understand why you wouldn't just want at least real
closures, so you can just use call_with_open_file, and not have to
worry about what GC you're using when *opening* *files*)

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
Jul 18 '05 #205
"Rainer Deyke" <ra*****@eldwood.com> writes:
Jon S. Anthony wrote:
"Rainer Deyke" <ra*****@eldwood.com> writes:
I'm all for more elaborate gc schemes. In particular, I want one
that gives me guaranteed immediate destructors even in the presence
of reference cycles. And I want the language specification to
guarantee it.


It's difficult to even put a requirement for "gc" in a language
_specification_ let alone one that _guarantees_ this sort of thing.

Having to explicitly close files is something I'd expect in C or
assembly, but it has no place in high-level languages. And
"with-open-file" is too limiting. Let the computer do the work, not
the programmer.


What's limiting about it?


Unless I've completely misunderstood the concept, "with-open-file"
requires me to structure my program such that the file is opened at
the beginning and closed at the end of a block of code.


I don't know about "completely" but you've certainly misunderstood an
important point. Yes, WITH-OPEN-FILE attempts to open a file, executes
a block of code with the open stream bound to a variable, and then
closes the stream when the block is exited, even if something goes
wrong such as a condition being signaled.

But--and this is the bit I think you may have missed--if that's *not*
what you want you don't use WITH-OPEN-FILE. Lisp also provides OPEN
and CLOSE, that act as you'd expect and allow you to explicitly
control the lifetime of the file stream.

Which gets us back to Pascal's original point--WITH-OPEN-FILE is a
macro that abstracts a set of operations in a concise way. (In
addition to ensuring that the file is closed, it also allows us to
easily express what to do in certain situations while trying to open
the file--what to do if the file already exists or doesn't exist,
etc.)
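[Editor's note: Peter's distinction between the scoped macro and the explicit OPEN/CLOSE pair maps directly onto later Python. A sketch of both styles (the `with` statement arrived in Python after this thread):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "scratch.txt")

# Analogue of WITH-OPEN-FILE: the stream is closed when the block
# exits, even if an exception is raised inside it.
with open(path, "w") as f:
    f.write("hello\n")
assert f.closed

# Analogue of plain OPEN and CLOSE: the programmer controls the
# stream's lifetime explicitly.
f = open(path)
try:
    data = f.read()
finally:
    f.close()
assert data == "hello\n"
```
]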

-Peter

--
Peter Seibel pe***@javamonkey.com

Lisp is the red pill. -- John Fraser, comp.lang.lisp
Jul 18 '05 #206
"Rainer Deyke" <ra*****@eldwood.com> writes:
Jon S. Anthony wrote:
"Rainer Deyke" <ra*****@eldwood.com> writes:
I'm all for more elaborate gc schemes. In particular, I want one
that gives me guaranteed immediate destructors even in the presence
of reference cycles. And I want the language specification to
guarantee it.
It's difficult to even put a requirement for "gc" in a language
_specification_ let alone one that _guarantees_ this sort of thing.

Having to explicitly close files is something I'd expect in C or
assembly, but it has no place in high-level languages. And
"with-open-file" is too limiting. Let the computer do the work, not
the programmer.


What's limiting about it?


Unless I've completely misunderstood the concept, "with-open-file" requires
me to structure my program such that the file is opened at the beginning and
closed at the end of a block of code.


No it does not require this - it is what you use when this is the
natural expression for what you would want.
In the absence of coroutines or threads (both of which would
significantly complicate the structure of my code), this implies
that the lifetime of open files is strictly hierarchical: if file
'a' is opened while file 'b' is open, then file 'b' must be closed
before file 'a' is closed.
No, even within the with-open-file of b you could close a whenever or
wherever you wanted. The with-open-file for a just guarantees certain
things when you leave the scope.
More generally, it implies that the file is opened at the beginning
of a natural code block
Well, the block that it itself opens.

and is closed at the end of the same block.
Is guaranteed to be closed. You could close it before this if that
made sense.

(instead of, for example, on first use) ... (instead of, for
example, when it is no longer needed)


OK. If your code needs to be more flexible in this, you would use
OPEN and/or CLOSE. If you had a particular open/close paradigm in
mind you could then write an abstraction (macro) akin to
with-open-file which provides behavior like what you say here and it
would generate the calls to open and close. I'm sure there are
scenarios where this might not be "reasonably doable".

/Jon
Jul 18 '05 #207
In article <rd********************************@4ax.com>, Stephen Horne
<st***@ninereeds.fsnet.co.uk> writes
On Fri, 24 Oct 2003 16:00:12 +0200, an***@vredegoor.doge.nl (Anton
Vredegoor) wrote:
.......Perception is not reality. It is only a (potentially flawed)
representation of reality. But reality is still real. And perception
*is* tied to reality as well as it can be by the simple pragmatic
principle of evolution - if our ancestors had arbitrary perceptions
which were detached from reality, they could not have survived and had
children.

......
It is a commonplace of developmental psychology that the persistence of
objects is learned by children at some relatively young age (normally 3
months as I recall). I assume that children learn the persistence of
hidden objects by some statistical mechanism ie if it happens often
enough it must be true (setting up enough neural connections etc).

Would the reality of children subjected to a world where hidden objects
were somehow randomly 'disappeared' be more or less objective than that
of normal children?

Unlucky experimental cats brought up in vertical stripe worlds were
completely unable to perceive horizontals so their later reality was
apparently filled with invisible and mysterious objects. I can't
remember if they could do better by rotating their heads, but even that
would be quite weird.

I think it unwise to make strong statements about reality when we know
so little about it. Apparently now the universe is 90-95% stuff we don't
know anything about and we only found that out in the last 10 years.
--
Robin Becker
Jul 18 '05 #208
Peter Seibel wrote:
But--and this is the bit I think you may have missed--if that's *not*
what you want you don't use WITH-OPEN-FILE. Lisp also provides OPEN
and CLOSE, that act as you'd expect and allow you to explicitly
control the lifetime of the file stream.


Which leads us back to having to manually close files.

I DON'T want to manually close files. I DON'T want to deal with the
limitations of with-open-file. And, here's the important bit, I DON'T WANT
TO COMBINE OR CHOOSE BETWEEN THESE TWO METHODS, BOTH OF WHICH ARE FLAWED.

What I want is to open a file and have it close automatically when I am
done with it. I can do that in C++. Why can't I do it in Python?
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #209
On Fri, Oct 24, 2003 at 06:09:55PM +0000, Rainer Deyke wrote:
[...] and is closed at the end of the same block (instead of, for
example, when it is no longer needed).


The reasoning behind WITH-OPEN-FILE is that file streams are resources
that should be allocated with dynamic extent, not indefinite. To make
that more concrete, it means that file streams should be closed
immediately after they are no longer being used (much like local
variables in C are immediately destroyed when the function returns).

I have seen debates before that argue that memory should be the only
resource managed by GC, and the opposite view that every resource should
be manageable by GC (I don't think this is quite as feasible, but you can
look up the discussions on google, a year ago on c.l.l or so).

But WITH-OPEN-FILE doesn't prevent you from, say, having a file open the
entire time your program is running. Some people pointed out that you
can use OPEN and CLOSE, but also this would work fine:

(defvar *log-stream*) ;; an unbound special variable

(defun log (fmt-control &rest args)
  (apply 'format *log-stream* fmt-control args))

(defun start (&key (log-file "log"))
  (with-open-file (*log-stream* log-file
                                :direction :output
                                :if-exists :append)
    ;; *log-stream* is bound until this body exits
    (call-other-functions)))
--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #210
On Fri, 24 Oct 2003 20:00:23 +0000, Rainer Deyke wrote:
I DON'T want to manually close files. I DON'T want to deal with the
limitations of with-open-file. And, here's the important bit, I DON'T WANT
TO COMBINE OR CHOOSE BETWEEN THESE TWO METHODS, BOTH OF WHICH ARE FLAWED.

What I want is to open a file and have it close automatically when I am done
with it. I can do that in C++. Why can't I do it in Python?


In C++ you must choose between a local variable for the stream (which is
equivalent to with-open-file) or a dynamically allocated object (which is
equivalent to open & close, where close is spelled delete).

You can't say that you want the C++ way and you don't want explicit open &
close and with-open-file, because they are equivalent! They differ only in
syntax details, but the style of usage and limitations are the same.

Python's only problem is that there is no nice way to pass an
anonymous function to with-open-file, and try:finally: doesn't look so
nice either. I'm not defending Python's lack of large anonymous functions
but the interface of open, close & with-open-file in general, irrespective
of the programming language.
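[Editor's note: the higher-order-function interface Marcin alludes to can be sketched in Python; `call_with_open_file` is a hypothetical helper, not a standard library name. Because Python lacks multi-line anonymous functions, a non-trivial body must be a named function, which is exactly the syntactic cost under discussion:

```python
import os
import tempfile

def call_with_open_file(path, mode, body):
    """Hypothetical with-open-file-style helper: open the file,
    run body on the stream, and close it even on error."""
    f = open(path, mode)
    try:
        return body(f)
    finally:
        f.close()

# A one-expression body fits in a lambda...
path = os.path.join(tempfile.mkdtemp(), "data.txt")
call_with_open_file(path, "w", lambda f: f.write("hello"))

# ...but anything longer needs a named function.
def count_chars(f):
    return len(f.read())

assert call_with_open_file(path, "r", count_chars) == 5
```
]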

In particular you shouldn't wish for GC to find unused files because it's
impossible without sacrificing other GC properties. No language does that
except misused CPython (you shouldn't rely on that).

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #211
On Fri, Oct 24, 2003 at 08:00:23PM +0000, Rainer Deyke wrote:
I DON'T want to manually close files. I DON'T want to deal with the
limitations of with-open-file. And, here's the important bit, I DON'T WANT
TO COMBINE OR CHOOSE BETWEEN THESE TWO METHODS, BOTH OF WHICH ARE FLAWED.
I already discussed this in another post.
What I want is to open a file and have it close automatically when I am done
with it. I can do that in C++. Why can't I do it in Python?


One major difference between C++ and Lisp/Python is that C++ lacks
closures. This means that local variables do not have to be considered
for allocation on the heap. So saying

{
    ofstream f;
    f.open ("output");
    ...
    // I presume ofstream dtor closes file? I'm a bit rusty
}

is okay in C++ to obtain a similar effect as WITH-OPEN-FILE, though
I am not sure how well it does when it comes to abnormal conditions.

In Common Lisp, and presumably now Python, lexical scope combined with
higher-order functions implies that variables may be captured in a
closure and have indefinite extent. In CL, also, the language does not
define memory management; leaving open the possibility for a better
solution than GC. CL's answer to the handling of dynamic-extent
resources is WITH-OPEN-FILE, and also the other operators from which
WITH-OPEN-FILE is defined (such as UNWIND-PROTECT).
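[Editor's note: the capture problem Matthew describes shows up directly in Python; `make_reader` is a hypothetical name used for illustration. A local stream that escapes in a closure acquires indefinite extent, so it cannot be destroyed on function return the way a C++ local would be:

```python
import os
import tempfile

def make_reader(path):
    f = open(path)
    def next_line():
        return f.readline()
    return next_line   # f is captured by the closure and escapes

path = os.path.join(tempfile.mkdtemp(), "lines.txt")
with open(path, "w") as out:
    out.write("first\nsecond\n")

reader = make_reader(path)
# The stream outlives make_reader's activation: a "destroy locals on
# return" rule would already have closed it here.
assert reader() == "first\n"
assert reader() == "second\n"
```
]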

One final question: do you expect C++ to clean up dynamically allocated
ofstream objects, automatically?

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #212
"Rainer Deyke" <ra*****@eldwood.com> writes:
Peter Seibel wrote:
But--and this is the bit I think you may have missed--if that's *not*
what you want you don't use WITH-OPEN-FILE. Lisp also provides OPEN
and CLOSE, that act as you'd expect and allow you to explicitly
control the lifetime of the file stream.


Which leads us back to having to manually close files.

I DON'T want to manually close files. I DON'T want to deal with the
limitations of with-open-file. And, here's the important bit, I DON'T WANT
TO COMBINE OR CHOOSE BETWEEN THESE TWO METHODS, BOTH OF WHICH ARE FLAWED.


Let me see if I get this:

1. You don't want to tell the computer when you are done with the
file.

2. But the computer shouldn't wait until it can prove you are done
to close the file.

3. And you definitely are opposed to the idea of letting the
computer figure it out in general, but giving it hints when
it is important that it be prompt.

Do you want to tell the computer the file name, or should it read your
mind on that as well?

Jul 18 '05 #213
Marcin 'Qrczak' Kowalczyk wrote:
In C++ you must choose between a local variable for the stream (which
is equivalent to with-open-file) or dynamically allocated object
(which is equivalent to open & close, where close is spelled delete).
I can also choose to have the stream reference counted (through
boost::shared_ptr or similar), and I can make it a member variable of an
object (tying the lifetime of the stream to the lifetime of the owning
object).

Now, I *could* write my own reference counting stream wrapper in Python,
together with a with_reference_to_stream HOF. I could, but it would be
expensive, not only in terms of performance, but also in terms of syntax and
mental overhead.
You can't say that you want the C++ way and you don't want explicit
open & close and with-open-file, because they are equivalent! They
differ only in syntax details, but the style of usage and limitations
are the same.


The syntax is actually very significant here. Python's lack of anonymous
code blocks hurts, but even without this, the C++ syntax is more lightweight
and therefore more manageable. Compare the levels of indentation:

void f()
{
    std::fstream a("a"), b("b");
    do_something(a, b);
    std::fstream c("c");
    do_something_else(a, b, c);
}

def f():
    with_open_file(a = open("a")):
        with_open_file(b = open("b")):
            do_something(a, b)
            with_open_file(c = open("c")):
                do_something_else(a, b, c)
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #214
Joe Marshall wrote:
2. But the computer shouldn't wait until it can prove you are done
to close the file.


Wrong. The computer can prove that I am done with the file the moment my
last reference to the file is gone. I demand nothing more (or less).
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #215
In article <rxfmb.19034$Tr4.39346@attbi_s03>,
"Rainer Deyke" <ra*****@eldwood.com> wrote:
Peter Seibel wrote:
But--and this is the bit I think you may have missed--if that's *not*
what you want you don't use WITH-OPEN-FILE. Lisp also provides OPEN
and CLOSE, that act as you'd expect and allow you to explicitly
control the lifetime of the file stream.


Which leads us back to having to manually close files.

I DON'T want to manually close files. I DON'T want to deal with the
limitations of with-open-file. And, here's the important bit, I DON'T WANT
TO COMBINE OR CHOOSE BETWEEN THESE TWO METHODS, BOTH OF WHICH ARE FLAWED.

What I want is to open a file and have it close automatically when I am done
with it. I can do that in C++. Why can't I do it in Python?


None of the above makes a whole lot of sense to me, and judging
by the use of upper case I'm inclined to believe that discussing
this with comp.lang.lisp participants has caused you to become
disturbed. I am accordingly posting this only to comp.lang.python.

You can have files close automatically in Python, but `automatically'
by itself is a rather vacuous term, and `when I am done with it'
doesn't help a bit. In C Python, when a file object is no longer
referenced by any part of the program, it closes. If it's local
to a function, including bound only to a function parameter or some
such thing, that will occur when control returns from the function.

Unless the file becomes involved in a circular reference, in which
case the close will be deferred until the references are discovered
and broken by the garbage collector. Unless the garbage collector
is unable to do so because of some property of the members, such as
a __del__ method.
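[Editor's note: Donn's two caveats are easy to demonstrate. The behavior shown is CPython-specific; `Resource` is a stand-in class with a finalizer, not a real file, and in modern CPython the cycle collector can run `__del__` inside cycles, whereas the versions contemporary with this thread parked such objects in `gc.garbage`, which is the exception Donn mentions:

```python
import gc

closed_log = []

class Resource:
    """Stand-in for a file-like object with a finalizer."""
    def __del__(self):
        closed_log.append("closed")

# Acyclic case: reference counting finalizes the object the moment
# the last reference disappears.
r = Resource()
del r
assert closed_log == ["closed"]

# Cyclic case: reference counting alone never frees the pair; the
# cycle collector has to find them later.
gc.disable()
a, b = Resource(), Resource()
a.partner, b.partner = b, a
del a, b
assert len(closed_log) == 1   # still uncollected
gc.enable()
gc.collect()
assert len(closed_log) == 3   # the collector broke the cycle
```
]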

No doubt there is a great deal of C++ arcana that I have missed out
on, but the basic finalization issues are the same as far as I know.
C++ programmers aren't subject to the above exceptions only because
they don't get any of the functionality to which these are exceptions!
Function local objects are always deleted on return from the function
regardless - and any other part of the program that retains a pointer
to such an object will become unsound. Objects allocated on the heap
have to be deleted explicitly, after which other parts of the program
that reference them will become unsound. If you do a fraction of the
explicit management that C++ requires, you can get reliable finalization.

I am not familiar with with-open-file, but I imagine if you decide to
write your software in Lisp, you will probably want to give it a try.

There is also nothing wrong with closing a file explicitly. There are
reasons for it that have nothing to do with the language, and then there
is a school of thought (actually the party line for Python) that says
you should explicitly close files in any case because the memory
management rules are different for Jython and could in theory
change in a later release of C Python. Apparently Java's finalization
is not immediate.

Donn Cave, do**@u.washington.edu
Jul 18 '05 #216
Donn Cave wrote:
You can have files close automatically in Python, but automatically
isn't by itself a rather vacuous term, and `when I am done with it'
doesn't help a bit. In C Python, when a file object is no longer
referenced by any part of the program, it closes. If it's local
to a function, including bound only to a function parameter or some
such thing, that will occur when control returns from the function.


I understand both the specification and the implementation of finalization
in Python, as well as the reasoning behind them. My point is, I would
prefer it if Python guaranteed immediate finalization of unreferenced
objects. Maybe I'll write a PEP about it someday.
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #217
"Andrew Dalke" <ad****@mindspring.com> wrote in message news:<64*****************@newsread4.news.pas.earthlink.net>...
Kaz Kylheku:
Moreover,
two or more domain-specific languages can be mixed together, nested in
the same lexical scope, even if they were developed in complete
isolation by different programmers.
We have decidedly different definitions of what a "domain-specific
language" means. To you it means the semantics expressed as
an s-exp. To me it means the syntax is also domain specific. Eg,
Python is a domain specific language where the domain is
"languages where people complain about scope defined by
whitespace." ;)


There are two levels of syntax: the read syntax, and a deeper abstract
syntax. People who understand only the first one to be syntax are
scared of syntactic manipulation, because they are used to being
burned by syntax: stupid precedence rules, quirky punctuation,
semicolon diseases, strange whitespace handling, and so on.

A symbolic expression is a printed form which codes for a data
structure. That data structure continues to exhibit syntax, long after
the parentheses, whitespace and other lexical elements are processed
and gone.
Yes, one can support Python in Lisp as a reader macro -- but
it isn't done because Lispers would just write the Python out
as an S-exp.
They would do that in cases when they have to invoke so many escape
hatches to embed Lisp code in the Python that the read syntax no
longer provides any benefit.
But then it wouldn't be Python, because the domain
language *includes*domain*syntax*.

In other words, writing the domain language as an S-exp
is a short cut to make it easier on the programmer, and not
on the domain specialist.
Writing the language in terms of a data structure (which can be
converted back and forth to a printed symbolic expression) is not a
shortcut; it's proper layering at work. The read syntax can be
independently developed; the symbolic expressions merely give you a
generic one for free! Read syntax is just lexical sugar. It can be
quite important. Heck symbolic expressions have enough of it; it would
be a pain to type (QUOTE X) all the time instead of 'X, or
#.(make-array '(4) :element-type 'bit :initial-contents '(1 1 0 1))
instead of #*1101!
Can you use that syntax in a Python source file and have
it processed together with normal code?


Did you look at my example doing just that? I built
an AST for Python and converted it into a normal function.
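[Editor's note: what Andrew describes, building a Python AST and turning it into an ordinary function, is straightforward with the standard library `ast` module. A minimal sketch:

```python
import ast

# Build an AST (here, by parsing source; it could equally be
# constructed node by node) and compile it into a normal function.
tree = ast.parse("def double(x):\n    return x * 2")
ns = {}
exec(compile(tree, "<ast>", "exec"), ns)
assert ns["double"](21) == 42
```
]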


I'm looking for a Python example which adds a new kind of statement to
the language, and then later in the source file uses that statement to
write a part of the program, such that it's all processed in the same
pass. The statement can be nested in existing ones, and can have
existing syntax embedded in it. Oh yeah, and it should work on at
least two Python implementations.
What is Python's equivalent to the backquote syntax, if I
want to put some variant pieces into a parse tree template?


There isn't. But then there isn't need. The question isn't


When you start talking about need, that quickly becomes a losing
proposition. Because needs are actually wants in disguise. Do human
beings need electricity, running water or food prepared by heating?
"how do I do this construct that I expect in Lisp?" it's "how
do I solve this problem?"
The answer is: if I can get Lisp, I use that. Otherwise I work around
that. If you are in the middle of civilization, and the question is
``what's for dinner'', the answers are to go to the supermarket or a
restaurant. If you are lost in the woods, then you have to reason
differently.
There are other ways to solve
that problem than creating a "parse tree template" and to
date there have been few cases where the alternatives were
significantly worse -- even in the case of translating a domain
What cases? Where? Can you be more specific? Can ``significantly
worse'' be put into some kind of numbers?
language into local syntax, which is a Lisp specialty, it's only
about twice as long for Python as for Lisp and definitely
not "impossible" like you claimed.


Okay, I'm expecting that example I asked for to be only twice as long
as the Lisp version.
Jul 18 '05 #218
"Rainer Deyke" <ra*****@eldwood.com> writes:
Joe Marshall wrote:
2. But the computer shouldn't wait until it can prove you are done
to close the file.


Wrong. The computer can prove that I am done with the file the moment my
last reference to the file is gone. I demand nothing more (or less).


If you don't care about performance, this is trivial.
Jul 18 '05 #219
tf*@famine.OCF.Berkeley.EDU (Thomas F. Burdick) wrote in
news:xc*************@famine.OCF.Berkeley.EDU:
Bjorn Pettersen <bj*************@comcast.net> writes:
Raymond Wiker <Ra***********@fast.no> wrote:
> "Rainer Deyke" <ra*****@eldwood.com> writes:
>
> > Personally I'd prefer guaranteed immediate destructors over
> > with-open-file. More flexibility, less syntax, and it matches
> > what the CPython implementation already does.
>
> Right... all along until CPython introduces a more
> elaborate
> gc scheme.
... which is highly unlikely to happen without preservation of
reference counting semantics for files... google if you're _really_
interested :-/


I hope you put assertions in your code so that it won't run under
Jython, because that's the kind of insidious bug that would be
*awful* to find the hard way.


I'm assuming the "import win32con" etc., will do :-) Seriously, this
would be a minor issue compared with everything else I want to connect
to (e.g. 950KLocs of legacy c++ -- somehow I don't think I'd get the
go-ahead to rewrite that in Java <smile>).
(I really don't understand why you wouldn't just want at least real
closures, so you can just use call_with_open_file, and not have to
worry about what GC you're using when *opening* *files*)


I'm not the one worrying here <wink>. I know exactly what Python does
and it is no cognitive burden. How would you rewrite these without
obfuscating the real work with book-keeping tasks, or leaving the file
open longer than necessary?

timestamp = datetime.now().strftime('%Y%m%d%H%M%S\n')
file('log', 'a').write(timestamp)

file('output', 'w').write(file('template').read() % locals())

nwords = sum([len(line.split()) for line in file('input')])

for line in file('input'):
    print line[:79]
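[Editor's note: one later answer to Bjorn's question. The `with` statement, added to Python after this thread, keeps the lifetime explicit without much book-keeping; a sketch of two of his snippets in modern Python 3, run in a throwaway directory:

```python
import os
import tempfile
from datetime import datetime

os.chdir(tempfile.mkdtemp())   # keep the demo self-contained

timestamp = datetime.now().strftime('%Y%m%d%H%M%S\n')
with open('log', 'a') as f:
    f.write(timestamp)

with open('input', 'w') as f:
    f.write("one two\nthree\n")
with open('input') as f:
    nwords = sum(len(line.split()) for line in f)
assert nwords == 3
```

Each file is closed as soon as its block ends, rather than whenever the garbage collector gets around to it.]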

-- bjorn


Jul 18 '05 #220
Quoth "Rainer Deyke" <ra*****@eldwood.com>:
....
| I understand both the specification and the implementation of finalization
| in Python, as well as the reasoning behind them. My point is, I would
| prefer it if Python guaranteed immediate finalization of unreferenced
| objects. Maybe I'll write a PEP about it someday.

Write code first! I think most would see it as a good thing,
but I have the impression from somewhere that a guarantee of
immediate finalization might not be a practical possibility,
given reference cycles etc. Between that and the fact that the
present situation is close enough that few people care about the
difference, an implementation would really help your cause.

Donn Cave, do**@drizzle.com
Jul 18 '05 #221
In comp.lang.lisp Bjorn Pettersen <bj*************@comcast.net> wrote:
I'm not the one worrying here <wink>. I know exactly what Python does
and it is no cognitive burden.


"To foil the maintenance engineer, you must understand how he thinks."

Cheers,

-- Nikodemus
Jul 18 '05 #222
On Fri, 24 Oct 2003 19:33:17 +0200, an***@vredegoor.doge.nl (Anton
Vredegoor) wrote:
Stephen Horne <st***@ninereeds.fsnet.co.uk> wrote:
Perception is not reality. It is only a (potentially flawed)
representation of reality. But reality is still real. And perception
*is* tied to reality as well as it can be by the simple pragmatic
principle of evolution - if our ancestors had arbitrary perceptions
which were detached from reality, they could not have survived and had
children.


If we had the same perceptions about reality as our ancestors, we
wouldn't reproduce very successfully now, because the world has
changed as a result of their reproductive success.


Except for the fact that (1) people, at least those with power,
changed the world to suit themselves, and (2) the things that really
count for perception haven't changed - objects are still objects and
still follow the same laws of physics, for instance, even if there are
occasional novel touches such as some of those objects having engines.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #223
On Fri, 24 Oct 2003 20:58:13 +0100, Robin Becker
<ro***@jessikat.fsnet.co.uk> wrote:
In article <rd********************************@4ax.com>, Stephen Horne
<st***@ninereeds.fsnet.co.uk> writes
On Fri, 24 Oct 2003 16:00:12 +0200, an***@vredegoor.doge.nl (Anton
Vredegoor) wrote:

......
Perception is not reality. It is only a (potentially flawed)
representation of reality. But reality is still real. And perception
*is* tied to reality as well as it can be by the simple pragmatic
principle of evolution - if our ancestors had arbitrary perceptions
which were detached from reality, they could not have survived and had
children.

.....
It is a commonplace of developmental psychology that the persistence of
objects is learned by children at some relatively young age (normally 3
months as I recall). I assume that children learn the persistence of
hidden objects by some statistical mechanism ie if it happens often
enough it must be true (setting up enough neural connections etc).

Would the reality of children subjected to a world where hidden objects
were somehow randomly 'disappeared' be more or less objective than that
of normal children?

Unlucky experimental cats brought up in vertical stripe worlds were
completely unable to perceive horizontals so their later reality was
apparently filled with invisible and mysterious objects. I can't
remember if they could do better by rotating their heads, but even that
would be quite weird.

I think it unwise to make strong statements about reality when we know
so little about it. Apparently now the universe is 90-95% stuff we don't
know anything about and we only found that out in the last 10 years.


Actually, a great deal is understood about precisely that mechanism
you describe. There is nothing mysterious about it. Even innate
processes depend on certain features of the environment which
evolution has effectively assumed constant, such as - for the cats
example above - the presence of various angles of rough lines in the
environment. If these features do not occur at the correct
developmental stage, the processes that wire the appropriate neurons
together simply don't occur. Evolution is nothing if not pragmatic.

As for children subjected to a world where hidden objects suddenly
disappeared, you may be surprised. I am not aware of anyone doing that
precise experiment for obvious moral reasons, but if you know where to
look you can find evidence...

The human brain continues developing after birth. It cannot be fully
developed at birth, as with most animals, because of the limits of the
human female hip bone. Continued brain development after birth does
NOT automatically mean learning, therefore.

Take for instance social development. Human cruelty knowing no bounds,
there have been children who were shut away from all human interaction
by their parents. These children do *not* become autistic - even when
found well into their teenage years, and despite the symptoms of
traumatic stress, such children socialise remarkably well and
extremely quickly - far faster than they learn language, for instance
- whereas an autistic may never learn good socialisation despite a
lifetime of intense effort (I speak from experience).

Reason - a substantial part of socialisation is innate (and thus
accessible without learning), but neurological damage prevents that
innate socialisation ability from developing.

Even if this was not the case, you have not proved that reality is not
real. Of course perception still varies slightly from person to
person, and more extensively from species to species, but it is not
independent of reality - it still has to be tied to reality as closely
as possible or else it is useless.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #224
In article <2a********************************@4ax.com>, Stephen Horne
<st***@ninereeds.fsnet.co.uk> writes
Even if this was not the case, you have not proved that reality is not
real. Of course perception still varies slightly from person to
person, and more extensively from species to species, but it is not
independent of reality - it still has to be tied to reality as closely
as possible or else it is useless.

Actually it was not my intention to attempt any such proof, merely to
indicate that what we call real is at the mercy of perception. If I
choose to call a particular consensus version of reality the 'one true
reality' I'm almost certainly wrong. As with most of current physics we
understand that 'reality' is a model. An evolution based on low speed
physics hardly prepares us for quantum mechanics and spooky action at a
distance interactions. For that reality, which we cannot perceive, we
employ mathematicians as interpreters (priests?) to argue about the
number of hidden dimensions etc etc. Even causality is frowned upon in
some circles.

What we humans call 'reality' is completely determined by our senses and
the instruments we can build. How we interpret the data is powerfully
influenced by our social environment and history. As an example the
persistence of material objects is alleged by some to be true only for
small time scales <10^31 years; humans don't have long enough to learn
that.
--
Robin Becker
Jul 18 '05 #225
> Python doesn't try (too) hard to change the ordinary manner of thinking,
just to be as transparent as possible. I guess in that sense it encourages a
degree of mental sloth, but the objective is executable pseudocode. Lisp


I have started with Python because I needed a fast way to develop
solutions using OO. Python fulfilled my expectations.

The point is that I have started with Python because I was missing
something, not because I wanted to expand my knowledge.

Learning one programming language is a burden, because there is always too
little time for everything.

Lisp is probably cool, but what will I gain from Lisp? What will
justify my investment, I mean the time spent?

DG
Jul 18 '05 #226
On Sat, 25 Oct 2003 16:00:14 +0100, Robin Becker
<ro***@jessikat.fsnet.co.uk> wrote:
In article <2a********************************@4ax.com>, Stephen Horne
<st***@ninereeds.fsnet.co.uk> writes
Even if this was not the case, you have not proved that reality is not
real. Of course perception still varies slightly from person to
person, and more extensively from species to species, but it is not
independent of reality - it still has to be tied to reality as closely
as possible or else it is useless.
Actually it was not my intention to attempt any such proof, merely to
indicate that what we call real is at the mercy of perception. If I
choose to call a particular consensus version of reality the 'one true
reality' I'm almost certainly wrong.
True. But perception cannot change reality. Reality is not about
perception - it existed long before there was anything capable of
perceiving.

What we *normally* call real is normally a perception, or more
precisely (as you say) a model, and not the actual reality. But at
least when that model has been built up from experimental evidence, it
is vanishingly unlikely to have approached anything other than
reality. The model defined by science has limits and inaccuracies of
course, but it is not credible to claim that it is arbitrary.
What we humans call 'reality' is completely determined by our senses and
the instruments we can build.


Not at all. What our senses and instruments are observing is real,
*not* arbitrary, and *not* affected by perception. Our perceptions are
dependent on reality, even though they cannot be perfect. We are not
free to define perception arbitrarily precisely because it is a
representation of reality, derived from the information provided by
our senses.

As I already mentioned, if a primitive person observes a car and
theorises that there is a demon under the hood, that does not become
true. Reality does not care about anyone's perceptions as it is not
dependent on them in any way - perceptions are functionally dependent
on reality, and our perceptions are designed to form a useful model of
reality.

If there was no reality, there would be no common baseline for our
perceptions and therefore no reason for any commonality between them.
In fact there would be no reason to have perceptions at all.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #227
In article <3f********************************@4ax.com>, Stephen Horne
<st***@ninereeds.fsnet.co.uk> writes
As I already mentioned, if a primitive person observes a car and
theorises that there is a demon under the hood, that does not become
true. Reality does not care about anyones perceptions as it is not
dependent on them in any way - perceptions are functionally dependent
on reality, and our perceptions are designed to form a useful model of
reality.

We observe electrons and make up mathematical theories etc etc, but in
reality little demons are driving them around. :)

Your assertion that there is an objective reality requires proof as
well. Probably it cannot be proved, but must be made an axiom. The
scientific method requires falsifiability.

The fact is we cannot perceive well enough to determine reality. The
physicists say that observation alters the result so if Heisenberg is
right there is no absolute reality. Perhaps by wishing hard I can get my
batteries to last longer 1 time in 10^67.

Awareness certainly mucks things up in socio-economic systems which are
also real in some sense. I hear people putting forward the view that
time is a construct of our minds; does time flow?

This is a bit too meta-physical, but then much of modern physics is like
that. Since much of physics is done by counting events we are in the
position of the man who having jumped out of the top floor observes that
all's well after falling past the third floor as falling past floors
10,9,... etc didn't hurt. We cannot exclude exceptional events.
--
Robin Becker
Jul 18 '05 #228
Lulu of the Lotus-Eaters <me***@gnosis.cx> wrote in message news:<ma*************************************@python.org>...
Incidentally, I have never seen--and expect never to see--some new
mysterious domain where Python is too limited because the designers did
not foresee the problem area. Nor similarly with other very high level
languages. It NEVER happens that you just cannot solve a problem
because of the lack of some novel syntax to do so... that's what
libraries are for.


The problem I have with this argument is, people already invent little
"languages" whenever they create new libraries. Right now I'm working
with wxPython. Here's an idiom that comes up (you don't need to
understand it):

app = wxPySimpleApp()
frame = MainWindow(None, -1, "A window")
frame.Show(True)
app.MainLoop()

Here, I have to put each line in a magical order. Deviate the
slightest bit, the thing crashes hard. It is hard to work with this
order; wxPython inherited an old design (not wxPython's fault), and
it's showing its age.

I'd fix it, but functions don't give me that power. I need to specify
the order of execution, because GUIs are all about side-effects --
macros are a solution worth having in your belt.

I am chained to wxPython's language. It's a language that is
basically Python-in-a-weird-order. Why not accept we need good
abstraction facilities because code has a habit of spiralling into
unmaintainability?

I'm not slamming Python or wxPython, since for this project they're
objectively better than today's CL. My only point is macros shouldn't
be underestimated. Especially since there are lots of things built
into lisp to make macros nice to use. (Like the macroexpand function,
which shows you what macros turn into.) Even if macros are
objectively wrong for Python, people should still know about them.
Jul 18 '05 #229
On Sat, 25 Oct 2003 20:57:58 -0700, Tayss wrote:
app = wxPySimpleApp()
frame = MainWindow(None, -1, "A window")
frame.Show(True)
app.MainLoop()

Here, I have to put each line in a magical order. Deviate the
slightest bit, the thing crashes hard. It is hard to work with this
order; wxPython inherited an old design (not wxPython's fault), and
it's showing its age.

I'd fix it, but functions don't give me that power.
Why?
I need to specify the order of execution,


What's the problem in specifying the order of execution in functions?

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #230
On Sat, 25 Oct 2003 19:03:34 +0100, Robin Becker
<ro***@jessikat.fsnet.co.uk> wrote:
In article <3f********************************@4ax.com>, Stephen Horne
<st***@ninereeds.fsnet.co.uk> writes
As I already mentioned, if a primitive person observes a car and
theorises that there is a demon under the hood, that does not become
true. Reality does not care about anyones perceptions as it is not
dependent on them in any way - perceptions are functionally dependent
on reality, and our perceptions are designed to form a useful model of
reality.
We observe electrons and make up mathematical theories etc etc, but in
reality little demons are driving them around. :)

Your assertion that there is an objective reality requires proof as
well. Probably it cannot be proved, but must be made an axiom. The
scientific method requires falsifiability.


I can't prove it exactly, but I think I can show the alternative to
be logically inconsistent quite simply.

If you assert that there is no objective reality - only perception -
then I have as much right to assert my perceptions as anyone
else. And I perceive that there is an objective reality.
The fact is we cannot perceive well enough to determine reality. The
physicists say that observation alters the result so if Heisenberg is
right there is no absolute reality. Perhaps by wishing hard I can get my
batteries to last longer 1 time in 10^67.
If that were true, why should it only work 1 time in 10^67?

As I said before, the limit of the accuracy of our perceptions is
basically the limit of information processing. *Not* information
theory - we need machinery to do the processing. That machinery has
limits, cannot be perfect, and thus is pretty well optimised to
achieve a purpose as well as possible without the need to be perfect.
The reason it can't perceive quantum effects is because we have no
evolutionary need to perceive things at that level, and thus have no
senses etc etc to deal with them.

Quantum effects are actually a good example, so lets take a look...

Yes, a particle may have two or more states at the same time. But once
*any* observer observes that particle, it resolves to the same state
for *all* observers. Individual observers cannot choose for themselves
what to perceive.

Still, it is worth asking what is so special about this observer. What
makes a particular arrangement of matter special, so that it can
'observe' while other arrangements cannot? Is it some mystic
metaphysical consciousness, as many have asserted, or is it perhaps
nothing magical at all, and nothing to do with mind?

I tend to go with Penrose on this. That is, a superposition of states
has strict limits with respect to gravity. Whenever particles are in a
superposition, space-time must also be in a superposition. Particles
therefore have different levels of gravitational energy in each
superposition. This creates an uncertainty in the system. And as
Heisenberg states, a large uncertainty can only exist for a short
time.

Thus the reason why Schrodinger's cat cannot be alive and dead at the
same time (at least for more than a tiny fraction of a second) is
because, as with any 'observer', it has a significant mass and
therefore the alive and dead superpositions create too much
uncertainty in spacetime.

Observers are nothing more than large masses that don't stay in
superposition states for significant times, and thus once the observer
(or cat, or for that matter vial of poison) is superposed the whole
system must rapidly resolve to one state or another because of the
scale of uncertainty involved in spacetime.

But it is easier to handle Schrodinger's cat than that. According to
the thought experiment, any observation - no matter how indirect -
resolves the state of the system. But at the microscopic scale, that
is simply not the case - superposed particles interact with each other
in ways that allow the superposition of states to be detected, or else
there could be no experimental proof of superpositions. There is
no such experimental proof at macroscopic scales, thus the same kind
of superposition simply cannot be happening (at perceptible
timescales) at the macroscopic scale.

Just as with relativity, the observer is certainly important but not
defining except in an extremely restricted way. There is a reality
which the observer is observing, and which the observer cannot define
arbitrarily.

Consciousness is not magic. Brains, like the rest of the body, are
just another arrangement of matter - certainly a complex and useful
arrangement, but it is still obeying (not defining) the rules laid
down by the universe we live in. There is nothing special about people
which lets them arbitrarily define the universe.
Awareness certainly mucks things up in socio-economic systems which are
also real in some sense. I hear people putting forward the view that
time is a construct of our minds; does time flow?
Take a look around and you will see that it does. Do you really
arbitrarily choose not to be able to observe next week's lottery
numbers before you place your bet?

We know that the models provided by physics are imperfect. Maybe some
day someone will explain why time is different to space. Maybe not.
But what we are able to perceive does not define reality - it only
forms an imperfect model.

The fact that perception is not perfect does not mean there isn't a
defining reality to perceive. There must be something that ties all
our perceptions together, though, or else why are they sufficiently
compatible that we can interact at all.
This is a bit too meta-physical, but then much of modern physics is like
that. Since much of physics is done by counting events we are in the
position of the man who having jumped out of the top floor observes that
all's well after falling past the third floor as falling past floors
10,9,... etc didn't hurt. We cannot exclude exceptional events.


I'm not excluding the exceptional. I'm also not excluding what I can
see for myself just by opening my eyes.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #231
Robin Becker <ro***@jessikat.fsnet.co.uk> wrote:
What we humans call 'reality' is completely determined by our senses and
the instruments we can build. How we interpret the data is powerfully
influenced by our social environment and history. As an example the
persistence of material objects is alleged by some to be true only for
small time scales <10^31 years; humans don't have long enough to learn
that.


Persistence of material objects will become obsolete much sooner. See:

http://crnano.org/systems.htm

This discusses three ethical systems and their usefulness for dealing
with the coming nanotechnology era.

The articles conclusion has quite a Pythonic ring to it, I feel.
However just like Python, it will have to give up on backward
compatibility someday :-)

Anton
Jul 18 '05 #232
On Sat, 25 Oct 2003 22:34:47 GMT, Alex Martelli <al***@aleax.it>
wrote:
Stephen Horne wrote:
...
True. But perception cannot change reality. Reality is not about
perception - it existed long before there was anything capable of
percieving.
You are so WONDERFULLY certain about such things -- including the
fact that "before" is crucial, i.e., the arrow of time has some
intrinsic meaning.


Take a look around. When you can walk back to last Wednesday, I'll
believe that time has no special meaning.

Just because physicists don't have a perfect model yet, it doesn't
change basic facts that anyone can observe by opening their eyes.

If you want to believe that time has no significance, prove it. When
you have successfully discounted all the clear and obvious artifacts
of time's arrow, I will be happy to consider the possibility that time's
arrow has no significance.
"observer-participancy" is a delightful way to say "perception", of
course
No. Perception does not require participation of any kind, except in
that sense (which does not imply control) in which any observation
involves an interaction and changes what it observes.

"deriving a working model of reality from sensory input" is what
perception is all about.
, but the most interesting part of this is that, to a theoretical-
enough physicist, the mere fact that something happens in the future is
obviously no bar to that something "building" something else in the past.
Yes, for the theoretician. It is a theoreticians job to test the
limits of the current models, and thus hopefully find better models.
But look around you. When was the last time you lived in a house that
was due to be built 50 years later?

The model is not reality, but only a working approximation of reality.
If the theoreticians could arbitrarily choose the results, why would
anyone bother with experiments?
Now, it IS quite possible, of course, that Wheeler's working hypothesis
that "the world is a self-synthesizing system of existences, built on
observer-participancy" will one day turn out to be unfounded -- once
somebody's gone to the trouble of developing it out completely in fully
predictive form, devise suitable experiments, and monitor results.
Or else someone could simply say "I do not require that hypothesis".
But to dismiss the hypothesis out of hand, "just because", does not
seem to me to be a productive stance. That the universe cannot have
built its own past through future acts of perception by existences
within the universe is "obvious"... but so, in the recent past, were
SO many other things, that just didn't turn out to hold...:-).
Absolutely true. But take a close look at the pattern. The central
role of people in defining the universe is something that, step by
step, we are being forced to give up. We are not created in the image
of god, the Earth is not the center of the universe, and our minds are
no more special than any other arrangement of matter.

In quantum theory, the observer is nothing more than a sufficient mass
that a superposition must be resolved quickly. Not so long ago, people
were grasping to the idea that being an 'observer' in quantum physics
was a special function of human consciousness. I do not need that
hypothesis any more than I need the hypothesis of god, or the
hypothesis that we are living in the matrix acting as magical
batteries that somehow produce more energy than we consume.
Searle has written a book with a curiously similar title, "The
Construction of Social Reality", and takes hundreds of pages to defend
the view that there ARE "brute facts" independent of human actions
and perceptions -- but does NOT deny the existence of "social reality"
superimposed, so to speak, upon "brute reality".


Yes - but that "social reality" is nothing special either. There are
good practical reasons for it - reasons which can be derived fairly
simply from what we know of reality.

If you, like me, had Asperger syndrome you would understand the
practical consequences of not having full access to the definition of
"social reality".

As it happens, I have little patience for the constructionism vs.
deconstructionism thing. I have not chosen a side, and I do not see
deconstructionism as a single true faith. I do not accept every
"then you must believe X" that constructionists might assert.

To be honest, I don't see the point of basing opinions on what was
said by philosophers before the current level of knowledge about
physics and about the mind was achieved.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #233
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> wrote in message news:<pa****************************@knm.org.pl>...
1 app = wxPySimpleApp()
2 frame = MainWindow(None, -1, "A window")
3 frame.Show(True)
4 app.MainLoop()

Here, I have to put each line in a magical order. Deviate the
slightest bit, the thing crashes hard. It is hard to work with this
order; wxPython inherited an old design (not wxPython's fault), and
it's showing its age.

I'd fix it, but functions don't give me that power.


Why?


Ok, here I need to make every line execute in order or it crashes.
The problem is that, like Lisp, Python wants to greedily execute any
function I give it. So if I try abstracting lines 1, 3 and 4 in a
function, like the following, it will execute line 2 first, crashing
the system:

def make_app(frame):
    app = wxPySimpleApp()  # 1
    frame.Show(True)       # 3
    app.MainLoop()         # 4

# oops, #2 is executed first. game over!
make_app( MainWindow(None, -1, "Sample editor") )  # 2
So I have to somehow wrap up line 2 in something so it won't greedily
execute. One way is to flash-freeze it in a function, say:
lambda: MainWindow(None, -1, "A window")

or freeze it in a list:
[MainWindow, None, -1, "A window"]

And these are possible solutions. But it's less readable and frankly
strange to anyone who has to read my code. It's a weird functional
trick to deal with side-effect ridden code. When I really just wanted
to make execution work in the right order. So I likely fail in making
it more readable and maintainable, which is the whole point in doing
this.
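The deferral trick itself has nothing to do with GUIs, so here is a toy
sketch of it with the wxPython calls replaced by hypothetical stand-ins
that just record their execution order:

```python
order = []

# Hypothetical stand-ins for the wxPython calls -- they only log.
def wx_py_simple_app():  order.append('1: app')
def main_window():       order.append('2: frame')
def show_frame():        order.append('3: show')

def make_app(make_frame):
    wx_py_simple_app()   # 1 -- must run first
    make_frame()         # 2 -- deferred until now, thanks to the lambda
    show_frame()         # 3

# Without the lambda, main_window() would run before make_app's body,
# because Python evaluates arguments eagerly.
make_app(lambda: main_window())
assert order == ['1: app', '2: frame', '3: show']
```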

I need to specify the order of execution,


What's the problem in specifying the order of execution in functions?


Because in most languages (like Python and Lisp), functions don't give
the right amount of control over side-effects. They're great when
side-effects don't matter, but once they do, something like macros are
made for that situation.

Now, is this a big deal? Not really; it doesn't dominate the
advantages of using Python and wxPython. Just something I noticed.
But the tool is missing from the programmer's belt -- and whoever
defines a framework is already writing a new language that people must
deal with.
Jul 18 '05 #234
What an outrageously off-topic thread, I can't resist it :-)

Robin Becker <ro***@jessikat.fsnet.co.uk> writes:
In article <2a********************************@4ax.com>, Stephen Horne
<st***@ninereeds.fsnet.co.uk> writes
Even if this was not the case, you have not proved that reality is not
real.
What would it mean to 'prove that reality is not real', in fact??

Of course perception still varies slightly from person to
person, and more extensively from species to species,
It can vary in arbitrarily large ways. Our perception of the world is
based on our understanding of it (our models of it), including that
'understanding' embodied in our biology, put there by our evolutionary
past.

but it is not
independent of reality - it still has to be tied to reality as closely
as possible or else it is useless.

Absolutely! And there's no contradiction between that and the fact
that perception depends on both reality and our models of reality.

Actually it was not my intention to attempt any such proof, merely to
indicate that what we call real is at the mercy of perception.
The notion of reality is simply the working hypothesis that there's a
world out there to be understood, and that we have some hope of
understanding it, isn't it? Why give up on that until we get really
stuck? Science as a whole shows no sign of being stuck at present.

If I
choose to call a particular consensus version of reality the 'one true
reality' I'm almost certainly wrong.
What justification do you have for that statement?

As with most of current physics we
understand that 'reality' is a model.
Can you explain how that statement means anything at all?

An evolution based on low speed
physics hardly prepares us for quantum mechanics and spooky action at a
distance interactions. For that reality, which we cannot perceive, we
employ mathematicians as interpreters (priests?) to argue about the
number of hidden dimensions etc etc. Even causality is frowned upon in
some circles.
Well, they frown on it for no good reason. They're arbitrarily
setting aside a bunch of stuff and trying to legislate that
"everything works just *as if* it were real, except parts of it aren't
real". They can make that decision if they want to, but don't expect
others to likewise give up on science.

What we humans call 'reality' is completely determined by our senses and
the instruments we can build.
Well, your use of the word 'reality' is at odds with the way it's
usually understood (see above). You can use it in that way (where
most people would use the word 'model' in its place), but that only
serves to make communication more difficult.

How we interpret the data is powerfully
influenced by our social environment and history. As an example the
Oh, sure -- except that you're kind of implying that data even
*exists* in isolation from models of the world.

persistence of material objects is alleged by some to be true only for
small time scales <10^31 years; humans don't have long enough to learn
that.


It must be a mystery to you, then, how we know it. ;-)
John
Jul 18 '05 #235
BTW, if in my other post you notice that in line 2
"A window"
morphs into
"Sample editor"

don't let it bother your subconscious. I was wrestling with google
groups to post (acting buggy recently, I wonder what's up..), and in
the process I cut 'n pasted the wrong stuff from my code. But it acts
the same, except for the window having a different title.
Jul 18 '05 #236
Robin Becker <ro***@jessikat.fsnet.co.uk> writes:
In article <3f********************************@4ax.com>, Stephen Horne
<st***@ninereeds.fsnet.co.uk> writes
As I already mentioned, if a primitive person observes a car and
theorises that there is a demon under the hood, that does not become
true. Reality does not care about anyones perceptions as it is not
dependent on them in any way - perceptions are functionally dependent
on reality, and our perceptions are designed to form a useful model of
reality.
We observe electrons and make up mathematical theories etc etc, but in
reality little demons are driving them around. :)


That's basically my model, too :-)

Your assertion that there is an objective reality requires proof as
well.
It does not.

Probably it cannot be proved, but must be made an axiom. The
scientific method requires falsifiability.
It's the whole project of science to understand reality, so the
concept is outside of science. I guess the phrase 'existence of
reality' means pretty much the same as 'the degree of success of
science'.

The fact is we cannot perceive well enough to determine reality. The
physicists say that observation alters the result so if Heisenberg is
Those physicists are wrong, and Stephen is right. It's a bit of an
embarrassment to Physics that some physicists apparently still believe
in the Copenhagen interpretation.

right there is no absolute reality. Perhaps by wishing hard I can get my
batteries to last longer 1 time in 10^67.
No, but you can get them to last arbitrarily long by being *extremely*
lucky ;-)

Awareness certainly mucks things up in socio-economic systems which are
also real in some sense.
But there's no mystery or deep philosophical problem there.

I hear people putting forward the view that
time is a construct of our minds; does time flow?
No, 'the flow of time' doesn't really mean anything.

Any more deep mysteries you want me to clear up for you while I'm
about this? ;-)

This is a bit too meta-physical, but then much of modern physics is like
that. Since much of physics is done by counting events we are in the
position of the man who having jumped out of the top floor observes that
all's well after falling past the third floor as falling past floors
10,9,... etc didn't hurt. We cannot exclude exceptional events.


There's rather a big difference between the probabilities involved
there, Robin. We *could* be in a "Harry Potter Universe" of the sort
you hint at, but the word 'unlikely' hardly begins to describe the
magnitude of it!

I highly recommend David Deutsch's book "The Fabric of Reality", which
covers most of the stuff discussed in this thread.
John
Jul 18 '05 #237
Robin Becker wrote:
Your assertion that there is an objective reality requires proof as
well. Probably it cannot be proved, but must be made an axiom. The
scientific method requires falsifiability.


If the statement that there is an objective reality (or any other statement)
can be objectively proven either way, then objective truth (and hence
objective reality) exists. If it cannot, then the statement that there is
an objective reality is as true as any other statement, and requires no
proof.
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #238
Alex Martelli <al***@aleax.it> writes:
Stephen Horne wrote:
...
True. But perception cannot change reality. Reality is not about
perception - it existed long before there was anything capable of
percieving.
You are so WONDERFULLY certain about such things -- including the
fact that "before" is crucial, i.e., the arrow of time has some
intrinsic meaning.


Well, I agree that time-ordering is not important. I think the main
point is simply that reality is (defined as) independent of
perception, which he was illustrating with an example of a case where
perception was (or is, or will be, if you insist ;-) absent, but
reality was present. This is a somewhat metaphysical claim (though
perhaps the success of science in itself gives it scientific meaning).
It's the only sane metaphysical position to take, though, unless and
until science grinds to a halt. Anything else is either advocating
giving up on science, or merely playing around with language.

Physicist J. A. Wheeler (and his peer referees for the "IBM Journal
of Research and Development") didn't have your admirable certainty
that "reality is not about perception".
And it's Wheeler who's wrong, I suspect, not Stephen. And I mean
wrong in his epistemology, not merely wrong about some particular
theory.

[...] course, but the most interesting part of this is that, to a theoretical-
enough physicist, the mere fact that something happens in the future is
obviously no bar to that something "building" something else in the past.
I don't have a problem with that a priori.

Now, it IS quite possible, of course, that Wheeler's working hypothesis
that "the world is a self-synthesizing system of existences, built on
observer-participancy" will one day turn out to be unfounded -- once
somebody's gone to the trouble of developing it out completely in fully
predictive form, devising suitable experiments, and monitoring results.

[...]

Not having read the paper, I can't comment on that particular theory.
All I can say is that that there exist many 'zombie' theories in the
area of quantum mechanics and cosmology (and this thing of Wheeler's
has a suspiciously similar smell) which arbitrarily deny the existence
of some part of reality where some other extant theory does not. If
both theories are of equal explanatory and predictive power (as is the
case with the rival theories of quantum mechanics), the old one is no
longer rationally tenable. Now, OK, it's not *quite* as cut-and-dried
as that, because the ideas are hard and complicated, so I may simply
be mistaken about the particular theories we're discussing (I'd
certainly be a fool to say that John Wheeler hasn't thought deeply
about these things, or that my understanding of Physics approaches
his). But it's certainly true that some theories (the Copenhagen
interpretation itself, for example, or the Inquisition's explanation
of the motions of the Solar System) that people continue to believe in
are indefensible because they arbitrarily reject the very existence of
some part of reality that another theory successfully explains. To
quote David Deutsch: "A prediction, or any assertion, that cannot be
defended might still be true, but an explanation that cannot be
defended is not an explanation".
Can't resist another quote from Deutsch ("The Fabric of Reality", in
the chapter "Criteria for Reality"):

There is a standard philosophical joke about a professor who gives a
lecture in defence of solipsism. So persuasive is the lecture that as
soon as it ends, several enthusiastic students hurry forward to shake
the professor's hand. "Wonderful. I agreed with every word," says
one student earnestly. "So did I," says another. "I am very
gratified to hear it," says the professor. "One so seldom has the
opportunity to meet fellow solipsists."
John
Jul 18 '05 #239
"Rainer Deyke" <ra*****@eldwood.com> writes:
Robin Becker wrote:
Your assertion that there is an objective reality requires proof as
well. Probably it cannot be proved, but must be made an axiom. The
scientific method requires falsifiability.


If the statement that there is an objective reality (or any other statement)
can be objectively proven either way, then objective truth (and hence
objective reality) exists. If it cannot, then the statement that there is
an objective reality is as true as any other statement, and requires no
proof.


The justification of scientific knowledge doesn't require proof in the
usual sense of the word, so your statement seems ill-founded. My
guess is that the concept of reality is a metaphysical one, though
(inevitably quoting from Deutsch again):

"The reliability of scientific reasoning is ... a new fact about
physical reality itself..."
John
Jul 18 '05 #240
Stephen Horne <st***@ninereeds.fsnet.co.uk> writes:
On Sat, 25 Oct 2003 22:34:47 GMT, Alex Martelli <al***@aleax.it>
wrote:

[...this is Stephen again...]

Just because physicists don't have a perfect model yet, it doesn't
change basic facts that anyone can observe by opening their eyes.
'Basic facts that anyone can observe by opening their eyes' are
elusive things! The earth is not flat. All observations are made in
the context of a model of reality.

[...]

The central
role of people in defining the universe is something that, step by
step, we are being forced to give up. We are not created in the image
of god, the Earth is not the center of the universe,
Doubtful, but I can't be bothered to get into that.

and our minds are
no more special than any other arrangement of matter.
Unqualified, that's clearly nonsense.

In quantum theory, the observer is nothing more than a sufficient mass
that a superposition must be resolved quickly. Not so long ago, people
were grasping to the idea that being an 'observer' in quantum physics
was a special function of human consciousness. I do not need that
hypothesis any more than I need the hypothesis of god, or the
hypothesis that we are living in the matrix acting as magical
batteries that somehow produce more energy than we consume.
(I like your general thrust, but I think it's simpler than that -- the
many-worlds theory just says "let's forget about the collapse of the
wavefunction", and everything seems to work out fine.)

Searle has written a book with a curiously similar title, "The
Construction of Social Reality", and takes hundreds of pages to defend
the view that there ARE "brute facts" independent of human actions
and perceptions -- but does NOT deny the existence of "social reality"
superimposed, so to speak, upon "brute reality".


Yes - but that "social reality" is nothing special either. There are
good practical reasons for it - reasons which can be derived fairly
simply from what we know of reality.


Yes. We can explain the social aspects of reality even if not in
complete detail. It doesn't bring up any major philosophical
problems, French sociologists notwithstanding.

[...]

To be honest, I don't see the point of basing opinions on what was
said by philosophers before the current level of knowledge about
physics and about the mind was achieved.


Certainly some philosophers seem over-concerned with the history of
philosophy.
John
Jul 18 '05 #241
an***@vredegoor.doge.nl (Anton Vredegoor) writes:
Robin Becker <ro***@jessikat.fsnet.co.uk> wrote:
What we humans call 'reality' is completely determined by our senses and
the instruments we can build. How we interpret the data is powerfully
influenced by our social environment and history. As an example the
persistence of material objects is alleged by some to be true only for
small time scales <10^31 years; humans don't have long enough to learn
that.
Persistence of material objects will become obsolete much sooner. See:

http://crnano.org/systems.htm

This discusses three ethical systems and their usefulness for dealing
with the coming nanotechnology era.


But this is all quite irrelevant to the question of the validity of
realism. Robin and Anton both are merely making points about the
particular common-sense *models* of reality that we carry with us in
order to get through the day without spilling our coffee or trying to
walk through doors.

The articles conclusion has quite a Pythonic ring to it, I feel.
However just like Python, it will have to give up on backward
compatibility someday :-)


Weak link, very weak, Anton. ;-) Still, at least you're trying, unlike
me...
John
Jul 18 '05 #242
On 26 Oct 2003 17:54:58 +0000, jj*@pobox.com (John J. Lee) wrote:
Stephen Horne <st***@ninereeds.fsnet.co.uk> writes:
On Sat, 25 Oct 2003 22:34:47 GMT, Alex Martelli <al***@aleax.it>
wrote:

[...this is Stephen again...]
Just because physicists don't have a perfect model yet, it doesn't
change basic facts that anyone can observe by opening their eyes.


'Basic facts that anyone can observe by opening their eyes' are
elusive things! The earth is not flat. All observations are made in
the context of a model of reality.


Well, I still think that the *local* 'flatness' of the Earth's surface
is highly significant (at least the fact that the general
sphericalness has less significance at a local scale than the hills
and valleys and other lumps and bumps), even if only locally. Our
current models are much more general, of course, but showing that
something can be explained as local effects in a new and more general
model is not the same as proving that easily observable consistent
patterns are insignificant.

In the case of the Earth's flatness, the historical model has not only
been superseded but now seems cringingly obsolete as our daily lives
have exceeded the limits of that model - not a day goes by without
some reminder of the non-flat nature of the Earth at non-local scales.

But are we likely to exceed the limits of perceptions where time is
significant any time soon? Ever? If so, how come no smug gits from the
future have come back to tell us how it is done?

If you say that our perception of time is not a universal absolute,
well some aspects of that are already proven fact and other aspects
are perfectly plausible. I have no problem with that. But to claim
that our local perception of time has no basis in our locally
perceptible 'region' of reality is, IMO, daft.

All the evidence shows that there is a consistent arrow of time that
we cannot opt out of - and 'local' in this case seems a lot bigger
than a few tens or hundreds of miles. Current evidence suggests that
works much the same in distant galaxies as it does in the next town
down the road, as long as we allow for relativity where relevant.
and our minds are
no more special than any other arrangement of matter.


Unqualified, that's clearly nonsense.


It is qualified by the context of the discussion - the claims that
there is no reality separate from perception (and therefore that the
arrangement of matter called a brain has a special ability to write
the rules that all matter in the universe follows).

As I said in another post...

"""
Consciousness is not magic. Brains, like the rest of the body, are
just another arrangement of matter - certainly a complex and useful
arrangement, but it is still obeying (not defining) the rules layed
down by the universe we live in. There is nothing special about people
which lets them arbitrarily define the universe.
"""

Yes, the human brain is (currently, so far as we know) unique. It is
special. But it does not need magic powers in order to be special.
In quantum theory, the observer is nothing more than a sufficient mass
that a superposition must be resolved quickly. Not so long ago, people
were grasping to the idea that being an 'observer' in quantum physics
was a special function of human consciousness. I do not need that
hypothesis any more than I need the hypothesis of god, or the
hypothesis that we are living in the matrix acting as magical
batteries that somehow produce more energy than we consume.


(I like your general thrust, but I think it's simpler than that -- the
many-worlds theory just says "let's forget about the collapse of the
wavefunction", and everything seems to work out fine.)


Yes, but why can we see the effects of superposition at the
microscopic scale but not at the macroscopic. That is what strikes me
as odd - if parallel universes work as an explanation, then why do
they work differently at the two scales. In particular, why can we not
see evidence of it at the scales we are good at perceiving when we can
see the evidence so clearly at the scales we are not naturally
equipped to perceive at all.
To be honest, I don't see the point of basing opinions on what was
said by philosophers before the current level of knowledge about
physics and about the mind was achieved.


Certainly some philosophers seem over-concerned with the history of
philosophy.


Looking at that again, I overstated it of course. Wisdom is not such a
cheap thing. But still, these philosophers simply did not have access
to much of the knowledge that, thanks to science, we now take
more-or-less for granted.
One last thought, at least for today...

If there is no reality separate from perception, and if 'reality' is
therefore just another perception, how come it is so bloody complex
and impossible for most of the organisms capable of perception to
understand?

When essentially everyone on Earth believed in a flat Earth, why was
there any perceptible evidence that the Earth was not flat - unless it
was because of an independent reality 'taking precedence' over
perceptions?
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #243
On 26 Oct 2003 18:01:47 +0000, jj*@pobox.com (John J. Lee) wrote:
Weak link, very weak, Anton. ;-) Still, at least you're trying, unlike
me...


<desperate attempt to lighten the tone>

I'm *very* trying ;-)
OK, sorry for the bad joke.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #244
In article <j2********************************@4ax.com>, Stephen Horne
<st***@ninereeds.fsnet.co.uk> writes
On 26 Oct 2003 18:01:47 +0000, jj*@pobox.com (John J. Lee) wrote:
Weak link, very weak, Anton. ;-) Still, at least you're trying, unlike
me...


<desperate attempt to lighten the tone>

I'm *very* trying ;-)
OK, sorry for the bad joke.

Well here's even more fun the following quotes from this

http://www.scieng.flinders.edu.au/cp.../Semantic.html

seem to be in my corner, but as before I'm getting dizzy even reading
it.
'''
That process physics could be implemented by a model of Mind in the SNN
to reveal the fundamental semantic, temporal, experiential nature of
reality is deeply satisfying for a number of reasons: 1.) the essential
semantic nature of reality has been thrust upon us by the rigorously
proven limitations of self-referential syntactic systems, and so rests
upon the most secure imaginable and uncompromisingly honest intellectual
foundation; 2.) it is, of course, Mind in which semantic and the Meaning
to which it corresponds is ultimately registered[10]; 3.) Mind, as the
theoretical statistician turned economic theorist and pioneering
biophysical economist, Nicholas Georgescu-Roegen argues, appears to be
required for the experience of what Dr. Cahill calls the "present
moment" or the "now" required if we are to make meaningful observations
at all.
'''

'''
To elaborate upon this third point: Georgescu-Roegen tells us of an
illustration by Nobel Prize-winning physicist, Percy Bridgman, showing
that with the advent of relativity in physics, it is perfectly possible
for two separated observers travelling in different directions through
space to register a signal from a third position in space as two
different facts. One observer may, for instance, detect a " ‘a flash of
yellow light’ " while the second registers the same signal as " ‘a glow
of heat on his finger.’ " Bridgman’s point, according to Georgescu-
Roegen, is that for relativity to be able to assert that about the same
event implies that even relativity physics really presupposes
simultaneity in some absolute sense despite its attempt to show
simultaneity’s problematic nature with the registration of a single
event as two distinct facts. Furthermore, relativity physics does not
show how this absolute simultaneity could be established.
'''
--
Robin Becker
Jul 18 '05 #245
Stephen Horne <st***@ninereeds.fsnet.co.uk> writes:
On 26 Oct 2003 17:54:58 +0000, jj*@pobox.com (John J. Lee) wrote:
Stephen Horne <st***@ninereeds.fsnet.co.uk> writes:

[...]
Just because physicists don't have a perfect model yet, it doesn't
change basic facts that anyone can observe by opening their eyes.
'Basic facts that anyone can observe by opening their eyes' are
elusive things! The earth is not flat. All observations are made in
the context of a model of reality.


Well, I still think that the *local* 'flatness' of the Earth's surface
is highly significant (at least the fact that the general

[...]

I think all this is irrelevant to the question at hand (realism).

and our minds are
no more special than any other arrangement of matter.


Unqualified, that's clearly nonsense.


It is qualified by the context of the discussion - the claims that
there is no reality separate from perception (and therefore that the
arrangement of matter called a brain has a special ability to write
the rules that all matter in the universe follows).

[...]

Oh, OK.

[...]

Yes, but why can we see the effects of superposition at the
microscopic scale but not at the macroscopic. That is what strikes me
as odd - if parallel universes work as an explanation, then why do
they work differently at the two scales. In particular, why can we not
They don't.

see evidence of it at the scales we are good at perceiving when we can
see the evidence so clearly at the scales we are not naturally
equipped to perceive at all.
We see exactly the effects that the theory predicts. They're just
very small.
[...]

One last thought, at least for today...

If there is no reality separate from perception, and if 'reality' is
therefore just another perception, how come it is so bloody complex
and impossible for most of the organisms capable of perception to
understand?

[...]

Yeah -- hence the solipsism joke.
John
Jul 18 '05 #246
On 26 Oct 2003 20:56:09 +0000, jj*@pobox.com (John J. Lee) wrote:
Yes, but why can we see the effects of superposition at the
microscopic scale but not at the macroscopic. That is what strikes me
as odd - if parallel universes work as an explanation, then why do
they work differently at the two scales. In particular, why can we not
They don't.


Are you claiming that in Schroedinger's experiment that the dead and
live cats interact in some way that can be measured outside the box
without collapsing the waveform?

I was also under the impression that the largest 'particle' to be
successfully superposed in an experiment was a buckyball (or something
like that - at least a 'large' molecule of some kind or another) and
the timescale for that superposition was tiny.

Yet the whole point of the thought experiment is that according to the
theory, as conventionally described (I know next to nothing of the
detail), it should be possible for a cat to be superposed almost as
easily as it is possible for a subatomic particle - a simple
cause-and-effect chain is all that is needed. If that is the case,
superpositions of macroscopic objects should be happening all the
time.

Now either the superpositions are in parallel universes with each
state undetectable from an observer in another one of those universes,
or they are in the same universe and detectable in some way, or there
is a differentiation between the microscopic and macroscopic scales,
or - and this is very likely, I admit - I am seriously confused about
what the hell is going on (the natural state for a human confronted
with quantum theory).
see evidence of it at the scales we are good at perceiving when we can
see the evidence so clearly at the scales we are not naturally
equipped to perceive at all.


We see exactly the effects that the theory predicts. They're just
very small.


OK - so why is it not possible to detect the superposition of that
cat? Why is the experiment still considered a thought experiment only?

I would have thought, with a huge number of particles affected by the
superposition of states, there would be a huge number of interactions
between the particles in those two superposed states.

Or am I just seeing the effects of superposition in the wrong way?
Yeah -- hence the solipsism joke.


Ah - sorry - I'm not actually familiar with that term.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #247
"Rainer Deyke" <ra*****@eldwood.com> writes:

Joe Marshall wrote:
2. But the computer shouldn't wait until it can prove you are done
to close the file.


Wrong. The computer can prove that I am done with the file the moment my
last reference to the file is gone. I demand nothing more (or less).


But in practice, the computer will only prove you are done with the file
the next time it decides to check. This is different from "the moment
my last reference...is gone" because for very good preformance reasons,
the computer doesn't check at each unbinding. (And no, reference counts
aren't a good idea either.)

Common Lisp as currently formulated does not have finalizers, although
there have been over the years proposals to add them. There are still
some unresolved issues regarding how to limit what such finalizers can
do. In any case, that will also not give immediate results.

--
Thomas A. Russ, USC/Information Sciences Institute

Jul 18 '05 #248
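The point Thomas Russ makes above - that collection-time finalization is not immediate, and that reference-count-style "last unbinding" semantics cannot be relied on in general - can be sketched in a few lines of Python. This is an editorial illustration, not code from the thread; the `FileLike` and `Managed` names are made up for the example:

```python
import weakref

class FileLike:
    """Toy resource that records when it is released."""
    released = []

    def __init__(self, name):
        self.name = name
        # weakref.finalize fires when the object is collected -- but *when*
        # that happens depends on the collector, not on the last unbinding.
        self._finalizer = weakref.finalize(self, FileLike.released.append, name)

    def close(self):
        # Deterministic release: run the finalizer now, exactly once.
        self._finalizer()

# Dropping the reference *may* release immediately (CPython's reference
# counting does), but other implementations are free to defer collection
# arbitrarily long -- exactly the non-determinism discussed above.
f = FileLike("a")
del f

# A context manager sidesteps the collector entirely: release is
# guaranteed at block exit on every implementation.
class Managed(FileLike):
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()
        return False

with Managed("b") as g:
    pass

assert "b" in FileLike.released
```

The `with`-block approach is the usual answer on the Python side of this debate: instead of proving "the program is done with the file", the programmer states it explicitly at a fixed point in the control flow.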
Stephen Horne <st***@ninereeds.fsnet.co.uk> writes:
On 26 Oct 2003 20:56:09 +0000, jj*@pobox.com (John J. Lee) wrote:
Yes, but why can we see the effects of superposition at the
microscopic scale but not at the macroscopic. That is what strikes me
as odd - if parallel universes work as an explanation, then why do
they work differently at the two scales. In particular, why can we not
They don't.


Are you claiming that in Schroedinger's experiment that the dead and
live cats interact in some way that can be measured outside the box
without collapsing the waveform?


Well, in the many-worlds interpretation (MWI) there *is* no
wavefunction collapse: everything just evolves deterministically
according to the Schrodinger equation. But of course, since cats are
big lumps of matter, one wouldn't expect to be able to measure
interference effects using cats.

I was also under the impression that the largest 'particle' to be
successfully superposed in an experiment was a buckyball (or something
like that - at least a 'large' molecule of some kind or another) and
the timescale for that superposition was tiny.
The largest *measured* superposition, yes. The Copenhagen
interpretation says that the world evolves according to the
Schrodinger equation until, um, it stops doing that, and collapses to
an eigenstate. When does the Copenhagen interpretation say the wfn
collapses? It doesn't! It denies any meaning to that question.
That's claiming that we just "shouldn't" ask about this part of
reality, and stop our enquiry there. Why should I follow that
instruction when the MWI explains exactly what happens? If a theory
explains more than its rival, one rejects the rival theory. And it
doesn't make any sense to say "there are many universes, except for
large objects, for which there is only one universe". This brings us
into epistemological issues which Deutsch deals with in his book much
better than I can.

Of course, there's more to this debate than Copenhagen vs. MWI, but
the other rival theories all (to my very limited knowledge) seem to be
either re-hashings of MWI in disguise, or complicated theories that
introduce ad-hoc irrelevancies without any compensating benefit. And,
to dispense with the absurd objection that MWI is 'expensive in
universes', since when has complexity of *entities* been a criterion
on which to judge a theory?? Complexity of *theories* of the world is
a problem, complexity of the world itself is not. Indeed, one thing
we know independent of any theory of quantum mechanics (QM) is that
the world is damned complicated!

Yet the whole point of the thought experiment is that according to the
theory, as conventionally described (I know next to nothing of the
detail), it should be possible for a cat to be superposed almost as
easily as it is possible for a subatomic particle - a simple
cause-and-effect chain is all that is needed. If that is the case,
superpositions of macroscopic objects should be happening all the
time.
They do, yes!

Now either the superpositions are in parallel universes with each
state undetectable from an observer in another one of those universes,
*The fact that those superpositions exist* is justified by the fact
that MWI is the best theory of QM that we have. The particular nature
of a particular large object's superposition is not measurable.
Contrary to popular belief, this raises no major epistemological
problems for MWI, and does not turn it into metaphysics.

or they are in the same universe and detectable in some way, or there
is a differentiation between the microscopic and macroscopic scales,
or - and this is very likely, I admit - I am seriously confused about
what the hell is going on (the natural state for a human confronted
with quantum theory).


Saying that superpositions are "in one universe" or another seems to
be playing mix-n-match with the various theories.

[...]
We see exactly the effects that the theory predicts. They're just
very small.


OK - so why is it not possible to detect the superposition of that
cat? Why is the experiment still considered a thought experiment only?


Simply because that's what QM predicts for large objects. The
'accident' of the size of Planck's constant means that interference
effects are small for large objects. The universes involved are none
the less real for that: denying that requires doublethink.
Interference effects aside, why *should* we experience anything
unusual when "we" (scare quotes because issues of personal identiy
come up here, of course) exist as a superposition, ie. when we exist
in multiple universes? There is a very close parallel here with
people's disbelief in the round-earth theory because they couldn't see
why they wouldn't fall off the earth if they moved too far from "the
top of the earth". Why don't we fall off the earth? Because the
(scientifically justified) theory says we won't. Why doesn't the me
in this universe experience multiple universes simultaneously?
Because the (scientifically justified) theory says I won't. Why
*should* we experience multiple universes? -- universes are entirely
independent of each other apart from interference effects that are
only large for very small objects, or slightly larger and very
carefully constructed ones.

But again, for those arguments in more detail you're vastly better
advised to go to David Deutsch's (extremely readable and enlightening)
book than to me :-)

[...]
Yeah -- hence the solipsism joke.


Ah - sorry - I'm not actually familiar with that term.


Well, explaining a joke always spoils it, but: a solipsist is a person
who believes that he is the only real thing in existence. The rest of
the universe, to a solipsist (to *the* solipsist, in fact ;-) is
simply the result of his own imaginings. Deutsch very clearly
presents an argument that this position is indefensible and
meaningless, starting from that joke I quoted.
John
Jul 18 '05 #249
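John's remark above - that the "'accident' of the size of Planck's constant means that interference effects are small for large objects" - can be made quantitative with the standard de Broglie estimate, lambda = h / (m * v). This back-of-envelope calculation is an editorial aside, not from the thread:

```python
# De Broglie wavelengths: lambda = h / (m * v).
H = 6.626e-34  # Planck's constant, in joule-seconds

def de_broglie(mass_kg, speed_m_s):
    """Wavelength (in metres) associated with a moving mass."""
    return H / (mass_kg * speed_m_s)

electron = de_broglie(9.11e-31, 1.0e6)  # electron at ~10^6 m/s
cat = de_broglie(4.0, 1.0)              # 4 kg cat at walking pace

# The electron's wavelength (~10^-9 m) is comparable to atomic spacing,
# so interference is easy to arrange; the cat's (~10^-34 m) is roughly
# 25 orders of magnitude smaller, far below any measurable scale.
assert electron > 1e-10 and cat < 1e-33
```

The huge disparity is why buckyball-scale superpositions are at the edge of experiment while cat-scale interference is hopelessly out of reach, without any need for a microscopic/macroscopic cutoff in the theory itself.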
In article <87************@pobox.com>, jj*@pobox.com (John J. Lee)
wrote:
Stephen Horne <st***@ninereeds.fsnet.co.uk> writes:
On 26 Oct 2003 20:56:09 +0000, jj*@pobox.com (John J. Lee) wrote:
> Yes, but why can we see the effects of superposition at the
> microscopic scale but not at the macroscopic. That is what strikes me
> as odd - if parallel universes work as an explanation, then why do
> they work differently at the two scales. In particular, why can we not

They don't.


Are you claiming that in Schroedinger's experiment that the dead and
live cats interact in some way that can be measured outside the box
without collapsing the waveform?


Well, in the many-worlds interpretation (MWI) there *is* no
wavefunction collapse: everything just evolves deterministically
according to the Schrodinger equation. But of course, since cats are
big lumps of matter, one wouldn't expect to be able to measure
interference effects using cats.


For an interesting discussion of the shortcomings of MWI (not to mention
CI) have a look at
<http://www.npl.washington.edu/npl/int_rep/tiqm/TI_toc.html>.
I was also under the impression that the largest 'particle' to be
successfully superposed in an experiment was a buckyball (or something
like that - at least a 'large' molecule of some kind or another) and
the timescale for that superposition was tiny.


You all might also be interested in Objective Reduction theories. Some
of them suggest that the brain itself is a fairly large object in
superposition. See <http://www.consciousness.arizona.edu/quantum/>

Enjoy.

--

- rmgw

<http://www.electricfish.com/>

----------------------------------------------------------------------------
Richard Wesley Electric Fish, Inc.

"Violence is the last refuge of the incompetent."
- Isaac Asimov, _Foundation_
Jul 18 '05 #250
