Bytes | Software Development & Data Engineering Community

BIG successes of Lisp (was ...)

In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
successes.

a. orbitz.com web site uses Lisp for algorithms, etc.
b. Yahoo store was originally written in Lisp.
c. Emacs

The issues with these will probably come up, so I might as well
mention them myself (which will also make this a more balanced
post)

a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while)
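The "full garbage collection" guess above can be illustrated in miniature with CPython's own generational cycle collector (a sketch of the generational idea in Python, not a claim about whatever Lisp runtime Orbitz actually used):

```python
import gc

# CPython's cycle collector is generational: new objects start in
# generation 0; objects that survive a collection are promoted to an
# older generation, which is scanned far less often.
print(gc.get_threshold())  # per-generation allocation thresholds, e.g. (700, 10, 10)
print(gc.get_count())      # current allocation counts per generation

# A minor collection scans only the youngest generation: cheap and frequent.
gc.collect(0)

# A full collection scans every generation at once -- the analogue of the
# occasional expensive "full GC" pause guessed at above.
unreachable = gc.collect(2)
print(unreachable)         # number of unreachable objects found this pass
```

Even with a generational scheme, the oldest generation must be swept eventually, which is where the rare long pauses come from.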

b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)

c. Emacs has a reputation for being slow and bloated. But then
it's not written in Common Lisp.

Are ViaWeb and Orbitz bigger successes than LaTeX? Do they
have more users? It depends. Does viewing a PDF file made
with LaTeX make you a user of LaTeX? Does visiting Yahoo
Store make you a user of ViaWeb?

For the sake of being balanced: there were also some *big*
failures, such as Lisp Machines. They failed because
they could not compete with UNIX (Sun, SGI) at a time when
performance, multi-user support and uptime were of prime importance.
(Older LispM's just leaked memory until they were shut down,
newer versions overcame that problem but others remained)

Another big failure that is often _attributed_ to Lisp is AI,
of course. But I don't think one should blame a language
for AI not happening. Marvin Minsky, for example,
blames Robotics and Neural Networks for that.
Jul 18 '05
jj*@pobox.com (John J. Lee) wrote in message news:<87************@pobox.com>...
mi**@pitt.edu (Michele Simionato) writes:
OK, I give you that, I shouldn't have implied that the overall
number (or fraction) of CI-believers or non-MWI-believers was small
(though I certainly don't know whether it's a majority or not -- and
I'm afraid your anecdotes don't persuade me that you do, either).
Amongst jobbing Physicists (rather than the "great and good" of that
survey), I would *guess* the fraction of CI-believers or people who've
never really thought about it is much higher (partly because some
parts of the 'front-line' of Physics, cosmology and quantum
computation in particular, tend to rub the inadequacy of CI in your
face).


Please, take a look at my publication list (it is enough to search for
my name at http://www.slac.stanford.edu/spires/hep/) and you will see that
I have published research papers in the domain of early Universe cosmology.
In my experience, the beliefs of people in the field are different
from what you say.
I prefer not to comment on the "jobbing physicist" part.

Michele Simionato
Jul 18 '05 #271
mi**@pitt.edu (Michele Simionato) writes:
[...]
I prefer not to comment on the "jobbing physicist" part.


Hmm, just in case of any misunderstanding: I wasn't trying to belittle
your work as a Physicist by using that phrase, Michele. It was meant
light-heartedly, and wasn't even meant as a description of you in
particular (I've never read any of your Physics work, after all).
John
Jul 18 '05 #272
Stephen Horne <st***@ninereeds.fsnet.co.uk> wrote in message news:
<snip some argument I would agree>
Perhaps cats simply don't have a particle/wave duality issue to worry
about.


I have got the impression (please correct me if I misread your posts) that
you are invoking the argument "cats are macroscopic objects, so their
undulatory nature does not matter at all, whereas electrons are
microscopic, so their undulatory nature does matter a lot."

This kind of argument is based on the de Broglie wavelength concept and
is perfectly fine. Nevertheless, I would like to make clear (probably
it is already clear to you) that quantum effects are by no means
confined to the microscopic realm. We cannot say "okay, quantum is
bizarre, but it does not affect me, it affects only a little world
that I will never see". That's not true. We see macroscopic effects of
the quantum nature of reality all the time. Take for instance
conduction theory. When you turn on your computer, electrons flow
through a copper cable from the electric power plant to your house.
Any modern theory of conduction is formulated as a
(non-relativistic) quantum field theory of an electron gas
interacting with a lattice of copper atoms. From the microscopic
theory you get macroscopic concepts, for instance you may determine
the resistivity as a function of the temperature. The classical
Drude model has long ceased to be a good enough explanation of
conductivity. Think also of superconductivity and superfluidity:
these are spectacular examples of microscopic quantum effects
affecting macroscopic quantities.
Finally, the most extreme example: quantum fluctuations
during the inflationary era, when the entire observable universe
had a microscopic size, are ultimately responsible for the density
fluctuations at the origin of galaxy formation. Moreover, we
observe effects of these fluctuations in the cosmic background
radiation, i.e. in photons coming from the most extreme
distances in the Universe, photons that travelled for billions
and billions of light years. Now, that's macroscopic!
Michele
Jul 18 '05 #273
On 29 Oct 2003 23:26:05 -0800, mi**@pitt.edu (Michele Simionato)
wrote:
Stephen Horne <st***@ninereeds.fsnet.co.uk> wrote in message news:
<snip some argument I would agree>
Perhaps cats simply don't have a particle/wave duality issue to worry
about.
I have got the impression (please correct me if I misread your posts) that
you are invoking the argument "cats are macroscopic objects, so their
undulatory nature does not matter at all, whereas electrons are
microscopic, so their undulatory nature does matter a lot."


That is *far* from what I am saying.

I find some explanations of superposition and decoherence difficult to
believe *because* they seem to differentiate between the microscopic
and macroscopic scales. MWI is one - the appearance is that
macroscopic objects in superposition get a different universe for each
superposed state (because there is no visible artifact of the
superposition - the observer is only in one universe) whereas for
microscopic objects in superposition there is no different universe
(as there are clear artifacts of the superposition, showing that the
superposed states interacted and thus existed in the same universe at
the same time).

I equally find the 'conscious mind has special role as observer'
hypothesis hard to accept as we have ample evidence that the universe
existed for billions of years before there were any conscious minds
that we know of. The evidence suggests that conscious minds exist
within the universe as an arrangement of matter subject to the same
laws as any other arrangement of matter.

In both cases, there is no issue of proof or logic involved. It's more
a matter of credibility - and with the conscious mind concept in
particular, of explanatory value. As far as science has studied the
mind so far all the findings show it to be an arrangement of matter
following the same laws of physics and chemistry that any other
arrangement of matter follows. There is no sign of an outside agency
creating unexplainable artifacts in the brain's functioning. And if
there is no role for a thing outside of the brain to be generating
consciousness - if the consciousness we experience is a product of the
brain - then what role does this other consciousness have?

While I have a tendency to confuse his name (I think I called him
Penfold earlier, though what Dangermouse's sidekick has to do with this
I really can't say!), I prefer Penrose's theory, where the microscopic
and macroscopic really are different - not because they follow
different rules, but because the time that a superposition can survive
is inversely related to the uncertainty it creates in space-time. Have
a lot of mass in substantially different states (e.g. a cat both alive
and dead, or for that matter a vial of poison both broken and intact)
and the superposition can only survive for a tiny portion of a second.

I'm not sure if this is the same Penrose who speculates that
superposition of brain states is important to creating consciousness.
It would be odd if it is, of course, as a brain is clearly
macroscopic. But then he could mean something else - many
superpositions of particles within the brain. As long as each created
superposition only a small local uncertainty in space time (ie no
substantial 'hotspots' of superposition), this accumulation of
microscopic superpositions could be consistent - though to be honest I
seriously doubt it.

As should be clear, my understanding of the specifics of quantum
theory is extremely limited - but my understanding of general
scientific principles isn't too bad. That is why I earlier pointed out
that maybe the MWI wouldn't cause me such a problem if it was
expressed in some other way - after all, most current theory is so
abstract that the explanations should be taken as metaphors rather
than reality anyway.
This kind of argument is based on the de Broglie wavelength concept and
is perfectly fine. Nevertheless, I would like to make clear (probably
it is already clear to you) that quantum effects are by no means
confined to the microscopic realm. We cannot say "okay, quantum is
bizarre, but it does not affect me, it affects only a little world
that I will never see". That's not true. We see macroscopic effects of
the quantum nature of reality all the time.
No problem with that, but we are seeing microscopic effects en masse
rather than macroscopic effects - something rather different, in my
mind, to a cat being both alive and dead at the same time. For
example...
Take for instance
conduction theory. When you turn on your computer, electrons flow
through a copper cable from the electric power plant to your house.
Any modern theory of conduction is formulated as a
(non-relativistic) quantum field theory of an electron gas
interacting with a lattice of copper atoms. From the microscopic
theory you get macroscopic concepts, for instance you may determine
the resistivity as a function of the temperature. The classical
Drude model has long ceased to be a good enough explanation of
conductivity. Think also of superconductivity and superfluidity:
these are spectacular examples of microscopic quantum effects
affecting macroscopic quantities.
Of course. But none of these requires a macroscopic object to be
superposed. It may require many microscopic objects to have been
superposed, over and over again (I really don't know how, or even if,
superposition is really involved in these effects - but let me argue
the principle anyway) - but that isn't the same thing. Taking Penrose's
theory again, each individual superposition only creates a small local
uncertainty in spacetime. As long as the many separate superpositions
are spread out in space and time, there will be no particular
'hotspots' where superposition would break down. In fact, any
coincidental hotspots of uncertainty would accelerate decoherence of
superpositions in that region and thus act as a stabilising or
limiting factor in setting the amount of superposition that can occur
in any region.

I would suggest that this limiting thing would be a useful artifact to
look for - or at least some useful artifact might be suggested by the
idea - that could be tested to prove or disprove the theory.
This limiting effect wasn't mentioned in the article, BTW - maybe
Penrose hasn't thought of it (he proposed to look for artifacts of a
single superposition which cannot be measured using current
technology).

Maybe superfluids would be the place to look for that artifact. Maybe
there is something in the shells of electrons around a nucleus - I'd
certainly expect quantum weirdness there.

But of course I wouldn't have a clue what kind of artifact to look
for, as my understanding is strictly limited to the 'I've read new
scientist on occasion' level.
Finally, the most extreme example: quantum fluctuations
during the inflationary era, when the entire observable universe
had a microscopic size, are ultimately responsible for the density
fluctuations at the origin of galaxy formation. Moreover, we
observe effects of these fluctuations in the cosmic background
radiation, i.e. in photons coming from the most extreme
distances in the Universe, photons that travelled for billions
and billions of light years. Now, that's macroscopic!


Of course, but the quantum effects are not particularly interesting in
that case. Or rather they are to cosmology, but not as far as I can
see to understanding quantum theory. It's a bit like looking at an
image from an electron microscope and claiming that an atom is several
mm wide - the artifact that you are observing has simply been scaled
up relative to the process that created that artifact. The effects
when they occurred were on the microscopic scale - only the artifacts
are macroscopic.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #274
Stephen Horne <st***@ninereeds.fsnet.co.uk> wrote:
particular, of explanatory value. As far as science has studied the
mind so far all the findings show it to be an arrangement of matter
following the same laws of physics and chemistry that any other
arrangement of matter follows. There is no sign of an outside agency
creating unexplainable artifacts in the brain's functioning. And if
there is no role for a thing outside of the brain to be generating
consciousness - if the consciousness we experience is a product of the
brain - then what role does this other consciousness have?
The more territory modern science covers, the more it becomes clear
that the known parts of the universe are only a small part of what is
"out there". So "objectively" science gains more knowledge, but
relatively speaking (seeing it as a percentage of what is
currently known to be not known, but knowable in principle) science is
losing ground fast. Also an even greater area of the universe is
supposed to exist that we will not even have a chance *ever* to know
anything about.

Throughout the centuries there has been evidence placing humanity
firmly *not* in the center of the universe. First the earth was proven
to rotate around the sun and not the other way around, next our sun
was not in the center of the galaxy and so on.

Maybe now it is time to accept the fact that all the things we know
taken together are only a small fraction of what can be known, and
even more, that that fraction is not even in a central position in the
larger universe of the knowable, and that the knowable is just
vanishingly small compared to the great abyss of the unknowable
in which everything is embedded.

How come then that the sciences have been so uncannily effective
given that they are such an arbitrary choice within the knowable? The
answer is of course that there are a lot of other possible sciences,
completely unrelated to our own that would have been just as effective
as -or even more effective than- our current sciences, had they been
pursued with equal persistence during the same amount of time over a
lot of generations.

The effectiveness of the current natural sciences is a perceptual
artifact caused by our biased history. From a lot of different
directions messages are coming in now, all saying more or less the
same: "If asking certain questions, one gets answers to these
questions in a certain way, but if asking different questions one gets
different answers, sometimes even contradicting the answers to the
other questions".

This might seem mystic or strange but one can see these things
everywhere, if one asks that kind of questions :-)

One example would be the role the observer plays in quantum mechanics,
but something closer to a programmer would be the way static or
dynamic typing influence the way one thinks about designing a computer
program. A static typist is like someone removing superfluous material
in order to expose the statue hidden inside the marble, while a
dynamic typist would be comparable to someone taking some clay,
forming it into a shape and baking it into a fixed form only at the
last possible moment. These ways of designing things are both valid
(and there are infinitely more other ways to do it) but they lead to
completely different expectations about the design of computer
programs.
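A tiny Python sketch of the "clay" half of that analogy (the Square and Circle classes here are made up for illustration): a dynamically typed function commits to nothing about its argument until the moment of the call.

```python
# Dynamic typing: the "clay" stays malleable -- one function serves any
# object that happens to support the operations it uses (duck typing).
def describe(shape):
    # No declared types; the requirement "shape has .area()" is only
    # checked when the call actually happens.
    return f"area = {shape.area()}"

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

print(describe(Square(3)))   # area = 9
print(describe(Circle(1)))   # area = 3.14159
```

A statically typed design would instead "carve" an interface up front (an abstract Shape type), fixing the program's form before any of it runs.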

In a certain sense all science reduces the world to a view of it that
leaves out more than it describes, but that doesn't preclude it
being effective. For a last example, what about the mathematics of
curved surfaces? Sciences have had most of their successes using
computations based on straight lines, and only recently has the power of
curves been discovered to be as predictive as, or more predictive than,
linear approaches.

[..]
As should be clear, my understanding of the specifics of quantum
theory is extremely limited - but my understanding of general
scientific principles isn't too bad. That is why I earlier pointed out
that maybe the MWI wouldn't cause me such a problem if it was
expressed in some other way - after all, most current theory is so
abstract that the explanations should be taken as metaphors rather
than reality anyway.


Abstractness doesn't preclude effectiveness, but to try to use
abstractions to understand the world is foolish, since it doesn't work
the other way around. It's a many-to-one mapping, as in plotting a
sine function on an xy-plane and not being able to find the
x-coordinate for a certain y-coordinate, while at the same time being
perfectly able to predict a y-coordinate given an x-coordinate.
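That many-to-one point can be checked directly (a throwaway Python sketch):

```python
import math

# Forward direction: given x, the y-coordinate is uniquely determined.
x = math.pi / 6
y = math.sin(x)  # approximately 0.5

# Reverse direction: given y = 0.5, infinitely many x values qualify,
# so "the" x-coordinate cannot be recovered from y alone.
candidates = [math.pi / 6 + 2 * math.pi * k for k in range(3)]
candidates += [math.pi - math.pi / 6 + 2 * math.pi * k for k in range(3)]
for c in candidates:
    assert abs(math.sin(c) - 0.5) < 1e-9  # six different x, one y
```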

Anton

Jul 18 '05 #275
"Anton Vredegoor" <an***@vredegoor.doge.nl>:
The more territory modern science covers, the more it becomes clear
that the known parts of the universe are only a small part of what is
"out there".
But as Stephen pointed out, the new things we find are new
in terms of arrangement but still following the same laws of physics
we see here on Earth. (There *may* be some slight changes in
cosmological constants over the last dozen billion years, but I
haven't followed that subject.) These arrangements may be
beyond our ability to model well, but there's little to suggest
that in principle they couldn't be. (Eg, QCD could be used to
model the weather on Jupiter, if only we had a currently almost
inconceivably powerful computer. Running Python. ;)
So "objectively" science gains more knowledge, but
relatively speaking (seeing it as a percentage of what is
currently known to be not known, but knowable in principle) science is
losing ground fast. Also an even greater area of the universe is
supposed to exist that we will not even have a chance *ever* to know
anything about.
That's a strange measure: what we know vs. what we know we
don't know. Consider physics back in the late 1800s. They could
write equations for many aspects of electricity and magnetism, but
there were problems, like the 'ultraviolet catastrophe'. Did they
consider those only minor gaps in knowledge or huge, gaping chasms
best stuck in a corner and ignored?

Is the gap between QCD and general relativity as big? Hmmmm...
Throughout the centuries there has been evidence placing humanity
firmly *not* in the center of the universe. First the earth was proven
to rotate around the sun and not the other way around, next our sun
was not in the center of the galaxy and so on.
You use this as an argument for insignificance. I use it to show
that the idea of "center of" is a meaningless term. If I want, I can
consider my house as the center of the universe. I can still make
predictions about the motion of the planets, and they will be
exactly as accurate as using a sun-centered model. The only
difference is that my equations will be a bit more complicated.
Maybe now it is time to accept the fact that all the things we know
taken together are only a small fraction of what can be known, and
even more, that that fraction is not even in a central position in the
larger universe of the knowable, and that the knowable is just
vanishingly small compared to the great abyss of the unknowable
in which everything is embedded.
Why? I mean, it's true, but it seems that some knowledge is
more useful than others. It's true because even if there were
a quadrillion people, each one would be different, with a unique
arrangement of thoughts and perceptions and a unique bit of
knowledge unknowable to you.
How come then that the sciences have been so uncannily effective
given that they are such an arbitrary choice within the knowable? The
answer is of course that there are a lot of other possible sciences,
completely unrelated to our own that would have been just as effective
as -or even more effective than- our current sciences, had they been
pursued with equal persistence during the same amount of time over a
lot of generations.
I don't follow your argument that this occurs "of course."

It's not for a dearth of ideas. Humans did try other possible
sciences over the last few millennia. Despite centuries of effort,
alchemy never became more than a descriptive science, and
despite millennia of attempts, animal sacrifices never improved
crop yields, and reading goat entrails didn't yield any better
weather predictions.

On the other hand, there are different but equivalent ways to
express known physics. For example, Hamiltonian and Newtonian
mechanics, or matrix vs. wave formulations of quantum mechanics.
These are alternative ways to express the same physics, and some
are easier to use for a given problem than another. Just like a
sun-centered system is easier for some calculations than a "my house"
centered one.

On the third hand, there are new theoretical models, like string
theory, which are different than the models we use. But they are
not "completely unrelated" and yield our standard models given
suitable approximations.

On the fourth hand, Wolfram argues that cellular automata
provide such a new way of doing science as you argue. But
my intuition (brainw^Wtrained as it is by the current scientific
viewpoint) doesn't agree.
The effectiveness of the current natural sciences is a perceptual
artifact caused by our biased history. From a lot of different
directions messages are coming in now, all saying more or less the
same: "If asking certain questions, one gets answers to these
questions in a certain way, but if asking different questions one gets
different answers, sometimes even contradicting the answers to the
other questions".

This might seem mystic or strange but one can see these things
everywhere, if one asks that kind of questions :-)
Or it means that asking those questions is meaningless. What's
the charge of an electron? The bare point charge is surrounded
by a swarm of virtual particles, each with its own swarm. If you
work it out using higher and higher levels of approximation you'll
end up with different, non-converging answers, and if you continue
it onwards you'll get infinite energies. But given a fixed
approximation you'll find you can make predictions just fine, and
using mathematical tricks like renormalization, the infinities cancel.

For a simpler case: what is the center of the universe? All locations
are equally correct. Is it mystic then that there can be multiple
different answers, or is it simply that the question isn't well defined?
One example would be the role the observer plays in quantum mechanics,
but something closer to a programmer would be the way static or
dynamic typing influence the way one thinks about designing a computer
program.
The latter argument was an analogy that the tools (formalisms) affect
the shape of science. With that I have no disagreement. The science
we do now is affected by the existence of computers. But that's
because no one without computers would work on, say, fluid dynamics
simulations requiring trillions of calculations. It's not because the
science is fundamentally different.

And I don't see how the reference to QM affects the argument. Then
again, I've no problems living in a collapsing wave function.
In a certain sense all science reduces the world to a view of it that
leaves out more than it describes, but that doesn't preclude it
being effective. For a last example, what about the mathematics of
curved surfaces? Sciences have had most of their successes using
computations based on straight lines, and only recently has the power of
curves been discovered to be as predictive as, or more predictive than,
linear approaches.
Yes, a model is a reduced representation. The orbit of Mars can be
predicted pretty well without knowing the location of every rock on
its surface. The argument is that knowing more of the details (and
having the ability to do the additional calculations) only improves the
accuracy. And much of the training in science is in learning how to
make those approximations and recognize what is interesting in the
morass of details.

As for "straight lines", I don't follow your meaning. Orbits have
been treated as curves since, well, since before Ptolemy (who used
circles) or since Kepler (using ellipses), and Newton was using
parabolas for trajectories in the 1600s, and Einstein described
curved space-time a century ago.
Abstractness doesn't preclude effectiveness, but to try to use
abstractions to understand the world is foolish since it doesn't work
the other way around.


You have a strange definition of "effectiveness." I think a science
is effective when it helps understand the world.

The other solution is to know everything about everything, and, well,
I don't know about you but my brain is finite. While I can remember
a few abstractions, I am not omniscient.

Andrew
da***@dalkescientific.com
Jul 18 '05 #276
Andrew Dalke wrote:
"Anton Vredegoor" <an***@vredegoor.doge.nl>:
The more territory modern science covers, the more it becomes clear
that the known parts of the universe are only a small part of what is
"out there".
But as Stephen pointed out, the new things we find are new
in terms of arrangement but still following the same laws of physics
we see here on Earth. (There *may* be some slight changes in
cosmological constants over the last dozen billion years, but I
haven't followed that subject.) These arrangements may be
beyond our ability to model well, but there's little to suggest
that in principle they couldn't be. (Eg, QCD could be used to
model the weather on Jupiter, if only we had a currently almost
inconceivably powerful computer. Running Python. ;)


Probably not even with Python! :-(
Weather (3D fluid dynamics) is chaotic both here on Earth and on Jupiter.
As Dr. Lorenz established when he tried to model Earth's weather, long-term
prediction of future events based on past behavior is not possible with
chaotic systems, even though the underlying models are deterministic.
Current weather models predicting global or regional temperatures 50 years
from now obtain those results by careful choices of initial conditions and
assumptions. In a chaotic system, changing the inputs by even a small
fractional amount causes wild swings in the output, whereas for non-chaotic
models fractional changes on the input produce predictable outputs.
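That sensitivity to initial conditions can be demonstrated with the simplest chaotic system around, the logistic map (a sketch of the principle only; real weather models are vastly more complex):

```python
# Logistic map x -> r*x*(1-x) with r = 4.0, its chaotic regime: a fully
# deterministic rule whose trajectories from nearly identical starting
# points diverge completely within a few dozen iterations.
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.4)
b = trajectory(0.4000001)  # input perturbed by one part in four million

print(abs(a[1] - b[1]))    # after one step: still a tiny difference
print(abs(a[-1] - b[-1]))  # after fifty steps: typically of order 1
```

The rule is perfectly deterministic, yet any measurement error in the starting state is amplified exponentially, which is exactly why long-range forecasts fail.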
So "objectively" science gains more knowledge, but
relatively speaking (seeing it as a percentage of what is
currently known to be not known, but knowable in principle) science is
losing ground fast. Also an even greater area of the universe is
supposed to exist that we will not even have a chance *ever* to know
anything about.

Exactly. Even worse, the various peripheries of Physics and Math are
getting so esoteric that scholars in those areas are losing their ability
to communicate with each other. It is almost like casting chicken entrails.

That's a strange measure: what we know vs. what we know we
don't know. Consider physics back in the late 1800s. They could
write equations for many aspects of electricity and magnetism, but
there were problems, like the 'ultraviolet catastrophe'. Did they
consider those only minor gaps in knowledge or huge, gaping chasms
best stuck in a corner and ignored?

Is the gap between QCD and general relativity as big? Hmmmm...
Throughout the centuries there has been evidence placing humanity
firmly *not* in the center of the universe. First the earth was proven
to rotate around the sun and not the other way around, next our sun
was not in the center of the galaxy and so on.
You use this as an argument for insignificance. I use it to show
that the idea of "center of" is a meaningless term. If I want, I can
consider my house as the center of the universe. I can still make
predictions about the motion of the planets, and they will be
exactly as accurate as using a sun-centered model. The only
difference is that my equations will be a bit more complicated.
Maybe now it is time to accept the fact that all the things we know
taken together are only a small fraction of what can be known, and
even more, that that fraction is not even in a central position in the
larger universe of the knowable, and that the knowable is just
vanishingly small compared to the great abyss of the unknowable
in which everything is embedded.


Why? I mean, it's true, but it seems that some knowledge is
more useful than others. It's true because even if there were
a quadrillion people, each one would be different, with a unique
arrangement of thoughts and perceptions and a unique bit of
knowledge unknowable to you.
How come then that the sciences have been so uncannily effective
given that they are such an arbitrary choice within the knowable? The
answer is of course that there are a lot of other possible sciences,
completely unrelated to our own that would have been just as effective
as -or even more effective than- our current sciences, had they been
pursued with equal persistence during the same amount of time over a
lot of generations.


I don't follow your argument that this occurs "of course."

It's not for a dearth of ideas. Humans did try other possible
sciences over the last few millennia. Despite centuries of effort,
alchemy never became more than a descriptive science, and
despite millennia of attempts, animal sacrifices never improved
crop yields, and reading goat entrails didn't yield any better
weather predictions.

On the other hand, there are different but equivalent ways to
express known physics. For example, Hamiltonian and Newtonian
mechanics, or matrix vs. wave formulations of quantum mechanics.
These are alternative ways to express the same physics, and some
are easier to use for a given problem than another. Just like a
sun-centered system is easier for some calculations than a "my house"
centered one.

On the third hand, there are new theoretical models, like string
theory, which are different from the models we use. But they are
not "completely unrelated" and yield our standard models given
suitable approximations.

On the fourth hand, Wolfram argues that cellular automata
provide such a new way of doing science as you argue. But
my intuition (brainw^Wtrained as it is by the current scientific
viewpoint) doesn't agree.
The effectiveness of the current natural sciences is a perceptual
artifact caused by our biased history. From a lot of different
directions messages are coming in now, all saying more or less the
same: "If asking certain questions, one gets answers to these
questions in a certain way, but if asking different questions one gets
different answers, sometimes even contradicting the answers to the
other questions".

This might seem mystic or strange but one can see these things
everywhere, if one asks that kind of questions :-)


Or it means that asking those questions is meaningless. What's
the charge of an electron? The bare point charge is surrounded
by a swarm of virtual particles, each with its own swarm. If you
work it out using higher and higher levels of approximation you'll
end up with different, non-converging answers, and if you continue
it onwards you'll get infinite energies. But given a fixed
approximation you'll find you can make predictions just fine, and
using mathematical tricks like renormalization, the infinities cancel.


The charge of an electron is a case in point. Occam's Razor is the
justification for adopting unitary charges and disregarding fractional
charges. But who justifies Occam's Razor?


For a simpler case ... what is the center of the universe? All locations
are equally correct. Is it mystic then that there can be multiple
different answers, or is it simply that the question isn't well defined?
"All locations are equally correct" depends on your base assumptions about
the Cosmological Constant and a few other constants. Even Stephen
Hawking, in "A Brief History of Time", mentions the admixture of philosophy
in determining the value of A in Einstein's Metric.

One example would be the role the observer plays in quantum mechanics,
but something closer to a programmer would be the way static or
dynamic typing influences the way one thinks about designing a computer
program.


The latter argument was an analogy that the tools (formalisms) affect
the shape of science. With that I have no disagreement. The science
we do now is affected by the existence of computers. But that's
because no one without computers would work on, say, fluid dynamics
simulations requiring trillions of calculations. It's not because the
science is fundamentally different.

And I don't see how the reference to QM affects the argument. Then
again, I've no problems living in a collapsing wave function.
In a certain sense all science reduces the world to a view of it that
leaves out more than it describes, but that doesn't preclude it
being effective. For a last example, what about the mathematics of
curved surfaces? Sciences have had most of their successes using
computations based on straight lines, and only recently has the power of
curves been discovered to be equally or more predictive than linear
approaches.


Yes, a model is a reduced representation. The orbit of Mars can be
predicted pretty well without knowing the location of every rock on
its surface. The argument is that knowing more of the details (and
having the ability to do the additional calculations) only improves the
accuracy. And much of the training in science is in learning how to
make those approximations and recognize what is interesting in the
morass of details.

As for "straight lines". I don't follow your meaning. Orbits have
been treated as curves since, well, since before Ptolemy (who used
circles) or since Kepler (using ellipses), and Newton was using
parabolas for trajectories in the 1600s, and Einstein described
curved space-time a century ago.
Abstractness doesn't preclude effectiveness, but to try to use
abstractions to understand the world is foolish since it doesn't work
the other way around.


You have a strange definition of "effectiveness." I think a science
is effective when it helps understand the world.

The other solution is to know everything about everything, and, well,
I don't know about you but my brain is finite. While I can remember
a few abstractions, I am not omnipotent.

Andrew
da***@dalkescientific.com


--

-
GrayGeek
Jul 18 '05 #277
On Sat, 01 Nov 2003 16:28:09 +0100, an***@vredegoor.doge.nl (Anton
Vredegoor) wrote:
As should be clear, my understanding of the specifics of quantum
theory is extremely limited - but my understanding of general
scientific principles isn't too bad. That is why I earlier pointed out
that maybe the MWI wouldn't cause me such a problem if it was
expressed in some other way - after all, most current theory is so
abstract that the explanations should be taken as metaphors rather
than reality anyway.


Abstractness doesn't preclude effectiveness, but to try to use
abstractions to understand the world is foolish since it doesn't work
the other way around. It's a many-to-one mapping, as in plotting a
sine function on an xy-plane and not being able to find the
x-coordinate for a certain y-coordinate, while at the same time being
perfectly able to predict a y-coordinate if given an x-coordinate.
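The many-to-one point is easy to check directly; here is a minimal Python sketch (an illustration, not part of the original posts):

```python
import math

# Three distinct x-coordinates that sin maps to the same y-coordinate:
xs = [0.5, math.pi - 0.5, 0.5 + 2 * math.pi]
ys = [math.sin(x) for x in xs]

# Forward (x -> y) is a function; backward (y -> x) is not, because a
# single y-value has infinitely many preimages on the x-axis.
assert all(abs(y - ys[0]) < 1e-12 for y in ys)
```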


Oh dear, here we go again...

The human brain simply doesn't have the 'vocabulary' to handle
concepts which are outside of our direct experience. 'Gravity' we can
deal with as it is in our everyday direct experience. Space-time
curvature, OTOH, is not - it is an abstract concept (relative to what
we perceive directly) and we can only understand it by relating it to
things that we do understand - metaphor being a very common way of
doing so.

So in the case of space-time curvature, for instance, 'curvature'
itself is a metaphor. It relates to the geometry of a non-Euclidean
space (or rather space-time, in this case). The intuitive meaning of
the word 'curve' relates to shape - in mathematical terms, a locus of
points in space. But the concept of non-Euclidean curvature is not
intended to suggest that the non-Euclidian space exists within some
higher order space. Sometimes it does (e.g. the non-Euclidean space
defined by the surface of the Earth exists within the 3D space of,
well, space ;-) ) but equally, often it does not.

So far as we know, space-time is 'curved' but does not exist in a
higher-order space - it just happens to have a non-Euclidean geometry.
And as that geometry is entirely defined (so far as we currently know)
by the content of space-time (gravity), there is no need to
hypothesise a higher level space. In fact it may not be possible to
create appropriate space-time 'shapes' within any higher order space -
it is certainly possible to define non-Euclidean 2D surfaces that
cannot be implemented using a real surface shape in flat 3D space.
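For concreteness, a standard worked example of curvature that is intrinsic to a surface (added here as an illustration; it is not from the original post): by the Gauss-Bonnet theorem, a geodesic triangle drawn on a sphere of radius R has an angle sum exceeding the Euclidean value by an amount set purely by the surface's own geometry,

```latex
% Geodesic triangle of area A on a sphere of radius R:
\alpha + \beta + \gamma = \pi + \frac{A}{R^2}
```

The excess A/R^2 can be measured by a surveyor confined to the surface, with no reference to any surrounding higher-dimensional space - which is the sense in which space-time can be 'curved' without existing inside anything else.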

You can point out that there is a specialist definition of the word
'curve' which relates specifically to non-Euclidean geometry, and deny
any connection to earlier meanings of the word 'curve' or to people's
intuitive sense of what a curve is, but that would be pretty abstruse.
The simple fact is that that word has considerable explanatory power -
it presents an abstract concept in terms that make sense to people,
and which allows them to visualise the concept. That is why the term
'curve' was selected when non-Euclidian geometry was first defined and
studied and, despite the later specialist definition of the word, it
remains relevant - it is still how people understand the geometry of
non-Euclidean spaces and it is still how people understand the concept
of space-time curvature. The real phenomena described by the term
'space-time curvature' simply don't exist in people's direct
experience.
I have just been through the same point (ad nauseam) in an e-mail
discussion. Yet I can't see what is controversial about my words.
Current theory is abstract (relative to what we can perceive) and we
simply don't have the right vocabulary (both in language, and within
the mind) to represent the concepts involved. We can invent words to
solve the language problem, of course, but at some point we have to
explain what the new words mean.

Thus, as I said, "most current theory is so abstract that the
explanations should be taken as metaphors rather than reality anyway."

The point being that different metaphors may equally have been chosen
to explain the same models - presumably emphasising different aspects
of them - and such explanations may work better for people who can't
connect with the existing explanations. The model would still be the
same, though, just as it remains the same even if you describe it in a
language other than English. The terminology changes, but not the
model.
I get the feeling that there must be some 'ism' relating to metaphor
in physics, and people are jumping to the conclusion that I'm selling
that 'ism'. But seriously, whatever a 'metaphorianist' may be, I am
not one of them.

Read very carefully, and you will note that I said the EXPLANATIONS
should be taken as metaphors - NOT the models themselves.

And yes, a model is distinct from the real thing that it models, but
that is a rather uninteresting point and not one that I would normally
bother commenting on.

As I know from the e-mail discussion that this point can be serially
misunderstood no matter how I state it, I will make one final attempt
to clarify my point as far as I am able and that is it - I am bored to
death with the issue, and have no interest in continuing it. So here
it is...

I don't deny that, for instance, quarks are real. They are just as
real as protons, atoms, molecules, bricks and galaxy clusters. The
fact that the concept of a quark is more than a little abstract
(relative to what humans can experience directly) does not stop it
being real.

The word "curvature" in "space-time curvature" is a metaphor, however.
Space-time does not exist as a locus of points in some higher-order
space. But space-time really does have a non-Euclidean geometry, which
is easier to understand if you visualise it in terms of curvature.
Hence the common ball-on-a-rubber-sheet analogy (the rubber sheet
being an easily understood 2D curved 'space' within a 3D space).

At no time did I say that relativity is a metaphor. The model
described by relativity is real (within limits, as with any model), as
is clear from the fact that it has been proven.

But if you believe you can describe relativity (or equally, quantum
mechanics) in entirely literal terms - at all, let alone in a way that
people can understand it - I'll be very *very* impressed.
I hope that is sufficient to lay that issue to rest, but in any case I
cannot be bothered with it any more. I *HAVE* asked several people IRL
what they thought I meant by that half-sentence in case I was going
nuts, and it was clear that they all understood what I meant. This is
not, in other words, an Asperger-syndrome-based misunderstanding.

I stand by what I said 100%, but I can't write a book explaining every
half-sentence I write. Life is too short.
With apologies, BTW, to those who have written rather large books dealing
with my misunderstandings and thus helped me to understand how they
arise. But I really don't think my AS is the problem here.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #278
Stephen Horne <st***@ninereeds.fsnet.co.uk> wrote in message news:<uh********************************@4ax.com>...
On 29 Oct 2003 23:26:05 -0800, mi**@pitt.edu (Michele Simionato)
wrote:
I have got the impression (please correct me if I misread your posts) that
you are invoking the argument "cats are macroscopic objects, so their
undulatory nature does not matter at all, whereas electrons are
microscopic, so their undulatory nature does matter a lot."
That is *far* from what I am saying.

Oops, sorry! I was not sure about your point: sometimes I have difficulties
in understanding what you are saying, but when I understand it, I usually agree
with you ;)
The evidence suggests that conscious minds exist
within the universe as an arrangement of matter subject to the same
laws as any other arrangement of matter.
I think that mind is an arrangement of matter, too; nevertheless,
I strongly doubt that we will ever be able to understand it. Incidentally,
I am also quite skeptical about AI claims.
I prefer Penrose's theory
I read one of Penrose's books years ago: if I remember correctly, he
was skeptical about AI (that was ok). However, at some point there was
an argument of this kind: we don't understand mind, we don't understand
quantum gravity, therefore they must be related. (?)
most current theory is so
abstract that the explanations should be taken as metaphors rather
than reality anyway.
True.
No problem with that, but we are seeing microscopic effects en masse
rather than macroscopic effects - something rather different, in my
mind, to a cat being both alive and dead at the same time. The effects
when they occurred were on the microscopic scale - only the artifacts
are macroscopic.


I have nothing against what you say in the rest of your post, but let me
make a comment on these points, which I think is important and may be of
interest for the other readers of this wonderfully off-topic thread.

According to the old school of Physics, there is a large distinction
between fundamental (somewhat microscopic) Physics and
non-fundamental (somewhat macroscopic) Physics. The idea
is that once you know the fundamental Physics, you may in principle
derive all the rest (not only Physics, but also Chemistry, Biology,
Medicine, and every science in principle). This point of view, the
reductionism, has never been popular among chemists or biologists, of
course, but it was quite popular among fundamental physicists with
considerable hubris.

Now, things are changing. Nowadays most people agree with the effective
field theory point of view. According to the effective field theory approach,
the fundamental (microscopic) theory is not so important. Actually, for
the description of most phenomena it is mostly irrelevant. The point is
that macroscopic phenomena (here I have in mind (super)conductivity or
superfluidity) are NOT simply microscopic effects en masse: and in
certain circumstances they do NOT depend at all on the microscopic theory.

These ideas come from the study of critical phenomena (mostly in condensed
matter physics) where the understanding of the macroscopic is (fortunately)
completely unrelated to the understanding of the microscopic: we don't need
to know the "true" theory or a detailed description of the material we
are studying, if we are near a critical point. In this situation it is enough to
know an effective field theory which can explain all the phenomena we can see
given a finite experimental precision, even if it is not microscopically
correct. In critical phenomena the concept of universality emerged: completely
different microscopic theories can give the *same* universal macroscopic
field theory. Actually, the only things that matter are the dimensionality
of the space-time and the symmetry group; all other details are irrelevant.

This point of view has become relevant at the fundamental Physics level too,
since nowadays most people regard the Standard Model of Particle Physics
(once regarded as "the" fundamental theory) as a low energy effective
theory of the "true" theory.

This means that even if it is not the full story, it explains
99.99% of the phenomena we can measure; moreover, it is extremely
difficult to see clear signatures of the "true" underlying theory.
The real theory can be string theory, can be loop quantum gravity,
can be a completely new theory, but for 99.99% of our experiments
only the effective theory matters. So, even if we knew perfectly
quantum gravity, this would not help at all in describing 99.99%
of elementary particle physics, since we would still need to
solve the quantum field theory.

And, for a similar reason, even if we knew everything about QCD,
we could not use it to describe the weather of Jupiter (which is described
by a completely different effective theory) even if we had an ultra-powerful
Python-powered quantum computer ...

That's life, but it is more interesting this way ;)
Michele
Jul 18 '05 #279
"Andrew Dalke" <ad****@mindspring.com> wrote in message news:<Kd***************@newsread4.news.pas.earthlink.net>...
On the fourth hand, Wolfram argues that cellular automata
provide such a new way of doing science as you argue. But
my intuition (brainw^Wtrained as it is by the current scientific
viewpoint) doesn't agree.
Me too ;)
Or it means that asking those questions is meaningless. What's
the charge of an electron? The bare point charge is surrounded
by a swarm of virtual particles, each with its own swarm. If you
work it out using higher and higher levels of approximation you'll
end up with different, non-converging answers, and if you continue
it onwards you'll get infinite energies. But given a fixed
approximation you'll find you can make predictions just fine, and
using mathematical tricks like renormalization, the infinities cancel.
I would qualify myself as an expert on renormalization theory and I would
like to make an observation on how the approach to renormalization has
changed in recent years, since you raise the point.

At the beginning, quantum field theory was - more or less universally -
regarded as a fundamental theory. A fundamental theory means that, asking
the right questions, one must get the right answers.
Nowadays people are no longer so optimistic.

Quantum field theory is hard: even if the perturbative renormalizability
properties you are referring to can be proved (BTW, now there are easy
proofs based on the effective field theory approach; I did my Ph.D. on
the subject) very very little can be said at the non-perturbative level.
Also, there are worrying indications. It may very well be that
QFT does not exist as a fundamental theory: i.e. it is not mathematically
consistent. For instance, perturbation theory in quantum electrodynamics
is probably not summable, so the sum of the renormalized series (even if
any single term is finite) is not finite. In practice, this is not bad,
since we can resum even non-summable series, but the price to pay to make
an infinite sum finite is to introduce an arbitrariness (technically, this
is completely unrelated to the infinities in renormalization; they
only seem similar). Now, one can prove that this arbitrariness is
extremely small and has no effect at all at our energy scales: but
in principle it seems that we cannot determine completely an observable,
even in quantum electrodynamics, due to an internal inconsistency of the
mathematical model.
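The flavour of "every term finite, sum not finite" can be seen in a toy model (a sketch only; this is not the QED series itself, just a series with the same factorial growth of coefficients): the magnitudes of the terms n! x^n shrink until n is roughly 1/x and then grow without bound, so truncating near the smallest term is useful even though the full sum does not exist.

```python
import math

x = 0.1  # a small "coupling constant" (illustrative value only)

# Term magnitudes of the factorially divergent series sum_n (-1)^n n! x^n.
mags = [math.factorial(n) * x**n for n in range(30)]

# Terms shrink until n ~ 1/x = 10, then blow up: the series is
# asymptotic, not convergent, yet early partial sums approximate well.
assert mags[10] < mags[0] < mags[25]
```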

Notice that what I am saying is by no means a definitive statement:
there are no conclusive proofs that the standard model is
mathematically inconsistent. But it could be. And it would not be
surprising at all, given the experience we have with simpler models.
The latter argument was an analogy that the tools (formalisms) affect
the shape of science. With that I have no disagreement. The science
we do now is affected by the existance of computers. But that's
because no one without computers would work on, say, fluid dynamics
simulations requiring trillions of calculations. It's not because the
science is fundamentally different.
Yes, and still a lot of science is done without computers. I never
used a computer for my scientific work, except for writing my papers
in latex ;)
The other solution is to know everything about everything, and, well,
I don't know about you but my brain is finite. While I can remember
a few abstractions, I am not omnipotent.


we are neither omnipotent nor omniscient, but still we may learn something ;)
Jul 18 '05 #280
