What is different with Python ?

I apologize in advance for launching this post, but I might get enlightenment
somehow (PS: I am _very_ agnostic ;-).

- 1) I do not consider my intelligence/education above average
- 2) I am very pragmatic
- 3) I usually move forward when I get the gut feeling I am correct
- 4) Most likely because of 1), I usually do not manage to fully explain 3)
when it comes true.
- 5) I have developed for many years (>18) in many different environments,
languages, and O/S's (including realtime kernels).
Yet for the first time I get (most) of my questions answered by a language I
did not know 1 year ago.

As I do try to understand concepts when I'm able to, I wish to try and find
out why Python seems different.

Having followed this newsgroup for some time, I now have the gut feeling
(see 3)) that other people have that feeling too.
Quid ?

Regards,

Philippe




Jul 19 '05
Andrea Griffini <ag****@tin.it> wrote:
Hehehe... a large python string is a nice idea for modelling
memory.


Actually, a Python string is only good for modelling ROM. If you want to
model read-write memory, you need a Python list.
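
A minimal sketch of the distinction, using nothing beyond plain Python (sizes and addresses here are arbitrary):

ROM = "\x00" * 1024      # a string: immutable, so read-only "memory"
RAM = [0] * 1024         # a list: mutable, so read/write "memory"

value = ord(ROM[0x10])   # reading the "ROM" works
RAM[0x10] = 0xFF         # writing the "RAM" works

try:
    ROM[0x10] = "\xff"   # writing the "ROM" fails
except TypeError:
    print("ROM is read-only")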
Jul 19 '05 #51
On Mon, 13 Jun 2005 01:54:53 -0500, Mike Meyer <mw*@mired.org> wrote:
Andrea Griffini <ag****@tin.it> writes:
In short, you're going to start in the middle.
I've got "bad" news for you. You're always in the
middle :-D.


That's what I just said.


Yeah. I should stop replying before breakfast.
I disagree. If you're going to make competent programmers of them,
they need to know the *cost* of those details, but not necessarily the
actual details themselves. It's enough to know that malloc may lead to
a context switch; you don't need to know how malloc actually works.
Unless those words have a real meaning for you,
you'll forget them... I've seen this a jillion times
with C++. Unless you really understand how an
std::vector is implemented, you'll end up doing stupid
things like erasing the first element in a loop.

Actually I cannot blame someone for forgetting that
insert at the beginning is O(n) and at the end is
amortized O(1) if s/he never understood how a vector
is implemented and was told to just learn those two
little facts. Those little facts are obvious and
can easily be remembered only if you've a conceptual
model where they fit. If they're just random notions
then the very day after the C++ exam you'll forget
everything.
That's the way *your* brain works. I'd not agree that mine works that
way. Then again, proving either statement is an interesting
proposition.
Are you genuinely saying that abelian groups are
easier to understand than relative integers ?
The explanation has been stated a number of times: because you're
letting them worry about learning how to program, before they worry
about learning how to evaluate the cost of a particular
construct. Especially since the latter depends on implementation
details, which are liable to have to be relearned for every different
platform.
You'll get programmers that do not understand how
their programs work. This unavoidably will be a show
stopper when their programs will not work (and it's
when, not if...).
I don't normally ask how people learned to program, but I will observe
that most of the CS courses I've been involved with put aside concrete
issues - like memory management - until later in the course, when it
was taught as part of an OS internals course. The exception would be
those who were learning programming as part of an engineering (but not
software engineering) curriculum. The least readable code examples
almost uniformly came from the latter group.


I suppose that over there who is caught reading
TAOCP is slammed in jail ...

Placing memory allocation in the "OS internals" course
is very funny. Let's hope you're just joking.

Andrea
Jul 19 '05 #52
On Mon, 13 Jun 2005 22:23:39 +0200, Bruno Desthuilliers
<bd*****************@free.quelquepart.fr> wrote:
Being familiar with
fundamental *programming* concepts like vars, branching, looping and
functions proved to be helpful when learning C, since I only had then to
focus on pointers and memory management.


If you're a good programmer (no idea, I don't know
you and you avoided the issue) then I think you
wasted a lot of energy and neurons learning that way.
Even high-level scripting languages are quite far
from a perfect virtualization, and either the code
you wrote in them was terrible *OR* you were able
to memorize an impressive quantity of black magic
details (or you were just incredibly lucky ;-) ).

Andrea
Jul 19 '05 #53
Andrea Griffini <ag****@tin.it> writes:
On Mon, 13 Jun 2005 01:54:53 -0500, Mike Meyer <mw*@mired.org> wrote:
Andrea Griffini <ag****@tin.it> writes:
I disagree. If you're going to make competent programmers of them,
they need to know the *cost* of those details, but not necessarily the
actual details themselves. It's enough to know that malloc may lead to
a context switch; you don't need to know how malloc actually works.
Unless those words have a real meaning for you,
you'll forget them... I've seen this a jillion times
with C++. Unless you really understand how an
std::vector is implemented, you'll end up doing stupid
things like erasing the first element in a loop.


But this same logic applies to why you want to teach abstract things
before concrete things. Since you like concrete examples, let's look
at a simple one:

a = b + c

Now, in a modern OO language (like Python) this can invoke arbitrary
bits of code. In a 70s-era structured language (like C), this will
mean one of a fixed set of things, but you still don't know enough to
say how many abstract operations this statement involves. In a very
few languages (BCPL being one), this means exactly one thing. But
until you know the underlying architecture, you still can't say how
many operations it is.
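
A tiny sketch of that point, assuming modern Python; the Noisy class is invented purely for illustration:

class Noisy:
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        # arbitrary code can run here: logging, I/O, anything at all
        print("adding", self.value, "and", other.value)
        return Noisy(self.value + other.value)

b = Noisy(2)
c = Noisy(3)
a = b + c          # one innocent-looking line, but it calls Noisy.__add__
print(a.value)     # 5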

All of which is so much noise to someone who's just starting
programming. Anything beyond the abstract statement "a gets the result
of adding b to c" is wasted on them. Learning about programming at
that level is complicated enough. After they've mastered that, you can
teach them the concrete details that determine what actually happens,
and how much it costs: things like method lookup (Python), namespace
lookups (Python), operator overloading (C), implicit type conversions
(C and Python), integer overflow (BCPL and C), and floating point
behavior in general (C and Python), register vs. stack architectures,
RISC vs. CISC, caching, and other such things.

Actually I cannot blame someone for forgetting that
insert at the beginning is O(n) and at the end is
amortized O(1) if s/he never understood how a vector
is implemented and was told to just learn those two
little facts. Those little facts are obvious and
can easily be remembered only if you've a conceptual
model where they fit. If they're just random notions
then the very day after the C++ exam you'll forget
everything.
It's true that in some cases, it's easier to remember the
implementation details and work out the cost than to remember the cost
directly. That's also true for abstract things as well.
That's the way *your* brain works. I'd not agree that mine works that
way. Then again, proving either statement is an interesting
proposition.


Are you genuinely saying that abelian groups are
easier to understand than relative integers ?


Yup. Then again, my formal training is as a mathematician. I *like*
working in the problem space - with the abstract. I tend to design
top-down.
The explanation has been stated a number of times: because you're
letting them worry about learning how to program, before they worry
about learning how to evaluate the cost of a particular
construct. Especially since the latter depends on implementation
details, which are liable to have to be relearned for every different
platform.


You'll get programmers that do not understand how
their programs work. This unavoidably will be a show
stopper when their programs will not work (and it's
when, not if...).


The same is true of programmers who started with concrete details on a
different platform - unless they relearn those details for that
platform. The critical things a good programmer knows about those
concrete details is which ones are platform specific and which aren't,
and how to go about learning those details when they go to a new
platform.

If you confuse the issue by teaching the concrete details at the same
time as you're teaching programming, you get people who can't make
that distinction. Such people regularly show up with horrid Python
code because they were used to the details for C, or Java, or
whatever.
I don't normally ask how people learned to program, but I will observe
that most of the CS courses I've been involved with put aside concrete
issues - like memory management - until later in the course, when it
was taught as part of an OS internals course. The exception would be
those who were learning programming as part of an engineering (but not
software engineering) curriculum. The least readable code examples
almost uniformly came from the latter group.


I suppose that over there who is caught reading
TAOCP is slammed in jail ...


Those taught the concrete method would never have been exposed to
anything so abstract.
Placing memory allocation in the "OS internals" course
is very funny. Let's hope you're just joking.


Actually, it's a natural place to teach the details of that kind of
thing. An OS has to deal with allocating a number of different kinds
of memory, with radically different behaviors and constraints. As
such, it provides reasons to examine a number of different memory
allocation strategies and their behaviors, and to evaluate them in
light of those varying behaviors and constraints.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Jul 19 '05 #54
D H
Philippe C. Martin wrote:
Yet for the first time I get (most) of my questions answered by a language I
did not know 1 year ago.

^^^^^^^^^^
You're in the Python honeymoon stage.
Jul 19 '05 #55
D H
Andrea Griffini wrote:
On Sat, 11 Jun 2005 21:52:57 -0400, Peter Hansen <pe***@engcorp.com>
wrote:

I think new CS students have more than enough to learn with their
*first* language without having to discover the trials and tribulations
of memory management (or those other things that Python hides so well).

I'm not sure that postponing learning what memory
is, what a pointer is and other "bare metal"
problems is a good idea. Those concepts are not
"more complex" at all, they're just more *concrete*
than the abstract concept of "variable".
The human mind works best moving from the concrete to
the abstract,


You're exactly right that people learn better going from concrete to
abstract, but your examples (pointers and memory management) are not
what is typically meant by concrete in learning contexts.

Starting concretely would mean using programming to solve real problems
and develop useful tools. In programming it is often good to start with
examples - some common ones I've seen in informal learning of
programming include a calculator, an RSS viewer or aggregator, a video
game, etc.

But what you are getting at is more akin to our mental model of what the
computer is doing when we write and run a program. Without a
fundamental understanding of memory and addresses, a programmer can make
certain mistakes that reveal this lack of understanding. But that
doesn't mean they have to learn about memory management at the very
beginning of their instruction.
Jul 19 '05 #56
D H
Andrea Griffini wrote:
On Mon, 13 Jun 2005 22:23:39 +0200, Bruno Desthuilliers
<bd*****************@free.quelquepart.fr> wrote:

Being familiar with
fundamental *programming* concepts like vars, branching, looping and
functions proved to be helpful when learning C, since I only had then to
focus on pointers and memory management.

If you're a good programmer (no idea, I don't know
you and you avoided the issue) then I think you
wasted a lot of energy and neurons learning that way.
Even high-level scripting languages are quite far
from a perfect virtualization, and either the code
you wrote in them was terrible *OR* you were able
to memorize an impressive quantity of black magic
details (or you were just incredibly lucky ;-) ).


The best race driver doesn't necessarily know the most about their car's
engine. The best baseball pitcher isn't the one who should be teaching
a class in physics and aerodynamics. Yes, both can improve their
abilities by learning about the fundamentals of engines, aerodynamics,
etc., but they aren't "bad" at what they do if they do not know the
underlying principles operating.

If you want to understand how engineers (like programmers) and
scientists approach their work, look up the structure-behavior-function
framework. Engineers work from function (the effect something has on
its environment, in this case the desired effect), to structure - how to
consistently constrain behavior to achieve that desired function.
Scientists, on the other hand, primarily work from structure and
behavior to function. Here is an unknown plant - why does it have this
particular structure or behavior? What is its function, or what in its
environment contributed to its evolution? See descriptions of SBF by
Cindy Hmelo and others.
Jul 19 '05 #57

"Roy Smith" <ro*@panix.com> wrote in message
news:ro***********************@reader1.panix.com...
Andrea Griffini <ag****@tin.it> wrote:
Hehehe... a large python string is a nice idea for modelling
memory.


Actually, a Python string is only good for modelling ROM. If you want to
model read-write memory, you need a Python list.


or array from the array module.
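
For what it's worth, a small sketch of that variant; array('B') also rejects values that don't fit in a byte, which a plain list would happily accept:

from array import array

mem = array('B', [0] * 1024)   # 1 KiB of writable unsigned bytes
mem[0x200] = 0x42              # store
print(hex(mem[0x200]))         # load -> 0x42
# mem[0] = 300 would raise OverflowError here, unlike with a plain list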

Jul 19 '05 #58
Andrea Griffini wrote:
This is investigating. Programming is more similar to building
instead (with a very few exceptions). CS is not like physics or
chemistry or biology where you're given a result (the world)
and you're looking for the unknown laws. In programming *we*
are building the world. This is a huge fundamental difference!


Philosophically I disagree. Biology and physics depend on
models of how the world works. The success of a model depends
on how well it describes and predicts what's observed.

Programming too has its model of how things work; you've mentioned
algorithmic complexity and there are models of how humans
interact with computers. The success depends in part on how
well it fits with those models.

In biology there's an extremely well developed body of evidence
to show the general validity of evolution. That doesn't mean
that a biological theory of predator-prey cycles must be based
in an evolutionary model. Physics too has its share of useful
models which aren't based on QCD or gravity; weather modeling
is one and the general term is "phenomenology."

In programming you're often given a result ("an inventory
management system") and you're looking for a solution which
combines models of how people, computers, and the given domain work.

Science also has its purely observational domains. A
biologist friend of mine talked about one of his conferences
where the conversations range from the highly theoretical
to the "look at this sucker we caught!"

My feeling is that most scientists do not develop new fundamental
theories. They instead explore and explain things within
existing theory. I think programming is similar. Both fields
may build new worlds, but success is measured by its impact
in this world.

Andrew
da***@dalkescientific.com

Jul 19 '05 #59
D H wrote:
But what you are getting at is more akin to our mental model of what the
computer is doing when we write and run a program. Without a
fundamental understanding of memory and addresses, a programmer can make
certain mistakes that reveal this lack of understanding. But that
doesn't mean they have to learn about memory management at the very
beginning of their instruction.


Finally somebody who gets it :-)

Every home handyman needs to know the difference
between a claw hammer, a tack hammer and a rubber
hammer, and why you shouldn't use a claw hammer to
knock dints out of steel sheeting, but they don't need
to learn them all the first time they look at a nail.

I'm reminded about the concept "lies for children",
used by Ian Stewart and Jack Cohen (mathematician and
biologist respectively) in some of their books. The
point they make is that much of what we teach each
other -- even adults -- is "lies for children". The
Earth is not a sphere, even though we can often get
away with pretending it is. High and low tides aren't
caused by the moon. Testosterone doesn't cause
aggression. DNA is not a blueprint. And adding two
strings together is not a simple operation, despite
appearances.

"Lies for children" in this sense are not *bad*, they
are a necessary step towards a more sophisticated
understanding. Imagine trying to learn about projectile
motion if you needed to understand general relativity
and friction just to get started. Now imagine you are a
ballistic missile scientist, and you assume that the
problem of firing a missile from here to there is
independent of the shape of the Earth's gravitational
field, air resistance and relativity. You've just
missed your target by a kilometre. Ooops.

Learning when you can ignore implementation details is
just as important a skill as learning how to use
variables. Just as you wouldn't try teaching closures
to people who haven't yet learnt what a function is,
you don't want to bog people down with implementation
details too early -- but if you don't teach them *at
all*, you are creating second-class programmers who
will write slow, buggy, hard-to-maintain code.

--
Steven.

Jul 19 '05 #60
On Mon, 13 Jun 2005 21:33:50 -0500, Mike Meyer <mw*@mired.org> wrote:
But this same logic applies to why you want to teach abstract things
before concrete things. Since you like concrete examples, let's look
at a simple one:

a = b + c
....In a very
few languages (BCPL being one), this means exactly one thing. But
until you know the underlying architecture, you still can't say how
many operations it is.
That's exactly why

mov eax, a
add eax, b
mov c, eax

or, even more concrete and like what I learned first

lda $300
clc
adc $301
sta $302

is simpler to understand. Yes... for some time I even
worked with the computer in machine language without
using a symbolic assembler; I unfortunately paid a
price to it and now I've a few neurons burnt for
memorizing irrelevant details like that the above
code is (IIRC) AD 00 03 18 6D 01 03 8D 02 03... but
I think it wasn't a complete waste of energy.

Writing programs in assembler takes longer exactly because
the language is *simpler*. Assembler has less implicit
semantics because it's closer to the limited brain of our
stupid silicon friend.
Programming in assembler also really teaches you (deeply,
to your soul) about the terrible "undefined behaviour"
monster you'll meet when programming in C.
Anything beyond the abstract statement "a gets the result
of adding b to c" is wasted on them.
But by saying for example that

del v[0]

just "removes the first element from v", you will end up
with programs that do that in a stupid way; actually you
can easily get unusable programs, and programmers that
go around saying "python is slow" for that reason.
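
A concrete sketch of that trap and of one common fix, assuming CPython lists and collections.deque; the timing remark is indicative only:

from collections import deque

def drain_list(n):
    v = list(range(n))
    while v:
        del v[0]        # shifts every remaining element: O(n) per deletion, O(n^2) total

def drain_deque(n):
    v = deque(range(n))
    while v:
        v.popleft()     # O(1) per deletion, O(n) total

# Indicatively, drain_list(200000) takes orders of magnitude longer
# than drain_deque(200000) under CPython.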
It's true that in some cases, it's easier to remember the
implementation details and work out the cost than to
remember the cost directly.
I'm saying something different, i.e. that unless you
understand (you have at least a rough picture, you
don't really need all the details... but there must
be no "magic" in it) how the standard C++ library is
implemented there is no way at all you have any
chance to remember all the quite important implications
for your program. It's just IMO impossible to memorize
such a big quantity of unrelated quirks.
Things like big O for example, but also undefined-behaviour
risks like having iterators invalidated
when you add an element to a vector.
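
Python has a rough analogue of that last pitfall: mutating a container while iterating over it. A small sketch, with the results as observed under CPython:

d = {"a": 1, "b": 2, "c": 3}
try:
    for k in d:
        if k == "a":
            del d[k]       # resizing the dict while iterating over it
except RuntimeError:
    print("dictionary changed size during iteration")

v = [1, 2, 2, 3]
for x in v:
    if x == 2:
        v.remove(x)        # shrinking the list shifts later items past the iterator
print(v)                   # [1, 2, 3]: the second 2 was silently skipped, never removed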
Are you genuinely saying that abelian groups are
easier to understand than relative integers ?


Yup. Then again, my formal training is as a mathematician. I *like*
working in the problem space - with the abstract. I tend to design
top-down.


The problem with designing top down is that when
building (for example applications) there is no top.
I found this very simple and very powerful rationalization
about my gut feeling on building complex systems in
Meyer's "Object Oriented Software Construction" and
it's one to which I completely agree. Top down is a
nice way for *explaining* what you already know, or
for *RE*-writing, not for creating or for learning.

IMO no one can really think that teaching abelian
groups to kids first and only later introducing
them to relative numbers is the correct path.
Human brain simply doesn't work like that.

You are saying this, but I think here it's more your
love for discussion than really what you think.
The same is true of programmers who started with concrete details on a
different platform - unless they relearn those details for that
platform.
No. This is another very important key point.
Humans are quite smart at finding general rules
from details, you don't have to burn your hand on
every possible fire. Unfortunately sometimes there
is the OPPOSITE problem... we infer general rules
that do not apply from just too few observations.
The critical things a good programmer knows about those
concrete details is which ones are platform specific and which aren't,
and how to go about learning those details when they go to a new
platform.
I never observed this problem. You really did ?

That is such not a problem that Knuth for example
decided to use an assembler language for a processor
that doesn't even exist (!).
If you confuse the issue by teaching the concrete details at the same
time as you're teaching programming, you get people who can't make
that distinction. Such people regularly show up with horrid Python
code because they were used to the details for C, or Java, or
whatever.
Writing C code with python is indeed a problem that
is present. But I think this is a minor price to pay.
Also, it's something that will be fixed with time
and experience.
I suppose that over there who is caught reading
TAOCP is slammed in jail ...


Those taught the concrete method would never have been exposed to
anything so abstract.


Hmmm; TAOCP is The Art Of Computer Programming, what
is the abstract part of it ? The code presented is
only MIX assembler. There are math prerequisites for
a few parts, but I think no one could call it "abstract"
material (no one that actually spent some time reading
it, that is).
Actually, it's a natural place to teach the details of that kind of
thing. An OS has to deal with allocating a number of different kinds
of memory, with radically different behaviors and constraints.


hehehe... and a program like TeX instead doesn't even
need to allocate memory. Pairing this with that teaching
abelian groups first to kids (why not fiber spaces then ?)
and that TAOCP is too "abstract" tells me that apparently
you're someone that likes to talk just for talking, or
that your religion doesn't allow you to type in smileys.

Andrea
Jul 19 '05 #61
On Mon, 13 Jun 2005 22:19:19 -0500, D H <d@e.f> wrote:
The best race driver doesn't necessarily know the most about their car's
engine. The best baseball pitcher isn't the one who should be teaching
a class in physics and aerodynamics. Yes, both can improve their
abilities by learning about the fundamentals of engines, aerodynamics,
etc., but they aren't "bad" at what they do if they do not know the
underlying principles operating.


And when you've a problem writing your software who is
your mechanic ? Who are you calling on the phone for help ?

Andrea
Jul 19 '05 #62
On Tue, 14 Jun 2005 04:18:06 GMT, Andrew Dalke
<da***@dalkescientific.com> wrote:
In programming you're often given a result ("an inventory
management system") and you're looking for a solution which
combines models of how people, computers, and the given domain work.
Yes, at this higher level I agree. But not on how
a computer works. One thing is applied math, another
thing is math itself. When you're trying to find a solution
of a problem it's often the fine art of compromise.
Science also has its purely observational domains.


I agree that "applied CS" is one of them (I mean the
art of helping people by using computers). But not
about the language or explaining how computers work.
I know that looking at the art of installing (or
uninstalling!) windows applications seems that
this is a completely irrational world where no rule
indeed exists... but this is just an illusion; there
are clear rules behind it and, believe it or not, we
know *all* of them.

Andrea
Jul 19 '05 #63
Andrea Griffini wrote:
This is investigating. Programming is more similar to building
instead (with a very few exceptions). CS is not like physics or
chemistry or biology where you're given a result (the world)
and you're looking for the unknown laws. In programming *we*
are building the world. This is a huge fundamental difference!


It looks like you do not have a background in Physics research.
We *do* build the world! ;)

Michele Simionato

Jul 19 '05 #64
On Tue, Jun 14, 2005 at 12:02:29AM +0000, Andrea Griffini wrote:
However I do not think that going this low (that is still
IMO just a bit below assembler and still quite higher than
HW design) is very common for programmers.

Well, at least one University (Technical University Vienna) does it
this way. Or did it at least when I passed the courses ;)
Or how does one explain that a "stupid and slow" algorithm can in
effect be faster than a "clever and fast" algorithm, without explaining
how a cache works. And what kinds of caches there are. (I've seen
documented cases where a stupid search was faster because all hot data
fit into the L1 cache of the CPU, while more clever algorithms were
slower).
Caching is indeed very important, and sometimes the difference
is huge. I think anyway that it's probably something confined
in a few cases (processing big quantity of data with simple
algorithms, e.g. pixel processing).
It's also a field where if you care about the details the
specific architecture plays an important role, and anything
you learned about say the Pentium III could be completely
pointless on the Pentium 4.


Nope. While it's certainly true that it's different from architecture
to architecture, you need to learn the different types of caches,
etc. so that one can quickly grasp the architecture currently in use.
Except by general locality rules I would say that everything
else should be checked only if necessary and on a case-by-case
approach. I'm way a too timid investor to throw in neurons
on such a volatile knowledge.
It's not volatile knowledge. Knowledge of which CPU uses which cache
organisation is volatile and wasted knowledge. Knowledge of what kinds
of caches are common is quite useful ;)

Easy Question:
You've got 2 programs that are running in parallel.
Without basic knowledge about caches, the naive answer would be that
the programs will probably run double time. The reality is different.

Or you get perfect abstract designs, that are horrible when
implemented.


Current trend is that you don't even need to do a
clear design. Just draw some bubbles and arrows on
a white board with a marker, throw in some buzzword
and presto! you basically completed the new killing app.


Well, somehow I'm happy that it is this way. That ensures enough work
for me, because with this approach somebody will call in the fire
department when it doesn't work anymore ;)
Or the startup goes belly up.
Real design and implementation are minutiae for bozos.

Even the mighty python is incredibly less productive
than powerpoint ;-)
Yes. But for example to understand the memory behaviour of Python
understanding C + malloc + OS APIs involved is helpful.


This is a key issue. If you've the basis firmly placed
most of what follows will be obvious. If someone tells
you that inserting an element at the beginning of an array
is O(n) in the number of elements then you think "uh... ok,
sounds reasonable", if they say that it's amortized O(1)
instead then you say "wow..." and after some thinking
"ok, i think i understand how it could be done" and in
both cases you'll remember it. It's a clear *concrete* fact
that I think just cannot be forgot.


Believe it can ;)
But that's the idea why certain courses forced us to implement at
least the most important data structures by ourselves.

Because looking at it in a book is so much less intensive than doing
it ;)
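
For whoever wants a concrete peek at the amortized-O(1) claim quoted above: under CPython, sys.getsizeof shows the list over-allocating in occasional jumps, which is exactly why append rarely has to copy (the exact numbers vary by version and platform):

import sys

v = []
last = sys.getsizeof(v)
for i in range(64):
    v.append(i)
    size = sys.getsizeof(v)
    if size != last:
        print("len=%2d  bytes=%d" % (len(v), size))   # capacity grows only now and then
        last = size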

Andreas

Jul 19 '05 #65
Andreas Kostyrka wrote:
On Tue, Jun 14, 2005 at 12:02:29AM +0000, Andrea Griffini wrote:
Caching is indeed very important, and sometimes the difference
is huge.
... Easy Question:
You've got 2 programs that are running in parallel.
Without basic knowledge about caches, the naive answer would be that
the programs will probably run double time. The reality is different.


Okay, I admit I'm making a comment almost solely to have
Andrea, Andreas and Andrew in the same thread.

I've seen superlinear and sublinear performance for this.
Superlinear when the problem fits into 2x cache size but not
1x cache size and is nicely decomposable, and sublinear when
the data doesn't have good processor affinity.

Do I get an A for Andre.*? :)

Andrew
da***@dalkescientific.com

Jul 19 '05 #66
Andrea Griffini schrieb:
On Mon, 13 Jun 2005 13:35:00 +0200, Peter Maas <pe***@somewhere.com>
wrote:

I think Peter is right. Proceeding top-down is the natural way of
learning.

Depends if you wanna build or investigate.


Learning is investigating. By top-down I mean high level (cat,
dog, table sun, sky) to low level (molecules, atoms, fields ...).
And to know the lowest levels is not strictly necessary for
programming. I have seen good programmers who didn't know about
logic gates.
Hehehe... a large python string is a nice idea for modelling
memory. This shows clearly what I mean with that without firm
understanding of the basis you can do pretty huge and stupid
mistakes (hint: strings are immutable in python... ever
wondered what does that fancy word mean ?)


Don't nail me down on that stupid string, I know it's immutable but
didn't think about it when answering your post. Take <some mutable
replacement> instead.

--
-------------------------------------------------------------------
Peter Maas, M+R Infosysteme, D-52070 Aachen, Tel +49-241-93878-0
E-mail 'cGV0ZXIubWFhc0BtcGx1c3IuZGU=\n'.decode('base64')
-------------------------------------------------------------------
Jul 19 '05 #67
Steven D'Aprano <st***@REMOVEMEcyber.com.au> wrote:
High and low tides aren't caused by the moon.


They're not???
Jul 19 '05 #68
Roy Smith wrote:
Steven D'Aprano <st***@REMOVEMEcyber.com.au> wrote:
High and low tides aren't caused by the moon.


They're not???


Probably he's referring to something like this, from Wikipedia, which
emphasizes that while tides are caused primarily by the moon, the height
of the high and low tides involves the sun as well:

"The height of the high and low tides (relative to mean sea level) also
varies. Around new and full Moon, the tidal forces due to the Sun
reinforce those of the Moon, due to the syzygy found at those times -
both the Sun and the Moon are 'pulling the water in the same direction.'"

(If I'm right about this, then the statement is still wrong, since even
without the sun there would be high and low tides, just not of the
magnitude we have now.)

-Peter
Jul 19 '05 #69
Andrew Dalke schrieb:
Peter Maas wrote:
I think Peter is right. Proceeding top-down is the natural way of
learning (first learn about plants, then proceed to cells, molecules,
atoms and elementary particles).

Why in the world is that way "natural"? I could see how biology
could start from molecular biology - how hereditary and self-regulating
systems work at the simplest level - and using that as the scaffolding
to describe how cells and multi-cellular systems work.


Yes, but what did you notice first when you were a child - plants
or molecules? I imagine little Andrew in the kindergarten fascinated
by molecules and suddenly shouting "Hey, we can make plants out of
these little thingies!" ;)

--
-------------------------------------------------------------------
Peter Maas, M+R Infosysteme, D-52070 Aachen, Tel +49-241-93878-0
E-mail 'cGV0ZXIubWFhc0BtcGx1c3IuZGU=\n'.decode('base64')
-------------------------------------------------------------------
Jul 19 '05 #70
> > High and low tides aren't caused by the moon.
They're not???
I suppose that the trick here is to state
that it is not the moon, but the earth's rotation relative
to the moon, that causes it, so putting the moon as the
cause is considered wrong, because its existence
alone would not be the cause of high and low tides
if both rotations were in sync. It is probably
a much more complicated thing where the
sun and maybe even the planets must also be
considered if going into the details, but this is
another chapter.

I once met a girl who, pointing out to me the
visible straight beams from earth to sky one can see
as a result of sunlight coming through the clouds,
said: "look, along these visible beams the water from
the lake goes upwards and builds the clouds".
She was very convinced it was true, because she
had learned it at school, so I had no chance to go into the
details explaining that there is no need for the
visible straight light beams coming through the
holes in the clouds for it.

I can imagine that many believe that the
moon orbits the earth each day,
even if they know that the earth rotates around
its own axis and around the sun. It's not that
important for them to ask for details, so that
is the mechanism by which the "lies" are born -
caused by the lack of necessity or the laziness
to achieve deeper understanding.

Claudio

"Roy Smith" <ro*@panix.com> schrieb im Newsbeitrag
news:ro***********************@reader1.panix.com.. . Steven D'Aprano <st***@REMOVEMEcyber.com.au> wrote:
High and low tides aren't caused by the moon.


They're not???

Jul 19 '05 #71
Andrew Dalke wrote:
Andrea Griffini wrote:
This is investigating. Programming is more similar to building
instead (with a very few exceptions). CS is not like physics or
chemistry or biology where you're given a result (the world)
and you're looking for the unknown laws. In programming *we*
are building the world. This is a huge fundamental difference!


Philosophically I disagree. Biology and physics depend on
models of how the world works. The success of a model depends
on how well it describes and predicts what's observed.

Programming too has its model of how things work; you've mentioned
algorithmic complexity and there are models of how humans
interact with computers. The success depends in part on how
well it fits with those models.


And this is different from building? I don't disagree with the
other things you say, but I think Andrea is right here, although
I might have said construction or engineering rather than building.

To program is to build. While scientists do build and create things,
the ultimate goal of science is understanding. Scientists build
so that they can learn. Programmers and engineers learn so that
they can build.

There is a big overlap between science and engineering. I hope we
can embrace each other's perspectives and see common goals, but
I also think the distinction is useful.

It seems to me that a lot of so called science is really more
focused on achieving a certain goal than to understand the
world. I think Richard Feynman said something like "disciplines
with the word 'science' in their names aren't", and I feel that
he had a point.

I find the idea of computer science a bit odd. Fields like civil
engineering or electronics rely solidly on science and the laws
of nature. We must, or else our gadgets fail. We work closely
with field such as physics and chemistry, but we're not scientists,
because we learn to build, not vice versa. Our goal is problem
solving and solutions, not knowledge and understanding.

As you said:
"The success of a model depends on how well it describes and
predicts what's observed."
It's quite obvious that this is as true when we build houses,
airplanes or bridges, and when we build programs. Right?

It seems to me that *real* computer scientists are very rare. I
suspect that the label computer scientist comes from a lack of
a better word. Erh, computer engineering without engineering?
What do we call this? I don't mean to offend anyone. I have all
respect for both the education and the students of the discipline
called computer science, and I think it's vital that there is a
foundation of science and mathematics in this field, but most
practitioners aren't scientists any more than engineers are
scientists. Ok, my degree is "Master of Science" in English,
but my academic discipline is electronic engineering, not
electronic science--because the goal with the education is to
be able to use scientific knowledge to solve practical problems,
which is, by definition, what engineers do.

Oh well, I guess it's a bit late to try to rename the Computer
Science discipline now.
Jul 19 '05 #72
On 6/14/05, Magnus Lycka <ly***@carmen.se> wrote:
Andrew Dalke wrote:
Andrea Griffini wrote:
This is investigating. Programming is more similar to building
instead (with a very few exceptions). CS is not like physics or
chemistry or biology where you're given a result (the world)
and you're looking for the unknown laws. In programming *we*
are building the world. This is a huge fundamental difference!
Philosophically I disagree. Biology and physics depend on
models of how the world works. The success of a model depends
on how well it describes and predicts what's observed.

Programming too has its model of how things work; you've mentioned
algorithmic complexity and there are models of how humans
interact with computers. The success depends in part on how
well it fits with those models.


And this is different from building? I don't disagree with the
other things you say, but I think Andrea is right here, although
I might have said construction or engineering rather than building.

To program is to build. While scientists do build and create things,
the ultimate goal of science is understanding. Scientists build
so that they can learn. Programmers and engineers learn so that
they can build.

<snip stuff I agree with>
It seems to me that *real* computer scientists are very rare.


I'd like to say that I think that they do, in fact, exist, and that
it's a group which should grow and begin to do things more like their
biological counterparts. Why? Because, as systems get more complex,
they must be studied like biological systems.

I spent a while in college studying latent semantic indexing (LSI)
[1], which is an algorithm that can be used to group things for
clustering, searching, and other uses. It is known *to* be effective
in some circumstances, but nobody (at least when I was studying it ~2
years ago) knows *why* it is effective.

With the help of my professor, I was helping to try and determine that
*why*. We had a hypothesis [2], and my job was basically to build
experiments to test our hypothesis. First, I built a framework to
perform LSI on arbitrary documents (in python of course, let's keep it
on topic :), then I started to do experiments on different bodies of
text and different variations of our hypothesis. I kept a lab journal
detailing what I had changed between experiments, some of which took
days to run.

I believe that there are at least a fair number of computer scientists
working like this, and I believe that they need to recognize
themselves as a separate discipline with separate rules. I'd like to
see them open source their code when they publish papers as a matter
of standard procedure. I'd like to see them publish reports much more
like biologists than like mathematicians. In this way, I think that
the scientific computer scientists could begin to become more like
real scientists than like engineers.

Just my 2 cents.

Peace
Bill Mill

[1] http://javelina.cet.middlebury.edu/l...definition.htm
[2] http://llimllib.f2o.org/files/lsi_paper.pdf
Jul 19 '05 #73
Magnus Lycka:
While scientists do build and create things,
the ultimate goal of science is understanding. Scientists build
so that they can learn. Programmers and engineers learn so that
they can build.


Well put! I am going to add this to my list of citations :)

Michele Simionato

Jul 19 '05 #74
Magnus Lycka wrote:
It seems to me that *real* computer scientists are very rare.

I suspect the analysis of algorithms people are among that group.
It is intriguing to me when you can determine a lower and upper
bound on the time for the best solution to a problem relatively
independent of the particular solution.
Oh well, I guess it's a bit late to try to rename the Computer
Science discipline now.

The best I've heard is "Informatics" -- I have a vague impression
that this is a more European name for the field.

--Scott David Daniels
Sc***********@Acm.Org
Jul 19 '05 #75
Scott David Daniels wrote:
Magnus Lycka wrote:
It seems to me that *real* computer scientists are very rare.


I suspect the analysis of algorithms people are among that group.
It is intriguing to me when you can determine a lower and upper
bound on the time for the best solution to a problem relatively
independent of the particular solution.


On the other hand, you could argue that algorithms and mathematics
are branches of philosophy rather than science. :) Very useful for
science, just as many other parts of philosophy, but algorithms
are really abstract concepts that stand on their own, regardless
of the physical world. But perhaps that distinction is fuzzy. After
all, all philosophical theories rest on observations of the world,
just as scientific theories. Hm...we're really far off topic now.
Jul 19 '05 #76
Peter Maas wrote:
Learning is investigating. By top-down I mean high level (cat,
dog, table sun, sky) to low level (molecules, atoms, fields ...).
Aha. So you must learn cosmology first then. I don't think so. ;)

I don't know if you really think that you learn things top
down, but I doubt that you do. I know I don't. It's very
rare that I have a good understanding of the overall workings
of a system before I start to learn details. As I learn little
details here and there, my understanding of the entire system
is gradually growing and changing. Sometimes, a minor detail
might completely change my overall picture of a system.

In my experience, software designs developed by people who
lack a good hands-on understanding of the details in a software
system rarely work. We can't know the "top" properly unless
we know a lot about the "bottom".

Do you think children learn to speak by arriving at some
kind of general understanding of grammer before they learn
individual words? :) People typically don't learn things top
down, and I think there are good reasons for that.

Regarding software development, Bertrand Meyer said something
like this: "Top-down design doesn't work, because real software
systems have no top." His view of Object-Oriented development
is that it's a process where we go from that which is known
to us, to that which is (still) unknown. It might go up or
down, but we start with what we know, and gradually expand our
understanding into new territories armed with what we have
learnt so far.

Analysis or decomposition is only half of logical thinking.
Synthesis or composing parts into entire systems are just
as important, and we can't just do one part first, the other
after, and then be done with it.

We are all different intellectually, both genetically and by
training and education. Some people find it easier to grasp
theories, and others need more tangible examples. For each
person, the useful approach will probably depend on the previous
experience in that particular field, or in related areas.
And to know the lowest levels is not strictly necessary for
programming. I have seen good programmers who didn't know about
logic gates.


People might be "good programmers" in some sense, but they will
make mistakes if they don't understand the underlying mechanisms.
This doesn't mean that you need to understand all details down
to the quantum mechanics level, but if you see programing as a
purely "logical" activity, without regards for the technical
implementation, you will run into plenty of problems.

One classical trap is to fail to understand how floating point
numbers are represented in computers. Other common problems among
people who lack a technical understanding for computers is that
they fail to see how and why their solutions use up different
amounts of resources, be it memory, CPU time or whatever. I've
also often seen confusion because people fail to understand the
distinction between a logical and an internal representation,
such as confusion over the fact that a date which looks like
eight digits doesn't use eight bytes in the database, and
frustration over the fact that the database doesn't allow you
to store 0000-00-00 in a date field.
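
The floating point trap in particular has a two-line concrete demonstration, plus the usual remedy from the standard library:

print(0.1 + 0.2 == 0.3)       # False: 0.1 and 0.2 have no exact binary representation
print(repr(0.1 + 0.2))        # 0.30000000000000004

from decimal import Decimal   # decimal arithmetic avoids the surprise
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True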

It's just the same as with cars. You can be a fair driver without
knowing anything about mechanics, but the more you know about how
the car works from a mechanical and physical point of view, the
easier it will be to understand why it behaves the way it does,
and the less likely you are to make mistakes when unusual things
happen.

Just as you can learn how cars behave from experience, or from
advice from others with experience, without understanding the
underlying mechanisms, you can learn programming this way. We
have learned things that way for thousands of years. I don't
think it's optimal though.

Robert Pirsig's "Zen and the Art of Motorcycle Maintenance"
covers this subject.

But we all have to start somewhere, and it's never to late to
learn something new. Curiosity and an open mind often means
more than formal education. Some people with a very "good"
education seem to believe that the thing they've learnt is
all that matters, and they will fall into other traps. There
is always another perspective that might give us a new insight.
Jul 19 '05 #77
Peter Maas wrote:
Yes, but what did you notice first when you were a child - plants
or molecules? I imagine little Andrew in the kindergarten fascinated
by molecules and suddenly shouting "Hey, we can make plants out of
these little thingies!" ;)


One of the first science books that really intrigued me
was a book on stars I read in 1st or 2nd grade.

As I mentioned, I didn't understand the science of biology
until I was in college.

Teaching kids is different than teaching adults. The
latter can often take bigger steps and start from a
sound understanding of logical and intuitive thought.
"Simple" for an adult is different than for a child.

Andrew
da***@dalkescientific.com

Jul 19 '05 #78
On 14 Jun 2005 00:37:00 -0700, "Michele Simionato"
<mi***************@gmail.com> wrote:
It looks like you do not have a background in Physics research.
We *do* build the world! ;)

Michele Simionato


Wow... I always get surprises from physics. For example I
thought that no one could drop the confutability requirement
for a theory in an experimental science... I mean that I
always agreed with the logical principle that unless you
can tell me an experiment whose result could be a confutation
of your theory, you're not saying anything
really interesting.
In other words, if there is no means by which the theory
could be proved wrong by an experiment, then that theory
is just babbling without any added content.
A friend of mine however told me that this principle, which
I thought was fundamental for talking about science, has
indeed been sacrificed to get unification. I was told that
in physics there are current theories for which there
is no hypothetical experiment that could prove them wrong...
(superstrings maybe? it was a name like that but I
don't really remember).
To me looks like e.g. saying that objects are moved around
by invisible beings with long beards and tennis shoes
and that those spirits like to move them under apparent
laws we know because they're having a lot of fun fooling
us. However every now and then they move things a bit
differently just to watch at our surprised faces while we
try to see where is the problem in our measuring instrument.

My temptation is to react for this dropping of such a logical
requirement with a good laugh... what could be the result
of a theory that refuses basic logic ? On a second thought
however laughing at strange physics theories is not a good
idea. Especially if you live in Hiroshima.

Andrea
Jul 19 '05 #79
Andrea Griffini <ag****@tin.it> writes:
On Mon, 13 Jun 2005 21:33:50 -0500, Mike Meyer <mw*@mired.org> wrote:
But this same logic applies to why you want to teach abstract things
before concrete things. Since you like concrete examples, let's look
at a simple one:

a = b + c
...

In a very
few languages (BCPL being one), this means exactly one thing. But
until you know the underlying architecture, you still can't say how
many operations it is.


That's exactly why

mov eax, a
add eax, b
mov c, eax


Um, you didn't do the translation right.
or, even more concrete and like what I learned first

lda $300
clc
adc $301
sta $302

is simpler to understand.
No, it isn't - because you have to worry about more details. In
particular, when programming in an HLL the compiler will take care of
allocating storage for the variables. In assembler, the programmer has
to deal with it. These extra details make the code more complicated.
Writing programs in assembler takes longer exactly because
the language is *simpler*. Assembler has less implicit
semantics because it's closer to the limited brain of our
stupid silicon friend.
You're right, but you have the wrong reasons. There were studies done
during the 70s/80s that showed that debugged LOC from programmers is
independent of the language being written. Assembler takes longer to
write not because it's simpler, but because it takes more LOC to do
the same operations.

You can avoid that problem - to a degree - if you use an assembler
that eschews the standard assembler syntax for something more
abstract. For instance, Whitesmiths had a Z80 assembler that let you write:

a = b + c

and it would generate the proper instructions via direct
translation. This didn't reduce the LOC to the level of C, but it
makes a significant dent in it.

Also, you can only claim that being closer to the chip is "simpler"
because you haven't dealt with a sufficiently complicated chip
yet. Try writing code where you have to worry about gate settling
times or pipeline stalls, and then tell me whether you think it's
"simpler" than dealing with an HLL.
Programming in assembler also really teaches you (deeply,
to your soul) about the terrible "undefined behaviour"
monster you'll meet when programming in C.
Anything beyond the abstract statement "a gets the result
of adding b to c" is wasted on them.
But by saying for example that

del v[0]

just "removes the first element from v", you will end up
with programs that do that in a stupid way; actually you
can easily get unusable programs, and programmers that
go around saying "python is slow" for that reason.


That's an implementation detail. It's true in Python, but isn't
necessarily true in other languages. Yes, good programmers need to
know that information - or, as I said before, they need to know that
they need to know that information, and where to get it.
It's true that in some cases, it's easier to remember the
implementation details and work out the cost than to
remember the cost directly.


I'm saying something different, i.e. that unless you
understand (you have at least a rough picture, you
don't really need all the details... but there must
be no "magic" in it) how the standard C++ library is
implemented there is no way at all you have any
chance to remember all the quite important implications
for your program. It's just IMO impossible to memorize
such a big quantity of unrelated quirks.
Things like for example big O, but also undefined
behaviours risks like having iterators invalidated
when you add an element to a vector.


That may well be true of the standard C++ library - I don't write
it. But it certainly doesn't appear to be true of, for instance,
Python internals. I've never seen someone explain why, for instance,
string addition is O(n^2) beyond the very abstract "it creates a new
string with each addition". No concrete details at all.
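
For the record, a concrete sketch of the behaviour being discussed; this reflects the naive CPython behaviour of the time, and newer interpreters sometimes optimize the common case in place:

def build_with_plus(parts):
    s = ""
    for p in parts:
        s = s + p          # each + copies everything built so far: roughly O(n^2) overall
    return s

def build_with_join(parts):
    return "".join(parts)  # one pass and one final allocation: O(n)

parts = ["x"] * 100000
# Indicatively, build_with_plus(parts) falls further and further behind
# build_with_join(parts) as the number of parts grows.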

Are you genuinely saying that abelian groups are
easier to understand than relative integers ?


Yup. Then again, my formal training is as a mathematician. I *like*
working in the problem space - with the abstract. I tend to design
top-down.


The problem with designing top down is that when
building (for example applications) there is no top.


This is simply false. The top of an application is the
application-level object.
I found this very simple and very powerful rationalization
about my gut feeling on building complex systems in
Meyer's "Object Oriented Software Construction" and
it's one to which I completely agree. Top down is a
nice way for *explaining* what you already know, or
for *RE*-writing, not for creating or for learning.
I also like OOSC - because it makes the abstract type system a part of
the language. The approach Meyer takes emphasis the abstract.

You haven't stated how you think complex systems should be built, and
I think we took different lessons away from Meyer, so you'll have to
state it explicitly.

I agree - you don't generally build systems top-down. But building is
not designing. The common alternative to top-down design - bottom-up
design - makes the possibility of a semantic disconnect when you reach
the top too likely.
IMO no one can really think that teaching abelian
groups to kids first and only later introducing
them to relative numbers is the correct path.
Human brain simply doesn't work like that.
You didn't ask how it should be taught - you asked which I found more
natural. Numbers are an abstract concept, and can be taught as such.
You are saying this, but I think here it's more your
love for discussion than really what you think.
Actually, I think it's a terminology problem. Or maybe a problem
defining the levels.
The same is true of programmers who started with concrete details on a
different platform - unless they relearn those details for that
platform.


No. This is another very important key point.
Humans are quite smart at finding general rules
from details, you don't have to burn your hand on
every possible fire. Unfortunately sometimes there
is the OPPOSITE problem... we infer general rules
that do not apply from just too few observations.


Your opposite problem is avoided by not teaching the details until
they are needed, and making sure you teach that those are
implementation details, so the student knows not to draw such
conclusions from them.
The critical things a good programmer knows about those
concrete details is which ones are platform specific and which aren't,
and how to go about learning those details when they go to a new
platform.


I never observed this problem. You really did ?


As mentioned, you see it all the time in c.l.python. People come from
other languages, and try to write Python as if the rules for that
other language apply.
If you confuse the issue by teaching the concrete details at the same
time as you're teaching programming, you get people who can't make
that distinction. Such people regularly show up with horrid Python
code because they were used to the details for C, or Java, or
whatever.


Writing C code with python is indeed a problem that
is present. But I think this is a minor price to pay.
Also, it's something that will be fixed with time
and experience.


It can be fixed from the start by teaching the student the difference
between abstract programming concepts and implementation details.
I suppose that over there who is caught reading
TAOCP is slammed in jail ...


Those taught the concrete method would never have been exposed to
anything so abstract.


Hmmm; TAOCP is The Art Of Computer Programming, what
is the abstract part of it ? The code presented is
only MIX assembler. There are math prerequisites for
a few parts, but I think no one could call it "abstract"
material (no one that actually spent some time reading
it, that is).


It tackled abstract problems like "sorting". The students I'm talking
about never dealt with anything that abstract.
Actually, it's a natural place to teach the details of that kind of
thing. An OS has to deal with allocating a number of different kinds
of memory, with radically different behaviors and constraints.


hehehe... and a program like TeX instead doesn't even
need to allocate memory.


You're confusing "using a facility" with "building a facility". Yes,
TeX needs to allocate memory. You don't need to know anything about
the insides of a memory allocator to allocate memory. You can write an
application like TeX without having to write a memory allocator. You
can't write an OS without having to write *several* memory allocators.
Are you going to claim that the right way to learn about memory
allocators is by looking at uses of them, or by looking at
implementations of them?
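To make the distinction concrete, here is a toy model in Python (only a
sketch - the class name and the first-fit strategy are invented for
illustration) of what "building the facility" involves, as opposed to
merely calling it:

class ToyAllocator:
    # A toy first-fit allocator over a fixed-size "memory" list.
    # Purely a teaching model: a real allocator also worries about
    # alignment, coalescing of free blocks, fragmentation, thread
    # safety and much more.
    def __init__(self, size):
        self.memory = [None] * size
        self.free_blocks = [(0, size)]   # (start, length) pairs
        self.allocated = {}              # start -> length
    def malloc(self, n):
        for i, (start, length) in enumerate(self.free_blocks):
            if length >= n:
                # Carve the request out of the first block that fits.
                if length == n:
                    del self.free_blocks[i]
                else:
                    self.free_blocks[i] = (start + n, length - n)
                self.allocated[start] = n
                return start
        raise MemoryError("out of toy memory")
    def free(self, start):
        n = self.allocated.pop(start)
        self.free_blocks.append((start, n))

Using it is trivial (a = ToyAllocator(1024); p = a.malloc(100); a.free(p));
writing it - and seeing what it leaves out - is where the learning happens.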
Pairing this with the claim that abelian groups should be
taught to kids first (why not fiber spaces then ?)
and that TAOCP is too "abstract" tells me that apparently
you're someone who likes to talk just for the sake of talking, or
that your religion doesn't allow you to type in smileys.


Now you're resorting to straw men and name-calling. That's an
indication that you no longer have any real points.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Jul 19 '05 #80
Mike Meyer <mw*@mired.org> wrote:
I've never seen someone explain why, for instance, string addition is
O(n^2) beyond the very abstract "it creates a new string with each
addition". No concrete details at all.


I took a shot at that very question a while ago. Elephants never forget,
and neither does google (http://tinyurl.com/9nrnz).
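The gist, as a minimal sketch (not taken from that post, and ignoring the
special-case concatenation optimizations some CPython releases apply):

def build_by_concat(pieces):
    # Strings are immutable, so each addition creates a brand-new string
    # and copies everything accumulated so far: for n pieces that is
    # roughly n*n/2 character copies, i.e. O(n^2) overall.
    s = ''
    for piece in pieces:
        s = s + piece
    return s

def build_by_join(pieces):
    # Collect the pieces first and copy each character exactly once at
    # the end: O(n) overall.
    return ''.join(pieces)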
Jul 19 '05 #81
Roy Smith <ro*@panix.com> writes:
Mike Meyer <mw*@mired.org> wrote:
I've never seen someone explain why, for instance, string addition is
O(n^2) beyond the very abstract "it creates a new string with each
addition". No concrete details at all.


I took a shot at that very question a while ago. Elephants never forget,
and neither does google (http://tinyurl.com/9nrnz).


While that's an excellent explanation of why string addition is
O(n^2), it doesn't really look at the concrete details of the Python
string implementation - which is what I was referring to. I'm sorry
that I was unclear in what I said.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Jul 19 '05 #82
Andrea Griffini wrote:
Wow... I always get surprises from physics. For example I
thought that no one could drop the confutability requirement
for a theory in an experimental science...
Some physicists (often mathematical physicists) propose
alternate worlds because the math is interesting.

There is a problem in physics in that we know (I was
trained as a physicist hence the "we" :) quantum mechanics
and gravity don't agree with each other. String theory
is one attempt to reconcile the two. One problem is
the math of string theory is hard enough that it's hard
to make a good prediction. Another problem is that the
realm where QM and GR disagree requires such high energies
that it's hard to test directly.
I was told that
in physics there are current theories for which there
is no hypothetical experiment that could prove them wrong...
(superstrings maybe ? it was a name like that but I
don't really remember).


If we had a machine that could reach Planck scale energies
then I'm pretty sure there are tests. But we don't, by
a long shot.

Andrew Dalke

Jul 19 '05 #83
Andrea Griffini:
Wow... I always get surprises from physics. For example I
thought that no one could drop the confutability requirement
for a theory in an experimental science... I mean that I
always agreed with the logic principle that unless you
tell me an experiment whose result could be a confutation
of your theory or otherwise you're not saying anything
really interesting.
In other words if there is no means by which the theory
could be proved wrong by an experiment then that theory
is just babbling without any added content.
A friend of mine however told me that this principle that
I thought was fundamental for talking about science has
indeed been sacrificed to get unification. I was told that
in physics there are current theories for which there
is no hypothetical experiment that could prove them wrong...


You must always distinguish between Science (=what we know)
and Research (=what we do not know). Research is performed
with all methods except the scientific one, even if we don't
tell the others ;)

Michele Simionato

Jul 19 '05 #84
Roy Smith wrote:
Steven D'Aprano <st***@REMOVEMEcyber.com.au> wrote:
High and low tides aren't caused by the moon.

They're not???


Nope. They are mostly caused by the continents. If the
Earth was completely covered by ocean, the difference
between high and low tide would be about 10-14 inches.
(Over deep ocean, far from shore, the difference is
typically less than 18 inches.)

The enormous difference between high and low tide
measured near the shore (up to 45 feet in the Bay of
Fundy in Canada, almost forty times larger) is caused
by the interaction of the continents with the ocean. In
effect, the water piles up against the shore, like in a
giant bathtub when you slosh the water around.

The true situation is that tides are caused by the
interaction of the gravitational fields of the sun, the
moon and the Earth, the rotation of the Earth, the
physical properties of water, its salinity, the depth,
shape and composition of the coast and shoreline, the
prevailing ocean currents, vibrationary modes of the
ocean (including up to 300 minor harmonics), ocean
storms, and even the wind. You can understand why we
usually simplify it to "the moon causes the tides",
even though the moon isn't even the largest
contributing factor.

See, for example:

http://home.hiwaay.net/~krcool/Astro/moon/moontides/
--
Steven

Jul 19 '05 #85
Peter Hansen wrote:
Roy Smith wrote:
Steven D'Aprano <st***@REMOVEMEcyber.com.au> wrote:
High and low tides aren't caused by the moon.

They're not???

Probably he's referring to something like this, from Wikipedia, which
emphasizes that while tides are caused primarily by the moon, the height
of the high and low tides involves the sun as well:

"The height of the high and low tides (relative to mean sea level) also
varies. Around new and full Moon, the tidal forces due to the Sun
reinforce those of the Moon, due to the syzygy found at those times -
both the Sun and the Moon are 'pulling the water in the same direction.'"

(If I'm right about this, then the statement is still wrong, since even
without the sun there would be high and low tides, just not of the
magnitude we have now.)


Close :-)

If tides are caused by the moon, then removing the moon
would end tides. But this clearly isn't true: without
the moon, we would still have tides, only smaller.

On the other hand, the magnitude of the tides is much
larger than the magnitude of the moon's effect on the
oceans (up to forty times larger). So the moon is, at
best, merely a minor cause of the tides.

Of course, all this is quibbling. But it does
illustrate exactly what Cohen and Stewart mean when
they talk about "lies for children". The truth is a lot
more complicated than the simple, easy to understand
"the moon causes the tides".
--
Steven.

Jul 19 '05 #86
Andrea Griffini wrote:
A friend of mine however told me that this principle that
I thought was fundamental for talking about science has
indeed been sacrificed to get unification. I was told that
in physics there are current theories for which there
is no hypothetical experiment that could prove them wrong...
(superstrings maybe ? it was a name like that but I
don't really remember).


I think either you or your friend has misunderstood.
There are a number of physics theories where we are
rapidly approaching the point that there are no
PRACTICAL tests we can apply, not that there are no
hypothetical tests imaginable. Physicists are aware of,
and disturbed by, the problem with this situation, and
they like it no better than you do.
--
Steven.

Jul 19 '05 #87
On Tue, 14 Jun 2005 16:40:42 -0500, Mike Meyer <mw*@mired.org> wrote:
Um, you didn't do the translation right.
Whoops.

So you know assembler; there is no other possibility, as it's such
a complex language that unless someone already knows it
(and on that specific architecture) what I wrote is pure
line noise.

You studied it after python, I suppose.
or, even more concrete and like what I learned first

lda $300
clc
adc $301
sta $302

is simpler to understand.


No, it isn't - because you have to worry about more details.


In assembler details are simply more explicit. Unfortunately
with computers you just cannot avoid details, otherwise your
programs will suck badly. When I write in a high-level language
or even a very high-level one the details are understood even
if I'm not writing them down. After a while a programmer will
even be able to put them at a subconscious level and e.g. by
just looking at O(N^2) code that could easily be rewritten as
O(N) or O(1) a little bell will ring in your brain telling you
"this is ugly". But you cannot know if something is O(1) or
O(N) or O(N^2) unless you know some detail. If you don't like
details then programming is just not the right field.
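A typical example of code that should ring that bell (just a sketch, not
from any particular program):

def common_words_slow(first, second):
    # "in" on a list scans it element by element, so this loop is
    # roughly O(len(first) * len(second)).
    return [w for w in first if w in second]

def common_words_fast(first, second):
    # Building a set makes each membership test O(1) on average, so
    # the whole thing is roughly O(len(first) + len(second)).
    seen = set(second)
    return [w for w in first if w in seen]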

In math when I write down the derivative of a complex function
it doesn't mean I don't know the definition of a
derivative in terms of limits, or the conditions
that must be met to be able to write it down. Yet I'm not
writing them every time (sometimes I'll write them when they're
not obvious, but when they're obvious it doesn't mean I'm not
considering them or, worse, that I don't know or understand them).
If you don't really understand what a derivative is and when
it makes sense and when it doesn't, your equations risk
being good only for after-dinner pub jokes.
In particular, when programming in an HLL the compiler will take care of
allocating storage for the variables. In assembler, the programmer has
to deal with it. These extra details make the code more complicated.
Just more explicit. So explicit that it can become boring.
After a while certain operations are so clearly understood
that you are able to write a program to do them, saving us
some time (and preventing us from getting bored).
That's what HLLs are for... to save you from doing, not to
save you from understanding. What the HLL is doing for you
is something that you don't do in the details, but that you
had better not take uncritically or without comprehension,
because, and this is another very important point, *YOU*
will be responsible for the final result, and the final
result will depend a lot (almost totally, actually) on
what you call details.
To make another example, programming without the faintest
idea of what's happening is not really different from using
those "wizards" to generate a plethora of code you do not
understand. When the wizard also takes the *responsibility*
for that code we may discuss it again, but until then, if
you don't understand what the wizard does and you just
accept its code, then you're not going to go very far.
To say it again: if you don't understand why it works,
there is just no possibility at all that you'll understand
why it doesn't work.

Think that "a = b + c" computes the sum of two real
numbers and your program will fail (expecting, foolishly,
that adding 0.1 ten times gives you 1.0) and you'll spend
some time wondering why the plane crashed... your code
was "correct" after all.
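A concrete illustration of that particular trap, runnable in any Python:

total = 0.0
for i in range(10):
    total += 0.1
print(repr(total))   # something like 0.9999999999999999, not 1.0
print(total == 1.0)  # False: 0.1 has no exact binary representation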
For instance, whitesmith had a z80 assembler that let you write:

a = b + c

and it would generate the proper instructions via direct
translation.
To use that I have to understand what registers will be
affected and how ugly (i.e. inefficient) the code could
get. Programming in assembler using such a high-level
feature without knowing those little details would be
just suicidal.
Also, you can only claim that being closer to the chip is "simpler"
because you haven't dealt with a sufficiently complicated chip yet.
It's true that one should start with something reasonable.
I started with the 6502 and just love its simplicity. Now
at work we have boards based on TMS320 DSPs and, believe
me, that assembler gives new meaning to the word ugly.
But saying for example that

del v[0]

just "removes the first element from v" you will end up
with programs that do it in a stupid way; you can
easily get unusable programs, and programmers who
go around saying "Python is slow" for that reason.


That's an implementation detail. It's true in Python, but isn't
necessarily true in other languages.


Yeah. And you must know which is which. Otherwise you'll
write programs that just do not give the expected result
(because the user killed them earlier).
Yes, good programmers need to know that information - or,
as I said before, they need to know that they need to know
that information, and where to get it.
I think that a *decent* programmer must understand whether the
code being written is roughly O(n) or O(n^2). Without
at least that, the possibility of writing useful code,
excluding maybe toy projects, is a flat zero.
Looking that information up later may just be "too" late,
because the wrong data structure has already been used
and nothing can be done (except rewriting everything).
That may well be true of the standard C++ library - I don't write
it. But it certainly doesn't appear to be true of, for instance,
Python internals. I've never seen someone explain why, for instance,
string addition is O(n^2) beyond the very abstract "it creates a new
string with each addition". No concrete details at all.
The problem is that unless you really internalized what
that means you'll forget about it. Don't ask me why,
but it happens. Our mind works that way. You just cannot
live with a jillion unrelated details you cannot place
in a scheme. It doesn't work. It would take a thousand times
the effort compared to using a model that is able
to justify those details.
The problem with designing top down is that when
building (for example applications) there is no top.


This is simply false. The top of an application is the
application-level object


Except that marketing will continuously shift what
your application is supposed to do. And this is good, and
essential. This is "building". Sometimes marketing will
change the specifications *before* you complete the very
first prototype. For complex enough projects this is more
the rule than the exception. In the nice "The Pragmatic
Programmer" book (IIRC) it is said that there's no known
complex project in which the specification was changed fewer
than four times before the first release... and the only
time it was changed just three times was when the
guy coming with the fourth variation was hit by
lightning in the street.
Unfortunately sometimes there
is the OPPOSITE problem... we infer general rules
that do not apply from just too few observations.


Your opposite problem is avoided by not teaching the details until
they are needed, and making sure you teach that those are
implementation details, so the student knows not to draw such
conclusions from them.


What you will obtain is people who build
wrong models. Omitting details, if they can really
affect the result, is not a good idea.
This is completely different from omitting, for kids
in 4th grade, details that no one without a 3 km
particle accelerator could detect.
The critical things a good programmer knows about those
concrete details is which ones are platform specific and which aren't,
and how to go about learning those details when they go to a new
platform.


I never observed this problem. You really did ?


As mentioned, you see it all the time in c.l.python. People come from
other languages, and try to write Python as if the rules for that
other language apply.


That's exactly because they don't know the details of
any of the languages involved. Someone knowing the
details would be curious to know *how* "del v[0]"
is implemented in Python. Actually it could easily be
changed into an O(1) operation with just a little slowdown
in element access (still O(1) but with a bigger constant).
This is a compromise that has not been accepted, and
this very fact is important to know if you plan to
use Python seriously.
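For the record, the compromise the standard library did accept is a
separate type: collections.deque (added in Python 2.4) removes from
either end in O(1), at the price of slower indexing away from the ends.
A small sketch of the difference:

from collections import deque

v = list(range(100000))
d = deque(v)

# del v[0] (or v.pop(0)) shifts every remaining element down one slot,
# so draining the list from the front costs O(n) per removal.
while v:
    del v[0]

# A deque removes from the front in O(1), so the same drain is O(n) total.
while d:
    d.popleft()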
It can be fixed from the start by teaching the student the difference
between abstract programming concepts and implementation details.
Sorry, but I really don't agree that big O is a "detail"
that could be ignored. Only bubble-and-arrow powerpoint
gurus could think that; I'm not in that crew.
Ignore those little details and your program will be
just as good as ones that don't even compile.
It tackled abstract problems like "sorting". The students I'm talking
about never dealt with anything that abstract.


Sorting is abstract ?
Pairing this with the claim that abelian groups should be
taught to kids first (why not fiber spaces then ?)
and that TAOCP is too "abstract" tells me that apparently
you're someone who likes to talk just for the sake of talking, or
that your religion doesn't allow you to type in smileys.


Now you're resorting to straw men and name-calling. That's an
indication that you no longer have any real points.


I'll blame my bad English for understanding that you
said that abelian groups should be taught before
relative numbers (somehow I crazily thought the point
of discussion was what's the correct order for learning
how to program), that TAOCP is too abstract (a book
where every single code listing is in assembler!)
and that big-O when programming is a detail that can
be safely ignored (good luck, IMO you'll need a hell of a
lot of it).

Andrea
Jul 19 '05 #88
Magnus Lycka schrieb:
Peter Maas wrote:
Learning is investigating. By top-down I mean high level (cat,
dog, table, sun, sky) to low level (molecules, atoms, fields ...).

Aha. So you must learn cosmology first then. I don't think so. ;)


I wasn't talking about size but about sensory accessibility. And
I'm going to withdraw from this discussion. My English is not good
enough for this kind of stuff. And ... it's off topic! ;)

--
-------------------------------------------------------------------
Peter Maas, M+R Infosysteme, D-52070 Aachen, Tel +49-241-93878-0
E-mail 'cGV0ZXIubWFhc0BtcGx1c3IuZGU=\n'.decode('base64')
-------------------------------------------------------------------
Jul 19 '05 #89
If you're thinking of things like superstrings, loop quantum gravity
and other "theories of everything" then your friend has gotten
confused somewhere. There are certainly no current experiments which we
can do in practice, which is widely acknowledged as a flaw. Lots of
physicists are trying to work out low-energy consequences of these
theories so that they can be tested, but the maths is extremely hard
and the theories aren't even well understood in many cases; but that
doesn't mean that they've decided that they'll accept them fully and
not bother testing them!

On 6/14/05, Andrea Griffini <ag****@tin.it> wrote:
On 14 Jun 2005 00:37:00 -0700, "Michele Simionato"
Wow... I always get surprises from physics. For example I
thought that no one could drop the confutability requirement
for a theory in an experimental science... I mean that I
always agreed with the logic principle that unless you
tell me an experiment whose result could be a confutation
of your theory or otherwise you're not saying anything
really interesting.
In other words if there is no means by which the theory
could be proved wrong by an experiment then that theory
is just babbling without any added content.
A friend of mine however told me that this principle that
I thought was fundamental for talking about science has
indeed been sacrificed to get unification. I was told that
in physics there are current theories for which there
is no hypothetical experiment that could prove them wrong...
(superstrings maybe ? it was a name like that but I
don't really remember).

Jul 19 '05 #90
Steven D'Aprano wrote:
Roy Smith wrote:
Steven D'Aprano <st***@REMOVEMEcyber.com.au> wrote:
>High and low tides aren't caused by the moon.

They're not???


Nope. They are mostly caused by the continents. ...
The true situation is that tides are caused by the interaction of the
gravitational fields of the sun, the moon and the Earth, the rotation of
the Earth, the physical properties of water, its salinity, the depth,
shape and composition of the coast and shoreline, the prevailing ocean
currents, vibrationary modes of the ocean (including up to 300 minor
harmonics), ocean storms, and even the wind. You can understand why we
usually simplify it to "the moon causes the tides", even though the moon
isn't even the largest contributing factor.

See, for example:

http://home.hiwaay.net/~krcool/Astro/moon/moontides/


Steve, please go and read that page again. While I readily accept that
you may be far more of an expert on tides than I, Roy, and whoever
contributed to the Wikipedia article, nearly every section on the page
that *you* referenced directly contradicts your basic claims. I quote:

"Tides are created because the Earth and the moon are attracted to each
other..."

"The sun's gravitational force on the earth is only 46 percent that of
the moon. Making the moon the single most important factor for the
creation of tides." [you said the moon isn't the largest factor]

"Since the moon moves around the Earth, it is not always in the same
place at the same time each day. So, each day, the times for high and
low tides change by 50 minutes." [if the moon were not such a large
cause, it wouldn't have this effect]

I believe you are still just saying that the *magnitude* of the tides is
greatly affected by other things, such as the shoreline, but what I keep
reading is you basically saying "the moon is not the cause and is only a
minor factor".

I also see nothing to suggest that if the moon and the sun were removed
from the picture, there would be much in the way of tides at all. (The
page you quoted says the sun has about 46% the effect of the moon which,
if true, means the statement "the presence of the moon and the sun cause
tides" still seems pretty accurate, certainly not a "lie for children"
but merely a simplification, if anything.)

-Peter
Jul 19 '05 #91
On Wed, 15 Jun 2005 10:27:19 +0100, James <sp*****@gmail.com> wrote:
If you're thinking of things like superstrings, loop quantum gravity
and other "theories of everything" then your friend has gotten
confused somewhere.


More likely I was the one that didn't understand. Reading
what Wikipedia says about it, I understood the situation better.
From a philosophical point of view, however, it looks like
there is indeed a problem. Is a merely theoretical - but
infeasible - experiment for confutation sufficient ?

I think I triggered my friend by telling him that I found on
the web a discussion in which a certain book of theoretical
physics was considered just a joke made up by piling up
physics buzzwords by some, and a real theory by others.
Finding that even this was a non-obvious problem amused
me, and he told me that advanced physics now has this kind
of problem with current theories that are so complex
and so impossible to check that it's even questionable
whether they respect logic and the scientific method,
or are instead a question of faith and opinion.

Andrea
Jul 19 '05 #92
Steven D'Aprano <st***@REMOVEMEcyber.com.au> wrote:
Roy Smith wrote:
Steven D'Aprano <st***@REMOVEMEcyber.com.au> wrote:
High and low tides aren't caused by the moon.

They're not???


Nope. They are mostly caused by the continents. If the
Earth was completely covered by ocean, the difference
between high and low tide would be about 10-14 inches.


Yeah, I know about all that stuff. But, let's explore this from a teaching
point of view.
The true situation is that tides are caused by the
interaction of the gravitational fields of the sun, the
moon and the Earth, the rotation of the Earth, the
physical properties of water, its salinity, the depth,
shape and composition of the coast and shoreline, the
prevailing ocean currents, vibrationary modes of the
ocean (including up to 300 minor harmonics), ocean
storms, and even the wind.


That's a lot of detail to absorb, and is appropriate for a college-level
course taken by oceanography majors. The key to teaching something is to
strip away all the details and try to get down to one nugget of truth with
which you can lay a foundation upon which further learning can happen.

Yes, both the sun and the moon have gravitational fields which affect
tides. But the moon's gravitational field is much stronger than the sun's,
so as a first-order approximation, we can ignore the sun.

And, yes, there's a huge amplifying effect caused by coastline shape and
resonant frequencies of the ocean basins, but if you took away the moon
(remember, we're ignoring the sun for now), there would be no tides at all.
If you took away all the continents, there would still be tides, they would
just be a lot less (and nobody would notice them!).

And, yes, wind affects tide. I live at the western tip of Long Island
Sound. If the wind is blowing hard along the axis of the Sound for a solid
day or two, I can see that it has a drastic effect on the tides.

This says to me that "The tides are created by the moon, amplified by the
shapes of the land masses, and altered by the wind".

Sure, "the moon causes the tides" is not the whole picture, and from a
quantitative point of view, may not even be the major player, but from a
basic "How do I explain this physical process to a 6th grade child in a way
that's both easy to understand and fundamentally correct", I think "the
moon causes the tides" is the only reasonable explanation. Once that basic
idea is planted, all the other stuff can get layered on top to improve the
understanding of how tides work.

So, to try and bring this back to the original point of this thread, which
is that Python is a better first language than C, let's think of the moon
as the "algorithms, data structures, and flow control" fundamentals of
programming, and memory management as the continents and ocean basins.
What you want to teach somebody on day one is the fundamentals. Sure,
there are cases where poor memory management can degrade performance to the
point where it swamps all other effects, but it's still not the fundamental
thing you're trying to teach to a new CS student.
Jul 19 '05 #93
On Tuesday 14 June 2005 02:12 pm, Andrew Dalke wrote:
Teaching kids is different than teaching adults. The
latter can often take bigger steps and start from a
sound understanding of logical and intuitive thought.
"Simple" for an adult is different than for a child.


Of course, since children are vastly better at learning than
adults, perhaps adults are stupid to do this. ;-)

Quantum mechanics notwithstanding, I'm not sure there
is a "bottom" "most-reduced" level of understanding. It's
certainly not clear that it is relevant to programming.

It is invariably true that a deeper understanding of the
technology you use will improve your power of using it.
So, I have no doubt that knowing C (and the bits-and-bytes
approach to programming) will improve your performance
as a programmer if you started with Python. Just as learning
assembler surely made me a better C programmer.

But I could write quite nice programs (nice enough for my
needs at the time) in BASIC, and I certainly can in Python.

Sometimes, you just don't care if your algorithm is ideal, or
if "it will slow to a crawl when you deliver it to the customer".

When did the "customer" get into this conversation? I mean,
you're a neophyte who just learned how to program, and you're
already flogging your services on the unsuspecting masses?

In my experience, people first learn to program for their own
needs, and it's only a long time later that somebody decides
they want to become a professional. And maybe, when you're
a pro, knowledge of machine details is really important.

But an awful lot of people just want to putz around with the
computer (or more accurately just want to solve their own
problems using it). Surely, they don't need to know anything
about quantum transitions, transistors, or malloc to do that.

In fact, I find such stuff introduces a lot of noise in my
thinking with Python. Awhile back I wanted to write a
program that would take a title like "My fun trip to Europe,
and a thousand and one things I saw there" and make a
mnemonic file-name less than 32 characters long out of it,
that obeyed simple naming conventions. I wanted it to spit
something out like "trip_europe_1001_things", the way a
human might name such a file. Well, first of all, with all
the clutter involved in processing strings, I would never
have tried this in C, it would've taken me a month! But
my real point follows ...

I used several different methods to go about squeezing down
a title to get rid of "less important" words.

For one, I wanted to know what the most common words were,
so I could make a hit list of words to delete from titles. I did this
using three or four titles from Project Gutenberg, and the Python
interpreter. It took maybe fifteen minutes, and went something
like this:

s = open('cia_factbook', 'r').read() + open('pride_prej', 'r').read() + open('alice', 'r').read()
words = s.split()
unique_words = {}
for word in words:
    unique_words[word] = unique_words.get(word, 0) + 1
word_freqs = unique_words.items()
word_freqs.sort(lambda a, b: cmp(b[1], a[1]))   # most frequent words first
for word, freq in word_freqs[:100]:
    print "%5d: %s" % (freq, word)

This is, of course, totally brutal on my computer's memory allocation,
because the source data is quite large. So, my C programming
instincts would've encouraged me to do all kinds of optimizing. But
why bother? Like I said, it took 15 minutes. Done. And that includes
the programming time. I took a completely naive approach and
it worked, so why should I bother making it hard for myself?

Sure, in the profession of writing commercial, off-the-shelf word counting
software to sell to "the customer", I should be truly ashamed, but who
cares? I didn't even save this program to disk, I've just tried to rewrite
it from memory here.
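For the curious, here is a rough sketch of the kind of title-squeezer
described above - not the original program (which wasn't saved), just an
illustration with a made-up stop-word list:

import re

STOP_WORDS = set(['a', 'an', 'and', 'the', 'to', 'of', 'my', 'i',
                  'there', 'saw'])

def mnemonic_name(title, limit=32):
    # Lower-case the title and keep only runs of letters and digits.
    words = re.findall(r'[a-z0-9]+', title.lower())
    # Drop very common words that carry little meaning.
    words = [w for w in words if w not in STOP_WORDS]
    # Add words until the next one would push us past the limit.
    name = ''
    for w in words:
        if name:
            candidate = name + '_' + w
        else:
            candidate = w
        if len(candidate) > limit:
            break
        name = candidate
    return name

print(mnemonic_name("My fun trip to Europe, and a thousand "
                    "and one things I saw there"))
# prints something like 'fun_trip_europe_thousand_one'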

The idea that one must learn assembler to learn C and in turn
to learn Python strikes me as elitist or protectionist --- just another
way to turn people away from programming.

You should learn that stuff when it becomes obvious that you need it,
not from the outset. And for many people, it simply may never be
needed. Python is actually remarkably good at solving things in a
nearly optimal way.

This program also had another interesting property -- it couldn't have
a rigid specification. There's no way to write one, it has to succeed
intuitively, by producing output that is "mnemonic". But there's no
way to unit test for that. ;-) So no amount of "deep understanding of
what the machine is doing" would've really helped that much.

I think there's an awful lot of programming out there that is like that --
the problem is about solving a problem with appropriate *ideas*
not trying to find the most efficient *methods*. Often, it's not how
efficiently I can do a thing that interests me, but whether it can be
done at all.

I don't think total reductionism is particularly useful for this. My
thermodynamics professor once argued that, never having seen any,
but knowing only the laws of physics and thermodynamics,
physicists would never have predicted the existence of *liquids*,
let alone oceans, nucleic acids, and life forms. The formation of those
things is still far beyond us on first-principles, and surely even if we
do come to an understanding of such things, the particular biology
of, say, flowering plants will be almost completely unaffected by such
knowledge.

Likewise, the creation and study of more complex ideas in software
is unlikely to benefit much from assembler-level understanding of
the machines on which it runs. It's simply getting too close to the
problem --- assembly language problems have become so well
understood, that they are becoming a domain for automated solution,
which is what high-level programming is all about.

--
Terry Hancock ( hancock at anansispaceworks.com )
Anansi Spaceworks http://www.anansispaceworks.com

Jul 19 '05 #94
On Tuesday 14 June 2005 10:21 am, Scott David Daniels wrote:
Oh well, I guess it's a bit late to try to rename the Computer
Science discipline now.

The best I've heard is "Informatics" -- I have a vague impression
that this is a more European name for the field.


It's the reverse-translation from the French "Informatique".

--
Terry Hancock ( hancock at anansispaceworks.com )
Anansi Spaceworks http://www.anansispaceworks.com

Jul 19 '05 #95
On Tuesday 14 June 2005 08:12 am, Magnus Lycka wrote:
Oh well, I guess it's a bit late to try to rename the Computer
Science discipline now.


Computer programming is a trade skill, not a science. It's like
being a machinist or a carpenter --- a practical art.

Unfortunately, our society has a very denigrating view of
craftsmen, and does not pay them well enough, so computer
programmers have been motivated to attempt to elevate the
profession by using the appellation of "science".

How different would the world be if we (more accurately)
called it "Computer Arts"?

--
Terry Hancock ( hancock at anansispaceworks.com )
Anansi Spaceworks http://www.anansispaceworks.com

Jul 19 '05 #96
On Wednesday 15 June 2005 05:13 am, Peter Hansen wrote:
I also see nothing to suggest that if the moon and the sun were removed
from the picture, there would be much in the way of tides at all. (The
page you quoted says the sun has about 46% the effect of the moon which,
if true, means the statement "the presence of the moon and the sun cause
tides" still seems pretty accurate, certainly not a "lie for children"
but merely a simplification, if anything.)


Okay, I haven't read the book, but I suspect that "lies for children" means
very much that it is merely a simplification. Every simplification is a lie.
You have to distort the truth in some way to make it simpler.

This is done with the best intentions, and the understanding is that
next year, when the child is a little older, you will contradict that lie
with another lie that is a little closer to the truth. Trying to start with
"the Truth" is impossible because:

1) You don't know the Truth either, just a much higher-order lie.
2) Your pupil can't handle the Truth yet.

This has more to do with the nature of Truth than with the ethics
of teaching, you see. ;-)

Certainly, I have this understanding of how science is taught. This
is why I really, really hate "true-false" tests. Because, while they
may be easy for people who rely only on the knowledge learned in
class, they are extremely hard for people who learn on their own ---
which bit of the truth am I not supposed to know, I ask myself.

For example, is the statement "Viruses are made of DNA" true, or
false?

In high school, the answer might be "true", but in college it is
certainly "false", because some viruses are made of RNA, and most
are also made of protein, and even a few have a membrane or
"envelope" analogous to a cell.

Virtually everything is more complicated when you look at it closely.

For example, consider these statements:

"The Earth goes around the Sun"

No, actually the Sun, Earth, and all the rest of the planets go around
something called the "barycenter" or "center of gravity" of the whole
solar system. Now, since more than 99% of the mass in the Solar
System is in the Sun, that's pretty close to the Sun, but it isn't quite.
In fact, IIRC, it isn't even inside of the photosphere, due to Jupiter
being so massive.

"The Moon orbits the Earth"

See above, although the barycenter is beneath the Earth's crust. But,
more importantly, the Moon is only "loosely bound" to the Earth, it
never actually goes backward relative to the Earth-Moon orbit around
the Sun, and if the Earth were, say, destroyed to make way for a
hyperspace bypass, the Moon would settle pretty happily into almost
the same orbit as the Earth has now. In fact, the Moon isn't a moon,
the Earth-Luna system is really a binary planet. *Moons* are what
Mars and Jupiter have (for example).

"Plants consume CO2 and make O2"

Well, yes, but they also consume O2, just like animals. *On balance*,
the statement is *usually* true. But most plants would probably
die in a pure-CO2 environment (unless they can drive the atmosphere
to a better composition fast enough).

And as for the subject line, I'd say the Python list is very much
at high-tide here. ;-)

--
Terry Hancock ( hancock at anansispaceworks.com )
Anansi Spaceworks http://www.anansispaceworks.com

Jul 19 '05 #97
Yes, both the sun and the moon have gravitational fields which affect
tides. But the moon's gravitational field is much stronger than the sun's, so as a first-order approximation, we can ignore the sun.
Here we are experiencing a further small lie which found its way
into a text written by an author probably not aware that he is
creating it.
I have picked it out not because I am so biased towards
this particular detail, but because it is a good example of how hard
it is to discuss without creating small lies one after another.

The school books are filled with statements similar to the above
causing endless confusion and misunderstandings.

"the moon's gravitational field is much stronger than the sun's"
....
Surely it is the opposite, i.e. the gravitational field of the sun is
much stronger than that of the moon. What was probably intended
is that the tidal force the moon exerts on the earth (the difference
between its pull on the near and far sides of the earth) is larger
than the tidal force exerted by the sun.
I am sure that if someone analyses the statement above
deeply enough he will find some more small lies that originate
from the temptation to keep statements short and simple
without giving a precise definition of all the assumptions
required in order to understand them correctly.

What also leads to confusion is the explanation that the water
of the oceans is attracted by the moon. That is only half of the
truth. The other half is that the moon is attracted by the
water of the oceans. It is very interesting that in the minds
of many people the gravitational force acts only on one body.
Many think I am stupid when I try to insist that the gravitational
force a human body exerts on the earth is the same as that which
the earth exerts on a human body.
"the human body is so small that the gravitational force it exerts
on the earth can be neglected compared to the gravitational force
the earth exerts on the human body" is the explanation.

The problem of science is that for communicating its findings
it has to use the same language which is also used for many other
purposes. The problem of text writers is that it is so convenient
to use shortcuts to what one thinks but has just forgotten to mention
before. It is also common to express one's thoughts in an
inappropriate way because at the moment the right words are just
not there.

Hope the above has cleared away all that was clear before ;/)

What has all this to do with Python? To be not fully off-topic, I
suggest here that it is much easier to discuss programming-
related matters (especially in the case of Python :-) or mathematics
than any other subjects related to nature, because programming is
_so easy_ compared to what is going on in the "real world".
I see the reason for that in the fact that programming is based
on ideas and rules developed by humans themselves, so it is
relatively easy to test and prove whether statements are right or not.
The exception is when the source code is hidden, like the rules
directing the behaviour of our Universe, which surely shouldn't be
interpreted to mean that people hiding source code behave more
like The Creator than others making it public ;/)

Claudio
"Roy Smith" <ro*@panix.com> schrieb im Newsbeitrag
news:ro***********************@reader1.panix.com...



Jul 19 '05 #98
Terry Hancock <ha*****@anansispaceworks.com> wrote:
is the statement "Viruses are made of DNA" true, or false?


False. Viruses were made of Word macros :-)

Jul 19 '05 #99
Terry Hancock wrote:
"Plants consume CO2 and make O2"

Well, yes, but they also consume O2, just like animals. *On balance*,
the statement is *usually* true. But most plants would probably
die in a pure-CO2 environment (unless they can drive the atmosphere
to a better composition fast enough).


Ha, finally one I can comment on with a reasonable level of confidence.
On balance, plants consume CO2 and produce O2 *as long as they are
growing*. You see, they use the C from the CO2 to build the material
they are made of. Once they are full-grown, the amount of O2 produced by
their photosynthesis is balanced by the amount of O2 consumed by their
respiration; the same goes for the CO2 produced and consumed in both
processes.

This can be generalized to whole forests too. Often people think that
the Amazon rain forests produce large quantities of O2, but that's just
not true. During their initial growth, they indeed produced large
quantities of O2. As long as they stay the same size (the same amount of
biomass actually), there is no net effect. Now that large areas are
burnt to make place for roads and agriculture, the CO2 comes back in the
atmosphere while O2 from the atmosphere is consumed in the flames.

It's very well possible that this is a simplification that glosses over
a few details such as other sources of carbon, but in general it's good
enough.

--
If I have been able to see further, it was only because I stood
on the shoulders of giants. -- Isaac Newton

Roel Schroeven
Jul 19 '05 #100
