Bytes | Developer Community

Python from Wise Guy's Viewpoint

THE GOOD:

1. pickle

2. simplicity and uniformity

3. big library (bigger would be even better)

THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?

2. Statements vs Expressions business is very dumb. Try writing
a = if x :
y
else: z

3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

4. splintering of the language: you have the inefficient main language,
and you have a different dialect being developed that needs type
declarations. Why not allow type declarations in the main language
instead as an option (Lisp does it)

5. Why do you need "def" ? In Haskell, you'd write
square x = x * x

6. Requiring "return" is also dumb (see #5)

7. Syntax and semantics of "lambda" should be identical to
function definitions (for simplicity and uniformity)

8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]
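In fairness, a couple of these points have at least partial answers in Python itself. A minimal sketch (note: the conditional expression used for point 2 only arrived in Python 2.5, after this thread was written):

```python
import sys

# Point 2: later Pythons do have a conditional *expression*:
x = True
a = 1 if x else 2

# Point 8: a function (or any name) can be undefined with `del`:
def square(n):
    return n * n

del square
try:
    square(3)
except NameError:
    pass            # the name really is gone

# ...and a module can be "unimported" by removing it from
# sys.modules, though existing references to it keep working:
import json
del sys.modules['json']
```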


P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching
Jul 18 '05
> THE GOOD:
THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]


Agreed with your analysis, except for these two items.

#1 is a matter of opinion, but in general:

- f(x,y) is the standard set by mathematical notation and all the
mainstream programming language families, and is library neutral:
calling a curried function is f(x)(y), while calling an uncurried
function is f(x,y).

- "f x y" is unique to the Haskell and LISP families of languages, and
implies that most library functions are curried. Otherwise you have a
weird asymmetry between curried calls "f x y" and uncurried calls
which translate back to "f(x,y)". Widespread use of currying can lead
to weird error messages when calling functions of many parameters: a
missing third parameter in a call like f(x,y) is easy to report, while
with curried notation, "f x y" is still valid, yet results in a type
other than what you were expecting, moving the error up the AST to a
less obvious place.
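Tim's point about deferred errors under currying can be sketched in Python, emulating curried functions with nested closures (an illustration of the failure mode, not of how Haskell implementations actually report it):

```python
# Uncurried: an arity error is caught right at the call site.
def add3(x, y, z):
    return x + y + z

try:
    add3(1, 2)                    # missing third argument
except TypeError:
    pass                          # reported immediately, here

# Curried (emulated with closures): the same mistake is *not* an
# error -- it just yields a function instead of a number, and the
# failure surfaces later, wherever the value is finally used.
def add3c(x):
    return lambda y: lambda z: x + y + z

result = add3c(1)(2)              # forgot the (3) -- no error here
assert callable(result)           # a function, not the number 4
assert add3c(1)(2)(3) == 6
```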

I think #9 is inconsistent with #1.

In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).
Jul 18 '05 #51
On Mon, 20 Oct 2003 13:52:14 -0700, Tim Sweeney wrote:
- "f x y" is unique to the Haskell and LISP families of languages, and
implies that most library functions are curried.


No, Lisp doesn't curry. It really writes "(f x y)", which is different
from "((f x) y)" (which is actually Scheme, not Lisp).

In fact the syntax "f x y" without mandatory parens fits non-lispish
non-curried syntaxes too. The space doesn't have to be left- or
right-associative; it just binds all arguments at once, and this
expression is different both from "f (x y)" and "(f x) y".

The only glitch is that you have to express application to 0 arguments
somehow. If you use "f()", you can't use "()" as an expression (for
empty tuple for example). But when you accept it, it works. It's my
favorite function application syntax.

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #52
Pascal Bourguignon:
We all agree that it would be better to have a perfect world and
perfect, bug-free, software. But since that's not the case, I'm
saying that instead of having software that behaves like simple unix C
tools, where as soon as there is an unexpected situation, it calls
perror() and exit(), it would be better to have smarter software that
can try and handle UNEXPECTED error situations, including its own
bugs. I would feel safer in an AI rocket.


Since it was written in Ada and not C, and since it properly raised
an exception at that point (as originally designed), which wasn't
caught at a recoverable point, ending up in the default "better blow
up than kill people" handler ... what would your AI rocket have
done with that exception? How does it decide that an UNEXPECTED
error situation can be recovered? How would you implement it?
How would you test it? (Note that the above software wasn't
tested under realistic conditions; I assume in part because of cost.)

I agree it would be better to have software which can do that.
I have no good idea of how that's done. (And bear in mind that
my XEmacs session dies about once a year, eg, once when NFS
was acting flaky underneath it and a couple times because it
couldn't handle something X threw at it. ;)

The best examples of resilient architectures I've seen come from
genetic algorithms and other sorts of feedback training; eg,
subsumptive architectures for robotics and evolvable hardware.
There was a great article in CACM on programming an FPGA
via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
I recall) but pointed out the hard part about this approach is
that it's hard to understand, and the result used various defects
on the chip (part of the circuit wasn't used but the chip wouldn't
work without it) which makes the result harder to mass produce.

Andrew
da***@dalkescientific.com
Jul 18 '05 #53
On 20 Oct 2003 22:08:30 +0200, Pascal Bourguignon
<sp**@thalassa.informatimago.com> wrote:
AFAIK, while this parameter was out of range, there was no instability
and the rocket was not uncontrolable.
That's perfectly true, but also perfectly irrelevant. When your
carefully designed software has just told you that your rocket, which,
you may recall, is traveling at several thousand metres per second, has
just entered a "can't possibly happen" state, you don't exactly have a
lot of time in which to analyze all of the conflicting information and
decide which to trust and which not to trust. Whether that sort of
decision-making is done by engineers on the ground or by human pilots or
by some as yet undesigned intelligent flight control system, the answer
is the same: Do the safe thing first, and then try to figure out what
happened.

All well-posed problems have boundary conditions, and the solutions to
those problems are bounded as well. No matter what the problem or its
means of solution, a boundary is there, and if you somehow cross that
boundary, you're toast. In particular, the difficulty with AI systems is
that while they can certainly enlarge the boundary, they also tend to
make it fuzzier and less predictable, which means that testing becomes
much less reliable. There are numerous examples where human operators
have done the "sensible" thing, with catastrophic consequences.
My point.
Well, actually, no. I assure you that my point is very different from
yours.
This "can't possibly happen" failure did happen, so clearly it was not
a "can't possibly happen" physically, which means that the problem was
with the software.
No, it still was a "can't possibly happen" scenario, from the point of
view of the designed solution. And there was nothing wrong with the
software. The difficulty arose because the solution for one problem was
applied to a different problem (i.e., the boundary was crossed).
it would be better to have smarter software that can try and handle
UNEXPECTED error situations
I think you're failing to grasp the enormity of the concept of "can't
possibly happen." There's a big difference between merely "unexpected"
and "can't possibly happen." "Unexpected" most often means that you
haven't sufficiently analyzed the situation. "Can't possibly happen," on
the other hand, means that you've analyzed the situation and determined
that the scenario is outside the realm of physical or logical
possibility. There is simply no meaningful means of recovery from a
"can't possibly happen" scenario. No matter how smart your software is,
there will be "can't possibly happen" scenarios outside the boundary,
and your software is going to have to shut down.
I would feel safer in an AI rocket.


What frightens me most is that I know that there are engineers working
on safety-critical systems that feel the same way. By all means, make
your flight control system as sophisticated and intelligent as you want,
but don't forget to include a simple, reliable, dumber-than-dirt
ejection system that "can't possibly fail" when the "can't possibly
happen" scenario happens.

Let me try to summarize the philosophical differences here: First of
all, I wholeheartedly agree that a more sophisticated software system
_may_ have prevented the destruction of the rocket. Even so, I think the
likelihood of that is rather small. (For some insight into why I think
so, you might want to take a look at Henry Petroski's _To Engineer is
Human_.) Where we differ is how much impact we believe that more
sophisticated software would have on the problem. I get the impression
that you believe that an AI-based system would drastically reduce
(perhaps even eliminate?) the "can't possibly happen" scenario. I, on
the other hand, believe that even the most sophisticated system enlarges
the boundary of the solution space by only a very small amount--the area
occupied by "can't possibly happen" scenarios remains far greater than
that occupied by "software works correctly and saves the rocket"
scenarios.

-Steve

Jul 18 '05 #54
On Mon, Oct 20, 2003 at 01:52:14PM -0700, Tim Sweeney wrote:
1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]

#1 is a matter of opinion, but in general:

- f(x,y) is the standard set by mathematical notation and all the
mainstream programming language families, and is library neutral:
calling a curried function is f(x)(y), while calling an uncurried
function is f(x,y).


And lambda notation is: \xy.yx or something like that. Math notation is
rather ad-hoc, designed for shorthand scribbling on paper, and in
general a bad idea to imitate for programming languages which are
written on the computer in an ASCII editor (which is one thing which
bothers me about ML and Haskell).
- "f x y" is unique to the Haskell and LISP families of languages, and
implies that most library functions are curried. Otherwise you have a
weird asymmetry between curried calls "f x y" and uncurried calls
which translate back to "f(x,y)".
Here's an "aha" moment for you:

In Haskell and ML, the two biggest languages with built-in syntactic
support for currying, there is also a datatype called a tuple (which is
a record with positional fields). All functions, in fact, only take a
single argument. The trick is that the syntax for tuples and the syntax
for currying combine to form the syntax for function calling:

f (x, y, z) ==> calling f with a tuple (x, y, z)
f x (y, z) ==> calling f with x, and then calling the result with (y, z).

This, I think, is a win for a functional language. However, in a
not-so-functionally-oriented language such as Lisp, this gets in the way
of flexible parameter-list parsing, and doesn't provide that much value.
In Lisp, a form's meaning is determined by its first element, hence (f x
y) has a meaning determined by F (whether it is a macro, or functionally
bound), and Lisp permits such things as "optional", "keyword" (a.k.a. by
name) arguments, and ways to obtain the arguments as a list.

"f x y", to Lisp, is just three separate forms (all symbols).
Widespread use of currying can lead
to weird error messages when calling functions of many parameters: a
missing third parameter in a call like f(x,y) is easy to report, while
with curried notation, "f x y" is still valid, yet results in a type
other than what you were expecting, moving the error up the AST to a
less obvious place.
Nah, it should still be able to report the line number correctly.
Though I freely admit that the error messages spat out of compilers like
SML/NJ are not so wonderful.
I think #9 is inconsistent with #1.
I think that if the parser recognizes that it is directly within a [ ]
form, it can figure out that these are not function calls but rather
elements, though it would require that function calls be wrapped in (
)'s now. And the grammar would be made much more complicated I think.

Personally, I prefer (list a (b c d) e f).
In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).


Hmm, rather curious paper. I never really thought of "f x" using
whitespace as an operator--it's a delimiter in the strict sense. The
grammar of ML and Haskell define that consecutive expressions form a
function application. Lisp certainly uses whitespace as a simple
delimiter. I'm not a big fan of required commas because it gets
annoying when you are editing large tables or function calls with many
parameters. The behavior of Emacs's C-M-t or M-t is not terribly good
with extraneous characters like those, though it does try.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #55
> > In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).
Hmm, rather curious paper. I never really thought of "f x" using
whitespace as an operator--it's a delimiter in the strict sense. The
grammar of ML and Haskell define that consecutive expressions form a
function application. Lisp certainly uses whitespace as a simple
delimiter...


Did you read the cited paper *all the way to the end*?

-Mike
Jul 18 '05 #56
"Andrew Dalke" <ad****@mindspring.com> writes:
Pascal Bourguignon:
We all agree that it would be better to have a perfect world and
perfect, bug-free, software. But since that's not the case, I'm
saying that instead of having software that behaves like simple unix C
tools, where as soon as there is an unexpected situation, it calls
perror() and exit(), it would be better to have smarter software that
can try and handle UNEXPECTED error situations, including its own
bugs. I would feel safer in an AI rocket.
Since it was written in Ada and not C, and since it properly raised
an exception at that point (as originally designed), which wasn't
caught at a recoverable point, ending up in the default "better blow
up than kill people" handler ... what would your AI rocket have
done with that exception? How does it decide that an UNEXPECTED
error situation can be recovered?


By having a view at the big picture!

The blow up action would be activated only when the big picture shows
that the AI has no control of the rocket and that it is going down.

How would you implement it?
Like any AI.
How would you test it? (Note that the above software wasn't
tested under realistic conditions; I assume in part because of cost.)
In a simulator. In any case, the point is to have a software that is
able to handle even unexpected failures.

I agree it would be better to have software which can do that.
I have no good idea of how that's done. (And bear in mind that
my XEmacs session dies about once a year, eg, once when NFS
was acting flaky underneath it and a couple times because it
couldn't handle something X threw at it. ;)
XEmacs is not AI.
The best examples of resilient architectures I've seen come from
genetic algorithms and other sorts of feedback training; eg,
subsumptive architectures for robotics and evolvable hardware.
There was a great article in CACM on programming an FPGA
via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
I recall) but pointed out the hard part about this approach is
that it's hard to understand, and the result used various defects
on the chip (part of the circuit wasn't used but the chip wouldn't
work without it) which makes the result harder to mass produce.

Andrew
da***@dalkescientific.com


In any case, you're right, the main problem may be that it was
specified to blow up when an unhandled exception was raised...

--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war? Trust US presidents :-(
Jul 18 '05 #57
Me:
How would you test it? (Note that the above software wasn't
tested under realistic conditions; I assume in part because of cost.)
Pascal Bourguignon:
In a simulator. In any case, the point is to have a software that is
able to handle even unexpected failures.
Like I said, the existing code was not tested in a simulator. Why
do you think some AI code *would* be tested for this same case?
(Actually, I believe that an AI would need to be trained in a
simulator, just like humans, but that it would require so much
testing as to preclude its use, for now, in rocket control systems.)

Nor have you given any sort of guideline on how to implement
this sort of AI in the first place. Without it, you've just restated
the dream of many people over the last few centuries. It's a
dream I would like to see happen, which is why I agreed with you.
couldn't handle something X threw at it. ;)

XEmacs is not AI.


Yup, which is why the smiley is there. You said that C was
not the language to use (cf your perror/exit comment) and implied
that Ada wasn't either, so I assumed you had a more resilient
programming language in mind. My response was to point
out that Emacs Lisp also crashes (rarely) given unexpected
errors and so imply that Lisp is not the answer.

Truly I believe that programming languages as we know
them are not the (direct) solution, hence my pointers to
evolvable hardware and similar techniques.

Even then, we still have a long way to go before they
can be used to control a rocket. They require a lot of
training (just like people) and software simulators just
won't cut it. The first "AI"s will replace those things
we find simple and commonplace[*] (because our brain
evolved to handle it), and not hard and rare.

Andrew
da***@dalkescientific.com

[*] In thinking of some examples, I remembered a passage in
one of Cordwainer Smith's stories. In them, dogs, cats,
eagles, cows, and many other animals were artificially
endowed with intelligence and a human-like shape.
Turtles were bred for tasks which required long patience.
For example, one turtle was assigned the task of standing
by a door in case there was trouble, which he did for
100 years, without complaint.
Jul 18 '05 #58
Andrew Dalke fed this fish to the penguins on Monday 20 October 2003
21:41 pm:

For example, one turtle was assigned the task of standing
by a door in case there was trouble, which he did for
100 years, without complaint.
I do hope he was allowed time-out for the occasional lettuce leaf or
other veggies... <G>

--
wl*****@ix.netcom.com  |  Wulfraed  Dennis Lee Bieber  KD6MOG
wu******@dm.net        |  Bestiaria Support Staff
Bestiaria Home Page: http://www.beastie.dm.net/
Home Page: http://www.dm.net/~wulfraed/


Jul 18 '05 #59
On Mon, Oct 20, 2003 at 07:27:49PM -0700, Michael Geary wrote:
In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).

Hmm, rather curious paper. I never really thought of "f x" using
whitespace as an operator--it's a delimiter in the strict sense. The
grammar of ML and Haskell define that consecutive expressions form a
function application. Lisp certainly uses whitespace as a simple
delimiter...


Did you read the cited paper *all the way to the end*?


Why bother? It says "April 1" in the Abstract, and got boring about 2
paragraphs later. I should have scare-quoted "operator" above, or
rather the lack of one, which is interpreted as meaning function
application.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #60
Alex Martelli <al***@aleax.it> wrote in
news:Ow*******************@news1.tin.it:
Yes -- which is exactly why many non-programmers would prefer the
parentheses-less notation -- with more obvious names of course;-).
E.g.:
emitwarning URGENT "meltdown imminent!!!"
DOES look nicer to non-programmers than
emitwarning(URGENT, "meltdown imminent!!!")

Indeed, such languages as Visual Basic and Ruby do allow calling
without parentheses, no doubt because of this "nice look" thing.


I know we are agreed that Visual Basic is fundamentally broken, but it
might be worth pointing out the massive trap that it provides for
programmers in the subtle difference between:

someProcedure x

and

someProcedure(x)

and

call someProcedure(x)

If 'someProcedure' is a procedure taking a single reference parameter, and
modifying that parameter, then the first and third forms will call the
procedure and modify 'x'. The second form on the other hand will call the
procedure and without any warning or error will simply discard the
modifications leaving 'x' unchanged.

--
Duncan Booth du****@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?
Jul 18 '05 #61
Duncan Booth wrote:
...
Indeed, such languages as Visual Basic and Ruby do allow calling
without parentheses, no doubt because of this "nice look" thing.


I know we are agreed that Visual Basic is fundamentally broken, but it
might be worth pointing out the massive trap that it provides for


I'm not sure, but I think that's one of the many VB details changed
(mostly for the better, but still, _massive_ incompatibility) in the
current version (VB7 aka VB.NET) wrt older ones (VB6, VBA, etc).
Alex

Jul 18 '05 #62
Pascal Bourguignon wrote:
AFAIK, while this parameter was out of range, there was no
instability and the rocket was not uncontrolable.
Actually, the rocket had started correcting its orientation according to
the bogus data, which resulted in uncontrollable turning. The rocket
would have broken into parts in an uncontrollable manner, so it was
blown up.
(The human operator decided to press the emergency self-destruct button
seconds before the control software would have initiated self destruct.)
My point. This "can't possibly happen" failure did happen, so
clearly it was not a "can't possibly happen" physically, which means
that the problem was with the software. We know it, but what I'm
saying is that a smarter software could have deduced it on the fly.
No. The smartest software will not save you from human error. It was a
specification error.
The only way to detect this error (apart from more testing) would have
been to model the physics of the rocket, in software, and either verify
the flight control software against the rocket model or to test run the
whole thing in software. (I guess neither of these options would have
been cheaper than the simple test runs that were deliberately omitted,
probably on the grounds of "we /know/ it works, it worked in the Ariane 4".)
We all agree that it would be better to have a perfect world
and perfect, bug-free, software. But since that's not the case,
I'm saying that instead of having software that behaves like simple
unix C tools, where as soon as there is an unexpected situation,
it calls perror() and exit(), it would be better to have smarter
software that can try and handle UNEXPECTED error situations,
including its own bugs. I would feel safer in an AI rocket.


This all may be true, but you're solving problems that didn't cause the
Ariane crash.

Regards,
Jo

Jul 18 '05 #63
Alex Martelli <al***@aleax.it> wrote in
news:WP***********************@news2.tin.it:
Duncan Booth wrote:
...
Indeed, such languages as Visual Basic and Ruby do allow calling
without parentheses, no doubt because of this "nice look" thing.


I know we are agreed that Visual Basic is fundamentally broken, but it
might be worth pointing out the massive trap that it provides for


I'm not sure, but I think that's one of the many VB details changed
(mostly for the better, but still, _massive_ incompatibility) in the
current version (VB7 aka VB.NET) wrt older ones (VB6, VBA, etc).

Yes, I just checked and VB7 now requires parentheses on all argument lists,
so:

someProcedure x

is now illegal.

someProcedure(x)

and

call someProcedure(x)

now do the same thing. The Visual Studio.Net editor will automatically
'correct' the first form into the second (unless you tell it not to).

Of course, while it is less likely to cause a major headache, the confusing
behaviour is still present, just pushed down a level. These are both legal,
but the second one ignores changes to x. At least you are less likely to
type it accidentally.

someProcedure(x)

someProcedure((x))

--
Duncan Booth du****@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?
Jul 18 '05 #64
Tim Sweeney wrote:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]
Agreed with your analysis, except for these two items.

#1 is a matter of opinion, but in general:

- f(x,y) is the standard set by mathematical notation and all the
mainstream programming language families, and is library neutral:
calling a curried function is f(x)(y), while calling an uncurried
function is f(x,y).


Well, in most functional languages, curried functions are the standard.
This has some syntactic advantages, in areas that go beyond mathematical
tradition. (Since each branch of mathematics has its own traditions,
it's probably possible to find a branch where the functional programming
way of writing functions is indeed tradition *g*)
- "f x y" is unique to the Haskell and LISP families of languages, and
implies that most library functions are curried.
No, Lisp languages require parentheses around the call, i.e.
(f x y)
Lisp does share the trait that it doesn't need commas.
Otherwise you have a
weird asymmetry between curried calls "f x y" and uncurried calls
which translate back to "f(x,y)".
It's not an asymmetry. "f x y" is a function of two parameters.
"f (x, y)" is a function of a single parameter, which is an ordered pair.
In most cases such a difference is irrelevant, but there are cases where
it isn't.
Widespread use of currying can lead
to weird error messages when calling functions of many parameters: a
missing third parameter in a call like f(x,y) is easy to report, while
with curried notation, "f x y" is still valid, yet results in a type
other than what you were expecting, moving the error up the AST to a
less obvious place.
That's right.
On the other hand, it makes it easy to write code that just fills the
first parameter of a function, and returns the result. Such code is so
commonplace that having weird error messages is considered a small price
to pay.
Actually, writing functional code is more about sticking together
functions than actually calling them. With such use, having to write
code like
f (x, ...)
instead of
f x
will gain in precision, but it will clutter up the code so much that I'd
expect the gain in readability to be little, nonexistent or even negative.
It might be interesting to transform real-life code to a more standard
syntax and see whether my expectation indeed holds.
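Joachim's fill-the-first-parameter idiom has a rough Python counterpart in functools.partial (which postdates this thread, arriving in Python 2.5); a sketch:

```python
from functools import partial

def log(level, message):
    return "[%s] %s" % (level, message)

# Fill in the first parameter and get a new function back --
# what currying gives Haskell for free takes an explicit helper here:
warn = partial(log, "WARN")
assert warn("meltdown imminent") == "[WARN] meltdown imminent"
```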
In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).


That was an April Fool's joke. A particularly clever one: the paper
starts by laying a marginally reasonable groundwork, only to advance
into realms of absurdity later on.
It would be unreasonable to make whitespace an operator in C++. This
doesn't mean that a language with a syntax designed for whitespace
cannot be reasonable, and in fact some languages do that, with good
effect. Reading Haskell code is like a fresh breeze, since you don't
have to mentally filter out all that syntactic noise.
The downside is that it's easy to get some detail wrong. One example is
a decision (was that Python?) to equate a tab with eight blanks, which
tends to mess up syntactic structure when editing the code with
over-eager editors. There are some other lessons to learn - but then,
whitespace-as-syntactic-element is a relatively new concept, and people
are still playing with it and trying out alternatives. The idea in
itself is useful, its incarnations aren't perfect (yet).

Regards,
Jo

Jul 18 '05 #65
Alex Martelli <al***@aleax.it> writes:
[..] the EXISTING call to foo() will NOT be "affected" by the "del
foo" that happens right in the middle of it, since there is no
further attempt to look up the name "foo" in the rest of that call's
progress. [..]
What this and my other investigations amount to, is that in Python a
"name" is somewhat like a lisp symbol [1]. In particluar, it is an
object that has a pre-computed hash-key, which is why
hash-table/dictionary lookups are reasonably efficient. My worry was
that the actual string hash-key would have to be computed at every
function call, which I believe would slow down the process some 10-100
times. I'm happy to hear it is not so.

[1] One major difference being that Python names are not first-class
objects. This is a big mistake wrt. to supporting interactive
programming in my personal opinion.
As for your worries elsewhere expressed that name lookup may impose
excessive overhead, in Python we like to MEASURE performance issues
rather than just reason about them "abstractly"; which is why Python
comes with a handy timeit.py script to time a code snippet
accurately. [...]


Thank you for the detailed information. Still, I'm sure you will agree
that sometimes reasoning about things can provide insight with
predictive powers that you cannot achieve by mere experimentation.
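Alex's measure-don't-guess advice is easy to follow here; a sketch with the timeit module comparing a builtin-name lookup against a local-name lookup (absolute numbers vary by machine and Python version):

```python
import timeit

# Measure rather than reason abstractly: compare looking up a
# builtin name against looking up a local name.  timeit runs the
# setup inside its timing function, so `f` below is a *local*.
builtin_lookup = timeit.timeit("len", number=200000)
local_lookup = timeit.timeit("f", setup="f = len", number=200000)

# The absolute numbers are tiny either way; the point is only that
# the difference is measurable at all.
print("builtin:", builtin_lookup, "local:", local_lookup)
```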

--
Frode Vatvedt Fjeld
Jul 18 '05 #66

"Frode Vatvedt Fjeld" <fr****@cs.uit.no> wrote in message
news:2h************@vserver.cs.uit.no...
What this and my other investigations amount to, is that in Python a
"name" is somewhat like a lisp symbol [1].
This is true in that names are bound to objects rather than
representing a block of memory.
In particular, it is an object that has a pre-computed hash-key,


NO. There is no name type. 'Name' is a grammatical category, with
particular syntax rules, for Python code, just like 'expression',
'statement' and many others.

A name *may* be represented at runtime as a string, as CPython
*sometimes* does. The implementation *may*, for efficiency, give
strings a hidden hash value attribute, which CPython does.

For even faster runtime 'name lookup' an implementation may represent
names as slot numbers (indexes) for a hidden, non-Python array. CPython
does this (with C pointer arrays) for function locals whenever the
list of locals is fixed at compile time, which is usually. (To
prevent this optimization, add to a function body something like 'from
mymod import *', if still allowed, that makes the number of locals
unknowable until runtime.)

To learn about generated bytecodes, read the dis module docs and use
dis.dis.
For example:

>>> import dis
>>> def f(a):
...     b = a + 1
...
>>> dis.dis(f)
  0 SET_LINENO               1
  3 SET_LINENO               2
  6 LOAD_FAST                0 (a)
  9 LOAD_CONST               1 (1)
 12 BINARY_ADD
 13 STORE_FAST               1 (b)
 16 LOAD_CONST               0 (None)
 19 RETURN_VALUE
This says: load (onto stack) first pointer in local_vars array and
second pointer in local-constants array, add referenced values and
replace operand pointers with pointer to result, store that result
pointer in the second slot of local_vars, load first constant pointer
(always to None), and return.
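The difference between the fast slot-indexed lookup and the ordinary dictionary lookup by name can be seen by disassembling two small functions; a sketch (exact opcode names vary somewhat across CPython versions):

```python
import dis

g = 1

def use_global():
    return g        # compiled to LOAD_GLOBAL: a lookup by name

def use_local():
    g2 = 1
    return g2       # compiled to LOAD_FAST: an indexed array access

# dis.Bytecode lets us inspect the opcode names programmatically.
ops_global = [i.opname for i in dis.Bytecode(use_global)]
ops_local = [i.opname for i in dis.Bytecode(use_local)]
print(ops_global)
print(ops_local)
```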

Who knows what *we* do when we read, parse, and possibly execute
Python code.

Terry J. Reedy
Jul 18 '05 #67
"Terry Reedy" <tj*****@udel.edu> writes:
[..] For even faster runtime 'name lookup' an implementation may
represent names as slot numbers (indexes) for a hidden, non-Python
array. CPython does this (with C pointer arrays) for function
locals whenever the list of locals is fixed at compile time, which
is usually. (To prevent this optimization, add to a function body
something like 'from mymod import *', if still allowed, that makes
the number of locals unknowable until runtime.) [..]


This certainly does not ease my worries over Pythons abilities with
respect to interactivity and dynamism.

--
Frode Vatvedt Fjeld
Jul 18 '05 #68
Frode Vatvedt Fjeld wrote:
...
excessive overhead, in Python we like to MEASURE performance issues
rather than just reason about them "abstractly"; which is why Python
comes with a handy timeit.py script to time a code snippet
accurately. [...]


Thank you for the detailed information. Still, I'm sure you will agree
that sometimes reasoning about things can provide insight with
predictive powers that you cannot achieve by mere experimentation.


A few centuries ago, a compatriot of mine was threatened with
torture, and backed off, because he had dared state that "all
science comes from experience" -- he refuted the "reasoning
about things" by MEASURING (and fudging the numbers, if the
chi-square tests on his reports of the inclined-plane
experiments are right -- but then, Italians _are_ notoriously
untrustworthy, even though sometimes geniuses;-).

These days, I'd hope not to be threatened with torture if I assert:
"reasoning" is cheap, that's its advantage -- it can lead you to
advance predictive hypotheses much faster than mere "data
mining" through masses of data might yield them. But those
hypotheses are very dubious until you've MEASURED what they
predict. If you don't (or can't) measure, you don't _really KNOW_;
you just _OPINE_ (reasonably or not, justifiably or not, etc). One
independently repeatable measurement trumps a thousand clever
reasonings, when that measurement gives numbers contradicting
the reasonings' predictions -- that one number sends you back to
the drawing board.

Or, at least, that's how we humble engineers see the world...
Alex

Jul 18 '05 #69
"Andrew Dalke" <ad****@mindspring.com> writes:
[...]
Nor have you given any sort of guideline on how to implement
this sort of AI in the first place. Without it, you've just restated
the dream of many people over the last few centuries. It's a
dream I would like to see happen, which is why I agreed with you.
[...]
Truly I believe that programming languages as we know
them are not the (direct) solution, hence my pointers to
evolvable hardware and similar techniques.


You're right, I did not answer. I think that what is missing in
classic software, and that ought to be present in AI software, is some
introspective control: having a process checking that the other
processes are live and progressing, and able to act to correct any
infinite loop, breakdown, or deadlock. Some hardware may help in
controlling this controlling software, like on the latest Macintosh:
they automatically restart when the system is hung. And purely at the
hardware level, for a real-life system, you can't rely on only one
processor.

--
__Pascal_Bourguignon__
http://www.informatimago.com/
Jul 18 '05 #70

ti*@epicgames.com (Tim Sweeney) writes:
In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).


The \\ comment successor is GREAT!

--
__Pascal_Bourguignon__
http://www.informatimago.com/
Jul 18 '05 #71
Andrew Dalke <ad****@mindspring.com> wrote:
The best examples of resilent architectures I've seen come from
genetic algorithms and other sorts of feedback training; eg,
subsumptive architectures for robotics and evolvable hardware.
There was a great article in CACM on programming an FPGA
via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
I recall) but pointed out the hard part about this approach is
that it's hard to understand, and the result used various defects
on the chip (part of the circuit wasn't used but the chip wouldn't
work without it) which makes the result harder to mass produce.


something along these lines?
http://www.cogs.susx.ac.uk/users/adr...m99/node3.html
John

Jul 18 '05 #72
"Jarek Zgoda" <jz****@gazeta.usun.pl> wrote in message news:bm**********@nemesis.news.tpi.pl...
mi*****@ziplip.com <mi*****@ziplip.com> writes:
8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)


Yes. By deleting a name from namespace. You better read some tutorial,
this will save you some time.
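The deletion Jarek describes is spelled `del`; a minimal sketch:

```python
def square(x):
    return x * x

assert square(4) == 16

del square          # unbind the name; the function object becomes garbage

try:
    square(4)
    gone = False
except NameError:   # the name no longer exists in the namespace
    gone = True
```

After `del`, any further use of the name raises NameError, just as if it had never been defined.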


Forgive my ignorance, but why would one want to
delete a function name? What does it buy you?
I can see a use for interactive redefinition of a function
name, but deleting?
Marshall
Jul 18 '05 #73
Alex Martelli wrote:
Yes -- which is exactly why many non-programmers would prefer the
parentheses-less notation -- with more obvious names of course;-).
E.g.:
emitwarning URGENT "meltdown imminent!!!"
DOES look nicer to non-programmers than
emitwarning(URGENT, "meltdown imminent!!!")


So let's write:

raise URGENT, "meltdown imminent!!!"

Gerrit.

--
182. If a father devote his daughter as a wife of Mardi of Babylon (as
in 181), and give her no present, nor a deed; if then her father die, then
shall she receive one-third of her portion as a child of her father's
house from her brothers, but Marduk may leave her estate to whomsoever she
wishes.
-- 1780 BC, Hammurabi, Code of Law
--
Asperger Syndrome - a personal approach:
http://people.nl.linux.org/~gerrit/
Stand up against this cabinet:
http://www.sp.nl/

Jul 18 '05 #74
"Scott McIntire" <mc******************@comcast.net> wrote in message news:MoEkb.821534$YN5.832338@sccrnsc01...
It seems to me that the Agency would have fared better if they just used
Lisp - which has bignums - and relied more on regression suites and less on
the belief that static type checking systems would save the day.


I find that an odd conclusion. Given that the cost of bugs is so high
(especially in the cited case) I don't see a good reason for discarding
*anything* that leads to better correctness. Yes, bignums is a good
idea: overflow bugs in this day and age are as bad as C-style buffer
overruns. Why work with a language that allows them when there
are languages that don't?

But why should more regression testing mean less static type checking?
Both are useful. Both catch bugs. Why ditch one for the other?
Marshall
Jul 18 '05 #75
Marshall Spight wrote:
"Scott McIntire" <mc******************@comcast.net> wrote in message news:MoEkb.821534$YN5.832338@sccrnsc01...
It seems to me that the Agency would have fared better if they just used
Lisp - which has bignums - and relied more on regression suites and less on
the belief that static type checking systems would save the day.

I find that an odd conclusion. Given that the cost of bugs is so high
(especially in the cited case) I don't see a good reason for discarding
*anything* that leads to better correctness. Yes, bignums is a good
idea: overflow bugs in this day and age are as bad as C-style buffer
overruns. Why work with a language that allows them when there
are languages that don't?

But why should more regression testing mean less static type checking?
Both are useful. Both catch bugs. Why ditch one for the other?


...because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.

This means that eventually you might need to work around language
restrictions, and this introduces new potential sources for bugs.

(Now you could argue that current sophisticated type systems cover 90%
of all cases and that this is good enough, but then I would ask you for
empirical studies that back this claim. ;)

I think soft typing is a good compromise, because it is a mere add-on to
an otherwise dynamically typed language, and it allows programmers to
override the decisions of the static type system when they know better.
Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #76
Pascal Bourguignon <sp**@thalassa.informatimago.com> wrote:
You're right, I did not answer. I think that what is missing in
classic software, and that ought to be present in AI software, is some
introspective control: having a process checking that the other
processes are live and progressing, and able to act to correct any
infinite loop, breakdown, or deadlock.
so assume this AI software was running on Ariane 5, and the same
condition occurs. based on the previously referenced design
assumptions, it is told that there's been a hardware failure, and that
numerical calculations can no longer be trusted. how does it cope
with this?
Some hardware may help in
controlling this controlling software, like on the latest Macintosh:
they automatically restart when the system is hung.
in this case, a restart would cause the same calculations to occur,
and the same failure to be reported.
And purely at the
hardware level, for a real life system, you can't rely on only one
processor.


absolutely right. though, in this case, this wouldn't have helped either.

the fatal error was a process error, and it occurred long before launch.

----
Garry Hodgson, Technology Consultant, AT&T Labs

Be happy for this moment.
This moment is your life.

Jul 18 '05 #77
In article <bn**********@f1node01.rhrz.uni-bonn.de>, Pascal Costanza wrote:
Marshall Spight wrote:
But why should more regression testing mean less static type checking?
Both are useful. Both catch bugs. Why ditch one for the other?
...because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.


What do you mean by "reducing the expressive power of the language"? There
are many general purpose statically typed programming languages that are
Turing complete, so it's not a theoretical consideration, as you allude.
This means that eventually you might need to work around language
restrictions, and this introduces new potential sources for bugs.

(Now you could argue that current sophisticated type systems cover 90%
of all cases and that this is good enough, but then I would ask you for
empirical studies that back this claim. ;)
Empirically, i write a lot of O'Caml code, and i never have to write
something in a non-intuitive manner to work around the type system. On the
contrary, every type error the compiler catches in my code indicates code
that *doesn't make sense*. I'd hate to imagine code that doesn't make
sense passing into regression testing. What if i forget to test a
non-sensical condition?

On the flip-side of the coin, i've also written large chunks of Scheme
code, and I *did* find myself making lots of nonsense errors that weren't
caught until run time, which significantly increased development time
and difficulty.

Furthermore, thinking about types during the development process keeps me
honest: i'm much more likely to write code that works if i've spent some
time understanding the problem and the types involved. This sort of
pre-development thinking helps to *eliminate* potential sources for bugs,
not introduce them. Even Scheme advocates encourage this (as in Essentials
of Programming Languages by Friedman, Wand, and Haynes).
I think soft typing is a good compromise, because it is a mere add-on to
an otherwise dynamically typed language, and it allows programmers to
override the decisions of the static type system when they know better.


When do programmers know better? An int is an int and a string is a
string, and nary the twain shall be treated the same. I would rather
``1 + "bar"'' signal an error at compile time than at run time.

Personally, i don't understand all this bally-hoo about "dynamic languages"
being the next great leap. Static typing is a luxury!

William
Jul 18 '05 #78
Pascal Costanza:
...because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.

This means that eventually you might need to work around language
restrictions, and this introduces new potential sources for bugs.


Given what I know of embedded systems, I can effectively
guarantee you that all the code on the rocket was proven
to halt in not only a finite amount of time but a fixed amount of
time.

So while what you say may be true for a general purpose
language, that appeal to the halting problem doesn't apply given
a hard real time constraint.

Andrew
da***@dalkescientific.com
Jul 18 '05 #79
Pascal Costanza <co******@web.de> writes:
...because static type systems work by reducing the expressive power
of a language.
It depends a whole lot on what you consider "expressive". In my book,
static type systems (at least some of them) work by increasing the
expressive power of the language because they let me express certain
intended invariants in a way that a compiler can check (and enforce!)
statically, thereby expediting the discovery of problems by shortening
the edit-compile-run-debug cycle.
(Now you could argue that current sophisticated type systems cover 90%
of all cases and that this is good enough, but then I would ask you
for empirical studies that back this claim. ;)


In my own experience they seem to cover at least 99%.

(And where are _your_ empirical studies which show that "working around
language restrictions increases the potential for bugs"?)

Matthias
Jul 18 '05 #80
William Lovas <wl****@force.stwing.upenn.edu> writes:
[...] Static typing is a luxury!


Very well put!
Jul 18 '05 #81
Garry Hodgson <ga***@sage.att.com> writes:
Pascal Bourguignon <sp**@thalassa.informatimago.com> wrote:
You're right, I did not answer. I think that what is missing in
classic software, and that ought to be present in AI software, is some
introspective control: having a process checking that the other
processes are live and progressing, and able to act to correct any
infinite loop, breakdown, or deadlock.


so assume this AI software was running on Ariane 5, and the same
condition occurs. based on the previously referenced design
assumptions, it is told that there's been a hardware failure, and that
numerical calculations can no longer be trusted. how does it cope
with this?


I just read yesterday an old paper by Sussman about how they designed
a Lisp on a chip, including the garbage collector and the eval
function. Strangely enough that did not include any ALU (only a test
for zero and an incrementer, for address scanning).

You can implement an eval without arithmetic, and you can implement a
theorem prover above it, still without arithmetic. You can still do a
great deal of thinking without any arithmetic...

Some hardware may help in
controlling this controlling software, like on the latest Macintosh:
they automatically restart when the system is hung.


in this case, a restart would cause the same calculations to occur,
and the same failure to be reported.


In this case, since the problem was not in the supposed AI controlling
agent, there would have been no restart.

And purely at the
hardware level, for a real life system, you can't rely on only one
processor.


absolutely right. though, in this case, this wouldn't have helped either.
the fatal error was a process error, and it occurred long before launch.


I think it would have been helped. For example, an architecture like
the Shuttle's where there are five computer differently programmed
would have helped, because at least one of the computers would not
have had the Ariane-4 module.

--
__Pascal_Bourguignon__
http://www.informatimago.com/
Jul 18 '05 #82
William Lovas wrote:
In article <bn**********@f1node01.rhrz.uni-bonn.de>, Pascal Costanza wrote:
Marshall Spight wrote:
But why should more regression testing mean less static type checking?
Both are useful. Both catch bugs. Why ditch one for the other?
...because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.


What do you mean by "reducing the expressive power of the language"? There
are many general purpose statically typed programming languages that are
Turing complete, so it's not a theoretical consideration, as you allude.


For example, static type systems are incompatible with dynamic
metaprogramming. This is objectively a reduction of expressive power,
because programs that don't allow for dynamic metaprogramming can't be
extended in certain ways at runtime, by definition.
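The kind of runtime extension Pascal means can at least be illustrated in Python, where a class can gain new behavior while the program runs and existing instances pick it up (a minimal sketch; the `Account` class is hypothetical):

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

a = Account(100)

# Define a new method and attach it to the class at runtime.
def deposit(self, amount):
    self.balance += amount

Account.deposit = deposit

# The instance created *before* the patch now supports the new method.
a.deposit(50)
print(a.balance)
```

A strictly static type system would have to know the full interface of `Account` at compile time, which is the restriction being pointed at here.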
This means that eventually you might need to work around language
restrictions, and this introduces new potential sources for bugs.

(Now you could argue that current sophisticated type systems cover 90%
of all cases and that this is good enough, but then I would ask you for
empirical studies that back this claim. ;)


Empirically, i write a lot of O'Caml code, and i never have to write
something in a non-intuitive manner to work around the type system. On the
contrary, every type error the compiler catches in my code indicates code
that *doesn't make sense*. I'd hate to imagine code that doesn't make
sense passing into regression testing. What if i forget to test a
non-sensical condition?


You need some testing discipline, which is supported well by unit
testing frameworks.
On the flip-side of the coin, i've also written large chunks of Scheme
code, and I *did* find myself making lots of nonsense errors that weren't
caught until run time, which significantly increased development time
and difficulty.

Furthermore, thinking about types during the development process keeps me
honest: i'm much more likely to write code that works if i've spent some
time understanding the problem and the types involved. This sort of
pre-development thinking helps to *eliminate* potential sources for bugs,
not introduce them. Even Scheme advocates encourage this (as in Essentials
of Programming Languages by Friedman, Wand, and Haynes).


Yes, thinking about a problem to understand it better occasionally helps
to write better code. This has nothing to do with static typing. This
could also be achieved by placing some other arbitrary restrictions on
your coding style.
I think soft typing is a good compromise, because it is a mere add-on to
an otherwise dynamically typed language, and it allows programmers to
override the decisions of the static type system when they know better.


When do programmers know better? An int is an int and a string is a
string, and nary the twain shall be treated the same. I would rather
``1 + "bar"'' signal an error at compile time than at run time.


Such code would easily be caught very soon in your unit tests.
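A sketch of the kind of unit test meant here, using Python's unittest module (the `increment` function under test is purely illustrative):

```python
import io
import unittest

def increment(n):
    # Hypothetical function under test; passing a string here is the
    # ``1 + "bar"`` class of error, which surfaces as TypeError at run time.
    return n + 1

class IncrementTest(unittest.TestCase):
    def test_numbers(self):
        self.assertEqual(increment(1), 2)

    def test_rejects_strings(self):
        self.assertRaises(TypeError, increment, "bar")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(IncrementTest)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```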
Pascal

Jul 18 '05 #83
Pascal Bourguignon wrote:
[...] For example, an architecture like
the Shuttle's where there are five computer differently programmed
would have helped, because at least one of the computers would not
have had the Ariane-4 module.


Even the Ariane team is working under budget constraints. Obviously, in
this case, the budget didn't allow a re-check of the SRI design wrt.
Ariane-5 specifications, much less programming the same software five(!)
times over.

Besides, programming the same software multiple times would have helped
regardless of whether you're doing it with an AI or traditionally. I
still don't see how AI could have helped prevent the Ariane-5 crash. As
far as I have seen, any advances in making chips or programs smarter
have consistently been offset by higher testing efforts: you still have
to formally specify what the system is supposed to do, and then test
against that specification.
Actually, AI wouldn't have helped in the least bit here: the
specification was wrong, so even an AI module, at whatever
sophistication level, wouldn't have worked.

The only difference is that AI might allow people to write higher-level
specifications. I.e. something like "the rocket must be stable" instead
of "the rocket must not deviate more than 12.4 degrees from the
vertical"... but even "the rocket must be stable" would have to be
broken down into much more technical terms, with leeway for much the
same design and specification errors as those that caused the Ariane-5
software to lose control.

Regards,
Jo

Jul 18 '05 #84
Andrew Dalke wrote:
Pascal Costanza:
...because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.

This means that eventually you might need to work around language
restrictions, and this introduces new potential sources for bugs.

Given what I know of embedded systems, I can effectively
guarantee you that all the code on the rocket was proven
to halt in not only a finite amount of time but a fixed amount of
time.


Yes, this is a useful restriction for a certain scenario. I don't have
anything against restrictions put on code, provided these restrictions
are justified.

Static type systems are claimed to generally improve your code. I don't
see that.
Pascal

Jul 18 '05 #85
Pascal Costanza <co******@web.de> writes:
Marshall Spight wrote:
But why should more regression testing mean less static type checking?
Both are useful. Both catch bugs. Why ditch one for the other?
...because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.


Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway.
They generally have some support for *optional* dynamic typing.

This is IMHO a good trade-off. Most of the time, you want static typing;
it helps in the design process, with documentation, error checking, and
efficiency. Sometimes you need a bit more flexibility than the
static type system allows, and then in those few cases, you can make use
of dynamic typing ("univ" in Mercury, "Dynamic" in ghc,
"System.Object" in C#, etc.). The need to do this is not uncommon
in languages like C# and Java that don't support parametric polymorphism,
but pretty rare in languages that do.
I think soft typing is a good compromise, because it is a mere add-on to
an otherwise dynamically typed language, and it allows programmers to
override the decisions of the static type system when they know better.


Soft typing systems give you dynamic typing unless you explicitly ask
for static typing. That is the wrong default, IMHO. It works much
better to add dynamic typing to a statically typed language than the
other way around.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #86
Matthias Blume wrote:
Pascal Costanza <co******@web.de> writes:

...because static type systems work by reducing the expressive power
of a language.

It depends a whole lot on what you consider "expressive". In my book,
static type systems (at least some of them) work by increasing the
expressive power of the language because they let me express certain
intended invariants in a way that a compiler can check (and enforce!)
statically, thereby expediting the discovery of problems by shortening
the edit-compile-run-debug cycle.


The set of programs that are useful but cannot be checked by a static
type system is by definition bigger than the set of useful programs that
can be statically checked. So dynamically typed languages allow me to
express more useful programs than statically typed languages.
(Now you could argue that current sophisticated type systems cover 90%
of all cases and that this is good enough, but then I would ask you
for empirical studies that back this claim. ;)


In my own experience they seem to cover at least 99%.


I don't question that. If this works well for you, keep it up. ;)
(And where are _your_ empirical studies which show that "working around
language restrictions increases the potential for bugs"?)


I don't need a study for that statement because it's a simple argument:
if the language doesn't allow me to express something in a direct way,
but requires me to write considerably more code then I have considerably
more opportunities for making mistakes.
Pascal

Jul 18 '05 #87
Pascal Costanza wrote:
...because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.
The final statement is correct, but you don't need to solve the halting
problem: it's enough to allow the specification of some easy-to-prove
properties, without hindering the programmer too much.

Most functional languages with a static type system don't require that
the programmer writes down the types, they are inferred from usage. And
the type checker will complain as soon as the usage of some data item is
inconsistent.
IOW if you write
a = b + "asdf"
the type checker will infer that both a and b are strings; however, if
you continue with
c = a + b + 3
it will report a type error because 3 and "asdf" don't have a common
supertype with a "+" operation.

It's the best of both worlds: no fuss with type declarations (which is
one of the less interesting things one spends time with) while getting
good static checking.
(Nothing is as good in practice as it sounds in theory, and type
inference is no exception. Interpreting type error messages requires
some getting used to - just like interpreting syntax error messages is a
bit of an art, leaving one confounded for a while until one "gets it".)
(Now you could argue that current sophisticated type systems cover 90%
of all cases and that this is good enough, but then I would ask you for
empirical studies that back this claim. ;)


My 100% subjective private study reveals not a single complaint about
over-restrictive type systems in comp.lang.functional in the last 12 months.

Regards,
Jo

Jul 18 '05 #88
Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:

Marshall Spight wrote:
But why should more regression testing mean less static type checking?
Both are useful. Both catch bugs. Why ditch one for the other?
...because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.

Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway.
They generally have some support for *optional* dynamic typing.

This is IMHO a good trade-off. Most of the time, you want static typing;
it helps in the design process, with documentation, error checking, and
efficiency.


+ Design process: There are clear indications that processes like
extreme programming work better than processes that require some kind of
specification stage. Dynamic typing works better with XP than static
typing because with dynamic typing you can write unit tests without
having the need to immediately write appropriate target code.

+ Documentation: Comments are usually better for handling documentation.
;) If you want your "comments" checked, you can add assertions.

+ Error checking: I can only guess what you mean by this. If you mean
something like Java's checked exceptions, there are clear signs that
this is a very bad feature.

+ Efficiency: As Paul Graham puts it, efficiency comes from profiling.
In order to achieve efficiency, you need to identify the bottle-necks of
your program. No amount of static checks can identify bottle-necks, you
have to actually run the program to determine them.
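The profile-then-optimize workflow described above looks roughly like this with Python's profiler (a minimal sketch; `slow_sum` is just a stand-in for a real bottleneck):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # A deliberately naive loop standing in for a hot spot.
    total = 0
    for i in range(n):
        total += i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100000)
profiler.disable()

# Render the top entries of the profile into a string report.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print(report)
```

Only after such a run do you know which functions are actually worth optimizing.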
Sometimes you need a bit more flexibility than the
static type system allows, and then in those few cases, you can make use
of dynamic typing ("univ" in Mercury, "Dynamic" in ghc,
"System.Object" in C#, etc.). The need to do this is not uncommon
in languages like C# and Java that don't support parametric polymorphism,
but pretty rare in languages that do.


I wouldn't count the use of java.lang.Object as a case of dynamic
typing. You need to explicitly cast objects of this type to some class
in order to make useful method calls. You only do this to satisfy the
static type system. (BTW, this is one of the sources for potential bugs
that you don't have in a decent dynamically typed language.)
I think soft typing is a good compromise, because it is a mere add-on to
an otherwise dynamically typed language, and it allows programmers to
override the decisions of the static type system when they know better.


Soft typing systems give you dynamic typing unless you explicitly ask
for static typing. That is the wrong default, IMHO. It works much
better to add dynamic typing to a statically typed language than the
other way around.


I don't think so.
Pascal

Jul 18 '05 #89
Joachim Durchholz wrote:
Most functional languages with a static type system don't require that
the programmer writes down the types, they are inferred from usage. And
the type checker will complain as soon as the usage of some data item is
inconsistent.
I know about type inference. The set of programs that can be checked
with type inference is still a subset of all useful programs.
My 100% subjective private study reveals not a single complaint about
over-restrictive type systems in comp.lang.functional in the last 12
months.


I am not surprised. :)
Pascal

Jul 18 '05 #90
Joachim Durchholz <jo***************@web.de> writes:
My 100% subjective private study reveals not a single complaint about
over-restrictive type systems in comp.lang.functional in the last 12 months.


While I tend to agree that such complaints are rare, such complaints also
tend to be language-specific, and thus get posted to language-specific
forums, e.g. the Haskell mailing list, the Clean mailing list, the OCaml
mailing list, etc., rather than to more general forums like
comp.lang.functional.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #91
On Thu, Oct 23, 2003 at 12:38:50AM +0000, Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:
Marshall Spight wrote:
But why should more regression testing mean less static type checking?
Both are useful. Both catch bugs. Why ditch one for the other?


...because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.


Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway.
They generally have some support for *optional* dynamic typing.

This is IMHO a good trade-off. Most of the time, you want static typing;
it helps in the design process, with documentation, error checking, and
efficiency. Sometimes you need a bit more flexibility than the
static type system allows, and then in those few cases, you can make use
of dynamic typing ("univ" in Mercury, "Dynamic" in ghc,
"System.Object" in C#, etc.). The need to do this is not uncommon
in languages like C# and Java that don't support parametric polymorphism,
but pretty rare in languages that do.


The trouble with these `dynamic' extensions is that they are `dynamic
type systems' from a statically typed viewpoint. A person who uses
truly dynamically typed languages would not consider them to be the
same thing.

In SML, for example, such an extension might be implemented using a sum
type, even using an `exn' type so that it can be extended in separate
places. The moment this system fails (where a true dynamic system
carries on) is when such a type is redefined: the new type is not
considered the same as the old one, due to generativity of type names,
and old code requires recompilation.
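Generativity has a rough analogue even in Python: rebinding a class name at
an interactive prompt creates a brand-new type object, so instances of the
old definition no longer satisfy isinstance checks against the new one. A
minimal sketch (the class name is invented for illustration):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

old = Point(1, 2)

# Redefine the class, as one might at an interactive prompt.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

new = Point(1, 2)

# The name "Point" now denotes a fresh type object; the instance
# built from the old definition no longer matches it.
print(isinstance(new, Point))   # True
print(isinstance(old, Point))   # False
```

This is the dynamic-language counterpart of the recompilation problem: old
values survive the redefinition, but code that type-checks against the new
name treats them as foreign.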

I'm told Haskell has extensions that will work around even this, but the
last time I tried to play with those, it failed miserably because
Haskell doesn't really support an interactive REPL so there was no way
to test it. (Maybe this was ghc's fault?)

As for Java/C#, downcasting is more of an example of static type systems
getting in the way of OOP rather than of a dynamic type system. (It's
because those languages are the result of an unholy union between the
totally dynamic Smalltalk and the awkwardly static C++).
I think soft typing is a good compromise, because it is a mere add-on to
an otherwise dynamically typed language, and it allows programmers to
override the decisions of the static type system when they know better.


Soft typing systems give you dynamic typing unless you explicitly ask
for static typing. That is the wrong default, IMHO. It works much
better to add dynamic typing to a statically typed language than the
other way around.


I view static typing as an added analysis stage. In that light, it
makes no sense to `add' dynamic typing to it. Also, I think that static
typing should be part of a more comprehensive static analysis phase
which itself is part of a greater suite of tests.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #92
Pascal Costanza <co******@web.de> writes:
The set of programs that are useful but cannot be checked by a static
type system is by definition bigger than the set of useful programs
that can be statically checked.
By whose definition? What *is* your definition of "useful"? It is
clear to me that static typing improves maintainability, scalability,
and helps with the overall design of software. (At least that's my
personal experience, and as others can attest, I do have reasonably
extensive experience either way.)

A 100,000 line program in an untyped language is useless to me if I am
trying to make modifications -- unless it is written in a highly
stylized way which is extensively documented (and which usually means
that you could have captured this style in static types). So under
this definition of "useful" it may very well be that there are fewer
programs which are useful under dynamic typing than there are under
(modern) static typing.
So dynamically typed languages allow
me to express more useful programs than statically typed languages.
There are also programs which I cannot express at all in a purely
dynamically typed language. (By "program" I mean not only the executable
code itself but also the things that I know about this code.)
Those are the programs which are protected against certain bad things
from happening without having to do dynamic tests to that effect
themselves. (Some of these "bad things" are, in fact, not dynamically
testable at all.)
I don't question that. If this works well for you, keep it up. ;)


Don't fear. I will.
(And where are _your_ empirical studies which show that "working around
language restrictions increases the potential for bugs"?)


I don't need a study for that statement because it's a simple
argument: if the language doesn't allow me to express something in a
direct way, but requires me to write considerably more code, then I
have considerably more opportunities for making mistakes.


This assumes that there is a monotone function which maps token count
to error-proneness and that the latter depends on nothing else. This
is a highly dubious assumption. In many cases the few extra tokens
you write are exactly the ones that let the compiler verify that your
thinking process was accurate (to the degree that this fact is
captured by types). If you get them wrong *or* if you got the
original code wrong, then the compiler can tell you. Without the
extra tokens, the compiler is helpless in this regard.
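In Python terms, the "few extra tokens" could be type annotations. A sketch,
assuming a separate static checker such as mypy is run over the file (the
checker is not part of the code itself): the annotations record the author's
intent, so a checker can flag the bad calls below, while the plain
interpreter accepts the first one and silently computes nonsense.

```python
def area(width: float, height: float) -> float:
    # The annotations are the "extra tokens": they state intent
    # so a static checker can compare it against actual usage.
    return width * height

# A static checker would reject both calls below. The untyped
# interpreter accepts the first and returns the string "ababab".
silent = area("ab", 3)

# The second call only fails when it is actually executed.
try:
    area("ab", "cd")
    caught = False
except TypeError:
    caught = True
```

Without the annotations, neither mistake is detectable before the call
actually runs, which is exactly the "compiler is helpless" situation above.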

To make a (not so far-fetched, btw :) analogy: Consider logical
statements and formal proofs. Making a logical statement is easy and
can be very short. It is also easy to make mistakes without noticing;
after all saying something that is false while still believing it to
be true is extremely easy. Just by looking at the statement it is
also often hard to tell whether the statement is right. In fact,
computers have a hard time with this task, too. Theorem-proving is
hard.
On the other hand, writing down the statement with a formal proof is
impossible to get wrong without anyone noticing because checking the
proof for validity is trivial compared to coming up with it in the
first place. So even though writing the statement with a proof seems
harder, once you have done it and it passes the proof checker you can
rest assured that you got it right. The longer "program" will have fewer
"bugs" on average.

Matthias
Jul 18 '05 #93
Pascal Bourguignon:
You can implement an eval without arithmetic and you can implement
theorem prover above it still without arithmetic. You can still do a
great deal of thinking without any arithmetic...
But theorem proving and arithmetic are isomorphic. TP -> arithmetic
is obvious. Arithmetic -> TP is through Gödel.
I think it would have been helped. For example, an architecture like
the Shuttle's where there are five computer differently programmed
would have helped, because at least one of the computers would not
have had the Ariane-4 module.


Manned space projects get a lot more money for safety checks.
If a few rockets blow up for testing then it's still cheaper than
quintupling the development costs.

Andrew
da***@dalkescientific.com
Jul 18 '05 #94
Pascal Costanza:
The set of programs that are useful but cannot be checked by a static
type system is by definition bigger than the set of useful programs that
can be statically checked. So dynamically typed languages allow me to
express more useful programs than statically typed languages.
Ummm, both are infinite and both are countably infinite, so those sets
are the same size. You're falling for Hilbert's Paradox.

Also, while I don't know a proof, I'm pretty sure that type inferencing
can do addition (and theorem proving), so it is equal in power to
programming.
I don't need a study for that statement because it's a simple argument:
if the language doesn't allow me to express something in a direct way,
but requires me to write considerably more code then I have considerably
more opportunities for making mistakes.


The size comparisons I've seen (like the great programming language
shootout) suggest that Ocaml and Scheme require about the same amount
of code to solve small problems. Yet last I saw, Ocaml is strongly typed
at compile time. How do you assume then that strongly&statically typed
languages require "considerable more code"?

Andrew
da***@dalkescientific.com
Jul 18 '05 #95
"Pascal Costanza" <co******@web.de> wrote in message news:bn**********@newsreader2.netcologne.de...

When do programmers know better? An int is an int and a string is a
string, and nary the twain shall be treated the same. I would rather
``1 + "bar"'' signal an error at compile time than at run time.


Such code would easily be caught very soon in your unit tests.


Provided you think to write such a test, and expend the effort
to do so. Contrast to what happens in a statically typed language,
where this is done for you automatically.

Unit tests are great; I heartily endorse them. But they *cannot*
do everything that static type checking can do. Likewise,
static type checking *cannot* do everything unit testing
can do.

So again I ask, why is it either/or? Why not both? I've had
*great* success building systems with comprehensive unit
test suites in statically typed languages. The unit tests catch
some bugs, and the static type checking catches other bugs.
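The `1 + "bar"` case illustrates both halves of this: a unit test catches
the error only if some test actually executes the offending branch, while a
static checker would flag the expression without any test reaching it. A
minimal sketch (the function name is invented for illustration):

```python
def label(n):
    # Bug: adds an int to a str, but nothing complains until
    # this branch is actually executed.
    return 1 + "bar" if n < 0 else str(n)

# A test suite that never exercises the negative branch passes:
assert label(3) == "3"

# Only a test that reaches the buggy branch exposes the bug,
# and only at run time:
try:
    label(-1)
    reached_bug = False
except TypeError:
    reached_bug = True
```

So the two mechanisms overlap but don't subsume each other: the type check
works path-independently, the test suite checks behavior the types can't.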
Marshall
Jul 18 '05 #96
Joachim Durchholz <jo***************@web.de> writes:
My 100% subjective private study reveals not a single complaint about
over-restrictive type systems in comp.lang.functional in the last 12
months.


I also read c.l.functional (albeit only lightly). In the last 12
months, I have encountered dozens of complaints about over-restrictive
type systems in Haskell, OCaml, SML, etc.

The trick is that these complaints are not phrased in precisely that
way. Rather, someone is trying to do some specific task, and has
difficulty arriving at a usable type needed in the task. Often posters
provide good answers--Durchholz included. But the underlying complaint
-really was- about the restrictiveness of the type system.

That's not even to say that the overall advantages of a strong type
system are not worthwhile--even perhaps better than more dynamic
languages. But it's quite disingenuous to claim that no one ever
complains about it. Obviously, no one who finds a strong static type
system unacceptable is going to be committed to using, e.g.
Haskell--the complaint doesn't take the form of "I'm taking my marbles
and going home".

Yours, Lulu...

--
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons. Intellectual
property is to the 21st century what the slave trade was to the 16th.

Jul 18 '05 #97
Pascal Bourguignon fed this fish to the penguins on Wednesday 22
October 2003 13:44 pm:


I think it would have been helped. For example, an architecture like
the Shuttle's where there are five computer differently programmed
would have helped, because at least one of the computers would not
have had the Ariane-4 module.

Are you sure? What if all the variants reused code from variants of
the A-4... They could all have had different versions of the same
problem...

Merely comparing the A-4 requirements to the A-5 requirements and then
testing the code associated with different performance limits would
also have found the problem...

There isn't much you can do, as a programmer, if the bean counters up
above decree: Use this module -- unchanged -- since it works on the
previous generation... No, you don't need to test it -- we said it
works...

--
wl*****@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG
wu******@dm.net | Bestiaria Support Staff
Bestiaria Home Page: http://www.beastie.dm.net/
Home Page: http://www.dm.net/~wulfraed/


Jul 18 '05 #98
Quoth Lulu of the Lotus-Eaters <me***@gnosis.cx>:
| Joachim Durchholz <jo***************@web.de> writes:
|> My 100% subjective private study reveals not a single complaint about
|> over-restrictive type systems in comp.lang.functional in the last 12
|> months.
|
| I also read c.l.functional (albeit only lightly). In the last 12
| months, I have encountered dozens of complaints about over-restrictive
| type systems in Haskell, OCaml, SML, etc.
|
| The trick is that these complaints are not phrased in precisely that
| way. Rather, someone is trying to do some specific task, and has
| difficulty arriving at a usable type needed in the task. Often posters
| provide good answers--Durchholz included. But the underlying complaint
| -really was- about the restrictiveness of the type system.
|
| That's not even to say that the overall advantages of a strong type
| system are not worthwhile--even perhaps better than more dynamic
| languages. But it's quite disingenuous to claim that no one ever
| complains about it. Obviously, no one who finds a strong static type
| system unacceptable is going to be committed to using, e.g.
| Haskell--the complaint doesn't take the form of "I'm taking my marbles
| and going home".

No one said that strict typing is free, requiring no effort or learning
from the programmer. That would be ridiculous - of course a type system
is naturally restrictive; that's its nature. A restrictive system
imposes constraints on the programmer, who needs to learn about them
in order to use the language effectively. `Over-restrictive' is
different. If there are questions about static typing, it does not
follow that it's over-restrictive, nor that the questions constitute
a complaint to that effect.

Donn Cave, do**@drizzle.com
Jul 18 '05 #99
"Pascal Costanza" <co******@web.de> wrote in message news:bn**********@newsreader2.netcologne.de...

I wouldn't count the use of java.lang.Object as a case of dynamic
typing. You need to explicitly cast objects of this type to some class
in order to make useful method calls. You only do this to satisfy the
static type system. (BTW, this is one of the sources for potential bugs
that you don't have in a decent dynamically typed language.)


Huh? The explicit-downcast construct present in Java is the
programmer saying to the compiler: "trust me; you can accept
this type of parameter." In a dynamically-typed language, *every*
call is like this! So if this is a source of errors (which I believe it
is) then dynamically-typed languages have this potential source
of errors with every function call, vs. statically-typed languages
which have them only in those few cases where the programmer
explicitly puts them in.
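A Python sketch of the point (class and function names invented for
illustration): every call site implicitly makes the same "trust me" claim
that a Java downcast makes explicitly, and the claim is checked only when
an attribute is actually used.

```python
class Employee:
    def __init__(self, salary):
        self.salary = salary

def yearly_pay(employee):
    # Implicit "downcast": the body simply trusts that the
    # argument has a .salary attribute, just as a Java downcast
    # trusts that the cast will succeed.
    return employee.salary * 12

ok = yearly_pay(Employee(5000))   # 60000

# Any object can reach this call site; the implicit claim fails
# only when the attribute lookup is attempted at run time.
try:
    yearly_pay("not an employee")
    failed = False
except AttributeError:
    failed = True
```

In Java only the explicitly marked downcasts can fail this way; here every
call carries the same latent check.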
Marshall
Jul 18 '05 #100
