Bytes | Developer Community

Python from Wise Guy's Viewpoint

THE GOOD:

1. pickle

2. simplicity and uniformity

3. big library (bigger would be even better)

THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?

2. Statements vs Expressions business is very dumb. Try writing
a = if x :
y
else: z
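(For the record: Python did eventually grow a conditional expression, added in Python 2.5, well after this post was written. A minimal sketch of what it looks like:)

```python
# Python's conditional *expression* (2.5+): an `if` that yields a value,
# so it can sit on the right-hand side of an assignment.
def pick(x):
    return "y" if x else "z"

print(pick(True))   # -> y
print(pick(False))  # -> z
```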

3. no multimethods (why? Guido did not know Lisp, so he did not know
about them). You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.
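(Multimethods dispatch on the classes of *all* arguments, not just the first. Python has nothing built in, but the idea can be sketched in a few lines. This is a toy illustration: the names `defmulti` and `call` are made up here, not any library's API.)

```python
# Toy multimethod dispatch on the classes of all arguments.
_registry = {}

def defmulti(name, types, fn):
    # Register fn for a given tuple of argument types (hypothetical helper).
    _registry.setdefault(name, []).append((types, fn))

def call(name, *args):
    # Find the first registered method whose type tuple matches the arguments.
    for types, fn in _registry[name]:
        if len(types) == len(args) and \
           all(isinstance(a, t) for a, t in zip(args, types)):
            return fn(*args)
    raise TypeError("no applicable method for %s" % name)

defmulti("combine", (int, int), lambda a, b: a + b)
defmulti("combine", (str, str), lambda a, b: a + " & " + b)

print(call("combine", 1, 2))      # -> 3
print(call("combine", "a", "b"))  # -> a & b
```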

4. splintering of the language: you have the inefficient main language,
and you have a different dialect being developed that needs type
declarations. Why not allow type declarations in the main language
instead, as an option? (Lisp does it.)

5. Why do you need "def"? In Haskell, you'd write
square x = x * x

6. Requiring "return" is also dumb (see #5)
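(Points 5 and 6 boil down to: a function could be bound like any other value. The closest Python gets is `lambda`, which needs neither `def` nor `return`, though it is limited to a single expression:)

```python
# A function defined as a plain binding: no `def`, no `return`.
square = lambda x: x * x
print(square(5))  # -> 25
```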

7. Syntax and semantics of "lambda" should be identical to
function definitions (for simplicity and uniformity)

8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)
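(For what it's worth, point 8 mostly has a "yes" answer in Python: `del` removes any binding, and deleting an entry from `sys.modules` drops the cached module so the next import re-executes it. Whether that counts as "unimporting" is debatable. A sketch:)

```python
import sys

def f():
    return 1

class C:
    pass

x = 42

del f  # undefine a function
del C  # undefine a class
del x  # undefine a value

# "Unimport" a module: drop it from the module cache so the next
# `import json` would re-execute the module body.
import json
del sys.modules["json"]
print("json" in sys.modules)  # -> False
```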

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]


P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching
Jul 18 '05
pr***********@comcast.net wrote:
Dirk Thierbach <dt********@gmx.de> writes:

As I mentioned earlier in this thread, the two things I object to in a
static type system are these:

1) The rejection of code I know to be useful, yet the system is
unable to prove correct.

2) The requirement that I decorate my program with type
information to give the type checker hints to enable it
to check things that are, to me, obviously correct.
Yes. I have also said it earlier, but (1) is nearly never going to
happen (if you substitute "correct" with "type correct", and for a
suitable definition of useful), and you don't have to do (2) if there
is type inference.
Being unable to prove code correct is not the same thing as being able
to prove it incorrect. This makes three classes of code:

1) provably type safe for all inputs
2) provably erroneous for all inputs
3) neither of the above

Both dynamic and static type systems behave the same way for classes 1
and 2 except that static systems reject the code in section 2 upon
compilation while dynamic systems reject it upon execution. It is
section 3 that is under debate.

Those of us that are suspicious of static typing believe that there
are a large enough number of class 3 programs that they will be
regularly or frequently encountered.
Yep. The static typers believe (from experience) that there is a very
small number of class 3 programs, and usually all programs you
normally write fall into class 1. They also know that this is not true
for all static type systems. For languages with a lousy static type
system, there is a large number of class 3 programs, where you have to
cheat to convince the type checker that they are really type safe. The
static typers also believe that with a good static type system, all
useful class 3 programs can be changed just a little bit so they fall
into class 1. In this process, they will become better in almost all
cases (for a suitable definition of "better").
We believe that a large number of these class 3 programs will have
inputs that cannot be decided by a static type checker but are
nonetheless `obviously' correct. (By obvious I mean fairly apparent
with perhaps a little thought, but certainly nothing so convoluted
as to require serious reasoning.)
Why do you believe this? An HM type checker doesn't do anything but
automate this "obvious" reasoning (in a restricted field), and points
out any errors you might have made (after all, humans make errors).
Static type check aficionados seem to believe that the number of class
3 programs is vanishingly small and they are encountered rarely. They
appear to believe that any program that is `obviously' correct can be
shown to be correct by a static type checking system. Conversely,
they seem to believe that programs that a static type checker find
undecidable will be extraordinarily convoluted and contrived, or
simply useless curiosities.
Yes.
No one has offered any empirical evidence one way or another, but the
static type people have said `if class 3 programs are so ubiquitous,
then it should be easy to demonstrate one'. I'll buy that. Some of
the people in here have offered programs that take user input as an
example. No sane person claims to read the future, so any statically
type-checked program would have to perform a runtime check.
Yes, and that's what it does. No problem.
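(The situation both sides agree on, sketched in Python for concreteness: no type system can know what a user will type, so input is validated at runtime, and the rest of the program works with an already-checked value. `parse_age` is a made-up example, not from the thread:)

```python
def parse_age(raw):
    # Validate untrusted input at runtime; downstream code sees an int.
    n = int(raw)  # raises ValueError on non-numeric input
    if not 0 <= n <= 150:
        raise ValueError("age out of range: %d" % n)
    return n

print(parse_age("42"))  # -> 42
try:
    parse_age("banana")
except ValueError:
    print("rejected at runtime")
```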
I (and Don Geddis) happen to believe that there are a huge number of
perfectly normal, boring, pedestrian programs that will stymie a
static type checker. I've been offering a few that I hope are small
enough to illustrate the problem and not so contrived as to fall into
the `pointless' class.

The first program I wrote (a CPS lookup routine) generally confuses
wimp-ass static type checking like in Java.

Stephan Bevan showed that SML/NJ had no problem showing it was type
safe without giving a single type declaration. I find this to be
very cool.
Good.
I offered my `black hole' program and got these responses: [...]

Will this be the response for any program that does, in fact, fool a
number of static type checkers?
No. You have also been shown that it can check statically both in
Haskell and OCaml. (I cannot think of any way to make it pass
in C++ or Java, but that's to be expected.)
This appears to be getting very close to arguing that static type
checkers never reject useful programs because, by definition, a
program rejected by a static type checker is useless.
It may not be useless, but you will definitely need to think about
it. And if it is useful, in nearly all cases you can make a little
change or express it a little differently, and it will check.
Another one was a CPS routine that traverses two trees in a lazy
manner and determines if they have the same fringe. The point was not
to find out if this could be done in SML or Haskell. I know it can.
The point was to find out if the static type checker could determine
the type safety of the program.

There is a deliberate reason I did not create a `lazy cons'
abstraction. I wanted to see if the type checker, with no help
whatsoever, could determine the type of the lambda abstractions
that are *implementing* the lazy consing.
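(For readers who want to see the shape of the problem: a same-fringe sketch in Python, using generators instead of CPS — a different mechanism than the one being debated, but it shows the lazy pairwise traversal, where each tree is walked only as far as needed to find a mismatch. Nested tuples stand in for trees; leaves are non-tuples:)

```python
from itertools import zip_longest

def fringe(tree):
    # Lazily yield the leaves of a nested-tuple tree, left to right.
    if isinstance(tree, tuple):
        for child in tree:
            for leaf in fringe(child):
                yield leaf
    else:
        yield tree

def same_fringe(t1, t2):
    # Compare leaves pairwise; stops at the first mismatch, so neither
    # tree is flattened further than necessary.
    missing = object()
    return all(a == b for a, b in
               zip_longest(fringe(t1), fringe(t2), fillvalue=missing))

print(same_fringe((1, (2, 3)), ((1, 2), 3)))  # -> True
print(same_fringe((1, 2), (1, (2, 3))))       # -> False
```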
See below for "lazy cons".
Re-writing the program in C++ with huge amounts of type decorations
and eliminating the lambdas by introducing parameterized classes does
not demonstrate the power of static type checking.
No, it doesn't. That was just Brian saying "it's so easy, I'll
do it the hard way and demonstrate that it can even be done with
a lousy static type system as in C++".
Re-writing the program in a lazy language also does nothing to
demonstrate the power of static type checking.


May I suggest that you try it yourself? It might be a good way to
get some experience with OCaml, though for a beginner it might take
some time to get used to the syntax etc. Pure lambda terms are the
same in every functional language, after all.

The only issue is that one should use datatypes for recursive
structures, i.e. one would normally use a 'lazy cons' abstraction
(which would be the natural thing to do anyway). In OCaml, you can get
around this with -rectypes (someone has already demonstrated how to do
lazy lists in this way). In Haskell you would need them, but since
Haskell is already lazy, that's probably not the point.

So yes, for recursive types you need to give the type checker "a
little hint", if you want to put it that way. But that's done on
purpose, because it is the natural way to do it, and it is not done
because it would be impossible to statically check such terms.

So if you are extremely picky, and if you refuse to do the "lazy cons"
initially and consider command line options cheating, that's one of
the programs of class 3 that can be put into class 1 by making a
little change, where this change turns out to make your program better
to understand in the first place. Is this so bad that you want to steer
clear of static typing at any price?

(I bet someone will now comment on that ignoring the "extremely picky"
part :-( .)

- Dirk
Jul 18 '05 #301
Pascal Costanza <co******@web.de> wrote:
Dirk Thierbach wrote:
Maybe. It sure would help if you'd tell me your view instead of having
me guess :-) For me, a type is a certain class of values, and a static
type is given by a limited language describing such classes. A
dynamic type is a tag that is associated with a value. Arbitrary
classes of values (like "all reals less than 200") are not a type.


What are "non-arbitrary classes of values"?


Those that can be described by the language available for static types
e.g. in Haskell or OCaml.
According to your criteria, (real * 200) is
- a certain class of values
- given in a limited language describing that class
Yes, but you will have a hard time developing a static type checker
that will work with such a language.
It would be interesting to see a convincing definition of the term "type"
that precludes (real * 200), and similar type specifications.
Look at the definition for type in Haskell or OCaml, for example.
(Maybe it helps to explain that CHECK-TYPE doesn't check a value per se.
(check-type 5 (real * 200)) doesn't work. CHECK-TYPE checks a property
of the variable it is being passed.)
Yes, I know. What does that explain?
"very rare situations"
"never a show-stopper"

How do you know?

If you are only talking about your personal experiences, that's fine,
but you should say so.
Of course I am talking from personal experience, like everyone does.
There is no other way. But in this case, I think my experience is
sufficient to say that.
The only claim I make is that static type systems need to reject
well-behaved programs. That's an objective truth.
This depends on the definition of "well-behaved". The claim I make is
that for a suitable definition of "well-behaved", it is not an
objective truth. And even if you stick to your definition of
"well-behaved", it doesn't really matter in practice.
All I hear is that you (and many others) say that the disadvantages
of static typing are negligible. However, I haven't found any
convincing arguments for that claim.
What kind of arguments would you like to have? I have tried to
show with a few examples that even programs that you think should
be rejected with static typing will be accepted (if you allow for
the fact that they are written in a different language).

What else is there I could possibly do? The simplest way is probably
that you just sit down and give it a try. No amount of talking can
replace personal experience. Get Haskell or OCaml and do a few simple
examples. You will find that they have many things that you won't
like, e.g. no good IDE, no "eval", and (as every language) they
require a slightly different mindset compared to what you're used
to. They might also not have the library functions that you are used
to. But it might give you a good idea what programs are statically
typeable and what are not.
But I am interested in the question why you (or others) think that
almost all software should be developed like that.
I didn't say that. Please do not put up a strawman. In fact, I
explicitly said "you use whatever tool you like best".
I have chosen to illustrate examples in which a dynamic approach might
be considerably better.
And you didn't convince me; all your examples can be statically
typed.
Again, to make this absolutely clear, it is my personal experience
that dynamic type checking is in many situations superior to static
type checking.
That's maybe the important point. HOW DO YOU KNOW IF YOU HAVE NEVER
TRIED IT? (In a language with good static typing, not in a language
with lousy static typing). And obviously you haven't tried it,
otherwise you wouldn't give examples that can be easily statically
typed, or confuse exceptions or dynamic checks with static type checks.
So it cannot come from your personal experience.
But I don't ask anyone to unconditionally use dynamically typed
languages.


But you insist that dynamically typed languages are "better" or
"superior to" statically typed, because you claim you cannot do things
in a statically typed language that you can do in a dynamically typed
one. That's the point where I disagree. I don't ask you to use a
statically typed language, I just want you to admit that both
are equally good in this respect, or at least you should sit down and
verify that yourself before saying it.

- Dirk

Jul 18 '05 #302
Pascal Costanza <co******@web.de> writes:
Should we then conclude that compile-time syntax checking is not
worth having?


No. Syntax errors make the program fail, regardless whether this is
checked at compile-time or at runtime.

A type "error" detected at compile-time doesn't imply that the
program will fail.


Actually it does, in a statically typed language. If you write a
function which expects a Boolean and you pass it a string instead,
it's going to fail one way or another.

OK, the bad call of that function might never be reachable in actual
execution, but equally the syntax error in Tcl code might not be
reached. I'd rather find out about both kinds of mistake sooner
rather than later.

(I do mean a type error and not a type 'error' - obviously if you have
some mechanism to catch exceptions caused by passing the wrong type,
you wouldn't want this to be checked at compile time.)

--
Ed Avis <ed@membled.com>
Jul 18 '05 #303
Ed Avis <ed@membled.com> writes:
Pascal Costanza <co******@web.de> writes:
Should we then conclude that compile-time syntax checking is not
worth having?


No. Syntax errors make the program fail, regardless whether this is
checked at compile-time or at runtime.

A type "error" detected at compile-time doesn't imply that the
program will fail.


Actually it does, in a statically typed language.


Nitpick: Neither syntactic nor statically checked type errors make
programs fail. Instead, their presence simply implies the absence of a
program. No program, no program failing...

Matthias
Jul 18 '05 #304
Dirk Thierbach wrote:
Pascal Costanza <co******@web.de> wrote:
Dirk Thierbach wrote:


Maybe. It sure would help if you'd tell me your view instead of having
me guess :-) For me, a type is a certain class of values, and a static
type is given by a limited language describing such classes. A
dynamic type is a tag that is associated with a value. Arbitrary
classes of values (like "all reals less than 200") are not a type.


What are "non-arbitrary classes of values"?


Those that can be described by the language available for static types
e.g. in Haskell or OCaml.


This sounds like a circular definition.
According to your criteria, (real * 200) is
- a certain class of values
- given in a limited language describing that class


Yes, but you will have a hard time developing a static type checker
that will work with such a language.


I am not asking for a definition of the term "static type", but for a
definition of the term "type".
It would be interesting to see a convincing definition of the term "type"
that precludes (real * 200), and similar type specifications.


Look at the definition for type in Haskell or OCaml, for example.


Haskell: "An expression evaluates to a value and has a static type."
(http://www.haskell.org/onlinereport/intro.html#sect1.3 )

Where is the definition of "type" (without "static")?

I haven't found a definition in http://caml.inria.fr/ocaml/htmlman/index.html
(Maybe it helps to explain that CHECK-TYPE doesn't check a value per se.
(check-type 5 (real * 200)) doesn't work. CHECK-TYPE checks a property
of the variable it is being passed.)


Yes, I know. What does that explain?


Let's first get our terminology right.
The only claim I make is that static type systems need to reject
well-behaved programs. That's an objective truth.


This depends on the definition of "well-behaved". The claim I make is
that for a suitable definition of "well-behaved", it is not an
objective truth. And even if you stick to your definition of
"well-behaved", it doesn't really matter in practice.


"well-behaved" means "doesn't show malformed behavior at runtime", i.e.
especially "doesn't core dump".

"Behavior" is a term that describes dynamic processes. I am only
interested in dynamic behavior here.

I don't mind if you want to change that terminology. Let's just rephrase
it: Static type systems need to reject programs that wouldn't
necessarily fail in serious ways at runtime.
All I hear is that you (and many others) say that the disadvantages
of static typing are negligible. However, I haven't found any
convincing arguments for that claim.


What kind of arguments would you like to have? I have tried to
show with a few examples that even programs that you think should
be rejected with static typing will be accepted (if you allow for
the fact that they are written in a different language).


Yes, for some of them.
But I am interested in the question why you (or others) think that
almost all software should be developed like that.


I didn't say that. Please do not put up a strawman. In fact, I
explicitly said "you use whatever tool you like best".


But that was the original question that initiated this thread. If we
have an agreement here, that's perfect!
I have chosen to illustrate examples in which a dynamic approach might
be considerably better.


And you didn't convince me; all your examples can be statically
typed.


What about the example in
http://groups.google.com/groups?selm....netcologne.de
?

I don't think this can be done without a serious rewrite.
Again, to make this absolutely clear, it is my personal experience
that dynamic type checking is in many situations superior to static
type checking.


That's maybe the important point. HOW DO YOU KNOW IF YOU HAVE NEVER
TRIED IT? (In a language with good static typing, not in a language
with lousy static typing). And obviously you haven't tried it,
otherwise you wouldn't give examples that can be easily statically
typed, or confuse exceptions or dynamic checks with static type checks.
So it cannot come from your personal experience.


Right, it comes from a more principled consideration: You can't have
metacircularity in a statically typed language. You might be able to have
metacircularity if you strictly separate the stages, but as soon as you
want to be able to at least occasionally call base code from meta code
and vice versa, then you lose.

Metacircularity gives me the guarantee that I can always code around
any unforeseeable limitations that might come up, without having to
start from scratch.

So, yes, I am interested in having the opportunity to change invariants
during the runtime of a program. This might sound self-contradictory,
but in practice it isn't. Remember, "One man's constant is another man's
variable." (see http://www-2.cs.cmu.edu/afs/cs.cmu.e...sd/perlis.html )
But I don't ask anyone to unconditionally use dynamically typed
languages.


But you insist that dynamically typed languages are "better" or
"superior to" statically typed, because you claim you cannot do things
in a statically typed language that you can do in a dynamically typed
one. That's the point where I disagree. I don't ask you to use a
statically typed language, I just want you to admit that both
are equally good in this respect, or at least you should sit down and
verify that yourself before saying it.


I haven't said that I can do more things in a dynamically typed
language. I have said that statically typed languages need to reject
well-behaved programs. That's a different claim. We are not talking
about Turing equivalence.

If a base program calls its meta program and changes types, you can't
type check such a program by definition.

For example:

(defun check (x)
  (integerp x))

(defun example-1 ()
  (let ((x 5))
    (assert (check x))
    (print 'succeeded)
    (eval '(defun check (x)
             (stringp x)))))

Now, this might seem nonsensical, but consider this:

(defun example-2 ()
  (eval '(defun check (x)
           (realp x)))
  (example-1))
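(A rough Python analogue of these two definitions, for readers who don't speak Lisp. The predicate `check` is redefined at runtime — here by plain rebinding rather than `eval`, which is an approximation — so whether the `assert` passes depends on code that runs later:)

```python
def check(x):
    return isinstance(x, int)

def example_1():
    global check
    x = 5
    assert check(x)   # passes while check tests for int
    print("succeeded")
    # Redefine check at runtime; from now on it tests for str.
    check = lambda x: isinstance(x, str)

def example_2():
    global check
    # Redefine check *before* calling example_1 ...
    check = lambda x: isinstance(x, float)
    example_1()  # ... so the assert now fails: 5 is not a float

example_1()  # -> succeeded
```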

Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #305
Ed Avis wrote:
Pascal Costanza <co******@web.de> writes:

Should we then conclude that compile-time syntax checking is not
worth having?
No. Syntax errors make the program fail, regardless whether this is
checked at compile-time or at runtime.

A type "error" detected at compile-time doesn't imply that the
program will fail.


Actually it does, in a statically typed language.


No, it doesn't.
If you write a
function which expects a Boolean and you pass it a string instead,
it's going to fail one way or another.
Yes, that's one example. This doesn't mean that this implication always
holds. What part of "doesn't imply" is the one you don't understand?
OK, the bad call of that function might never be reachable in actual
execution, but equally the syntax error in Tcl code might not be
reached. I'd rather find out about both kinds of mistake sooner
rather than later.
I don't care for unreachable code in this specific context. A part of
the program that passes a value of type "don't know" to a variable of
type "don't know" might be unacceptable to a static type system, but
might still not throw any exceptions at all at runtime. Or, in other
scenarios, only exceptions that you can correct at runtime, which might
be still useful, or even the preferred behavior.
(I do mean a type error and not a type 'error' - obviously if you have
some mechanism to catch exceptions caused by passing the wrong type,
you wouldn't want this to be checked at compile time.)


Exactly.
Pascal


Jul 18 '05 #306
Matthias Blume <fi**@my.address.elsewhere> writes:
Ed Avis <ed@membled.com> writes:
Pascal Costanza <co******@web.de> writes:
>>Should we then conclude that compile-time syntax checking is not
>>worth having?
>
>No. Syntax errors make the program fail, regardless whether this is
>checked at compile-time or at runtime.
>
>A type "error" detected at compile-time doesn't imply that the
>program will fail.


Actually it does, in a statically typed language.


Nitpick: Neither syntactic nor statically checked type errors make
programs fail. Instead, their presence simply implies the absence of a
program. No program, no program failing...


Nitpick: You are defining as a program that which passes a static type
checker. What would you like to call those constructs that make sense
to a human and would run correctly despite failing a static type
check? These are the ones that are interesting to debate.
Jul 18 '05 #307
Matthias Blume wrote:
Ed Avis <ed@membled.com> writes:

Pascal Costanza <co******@web.de> writes:

Should we then conclude that compile-time syntax checking is not
worth having?

No. Syntax errors make the program fail, regardless whether this is
checked at compile-time or at runtime.

A type "error" detected at compile-time doesn't imply that the
program will fail.


Actually it does, in a statically typed language.

Nitpick: Neither syntactic nor statically checked type errors make
programs fail. Instead, their presence simply implies the absence of a
program.


Yes, the absence of a program that might not fail if it wouldn't have
been rejected by the static type system.
Pascal


Jul 18 '05 #308
Don Geddis <do*@geddis.org> wrote:
For example:
(defun foo (x)
  (check-type x (integer 0 10))
  (+ 1 x))

(defun fib (n)
  (check-type n (integer 0 *))
  (if (< n 2)
      1
      (+ (fib (- n 1)) (fib (- n 2)))))

(print (foo (fib 5)))
This program prints "9", and causes no run-time type errors. Will
it be successfully type-checked at compile time by a static system?
Almost certainly the answer is no.


Dirk Thierbach <dt********@gmx.de> writes:

It will be successfully type checked, because the static type system
does not allow you to express assumptions about value ranges of types.
I was working on the assumption that the type language *would* allow
one to express arbitrary types. Certainly one can create a
sufficiently weak static type system that terminates under all
conditions and produces correct answers within the system. Lisp has
one: everything is a subtype of type t and all programs pass. The
inference is trivial.

But I surely wouldn't be impressed by a type checker that allowed this
to pass:

(defun byte-increment (x)
  (check-type x (integer 0 (256)))
  (+ x 1))

(defun fib (n)
  (if (< n 2)
      1
      (+ (fib (- n 1)) (fib (- n 2)))))

(print (byte-increment (fib 13)))
On the other hand, if there is no static type mismatch, that doesn't
mean that the program will not crash because of runtime errors
(division by zero, or dynamically checked restrictions).


I think most people here were assuming that passing an integer greater
than 255 to a routine expecting a single 8-bit byte is a type error,
and something that could cause a crash.

Jul 18 '05 #309
John Atwood wrote:
Pascal Costanza <co******@web.de> wrote:

No, you definitely can do a lot of things with macros in Lisp that are
impossible to do in other languages. There are papers that show this
convincingly. Try
ftp://publications.ai.mit.edu/ai-pub...df/AIM-453.pdf for a
start. Then continue, for example, with some articles on Paul Graham's
website, or download and read his book "On Lisp".


That's a great paper; however, see Steele's later work:
http://citeseer.nj.nec.com/steele94building.html


Yes, I have read that paper. If you want to work with monads, you
probably want a static type system.

(And I think he still likes Scheme and Lisp. ;)

Pascal


Jul 18 '05 #310
Joachim Durchholz <jo***************@web.de> writes:
pr***********@comcast.net wrote:
My point is that type systems can reject valid programs.


And the point of the guys with FPL experience is that, given a good
type system[*], there are few if any practial programs that would be
wrongly rejected.


We're stating a pretty straightforward, objective, testable hypothesis:

``There exist valid programs that cannot be statically checked by
such-and-such a system.''

and we get back

`yes, but...'
`in our experience'
`*I've* never seen it'
`if the type system is any good'
`few programs'
`no practical programs'
`no useful programs'
`isolated case'
`99% of the time'
`most commercial programs'
`most real-world programs'
`only contrived examples'
`nothing's perfect'
`in almost every case'

Excuse us if we are skeptical.
Jul 18 '05 #311
Pascal Costanza <co******@web.de> writes:
What I want is:

testxyz obj = (concretemethod obj == 42)

Does the code compile as long as concretemethod doesn't exist?


No, because the likelihood of that being a typo (e.g. for `concrete_method')
is too high.

I recently added support to the Mercury compiler for an option
"--allow-stubs". For the equivalent code in Mercury, if this option
is enabled, then it will compile if there is a _declaration_ for
concretemethod, even if there is no _definition_. The compiler will
issue a warning and automatically generate a stub definition which just
throws an exception if it is ever called.

It would be fairly straight-forward to also add support for allowing
code like that to compile even if there was no declaration, but that
does not seem like a good idea to me -- it would make it easier for
typos to go unnoticed, with insufficient compensating benefit.

I'm sure it would also be easy for developers of other statically typed
languages to implement what you want, if they thought it was a good idea.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #312
Joe Marshall <jr*@ccs.neu.edu> writes:

Nitpick: Neither syntactic nor statically checked type errors make
programs fail. Instead, their presence simply implies the absence of a
program. No program, no program failing...
Nitpick: You are defining as a program that which passes a static type
checker.


Yes, that's how it is usually done with statically typed languages.
What would you like to call those constructs that make sense
to a human and would run correctly despite failing a static type
check?
I don't know. What would you *like* to call them? (They might be
called "programs" -- just programs in another language.)
These are the ones that are interesting to debate.


Right, I am not disputing this. (I was simply nitpicking on
terminology.)

Matthias
Jul 18 '05 #313
In article <bn**********@f1node01.rhrz.uni-bonn.de>, Pascal Costanza wrote:
Matthias Blume wrote:

Nitpick: Neither syntactic nor statically checked type errors make
programs fail. Instead, their presence simply implies the absence of a
program.


Yes, the absence of a program that might not fail if it wouldn't have
been rejected by the static type system.


That sentence reflects a misunderstanding of what something like ML's
type system *means*. You're trying to hammer ML into the Lisp mold,
which is leading you to incorrect conclusions.

A value in any programming language is some pattern of bits
interpreted under a type. In Scheme or Lisp, there is a single
universe (a single type) that all values belong to, which is why
it's legal to pass any value to a function. But in ML, there are
multiple universes of values, one for each type.

This means that the same bit pattern can represent different values,
which is not true in a dynamically typed language. To make this
concrete, consider the following Ocaml code:

type foo = A | B

type baz = C | D

let f1 x =
  match x with
  | A -> C
  | B -> D

let f2 x =
  match x with
  | C -> 0
  | D -> 1

Some apparently-similar Scheme code would look like:

(define (f1 x)
  (case x
    ((A) 'C)
    ((B) 'D)))

(define (f2 x)
  (case x
    ((C) 0)
    ((D) 1)))

The difference between these two programs gets revealed when you look
at the assembly code that the Ocaml compiler produces for f1 and f2,
side by side[*]:

f1:                        f2:
.L101:                     .L103:
        cmpl $1, %eax              cmpl $1, %eax
        je .L100                   je .L102
        movl $3, %eax              movl $3, %eax
        ret                        ret
.L100:                     .L102:
        movl $1, %eax              movl $1, %eax
        ret                        ret

The code generated for each function is identical, modulo label names.
This means that the bit patterns for the data constructors A/C and B/D
are identical, and the program only makes sense because A and C,
and B and D, are interpreted at different types. (In fact, notice that
the bit representations of the integer zero and the constructors A
and C are the same, too.) In contrast, this would not be a valid
compilation of the Scheme program, because A, B, C, and D would all
have to have distinct bit patterns.

So eliminating the types from an ML program means you no longer have a
program, because you can no longer consistently interpret bit patterns
as ML values -- there *isn't* any universal domain that all ML values
really belong to, as you are supposing.
[*] I cleaned up some extraneous alignment stuff, to make what's going
on clearer. But all that was identical too.

--
Neel Krishnaswami
ne***@cs.cmu.edu
Jul 18 '05 #314
Pascal Costanza <co******@web.de> writes:
Matthias Blume wrote:
Ed Avis <ed@membled.com> writes:
Pascal Costanza <co******@web.de> writes:
>Should we then conclude that compile-time syntax checking is not
>worth having?

No. Syntax errors make the program fail, regardless whether this is
checked at compile-time or at runtime.

A type "error" detected at compile-time doesn't imply that the
program will fail.

Actually it does, in a statically typed language.

Nitpick: Neither syntactic nor statically checked type errors make
programs fail. Instead, their presence simply implies the absence of a
program.


Yes, the absence of a program that might not have failed had it not
been rejected by the static type system.


"would" "if" bah

You first have to define what the meaning of a phrase is going to be
if you let it slip past the type checker even though it is not
well-typed. As Andreas Rossberg pointed out, it is quite often the
case that the type is essential for understanding the semantics.
Simply ignoring types does not necessarily make sense under such
circumstances. So it all depends on how you re-interpret the language
after getting rid of static types.

Matthias
Jul 18 '05 #315
Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:

What I want is:

testxyz obj = (concretemethod obj == 42)

Does the code compile as long as concretemethod doesn't exist?
No, because the likelihood of that being a typo (e.g. for `concrete_method')
is too high.


This proves that a static type system requires you to write more code
than strictly necessary. (Please think twice before you react. This is
not meant as a pejorative remark, even if it strongly sounds like one.
It's just an objective truth. Even if you think that this is how it
should be, it doesn't make my statement wrong.)
I recently added support to the Mercury compiler for an option
"--allow-stubs". For the equivalent code in Mercury, if this option
is enabled, then it will compile if there is a _declaration_ for
concretemethod, even if there is no _definition_. The compiler will
issue a warning and automatically generate a stub definition which just
throws an exception if it is ever called.

It would be fairly straightforward to also add support for allowing
code like that to compile even if there was no declaration, but that
does not seem like a good idea to me -- it would make it easier for
typos to go unnoticed, with insufficient compensating benefit.
A good development environment gives you immediate feedback on such
kinds of typos. A good compiler for a dynamically typed language issues a
warning. So these typos don't go unnoticed. The only difference is that
a dynamically typed language trusts the programmer by default, whereas a
statically typed language doesn't trust the programmer. (To rephrase
it: A statically typed language gives you stricter support, while a
dynamically typed language gives you weaker support. But that's
actually more or less the same statement.)
I'm sure it would also be easy for developers of other statically typed
languages to implement what you want, if they thought it was a good idea.


Of course.

It might be interesting to note that dynamically typed languages are
probably a really bad idea when you don't have a good IDE. The features
that fans of statically typed languages care about are usually regarded
as part of the development environment's job. This is only to indicate
that programming in a dynamically typed language is not as horrible as
you might think when seen in the right context.

And here is one of the probable replies: Statically typed languages
don't require a sophisticated IDE in order to do useful work. This might
be an advantage in some scenarios.

Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #316
Matthias Blume wrote:
You first have to define what the meaning of a phrase is going to be
if you let it slip past the type checker even though it is not
well-typed. As Andreas Rossberg pointed out, it is quite often the
case that the type is essential for understanding the semantics.
Simply ignoring types does not necessarily make sense under such
circumstances. So it all depends on how you re-interpret the language
after getting rid of static types.


Can you show me an example of a program that doesn't make sense anymore
when you strip off the static type information?
Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #317
Pascal Costanza <co******@web.de> writes:
Fergus Henderson wrote:
Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway.
They generally have some support for *optional* dynamic typing.

This is IMHO a good trade-off. Most of the time, you want static typing;
it helps in the design process, with documentation, error checking, and
efficiency.
+ Design process: There are clear indications that processes like
extreme programming work better than processes that require some kind of
specification stage.


This is certainly not clear to me, especially if you consider writing
type declarations to be "some kind of specification stage".
Dynamic typing works better with XP than static
typing because with dynamic typing you can write unit tests without
having the need to immediately write appropriate target code.
That one seems to have been pretty thoroughly debunked by other responses
in this thread. A static type system won't stop you writing unit tests.
And if you want to actually run the unit tests, then you are going to
need appropriate target code, regardless of whether the system is
statically or dynamically typed.
+ Error checking: I can only guess what you mean by this.
I mean compile-time detection of type errors.
I'm just talking about ordinary everyday detection of typos, functions
called with the wrong number of arguments, arguments in the wrong order,
arguments of the wrong type -- that kind of thing.
If you mean something like Java's checked exceptions,
there are clear signs that this is a very bad feature.
No, I do not mean that. I tend to agree that statically checked
exception specifications are not worth the complication and can be
positively harmful in some situations.
+ Efficiency: As Paul Graham puts it, efficiency comes from profiling.
In order to achieve efficiency, you need to identify the bottle-necks of
your program. No amount of static checks can identify bottle-necks, you
have to actually run the program to determine them.


It's not enough to just identify the bottlenecks. You have to make those
bottlenecks go fast! That's a lot harder with a dynamically typed language,
because you pay a lot of overhead: greater memory usage, and hence worse
cache performance, due to less efficient representations of types in memory;
plus added overhead from all of those dynamic type checks. Of course good
compilers for dynamic languages analyze the source to try to infer the types,
but since the language is not statically typed, such analysis will often fail.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #318
Fergus Henderson wrote:
Dynamic typing works better with XP than static
typing because with dynamic typing you can write unit tests without
having the need to immediately write appropriate target code.

That one seems to have been pretty thoroughly debunked by other responses
in this thread. A static type system won't stop you writing unit tests.
And if you want to actually run the unit tests, then you are going to
need appropriate target code, regardless of whether the system is
statically or dynamically typed.


Not if I only want to check whether the first ten tests work, and don't
care about the remaining ones.
+ Efficiency: As Paul Graham puts it, efficiency comes from profiling.
In order to achieve efficiency, you need to identify the bottle-necks of
your program. No amount of static checks can identify bottle-necks, you
have to actually run the program to determine them.

It's not enough to just identify the bottlenecks. You have to make those
bottlenecks go fast! That's a lot harder with a dynamically typed language,
because you pay a lot of overhead: greater memory usage, and hence worse
cache performance, due to less efficient representations of types in memory;
plus added overhead from all of those dynamic type checks. Of course good
compilers for dynamic languages analyze the source to try to infer the types,
but since the language is not statically typed, such analysis will often fail.


Good dynamically typed languages provide very good options in this
regard. However, other Lispers than me can probably provide much better
comments on that.
Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #319
Pascal Costanza <co******@web.de> writes:
In a statically typed language, when I write a test case that calls a
specific method, I need to write at least one class that implements at
least that method, otherwise the code won't compile.
No -- you don't need to implement the method. You only need to declare it.

Even the need to declare it is really just a property of implementations,
not languages.
Well, the research that ultimately lead to the HotSpot Virtual Machine
originated in virtual machines for Smalltalk and for Self. Especially
Self is an "extremely" dynamic language, but they still managed to make
it execute reasonably fast.


Please correct me if I'm wrong, but as I understand it, iterating over a
collection of values is still going to require keeping some representation
of the type of each element around at runtime, and testing the type for
each element accessed, in case it is not the expected type. AFAIK neither
HotSpot nor the Self compiler do the kind of optimizations which would
be needed to avoid that.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #320
Pascal Costanza <co******@web.de> wrote:
Dirk Thierbach wrote:
According to your criteria, (real * 200) is
- a certain class of values
- given in a limited language describing that class
Yes, but you will have a hard time developing a static type checker
that will work with such a language.

I am not asking for a definition of the term "static type", but for a
definition of the term "type".
I am happy with a definition of "type" that allows arbitrary sets of
values, but how does this apply to static typing? Or dynamic type
checking? I was talking all the time about a definition of a "static
type", because that it what is relevant here.

Philosophically, there are a lot more sensible definitions for type,
but how does one such definition relate to our discussion?
Haskell: "An expression evaluates to a value and has a static type."
(http://www.haskell.org/onlinereport/intro.html#sect1.3 )

Where is the definition for "type"? (without "static"?)
There is none, because that is not relevant.
Let's first get our terminology right.
Maybe we should also agree on what we want to use the terminology for.
I don't mind if you want to change that terminology. Let's just
rephrase it: Static type systems need to reject programs that
wouldn't necessarily fail in serious ways at runtime.
I think I already agreed with that several times, didn't I?

But then you have also to add "even if they won't necessarily fail,
nearly all of them won't be well-behaved".
But I am interested in the question why you (or others) think that
almost all software should be developed like that.

I didn't say that. Please do not put up a strawman. In fact, I
explicitly said "you use whatever tool you like best".

But that was the original question that initiated this thread. If we
have an agreement here, that's perfect!
Finally. *Sigh*. Why do I have to repeat that multiple times?
What about the example in
http://groups.google.com/groups?selm....netcologne.de
? I don't think this can be done without a serious rewrite.
The problem here is that the object system of CLOS and OCaml is
quite different, and Haskell has no objects at all. So you cannot
directly transfer that example to those languages. Not because they
are statically typed, but because they are different. It probably
wouldn't be possible to do exactly the same in another arbitrary
dynamically typed language, either.

But you can trivially simulate the *effects* of your program once
you accept that such a simulation need not use classes.

Please don't mix the differences because of statically/dynamically
typing and because of other language features.
Right, it comes from a more principled consideration: You can't have
metacircularity in a statically typed language. You might be able to have
metacircularity if you strictly separate the stages, but as soon as you
want to be able to at least occasionally call base code from meta code
and vice versa, then you lose.
But you don't need metacircularity, because then you simply solve
your problem in a different way.
Metacircularity gives me the guarantee that I can always code around
any unforeseeable limitations that might come up, without having to
start from scratch.
You can also create very subtle bugs that are difficult to find.
I haven't said that I can do more things in a dynamically typed
language. I have said that statically typed languages need to reject
well-behaved programs. That's a different claim. We are not talking
about Turing equivalence.


Neither am I talking about Turing equivalence.

But if you can agree that it is not harder to express something in a
(properly) statically typed language than to express it in a
dynamically typed language, then we can stop the discussion here.

What I want is for you to give up the point of view that dynamically
typed languages have an *advantage* because they are dynamically typed.

- Dirk

Jul 18 '05 #321
Pascal Costanza <co******@web.de> writes:
Do you have a reference for extensible record types. Google comes up,
among other things, with Modula-3, and I am pretty sure that's not what
you mean.


The following are at least a bit closer to what is meant:

[1] Mark P Jones and Simon Peyton Jones. Lightweight Extensible
Records for Haskell. In Haskell Workshop, Paris, September 1999.
<http://citeseer.nj.nec.com/jones99lightweight.html>

[2] B. R. Gaster and M. P. Jones. A polymorphic type
system for extensible records and variants. Technical
report NOTTCS-TR-96-3, Department of Computer Science,
University of Nottingham, UK, 1996. Available from URL
<http://www.cs.nott.ac.uk/Department/Staff/~mpj/polyrec.html>.
<http://citeseer.nj.nec.com/gaster96polymorphic.html>.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #322
"Paul F. Dietz" <di***@dls.net> writes:
Ralph Becket wrote:
Here's the way I see it: .... (3) type errors flagged by a compiler for an ESCTS can pinpoint the source
of the problem whereas ad hoc assertions in code will only identify a
symptom of a type error;
....BTW, is (3) really justified? My (admittedly old) experience with ML
was that type errors can be rather hard to track back to their sources.


It depends on whether you declare the types of functions or not.
If you leave it up to the compiler to infer the types of all the
functions, then compilers have a difficult job of pinpointing errors,
because sometimes your incorrectly-implemented functions will be
type-correct, just with a different type than you expected, and this
will then lead to type errors further up the call tree.

But declaring the intended types of functions improves things dramatically.
If you get a type error, and you can't immediately figure out what is wrong,
declaring the intended types of the functions involved and recompiling
will allow you to quickly pinpoint the problem.

Of course, the type checker's error messages won't tell you _exactly_
where the error is; they can only point out inconsistencies, e.g. between
the code for a function and its type declaration, or between the code
for a function and one or more of the type declarations for the functions
that it calls. But such inconsistencies should be easy to resolve;
the programmer should be able to tell which of the contradictory parts
are wrong. (The exception is when the inconsistency actually reveals
a design error; in the worst case, a major redesign may be required.)

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #323
Joe Marshall wrote:
Joachim Durchholz <jo***************@web.de> writes:
pr***********@comcast.net wrote:
My point is that type systems can reject valid programs.


And the point of the guys with FPL experience is that, given a good
type system[*], there are few if any practial programs that would be
wrongly rejected.


We're stating a pretty straightforward, objective, testable hypothesis:

``There exist valid programs that cannot be statically checked by
such-and-such a system.''

and we get back

`yes, but...'
`in our experience'
`*I've* never seen it'
`if the type system is any good'
`few programs'
`no practical programs'
`no useful programs'
`isolated case'
`99% of the time'
`most commercial programs'
`most real-world programs'
`only contrived examples'
`nothing's perfect'
`in almost every case'

Excuse us if we are skeptical.


Then be sceptical if you like.
Actually your hypothesis is ill-worded anyway: it says "valid" without
specifying what validity means (which has already led us into
many irrelevant meanderings about what correctness and validity mean,
and that they cannot be expressed if you don't have specifications). It
ignores that some programs need restructuring to have any useful static
types associated with their items (you can always put a Lisp-style type
system on top of a statically-typed language, with little effort). It
ignores the practical experience of people who indeed say "static typing
with type inference is not a straightjacket but a ladder: support where
it's helpful, extending my reach in directions that were unthinkable
before they were invented".

However, all I hear are questions like "how do you type this" - whining
for the known and trusted idioms, instead of even thinking about the
idioms that static typing might provide.

Just take the testimony and accept it. Just as I take the testimony that
one can write large programs in Lisp. Stay sceptical if you like - I
certainly do.

I'm just tired of the Costanza-style constant nitpicking, topic evasion,
and not answering to critical questions (such as his experience with
modern type systems - he claims it but shows little evidence for that,
something that makes me suspicious about his credibility in general).

Technical newsgroups are a place for exchanging new ideas, and for
helping people along. This thread has already served that purpose to the
utmost extent imaginable, what's left is squabbling.

Good night and Over and Out for me (except for direct answers to my
posts, where appropriate).

Regards,
Jo

Jul 18 '05 #324
Pascal Costanza wrote:

Can you show me an example of a program that does't make sense anymore
when you strip off the static type information?


Here is a very trivial example, in SML:

20 * 30

Multiplication, like the literals, is overloaded. Depending on
whether you type this expression as Int8.int (8-bit integers) or
IntInf.int (infinite precision integer) the result is either 600 or an
overflow exception.

So the program does not make sense without type information, because it
does not have an unambiguous (i.e. no) semantics.

I'm ready to admit that it may be a dubious example of a typing feature.
But it is simple, and clearly sufficient to disprove your repeated claim
that static types don't add expressiveness to a language. If you did not
have them for the example above, you needed some other feature to
express the disambiguation.
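The same kind of type-driven disambiguation can be sketched in Rust, where an integer literal expression likewise takes its meaning from the type chosen for it (`u16` vs `u8` standing in here for IntInf.int vs Int8.int; the sketch is my own illustration, not from the post).

```rust
fn main() {
    // The same literal expression means different things at different types.
    let wide: u16 = 20 * 30; // 600 fits comfortably in a u16
    assert_eq!(wide, 600);

    // At type u8 the product overflows; checked_mul makes that explicit.
    let narrow = 20u8.checked_mul(30); // None: 600 does not fit in a u8
    assert_eq!(narrow, None);
}
```

Without the type annotations, `20 * 30` on its own does not determine whether the result is 600 or an overflow.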

- Andreas

--
Andreas Rossberg, ro******@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.

Jul 18 '05 #325
Pascal Costanza <co******@web.de> wrote:
John Atwood wrote:
Pascal Costanza <co******@web.de> wrote:
No, you definitely can do a lot of things with macros in Lisp that are
impossible to do in other languages. There are papers that show this
convincingly. Try
ftp://publications.ai.mit.edu/ai-pub...df/AIM-453.pdf for a
start. Then continue, for example, with some articles on Paul Graham's
website, or download and read his book "On Lisp".


That's a great paper; however, see Steele's later work:
http://citeseer.nj.nec.com/steele94building.html


Yes, I have read that paper. If you want to work with monads, you
probably want a static type system.

(And I think he still likes Scheme and Lisp. ;)

Perhaps, but the paper shows convincingly that statically typed languages
can do a lot of things that Lispers use macros for. Work in
meta-programming, aka multi-stage programming, shows how to more finely
control the power of macros, reflection, etc. See, e.g.:
http://citeseer.nj.nec.com/sheard00accomplishments.html
http://citeseer.nj.nec.com/taha99multistage.html

John
Jul 18 '05 #326
pr***********@comcast.net writes:
"Andreas Rossberg" <ro******@ps.uni-sb.de> writes:
Sorry, but that reply of yours somewhat indicates that you haven't really
used modern type systems seriously.

All decent type systems allow you to define your own types. You can express
any domain-specific abstraction you want in types. Hence the type language
gives you additional expressive power wrt the problem domain.


Cool! So I can declare `Euclidean rings' as a type and ensure that I
never pass a non-Euclidean ring to a function?


You can easily declare "EuclideanRings" as a type class
(in Haskell/Clean/Mercury, or similar constructs in other languages),
and ensure that you only pass this function values whose type has been
declared to be an instance of that type class.

Generally the type system won't be able to enforce that the
"EuclideanRings" type class really represents Euclidean rings
that conform to all the appropriate axioms. However, declaring
an abstract type like this is nevertheless very useful as documentation:
it makes it clear where the proof obligation lies. _If_ you prove that
every declared instance of the type class is really a Euclidean ring,
then that, together with the type system, will be enough to guarantee
that the function's argument is always a Euclidean ring.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #327
gt*****@prism.gatech.edu (Brian McNamara!) writes:
pr***********@comcast.net once said:
(defun black-hole (x)
  #'black-hole)

(for non lispers, the funny #' is a namespace operator.
The black-hole function gobbles an argument and returns
the black-hole function.)


Finally, an example that I don't think you can type in Haskell.
You score a point for that. :)


I don't think it deserves any points, because as another poster said,
in Haskell that is equivalent to

black_hole _ = undefined
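For what it's worth, the self-returning function can be given a static type once the recursion is hidden behind a nominal type. Here is a hedged Rust sketch (the `BlackHole` struct and `call` method are my own illustrative names); a recursive newtype plays the same role in Haskell.

```rust
// A function that returns itself has an infinite *structural* type, but a
// nominal wrapper ties the recursive knot so the type checker can accept it.
#[derive(Clone, Copy)]
struct BlackHole;

impl BlackHole {
    // Swallow an argument of any type and return the black hole again.
    fn call<T>(self, _x: T) -> BlackHole {
        BlackHole
    }
}

fn main() {
    let b = BlackHole;
    let b = b.call(42).call("anything").call(vec![1, 2, 3]);
    let _ = b.call(b); // it even swallows itself
    // The wrapper is zero-sized: the type exists only for the checker.
    assert_eq!(std::mem::size_of::<BlackHole>(), 0);
}
```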

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #328
Fergus Henderson <fj*@cs.mu.oz.au> writes:
pr***********@comcast.net writes:

Cool! So I can declare `Euclidean rings' as a type and ensure that I
never pass a non-Euclidean ring to a function?


_If_ you prove that
every declared instance of the type class is really a Euclidean ring,
then that, together with the type system, will be enough to guarantee
that the function's argument is always a Euclidean ring.


Yeah, but *that's* the easy part.

Jul 18 '05 #329
On Mon, Oct 27, 2003 at 05:57:00PM +0100, Joachim Durchholz wrote:
(you can always put a Lisp-style type system on top of a
statically-typed language, with little effort).


This is not true, as I've pointed out on several occasions. Such
systems do not behave like a Lisp-style type system when dealing with
redefinition.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #330
On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:
Pascal Costanza wrote:

Can you show me an example of a program that does't make sense anymore
when you strip off the static type information?


Here is a very trivial example, in SML:

20 * 30

Multiplication, as well as literals, are overloaded. Depending on
whether you type this expression as Int8.int (8-bit integers) or
IntInf.int (infinite precision integer) the result is either 600 or an
overflow exception.


May I point out that the correct answer is 600, not overflow?

Something that annoys me about many statically-typed languages is the
insistence that arithmetic operations should return the same type as the
operands. 2 / 4 is 1/2, not 0. Arithmetically, 1 * 1.0 is
well-defined, so why can I not write this in an SML program?

I do believe Haskell does it right, though, with its numeric tower
derived from Lisp.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #331
Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:

Well, the research that ultimately lead to the HotSpot Virtual Machine
originated in virtual machines for Smalltalk and for Self. Especially
Self is an "extremely" dynamic language, but they still managed to make
it execute reasonably fast.

Please correct me if I'm wrong, but as I understand it, iterating over a
collection of values is still going to require keeping some representation
of the type of each element around at runtime, and testing the type for
each element accessed, in case it is not the expected type. AFAIK neither
HotSpot nor the Self compiler do the kind of optimizations which would
be needed to avoid that.


You don't need to check the type on each access. If you only copy a
value from one place to another, and both places are untyped, you don't
need any check at all.

Furthermore, if I remember correctly, dynamically compiled systems use
type inferencing at runtime to reduce the number of type checks.
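To make the representation cost under discussion concrete, here is a small Rust sketch (the `Value` enum is my own illustrative stand-in for a dynamic runtime's tagged value, not anything from the posts above): a tagged representation carries a discriminant with every element, while a statically typed vector stores bare machine words.

```rust
use std::mem::size_of;

// A dynamic runtime must pair every value with a tag saying what it is.
#[allow(dead_code)]
enum Value {
    Int(i64),
    Float(f64),
}

fn main() {
    // The tagged representation is strictly larger than the bare payload...
    assert!(size_of::<Value>() > size_of::<i64>());
    // ...while a statically typed Vec<i64> stores untagged 8-byte words,
    // with no per-element check needed on access.
    assert_eq!(size_of::<i64>(), 8);
}
```

Runtime type inference of the kind mentioned above aims to recover the untagged, check-free form for the hot paths.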
Pascal

Jul 18 '05 #332
John Atwood wrote:
Pascal Costanza <co******@web.de> wrote:
John Atwood wrote:
Pascal Costanza <co******@web.de> wrote:

No, you definitely can do a lot of things with macros in Lisp that are
impossible to do in other languages. There are papers that show this
convincingly. Try
ftp://publications.ai.mit.edu/ai-pub...df/AIM-453.pdf for a
start. Then continue, for example, with some articles on Paul Graham's
website, or download and read his book "On Lisp".

That's a great paper; however, see Steele's later work:
http://citeseer.nj.nec.com/steele94building.html
Yes, I have read that paper. If you want to work with monads, you
probably want a static type system.

(And I think he still likes Scheme and Lisp. ;)


Perhaps, but the paper shows convincingly that statically typed languages
can do a lot of things that Lispers use macros for.


Right. I have no problems with that. Monads are pretty interesting, and
monads and static typing go very well together.
Work in
meta-programming, aka mult-stage programming, shows how to more finely
control the power of macros, reflection, etc. See, e.g.:
http://citeseer.nj.nec.com/sheard00accomplishments.html
http://citeseer.nj.nec.com/taha99multistage.html


Wait: Meta-programming and multi-stage programming is not the same
thing. The latter is only a subset of the former.

The metaprogramming facility that doesn't let you call the meta program
from the base program and vice versa in the same environment, o
grasshopper, is not the true metaprogramming facility.

Pascal

Jul 18 '05 #333
Andreas Rossberg wrote:
Pascal Costanza wrote:

Can you show me an example of a program that does't make sense anymore
when you strip off the static type information?

Here is a very trivial example, in SML:

20 * 30

Multiplication, as well as literals, are overloaded. Depending on
whether you type this expression as Int8.int (8-bit integers) or
IntInf.int (infinite precision integer) the result is either 600 or an
overflow exception.

So the program does not make sense without type information, because it
does not have an unambiguous (i.e. no) semantics.

I'm ready to admit that it may be a dubious example of a typing feature.
But it is simple, and clearly sufficient to disprove your repeated claim
that static types don't add expressiveness to a language. If you did not
have them for the example above, you needed some other feature to
express the disambiguation.


Sorry, do you really want to say that I can't make my program throw an
exception when some variables are not inside a specified range?

(assert (typep (* 20 30) '(integer 0 255)))
Pascal

Jul 18 '05 #334
On Mon, 27 Oct 2003 13:40:24 -0500, Matthew Danish wrote:
Something that annoys me about many statically-typed languages is the
insistence that arithmetic operations should return the same type as the
operands. 2 / 4 is 1/2, not 0. Arithmetically, 1 * 1.0 is
well-defined, so why can I not write this in an SML program?


Confusing integer division with rational division is not a consequence
of static typing, except that with static typing it's not as dangerous as
with dynamic typing (because a function declared as taking floating point
arguments and performing / on them will do the same even if you pass
integers to it, which in most languages will be automatically converted).

But in Haskell / is defined only on types in class Fractional, which
doesn't include integers. Integer division is `div` and `quot` (they
differ for negative numbers). In Pascal 2/4 is 0.5, in SML / is only
for floats, in both languages integer division is spelled div. Most
languages don't support rational numbers, so / on integers can only
return a floating point number or be an error.

Yes, many languages inherited the confusion of divisions from C. This
includes some dynamically typed languages, e.g. Python - but it's planned
to be fixed, and Ruby.

Mixed-type arithmetic is a different story. I'm talking only about 1/2
being equal to 0 in some languages - this doesn't coincide with static
typing.
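The distinction described above, where the operand types select which division is meant, can be sketched in a statically typed language; this Rust example is my own illustration, not from the thread.

```rust
fn main() {
    // Integer `/` truncates toward zero, as inherited from C.
    assert_eq!(2 / 4, 0);
    assert_eq!(-7 / 2, -3);

    // The same operator on floating-point operands is true division.
    assert_eq!(2.0 / 4.0, 0.5);

    // Euclidean division and remainder are separate, explicit operations,
    // so the choice of division is visible in the program text.
    assert_eq!((-7i32).div_euclid(2), -4);
    assert_eq!((-7i32).rem_euclid(2), 1);
}
```

The point is that 1/2 evaluating to 0 is a choice about the meaning of `/` on integers, not a consequence of static typing itself.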

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #335
Joe Marshall <jr*@ccs.neu.edu> wrote:
Dirk Thierbach <dt********@gmx.de> writes:
It will be successfully type checked, because the static type system
does not allow you to express assumptions about value ranges of types.
I was working on the assumption that the type language *would* allow
one to express arbitrary types.
It doesn't. (There are languages with type languages that do
allow one to express arbitrary types, but the consequence is that typing
is no longer decidable at compile-time, so your compilation may
not terminate. For many people, this is not acceptable.)
Certainly one can create a sufficiently weak static type system that
terminates under all conditions and produces correct answers within
the system.
Yes. The trick is that there is a fine line between "too weak" and
"not decidable". As I repeatedly said, it helps to think of the
static type system as a tool that automatically writes tests in
a limited area, but these tests are more powerful than unit tests
because they work on all possible execution paths. But this tool
clearly won't write all the tests you need, so for the rest, you
insert checks or write the unit tests by hand.
But I surely wouldn't be impressed by a type checker that allowed this
to pass:
[more stuff with value ranges]

It's not a question of whether it will pass or fail. You cannot express
this in the type language.

Maybe the misunderstanding is that you and others think that a static
type checker must now check everything at compile time that is checked
at runtime with dynamic typing. It doesn't, and in those cases where
it doesn't, you of course write the same tests in a statically typed
language as you do in a dynamically typed language.

But that doesn't mean static typechecking is not useful; it *will*
help you in a lot of cases.
I think most people here were assuming that passing an integer greater
than 255 to a routine expecting a single 8-bit byte is a type error,
and something that could cause a crash.


But that's not how it works. What you can use the static type system
for is to make two different types, say Byte and Integer, and then
write a conversion routine from Integer to Byte that does a range check,
and the type checker will make sure that this conversion routine
is called everywhere where it is necessary. So you can use the type
checker to remind you when you forget that. (That's quite helpful,
because it is easy to forget adding a manual type check.)

(And, BTW, that's quite different from C where the static typechecker
allows you to assign values of different ranges without ever asking
for a range check.)
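The same discipline can be sketched even in a dynamically typed language; here is a hypothetical Python version (Byte and byte_of_integer are illustrative names, not anything from the posts above), where the range check lives in a single conversion routine. The difference Dirk describes is that only a static checker can force every caller through it; here we merely get a runtime error if someone forgets.

```python
class Byte:
    """Wrapper type whose constructor is the one place the range check lives."""
    def __init__(self, value):
        if not 0 <= value <= 255:
            raise ValueError(f"{value} does not fit in a byte")
        self.value = value

def byte_of_integer(n):
    # The single Integer -> Byte conversion routine with the range check.
    return Byte(n)

b = byte_of_integer(200)        # fine
try:
    byte_of_integer(20 * 30)    # 600 is out of range
except ValueError:
    pass
```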

- Dirk

Jul 18 '05 #336
Pascal Costanza <co******@web.de> writes:
Andreas Rossberg wrote:
Pascal Costanza wrote:

Can you show me an example of a program that doesn't make sense
anymore when you strip off the static type information?
Here is a very trivial example, in SML:

20 * 30

Multiplication, as well as literals, are overloaded. Depending on
whether you type this expression as Int8.int (8-bit integers) or
IntInf.int (infinite precision integer) the result is either 600 or
an overflow exception.

So the program does not make sense without type information, because
it does not have an unambiguous semantics (i.e., no semantics at all).

I'm ready to admit that it may be a dubious example of a typing
feature. But it is simple, and clearly sufficient to disprove your
repeated claim that static types don't add expressiveness to a
language. If you did not have them for the example above, you needed
some other feature to express the disambiguation.

Sorry, do you really want to say that I can't make my program throw an
exception when some variables are not inside a specified range?


No. Where did you get that from?

His point was that without the type information you don't know whether
the above "program" should be transliterated into this:
(assert (typep (* 20 30) '(integer 0 255)))


or simply this:

(* 20 30)

Matthias
Jul 18 '05 #337
Matthew Danish wrote:
On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:
Pascal Costanza wrote:
Can you show me an example of a program that doesn't make sense anymore
when you strip off the static type information?


Here is a very trivial example, in SML:

20 * 30

Multiplication, as well as literals, are overloaded. Depending on
whether you type this expression as Int8.int (8-bit integers) or
IntInf.int (infinite precision integer) the result is either 600 or an
overflow exception.


May I point out that the correct answer is 600, not overflow?


No, the correct answer isn't 600 in all cases.
If it's infinite-precision arithmetic, the correct answer is indeed 600.
If it's 8-bit arithmetic with overflow, there is no correct answer.
If it's 8-bit arithmetic with wrap-around, the correct answer is 88.
If it's 8-bit arithmetic with saturation, the correct answer is 255.
(The result happens to be independent of whether the arithmetic is
signed or unsigned.)

Using infinite-precision internally for all calculations isn't always
practicable, and in some algorithms, this will give you even wrong
results (e.g. when multiplying or dividing using negative numbers, or
with saturating arithmetic).

Of course, this doesn't say much about the distinction between static
and dynamic typing, actually the issues and unlucky fringe cases seem
very similar to me. (In particular, overloading and type inference don't
tell us whether the multiplication should be done in 8-bit, 16-bit,
machine-precision, infinite-precision, wrap-around, or saturated
arithmetic, and run-time types don't give us any useful answer either.)
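Joachim's four answers are easy to check mechanically. A small Python sketch, where mul8_wrap and mul8_saturate are hypothetical helpers modeling the two 8-bit arithmetics by hand:

```python
def mul8_wrap(a, b):
    # 8-bit wrap-around arithmetic: keep only the low 8 bits.
    return (a * b) % 256

def mul8_saturate(a, b):
    # 8-bit saturating arithmetic: clamp to the representable maximum.
    return min(a * b, 255)

assert 20 * 30 == 600             # infinite-precision answer
assert mul8_wrap(20, 30) == 88    # 600 mod 256
assert mul8_saturate(20, 30) == 255
```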

Regards,
Jo

Jul 18 '05 #338
Matthew Danish wrote:
On Mon, Oct 27, 2003 at 05:57:00PM +0100, Joachim Durchholz wrote:
(you can always put a Lisp-style type system on top of a
statically-typed language, with little effort).


This is not true, as I've pointed out on several occasions. Such
systems do not behave like a Lisp-style type system when dealing with
redefinition.


Add a State monad and they will do even that.

(I thought we were talking about differences between compile-time and
run-time typing, not about specifics of Lisp.)

Regards,
Jo

Jul 18 '05 #339
Joe Marshall <jr*@ccs.neu.edu> writes:
Nitpick: You are defining as a program that which passes a static
type checker. What would you like to call those constructs that make
sense to a human and would run correctly despite failing a static
type check?


In a language such as Haskell, there are no constructs that 'would run
correctly' despite being ill-typed. If you define

f :: Int -> Int

then there's no way that f 'would run' if you somehow passed it a
String instead. I should point out, however, that typically a
function will be defined over a whole class of values, eg

f :: Eq a => a -> a

so that the function f works with any type that has equality defined.

In Lisp too, there are no programs that 'would run correctly' after
failing type-checking, because a type checker for Lisp would have to
be more liberal (and so potentially less helpful) than one for a
language like Haskell or ML.

The job of a type checker, just like a syntax checker, is to eliminate
programs which do not work. That's all there is to it.

By narrowing the set of programs which work - that is, introducing
some redundancy into the programming language by increasing the number
of programs which are 'wrong' - you can often save the human
programmer some effort by catching mistakes.

--
Ed Avis <ed@membled.com>
Jul 18 '05 #340
Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:

Do you have a reference for extensible record types. Google comes up,
among other things, with Modula-3, and I am pretty sure that's not what
you mean.

The following are at least a bit closer to what is meant:

[1] Mark P Jones and Simon Peyton Jones. Lightweight Extensible
Records for Haskell. In Haskell Workshop, Paris, September 1999.
<http://citeseer.nj.nec.com/jones99lightweight.html>

[2] B. R. Gaster and M. P. Jones. A polymorphic type
system for extensible records and variants. Technical
report NOTTCS-TR-96-3, Department of Computer Science,
University of Nottingham, UK, 1996. Available from URL
<http://www.cs.nott.ac.uk/Department/Staff/~mpj/polyrec.html>.
<http://citeseer.nj.nec.com/gaster96polymorphic.html>.


Thanks.
Pascal

Jul 18 '05 #341
Marcin 'Qrczak' Kowalczyk wrote:
On Mon, 27 Oct 2003 13:40:24 -0500, Matthew Danish wrote:

Something that annoys me about many statically-typed languages is the
insistence that arithmetic operations should return the same type as the
operands. 2 / 4 is 1/2, not 0. Arithmetically, 1 * 1.0 is
well-defined, so why can I not write this in an SML program?

Confusing integer division with rational division is not a consequence
of static typing, except that with static typing it's not as dangerous as
with dynamic typing (because a function declared as taking floating point
arguments and performing / on them will do the same even if you pass
integers to it, which in most languages will be automatically converted).


Sorry, I don't get this. Why should it be more dangerous with dynamic
typing? Common Lisp definitely gets this right, and most probably some
other dynamically typed languages.
Mixed-type arithmetic is a different story. I'm talking only about 1/2
being equal to 0 in some languages - this doesn't coincide with static
typing.


Yes, dynamic vs static typing seems to be irrelevant here. (Although I
wonder why you should need to distinguish between / and div...)
Pascal

Jul 18 '05 #342
Ed Avis <ed@membled.com> writes:
Joe Marshall <jr*@ccs.neu.edu> writes:
Nitpick: You are defining as a program that which passes a static
type checker. What would you like to call those constructs that make
sense to a human and would run correctly despite failing a static
type check?


In a language such as Haskell, there are no constructs that 'would run
correctly' despite being ill-typed. If you define

f :: Int -> Int

then there's no way that f 'would run' if you somehow passed it a
String instead. I should point out, however, that typically a
function will be defined over a whole class of values, eg

f :: Eq a => a -> a

so that the function f works with any type that has equality defined.


What would you like to call those `syntactic constructs' that do not
currently type check in Haskell, yet *may* belong to a class of
syntactic constructs that *could* conceivably be type checked in a
successor to Haskell that has a better type inferencing engine?

Did I close all the loopholes?

Look, early versions of Haskell did not have as powerful a type
inferencing mechanism as the recent versions. What do you call those
syntactic constructs that weren't programs before but are now? What
do you call the syntactic constructs that aren't programs now, but
might be tomorrow? *THOSE ARE THE THINGS WE ARE TALKING ABOUT*

Jul 18 '05 #343
Matthias Blume wrote:
Pascal Costanza <co******@web.de> writes:

Andreas Rossberg wrote:

Pascal Costanza wrote:
Can you show me an example of a program that doesn't make sense
anymore when you strip off the static type information?

Here is a very trivial example, in SML:

20 * 30

Multiplication, as well as literals, are overloaded. Depending on
whether you type this expression as Int8.int (8-bit integers) or
IntInf.int (infinite precision integer) the result is either 600 or
an overflow exception.

So the program does not make sense without type information, because
it does not have an unambiguous semantics (i.e., no semantics at all).

I'm ready to admit that it may be a dubious example of a typing
feature. But it is simple, and clearly sufficient to disprove your
repeated claim that static types don't add expressiveness to a
language. If you did not have them for the example above, you needed
some other feature to express the disambiguation.

Sorry, do you really want to say that I can't make my program throw an
exception when some variables are not inside a specified range?

No. Where did you get that from?

His point was that without the type information you don't know whether
the above "program" should be transliterated into this:

(assert (typep (* 20 30) '(integer 0 255)))

or simply this:

(* 20 30)


Well, what's the obvious meaning?

Meta comment: I think we are already side-stepping too much here. I
don't think this is a useful example to illustrate serious advantages of
either static or dynamic typing. In all languages, you could simply
define a default meaning for the version that doesn't have explicit type
annotations, and then force the programmer to use explicit annotations
for the other ones. (Andreas already said it is a dubious example.)

Can you give a better example of a program that would be rendered
meaningless without type annotations?

Pascal

Jul 18 '05 #344
Pascal Costanza <co******@web.de> writes:
Can you give a better example of a program that would be rendered
meaningless without type annotations?


fun f A = "A"
| f B = "B"

This could be the function that always returns "A" for arbitrary
arguments, it could be the function that can only be legally applied
to the value (constructed by) A where it returns "A", it could be the
function that can be applied precisely to values A and B, it could
accept more than those two and raise an exception if the argument is
neither A nor B, it could be the function that can be applied to A
where it returns "A" and also many other values where it always
returns "B", ...

Which of these versions you get depends in part on the typing
environment that is in effect when the above code is encountered by
the compiler.

Matthias

PS: Just in case you wonder, here are examples of type definitions
which would trigger the results outlined above. I go in the same
order:

1. (* no type definition in effect *)
2. datatype t = A
3. datatype t = A | B
4. datatype t = A | B | C
5. datatype t = A | C | D of int | E of real | F of t -> t

Notice that the compiler will probably warn about a redundant match in
cases 1. and 2. and about a non-exhaustive match in case 4.
Jul 18 '05 #345
Pascal Costanza <co******@web.de> writes:
Fergus Henderson wrote:
Dynamic typing works better with XP than static typing because with
dynamic typing you can write unit tests without having the need to
immediately write appropriate target code.

That one seems to have been pretty thoroughly debunked by other
responses
in this thread. A static type system won't stop you writing unit tests.
And if you want to actually run the unit tests, then you are going to
need appropriate target code, regardless of whether the system is
statically or dynamically typed.


Not if I only want to check whether the first ten tests work, and
don't care about the remaining ones.


Perhaps I'm just a low tech kind of guy but if I just want to run the
first ten then I comment out the rest. Even without a fancy IDE, that
only takes a few key presses.
Jul 18 '05 #346
On Mon, Oct 27, 2003 at 10:08:15PM +0100, Joachim Durchholz wrote:
Matthew Danish wrote:
On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:
Pascal Costanza wrote:

Can you show me an example of a program that doesn't make sense anymore
when you strip off the static type information?

Here is a very trivial example, in SML:

20 * 30

Multiplication, as well as literals, are overloaded. Depending on
whether you type this expression as Int8.int (8-bit integers) or
IntInf.int (infinite precision integer) the result is either 600 or an
overflow exception.
May I point out that the correct answer is 600, not overflow?


No, the correct answer isn't 600 in all cases.
If it's infinite-precision arithmetic, the correct answer is indeed 600.
If it's 8-bit arithmetic with overflow, there is no correct answer.
If it's 8-bit arithmetic with wrap-around, the correct answer is 88.
If it's 8-bit arithmetic with saturation, the correct answer is 255.
(The result happens to be independent of whether the arithmetic is
signed or unsigned.)


What is this stuff? I am talking about integers here! You know, the
infinite set with the same cardinality as natural numbers. Why can't
the implementation figure out how to represent them most efficiently?
Using infinite-precision internally for all calculations isn't always
practicable, and in some algorithms, this will give you even wrong
results (e.g. when multiplying or dividing using negative numbers, or
with saturating arithmetic).
If it gives you the wrong results, I'm not interested. How hard is it
to get the arithmetically correct result of 20 * 30? Surely 20 * -30 is
not that difficult either. It is a pet peeve I have with many
programming languages, especially the ones that are so big on proofs and
correctness.

Arithmetic as it exists in ML is a good example of the difference
between correctness and type-safety. It is also a good example of the
difference between correctness and proof.

Of course, you may claim, that the definition of div, or / or whatever
it is called, is "correct" in that it conforms to the specification.
All that says to me is that the specification is wrong. Garbage can be
proven, in some system. Doesn't mean I'm interested.
Of course, this doesn't say much about the distinction between static
and dynamic typing, actually the issues and unlucky fringe cases seem
very similar to me. (In particular, overloading and type inference don't
tell us whether the multiplication should be done in 8-bit, 16-bit,
machine-precision, infinite-precision, wrap-around, or saturated
arithmetic, and run-time types don't give us any useful answer either.)


Lisp gets exact rational arithmetic right, why don't ML or Haskell?
Clever compilers like CMUCL can even emit efficient code when it can
figure out the specific types. Surely a statically typed language would
find this even easier!
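For what it's worth, later Python ships exactly the arithmetic Matthew asks for in its standard library; a small sketch using the fractions module:

```python
from fractions import Fraction
from math import factorial

# Exact rational division: 2/4 really is 1/2, not 0.
assert Fraction(2, 4) == Fraction(1, 2)

# Mixed int/float arithmetic is well-defined.
assert 1 * 1.0 == 1.0

# Integers grow past machine word size transparently, so fact 15 just works.
assert factorial(15) == 1307674368000
```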

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #347
On Tue, 28 Oct 2003 02:46:54 +0100, Pascal Costanza wrote:
Sorry, I don't get this. Why should it be more dangerous with dynamic
typing? Common Lisp definitely gets this right, and most probably some
other dynamically typed languages.


Common Lisp, Scheme and Perl get it right: the result of / on integers
is a rational or floating point number, and integer division is spelled
differently.

Python and Ruby get it wrong: the result is an integer (truncated
division), very different from the result of / on the same numbers
represented in a different type.

If a statically typed language gets this "wrong", it doesn't hurt, except for
newbies who write 1/n. For example, consider this:

double foo(double *a, int len)
{
    /* ... some computation involving a[i]/a[j] which is supposed
       to produce a true real quotient ... */
}

You make some array of doubles, set a[i] = exp(i) (a double) and it works.
Then you set a[i] = 2*i (an integer) and it still works, because the
integer has been converted to double during assignment. An integer can be
used in place of a double with the same value.

Now in a dynamically typed language the analogous code would set some
array elements to integers without conversion. If division on them means
a different thing, an integer can no longer be used in place of a floating
point number with the same value. And division is the only operation which
breaks this.

Dynamically typed languages shouldn't use / for both real and truncating
division. For statically typed languages it's only a matter of taste (I
prefer to use different symbols), but for dynamically typed languages it's
important to prevent bugs.

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #348
Matthew Danish <md*****@andrew.cmu.edu> writes:
On Mon, Oct 27, 2003 at 10:08:15PM +0100, Joachim Durchholz wrote:
Matthew Danish wrote:
On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:
Pascal Costanza wrote:
Can you show me an example of a program that doesn't make sense anymore
when you strip off the static type information?
Here is a very trivial example, in SML:

20 * 30
No, the correct answer isn't 600 in all cases.
What is this stuff? I am talking about integers here!
But the SML program isn't. Or it may be, and maybe not. So it's
ambiguous without type information.
Why can't the implementation figure out how to represent them most
efficiently?
Because it needs a type annotation or inference to decide that the
numbers are indeed integers, and not a different set with different
arithmetic properties.
Lisp gets exact rational arithmetic right, why don't ML or Haskell?


Could you point to a case where they don't? I don't understand your
criticism at all. Is the ability to do modulo arithmetic "wrong"?

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
Jul 18 '05 #349
On Tue, Oct 28, 2003 at 11:20:45AM +0100, ke********@ii.uib.no wrote:
Matthew Danish <md*****@andrew.cmu.edu> writes:
What is this stuff? I am talking about integers here!


But the SML program isn't. Or it may be, and maybe not. So it's
ambiguous without type information.
Why can't the implementation figure out how to represent them most
efficiently?


Because it needs a type annotation or inference to decide that the
numbers are indeed integers, and not a different set with different
arithmetic properties.


1 is an integer. Simple type inference. In Lisp, it's also a fixnum,
it's also an (unsigned-byte 23), it's also an (integer 1 (2)), etc.
Lisp gets exact rational arithmetic right, why don't ML or Haskell?


Could you point to a case where they don't? I don't understand your
criticism at all. Is the ability to do modulo arithmetic "wrong"?


- fun fact 0 = 1 | fact n = n * fact (n - 1);
val fact = fn : int -> int
- fact 10;
val it = 3628800 : int
- fact 15;
val it = 1307674368000 : int (* ideally *)

- 1 * 1.0;
val it = 1.0 : float (* ideally *)

- 2 / 4;
val it = 1/2 : ratio (* ideally *)

- 2 truncate 4;
val it = 0 : int

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #350
