Bytes | Developer Community

Python from Wise Guy's Viewpoint

THE GOOD:

1. pickle

2. simplicity and uniformity

3. big library (bigger would be even better)

THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?

2. Statements vs Expressions business is very dumb. Try writing
a = if x:
y
else: z

3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

4. splintering of the language: you have the inefficient main language,
and you have a different dialect being developed that needs type
declarations. Why not allow type declarations in the main language
instead, as an option? (Lisp does it.)

5. Why do you need "def" ? In Haskell, you'd write
square x = x * x

6. Requiring "return" is also dumb (see #5)

7. Syntax and semantics of "lambda" should be identical to
function definitions (for simplicity and uniformity)

8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]
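Point 8 can be answered by experiment: in standard Python, functions, values, classes, and module bindings are all ordinary names that can be deleted at the prompt. A minimal sketch (illustrative names only):

```python
import sys

def square(x):
    return x * x

assert square(4) == 16   # defined and callable
del square               # the function name is unbound again
try:
    square
    still_defined = True
except NameError:        # or its subclass UnboundLocalError inside a function
    still_defined = False

import json              # "unimporting": drop the binding...
del json
sys.modules.pop('json', None)   # ...and the cached module object
```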


P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching
Jul 18 '05
pr***********@comcast.net wrote:
Dirk Thierbach <dt********@gmx.de> writes:
Now why does a type system reject a program? Because there's a type
mismatch in some branch of the program.

*or* because the type system was unable to prove that there *isn't* a
type mismatch in *all* branches.


I am not sure if I read this correctly, but it seems equivalent to what
I say.

\exists branch. mismatch-in (branch)

should be the same as

\not \forall branch. \not mismatch-in (branch)

Anyway, I don't understand your point.

(Hindley-Milner typechecking works by traversing the expression tree
in postfix order, matching types on binary application nodes. Typing
fails if and only if such a match fails (ignoring constraints or
similar extensions for the moment). If a match fails, there's a problem
in this branch.)

- Dirk

Jul 18 '05 #251
<pr***********@comcast.net> wrote in message news:wu**********@comcast.net...
"Marshall Spight" <ms*****@dnai.com> writes:
It would be really interesting to see a small but useful example
of a program that will not pass a statically typed language.
It seems to me that how easy it is to generate such programs
will be an interesting metric.


Would this count?

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))


Interesting, interesting. Thanks for taking me seriously!

I'm trying to map this program into Java, and it's possible
but there are enough different ways to go about it that
I'm having a hard time reasoning about the result.

For one thing, what would f be? Probably an instance of
a class that implements a specific interface. But then,
implementing a specific interface is like saying we
know what type f is. Is it a function that takes a
single argument of type Object? If we concede
all those points, then this is fairly easy to map
into Java. If we say that f can take any single argument
then we can do it with reflection if we are willing
to add 15 lines of code, which is certainly not pretty.
If it can take any number of arguments then it's
looking triply awful now.

It's starting to feel like this is merely a demonstration
of Java's weakness in generic programming, and
not something hooked into Goedel.

Anyone have any comments?
Marshall
Jul 18 '05 #252
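For readers following along, the quoted noisy-apply has a near word-for-word dynamic analogue in Python (a hypothetical sketch, not from the thread):

```python
def noisy_apply(f, arglist):
    # Announce the call, then apply f to the whole argument list;
    # f and arglist can be anything, as in the Lisp original.
    print(f"I am now about to apply {f!r} to {arglist!r}")
    return f(*arglist)

print(noisy_apply(max, [3, 1, 2]))  # announcement, then 3
```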

"Marshall Spight" <ms*****@dnai.com> writes:
"Kenny Tilton" <kt*****@nyc.rr.com> wrote in message news:Zn*******************@twister.nyc.rr.com...

Lights out for static typing.


That kind of statement reminds me a lot of the people
who were saying in 1985 that CISC computing was
dead.


Because Intel ultimately triumphed over the pundits advocating another
solution?

Because differences in instruction set architecture due to
implementation advances ultimately became irrelevant for
high-performance computers? (Meaning all non-embedded ones, that is.)

Because a big, somewhat worse standard (x86) beat a squabbling horde of
somewhat better contenders (RISCs)?

Because the marketplace moved on from "workstations", stranding the
high-cost, high-performance systems in favour of low-cost,
nearly-same-performance systems? Which then overtook the former
champions.

Something else?

Best,
Thomas
--
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin

Jul 18 '05 #253
"Thomas Lindgren" <***********@*****.***> wrote in message news:m3************@localhost.localdomain...

"Marshall Spight" <ms*****@dnai.com> writes:
Lights out for static typing.
That kind of statement reminds me a lot of the people
who were saying in 1985 that CISC computing was
dead.


Because [...] ?


Because at no time did CISC machines ever fall below 95%
market share.

Because a big, somewhat worse standard (x86) beat a squabbling horde of
somewhat better contenders (RISCs)?


Hey, x86 is a *lot* worse! :-)
Marshall
Jul 18 '05 #254
On Sat, Oct 25, 2003 at 02:47:28AM +0200, Andreas Rossberg wrote:
It is not, because Lisp hasn't been designed with types in mind. It is
pretty much folklore that retrofitting a type system onto an arbitrary
language will not work properly. For example, Lisp makes no distinction
between tuples and lists, which is crucial for type inference.
Tell that to the hackers who worked on the "Python" compiler found in
CMUCL, SBCL, and some others. It does extensive type inference for both
efficiency and correctness. But it doesn't get in the way (except in
certain rare cases), it just (noisily) informs you what it thinks.

Tell that to the people who wrote Chapter 4 of the Common Lisp standard.
http://www.lispworks.com/reference/H...c/Body/04_.htm
If you want to have extensible overloading then static types are the only
way I know for resolving it. Witness Haskell for example. It has a very
powerful overloading mechanism (for which the term 'overloading' actually is
an understatement). It could not possibly work without static typing, which
is obvious from the fact that Haskell does not even have an untyped
semantics.
Correction: it could not work without typing--dynamic typing does not
imply a lack of typing. I could be wrong, but it seems you would rule
out generic functions in the CLOS (and dynamic dispatch in general) with
the above statement.
Erasing type information from a program that uses type abstraction to
guarantee certain post conditions will invalidate those post conditions. So
you get a program with a different meaning. It expresses something
different, so the types it contained obviously had some expressive power.
This doesn't sound right: erasing type information should not invalidate
the post conditions; it should simply make it more difficult
(impossible?) to check the validity of the post conditions.

This program should still work, even if you fail to type-check it, if
said type-checking would have passed successfully.
Erasing type information from a program that uses overloading simply makes
it ambiguous, i.e. takes away any meaning at all. So the types definitely
expressed something relevant.


This statement is irrelevant because dynamic typing does not eliminate
type information.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #255
"Donn Cave" <do**@drizzle.com> writes:
ERROR "hello.hs":8 - Unresolved top-level overloading
*** Binding : main
*** Outstanding context : Show b

Not to get into the details here, but is someone who takes our
hint here and checks out Haskell really headed for a terrific
demonstration of how modern static typing is an easy and fun way
to increase programming productivity?


(This is Hugs, no?) One of the things I find much better now compared
to when I started with Haskell a couple of years ago are the error
messages of GHC -- and it will generally tell you exactly what to do
(add Show b to the context) in this case.

Then there's Helium, which has a simplified type system and primarily
tries to be helpful to beginners.

True, typing can give you difficult to comprehend error messages, but
this is obviously something developers take very seriously.

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
Jul 18 '05 #256
On Sat, Oct 25, 2003 at 12:37:38AM +0000, John Atwood wrote:
Pascal Costanza <co******@web.de> wrote:
- when a test case gives me an exception, I can inspect the runtime
environment and analyze how far the test case got, what it already
successfully did, what is missing, and maybe even why it is missing.
With a statically typed language, I wouldn't be able to get that far.

Furthermore, when I am still in the exceptional situation, I can change
variable settings, define a function on the fly, return some value from
a yet undefined method by hand to see if it can make the rest of the
code work, and so on.


That's because you're in an interpreted environment, not because you're
using a dynamically typed language. Interpreters for statically typed
languages allow the same.


Wrong on all counts.

* Most Common Lisp environments compile to native code, even when
working interactively.
SBCL, for example, has no interpreter whatsoever. The interpreter is
simulated by calling the compiler and evaluating the resulting
function immediately.
* There exists statically typed language implementations which do the
same (SML/NJ)
* The behavior of redefinition in a statically typed environment
is far different from the behavior in a dynamically typed environment.
For one thing, generativity of names kicks in, which makes it
basically impossible to redefine types and functions without
recompiling all uses (and thus restarting your program), in a static
environment.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #257
In comp.lang.lisp Marshall Spight <ms*****@dnai.com> wrote:
I'm having a hard time following you.


I have a guess about this. You seem -- by your references to Python --
to be implicitly assuming that Pascal, Joe &co are setting up Python
as the lightbearer of dynamism.

I find this rather unlikely, as most -- if not all -- of the
pro-dynamic side of this argument hail from Lisp, not Python.

Just a datapoint, since there seemed to be an unstated
misunderstanding there.

Cheers,

-- Nikodemus
Jul 18 '05 #258
Marshall Spight wrote:
"Pascal Costanza" <co******@web.de> wrote:
See the example of downcasts in Java.


Downcasts in Java are not a source of problems.


Huh?
The need to downcast whenever I take something out of a container
would alone suffice to call it a "serious problem".
Unless you meant: it's not the downcasts that are the problem, it's the
many language mechanisms that require downcasts that are.

Regards,
Jo

Jul 18 '05 #259
Dirk Thierbach <dt********@gmx.de> writes:
pr***********@comcast.net wrote:
Dirk Thierbach <dt********@gmx.de> writes:

Now why does a type system reject a program? Because there's a type
mismatch in some branch of the program.

*or* because the type system was unable to prove that there *isn't* a
type mismatch in *all* branches.


I am not sure if I read this correctly, but it seems equivalent to what
I say.

\exists branch. mismatch-in (branch)

should be the same as

\not \forall branch. \not mismatch-in (branch)

Anyway, I don't understand your point.


Only if you assume binary logic. If there are three values that
can arise --- provable-mismatch, provable-non-mismatch, and undecided
--- then you cannot assume that ~provable-mismatch = provable-non-mismatch.

My point is that type systems can reject valid programs.
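To make the "undecided" case concrete, here is a hypothetical Python sketch of a program that never misuses a value at runtime, yet would be rejected by a checker that demands a single static type for x:

```python
def describe(flag):
    # x is an int in one branch and a str in the other, but each
    # branch only uses x consistently with its own type.
    x = 10 if flag else "ten"
    if flag:
        return x + 1     # integer arithmetic
    return x.upper()     # string method

print(describe(True))    # 11
print(describe(False))   # TEN
```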
Jul 18 '05 #260
"Marshall Spight" <ms*****@dnai.com> once said:
<pr***********@comcast.net> wrote in message news:wu**********@comcast.net...
"Marshall Spight" <ms*****@dnai.com> writes:
> It would be really interesting to see a small but useful example
> of a program that will not pass a statically typed language.
> It seems to me that how easy it is to generate such programs
> will be an interesting metric.
Would this count?

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))


Interesting, interesting. Thanks for taking me seriously!

I'm trying to map this program into Java, and it's possible

....Anyone have any comments?


Well, in C++ you could say

template <class F, class A>
typename result_of<F(A)>::type
noisy_apply( const F& f, const A& a ) {
    cout << "I am now about to apply " << f << " to " << a << endl;
    return f(a);
}

This assumes that both "f" and "a" work with the out-streaming operator
(<<). This is just an ad-hoc version of what would be "class Show" in
Haskell. In C++ practice, most functions aren't "showable", but many
common data types are. So the most useful version of the function would
probably be

// This version works for all "f" and all Showable "a"
template <class F, class A>
typename result_of<F(A)>::type
noisy_apply( const F& f, const A& a ) {
    cout << "I am now about to apply a function with type "
         << typeid(f).name() << " to the value " << a << endl;
    return f(a);
}

Again, provided that we have some notion of "class Show" in our
statically-typed language, then examples like these are easy to type.
(What dynamically-typed languages typically buy you is that every object
in the system provides some basic methods like toString(), which
eliminates the Show-able constraint that the statically-typed version
needs.)

--
Brian M. McNamara lo****@acm.org : I am a parsing fool!
** Reduce - Reuse - Recycle ** : (Where's my medication? ;) )
Jul 18 '05 #261
"Joachim Durchholz" <jo***************@web.de> wrote in message news:bn**********@news.oberberg.net...
Marshall Spight wrote:
Downcasts in Java are not a source of problems.


Huh?
The need to downcast whenever I take something out of a container
would alone suffice to call it a "serious problem".


Why? Here, you've assumed your conclusion. You've not given
me any reason. I will not accept any a priori criticisms of
downcasting except this one: downcasting requires the programmer
to type more characters.

Downcasts are not a source of problems in Java because
empirically, no problems result from them. (It's hard to
prove the absence of something, eh?) Years pass; hundreds
of thousands of lines of code are written, with no errors
arising from downcasting the result of ArrayList.get().
That has been my experience.

Does the existence of downcasts point out a place where
Java is lame? Absolutely. Does extra effort result from
this lameness? Certainly. Does this extra effort cause
bugs? Nope.

(Anyway, the situation is much better with Java generics,
available now in prerelease form; mainstream in the next
major version.)
Marshall
Jul 18 '05 #262
pr***********@comcast.net wrote:
"Marshall Spight" <ms*****@dnai.com> writes:

It would be really interesting to see a small but useful example
of a program that will not pass a statically typed language.
It seems to me that how easy it is to generate such programs
will be an interesting metric.


Would this count?

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))


It wouldn't typecheck in Haskell because you don't restrict the elements
of the arglist to be of the "Show" type class, which is the group of
types that have a printable representation.

Other than that, the declaration of this function would be (in a
Pascal-inspired notation)

noisy_apply (f: function (x: a): b, x: a): b

Note that a and b are type parameters here: if noisy_apply is given a
function with input type a and output type b, it will /demand/ that its
second parameter is of type a, and its result type will be b.

I.e. in C++, you'd write something like

template <a, b> {
b noisy_apply ( (b) (f (a)), a x );
}

(syntax is most likely dead wrong, it's been a while since I was
actively working with C++).

For completeness, here's the Haskell type:

(a -> b) -> a -> b

(I hope I got this one right.)
The first "a -> b" translates as "a function that takes any type a and
returns any type b".
The x -> y -> z notation translates as "a function taking input values
of types x and y and returning a value of type z".
If the same type letter occurs more than once, it must be the same type
in calls.
(Yes, the function is polymorphic, though in a different way than OO
polymorphism: most OO languages don't allow expressing the restriction
that the two "a" parameters must be of the same type but the caller is
free to choose any type.)
To sum it all up: the above specifications are all intended to say the
same, namely "noisy_apply is a function that takes an arbitrary
function, and another parameter that must be of the same type as the
input parameter for the supplied function, and that will return a value
of the same type as the supplied function will return".
Modern static type systems can express such types :-)
You might ask "where's the argument list"?
The answer is that I'm assuming a "currying" language. All functions in
such a language have a single argument; functions with multiple
arguments are written as a function that takes an argument and returns
another function which expects the next argument.

I.e.
add (3, 4)
is first evaluated as
[***] (4)
where [***] is an internally-created function that adds 3 to its single
argument.
(That's just the theory how the operation is defined. Currying languages
will usually be executed in the obvious manner, unless the code takes
specific advantage of currying.)
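The currying description above can be sketched in Python, where the internally-created "[***]" function is an ordinary closure (a hypothetical sketch, not from the post):

```python
def add(x):
    # Applying add to 3 returns a closure: the "[***]" function
    # that adds 3 to its single argument.
    return lambda y: x + y

add_three = add(3)
print(add_three(4))  # 7
print(add(3)(4))     # 7: the curried call written out in full
```

Languages like Haskell do this implicitly for every multi-argument function; in Python it has to be spelled out.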
HTH
Jo

Jul 18 '05 #263
Marshall Spight wrote:
<pr***********@comcast.net> wrote in message news:wu**********@comcast.net...
"Marshall Spight" <ms*****@dnai.com> writes:

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))


It's starting to feel like this is merely a demonstration
of Java's weakness in generic programming, and
not something hooked into Goedel.

Anyone have any comments?


You're dead right: Java has insufficient support for typing this.

C++ would allow it, but the result isn't pretty either... which says a
lot about C++'s qualities for higher-order programming.

Regards,
Jo

Jul 18 '05 #264
<pr***********@comcast.net> wrote in message news:pt**********@comcast.net...
Dirk Thierbach <dt********@gmx.de> writes:

My point is that type systems can reject valid programs.


Agreed. But: does it matter? One thing that would help
in figuring out if it matters or not would be seeing a
small, useful program that cannot be proven typesafe.

If these programs are "all around us," and writing equivalent
typesafe programs is somewhat harder, then it matters.
If these programs are hard to come across, and writing
equivalent typesafe programs is easy, then this doesn't
matter.
Marshall
Jul 18 '05 #265
"Marshall Spight" <ms*****@dnai.com> writes:
<pr***********@comcast.net> wrote in message news:wu**********@comcast.net...
"Marshall Spight" <ms*****@dnai.com> writes:
It would be really interesting to see a small but useful example
of a program that will not pass a statically typed language.
It seems to me that how easy it is to generate such programs
will be an interesting metric.
Would this count?

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))


Interesting, interesting. Thanks for taking me seriously!

I'm trying to map this program into Java, and it's possible


Isn't Java a particularly bad thing to try here?
but there are enough different ways to go about it that
I'm having a hard time reasoning about the result.

For one thing, what would f be? Probably an instance of


As written, f can be any object of any type. arglist is any set of
arguments with arity 0 to <hardware limit>. It will do something
useful in all cases, and something more useful if f is a function.

/Jon
Jul 18 '05 #266
"Andreas Rossberg" <ro******@ps.uni-sb.de> writes:
It is not, because Lisp hasn't been designed with types in mind. It is
pretty much folklore that retrofitting a type system onto an arbitrary
language will not work properly. For example, Lisp makes no distinction
between tuples and lists, which is crucial for type inference.


Your credibility just took a nose dive.

/Jon
Jul 18 '05 #267
"Marshall Spight" <ms*****@dnai.com> writes:
<pr***********@comcast.net> wrote in message news:pt**********@comcast.net...
Dirk Thierbach <dt********@gmx.de> writes:

My point is that type systems can reject valid programs.


Agreed. But: does it matter? One thing that would help
in figuring out if it matters or not would be seeing a
small, useful program that cannot be proven typesafe.


(defun lookup (item table if-found if-missing)
  (cond ((null table) (funcall if-missing))
        ((eq item (entry-key (first-entry table)))
         (funcall if-found (entry-value (first-entry table))))
        (t (lookup item (remaining-entries table)
                   if-found
                   if-missing))))

(defun lookup-default (item local-table default-table if-found if-not-found)
  (lookup item local-table
          if-found
          (lambda ()
            (lookup item default-table if-found if-not-found))))

(defun transform-list (list local-table default-table if-ok if-fail)
  (if (null list)
      (funcall if-ok '())
      (lookup-default (car list) local-table default-table
                      (lambda (result)
                        (transform-list (cdr list) local-table default-table
                                        (lambda (remainder)
                                          (funcall if-ok (cons result remainder)))
                                        if-fail))
                      (lambda () (funcall if-fail (car list))))))

I know that simple static type checkers will be lost with this.
I do not know if the smarter ones will.
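For comparison, the first of the quoted functions transliterates into Python almost mechanically; a sketch assuming the table is a list of (key, value) pairs standing in for entry-key/entry-value:

```python
def lookup(item, table, if_found, if_missing):
    # Continuation-passing search, mirroring the quoted Lisp
    # (eq approximated by ==).
    if not table:
        return if_missing()
    key, value = table[0]
    if key == item:
        return if_found(value)
    return lookup(item, table[1:], if_found, if_missing)

print(lookup('b', [('a', 1), ('b', 2)], lambda v: v, lambda: 'missing'))  # 2
```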
Jul 18 '05 #268
gt*****@prism.gatech.edu (Brian McNamara!) writes:
"Marshall Spight" <ms*****@dnai.com> once said:
<pr***********@comcast.net> wrote in message news:wu**********@comcast.net...
"Marshall Spight" <ms*****@dnai.com> writes:
> It would be really interesting to see a small but useful example
> of a program that will not pass a statically typed language.
> It seems to me that how easy it is to generate such programs
> will be an interesting metric.

Would this count?

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))


Interesting, interesting. Thanks for taking me seriously!

I'm trying to map this program into Java, and it's possible

...
Anyone have any comments?


Well, in C++ you could say

template <class F, class A>
typename result_of<F(A)>::type
noisy_apply( const F& f, const A& a ) {
    cout << "I am now about to apply " << f << " to " << a << endl;
    return f(a);
}


I don't mean to nitpick, but APPLY takes an arbitrary list of arguments.
How do you parameterize over that without enumerating the power set
of potential types?

What if F `returns' void?
Jul 18 '05 #269
Joachim Durchholz <jo***************@web.de> writes:
pr***********@comcast.net wrote:
"Marshall Spight" <ms*****@dnai.com> writes:
It would be really interesting to see a small but useful example
of a program that will not pass a statically typed language.
It seems to me that how easy it is to generate such programs
will be an interesting metric.


Would this count?
(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))


To sum it all up: the above specifications are all intended to say the
same, namely "noisy_apply is a function that takes an arbitrary
function, and another parameter that must be of the same type as the
input parameter for the supplied function, and that will return a
value of the same type as the supplied function will return".
Modern static type systems can express such types :-)


Are they happy with something like this?

(defun black-hole (x)
  #'black-hole)

(for non lispers, the funny #' is a namespace operator.
The black-hole function gobbles an argument and returns
the black-hole function.)
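In a dynamically typed language the black hole really is this small; a hypothetical Python sketch with the same behavior:

```python
def black_hole(_):
    # Gobble one argument of any type and return the function itself.
    return black_hole

# Applications chain indefinitely, with arguments of arbitrary types.
assert black_hole("12")(5)(True) is black_hole
```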

Jul 18 '05 #270
pr***********@comcast.net once said:
gt*****@prism.gatech.edu (Brian McNamara!) writes:
Well, in C++ you could say

template <class F, class A>
typename result_of<F(A)>::type
noisy_apply( const F& f, const A& a ) {
    cout << "I am now about to apply " << f << " to " << a << endl;
    return f(a);
}

I don't mean to nitpick, but APPLY takes an arbitrary list of arguments.
How do you parameterize over that without enumerating the power set
of potential types?


This isn't really possible for normal C++ functions.

You can always program in a style where every function takes exactly
one argument, which is an N-ary tuple, and use boost::mpl and
boost::tuple to then generalize things. (Indeed, using such libraries,
you can simulate "apply" rather convincingly. But somewhere under the
hood, someone has to have written N different overloads for 0-arg,
1-arg, ... N-arg, up to some fixed ("large enough") N.)

So C++ can only mimic "noisy_apply" so well. I expect that Haskell can
mimic it better in this respect.
What if F `returns' void?


It still works. (You are allowed to say "return f(a)" inside a template
function returning void, provided f(a) "returns" void as well.)

--
Brian M. McNamara lo****@acm.org : I am a parsing fool!
** Reduce - Reuse - Recycle ** : (Where's my medication? ;) )
Jul 18 '05 #271
Pascal Costanza <co******@web.de> writes:
P.S., to Joe Marshall: I hope you don't mind that I hire and fire you
within a split second. ;-)


I'm getting used to it.
Jul 18 '05 #272
Darius <dd*****@hotpop.com> writes:
On Sat, 25 Oct 2003 23:27:46 GMT
pr***********@comcast.net wrote:
I think I have a stumper. I'll be impressed if a type checker can
validate this. The same-fringe function takes a pair of arbitrary
trees and lazily determines if they have the same leaf elements in the
same order. Computation stops as soon as a difference is found so
that if one of the trees is infinite, it won't cause divergence.
-- this is all sooo pointless in Haskell


Well, yeah.
data LazyList a
  = Nil
  | Cons (forall r. ((a, () -> LazyList a) -> r) -> r)
I specifically didn't define the LazyList type. I don't want to write
type annotations. I want the type inference engine to deduce them.

I should have been more specific: I *know* that all these things can
be done in a statically typed language. What I don't know is whether
these can be done as *easily*. Will this type check without giving
the type checker hints about the type of
(lambda (x y) (lambda (z) (z x y)))? Will it type check if you
don't tell it that the `cons' in lazySequence is the same `cons'
in lazySame and lazyFringe?
I already have a bunch of Church encoded data structures as well if
that's next.


I'm not just trying to come up with convoluted examples for the sake
of quizzing you. I'm trying to find the point where the type checker
gets confused. I'm not writing it in Haskell because I don't know
Haskell, so I'm writing in Lisp. It will always be possible to
*recast* the problem in your favorite language, but can you
transliterate it and let the type checker infer that the program is
correct?
Jul 18 '05 #273
gt*****@prism.gatech.edu (Brian McNamara!) writes:
pr***********@comcast.net once said:
I think I have a stumper. I'll be impressed if a type checker can
validate this. The same-fringe function takes a pair of arbitrary
trees and lazily determines if they have the same leaf elements in the
same order. Computation stops as soon as a difference is found so
that if one of the trees is infinite, it won't cause divergence.


Laziness is pretty orthogonal to static typing.

This one is so easy, we do it in C++ just for spite. :)
I'm using the FC++ library as a helper.

[snip]

Yuck! Type declarations *everywhere*. Where's this famous inference?

What's this LispCons type? I don't have that in my source. My code
is built of lambda abstractions. I don't have a lispvalue type either.
And it isn't limited to integers in the list. Where did my functions go?

And how can you compare a LispCons to an integer?

This works, but it doesn't faithfully represent what I wrote.
In addition, with all the type decorations, it only serves to reinforce
the idea the static type checking means a lot of extra keystrokes.
Jul 18 '05 #274
On Sun, 26 Oct 2003 04:11:44 GMT
pr***********@comcast.net wrote:
Darius <dd*****@hotpop.com> writes:
data LazyList a
  = Nil
  | Cons (forall r. ((a, () -> LazyList a) -> r) -> r)


I specifically didn't define the LazyList type. I don't want to write
type annotations. I want the type inference engine to deduce them.


This is a data declaration, -not- a type declaration. For example,
Bool is data Bool = True | False and lists would be data [a] = [] | a:[a].
If I were to use Java, would I not be allowed to use 'class'?
class introduces a type too.

Furthermore, 'data' doesn't just give you a type. It also gives you
constructors, destructors (kind of), and recognition.

data Either a b = Left a | Right b
The idiomatic Lisp functions that would correspond to what this
declaration provides would be something like: left, right, leftp,
rightp, eitherp, get-left, get-right.

Also, that type distinction lets me add LazyList to typeclasses like
Show, Read, Eq, Ord, Monad, Num, etc. Perhaps I don't like my lazy
lists displaying as #<closure> or my trees as ((3 2) (4)) when what I'd
want is ((3 . (2 . ())) . ((4 . ()) . ())). Perhaps I don't want
(equal <the lazy list 1 2 3> <the lazy list 1 2 3>) to return nil because
they aren't the same procedure.

Also, ignoring the testit function, my code with those four laborious
"type annotations" is 16 lines compared to 29 lines of lisp. (Including
it, it's still shorter.)

If all you want to do is show an example in Lisp that won't type check
in most statically typed languages without any alterations, you could
simply write (list t 3) or (if t 'a 10).

Finally, I don't want to write some insane continuation passing
sequence. I want the language to not overevaluate. I guess lazy Lisp or
dynamic Haskell beats us both.
Jul 18 '05 #275
Brian McNamara! <gt*****@prism.gatech.edu> wrote:
pr***********@comcast.net once said:
(defun black-hole (x)
  #'black-hole)

(for non lispers, the funny #' is a namespace operator.
The black-hole function gobbles an argument and returns
the black-hole function.)

Finally, an example that I don't think you can type in Haskell.
It's a bit tricky. As Remi has shown, you need recursive types.
Recursive types in Haskell always need an intervening data type
constructor. That's a conscious design decision, because recursive
types are very often a result of a real typing error. (IIRC, that's
why OCaml added an option to enable recursive typing after having it
enabled as default for some time, but Remi should know that better
than I do.)

We also need an existential type in this constructor, because the
argument can be of different type for each application of the black hole.
data BlackHole = BH (forall a. a -> BlackHole)
Now we can write the black hole function itself:
black_hole :: BlackHole
black_hole = BH (\_ -> black_hole)
That's it. However, we cannot apply it directly. We have to "unfold"
it explicitly by taking it out of the data constructor. We define an
auxiliary infix function to take care of that.
infixl 0 $$
($$) :: BlackHole -> a -> BlackHole
(BH f) $$ x = f x
Now we can write a function like
f = black_hole $$ "12" $$ 5 $$ True


which will nicely typecheck.

That's the first time one actually has to add non-obvious stuff to
"please" the type checker. OTOH, the black hole function is pretty
bogus, so one can argue that this is a realistic price to pay to say
that you really really want this strange function. I would be very
surprised if this function is of any use in a real-world program.

And there is nothing else you can do with arguments of arbitary type
but silently discard them, so I guess there are not many other examples
like this.
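For contrast, the untyped version of the black hole needs no machinery
at all; a Python sketch mirroring the Lisp original:

```python
def black_hole(_):
    # Gobbles any argument and returns itself, so it can be applied
    # indefinitely to arguments of arbitrary type -- no recursive
    # type, no data constructor, no explicit unfolding needed.
    return black_hole

result = black_hole("12")(5)(True)
print(result is black_hole)  # prints True
```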

- Dirk
Jul 18 '05 #276
pr***********@comcast.net wrote:
gt*****@prism.gatech.edu (Brian McNamara!) writes:
This one is so easy, we do it in C++ just for spite. :)
I'm using the FC++ library as a helper.

Yuck! Type declarations *everywhere*. Where's this famous inference?
You *did* notice that it is C++, which hasn't type inference (and
a lousy typesystem? :-) So what Brian is saying that this is so easy
he can even to it with hands tied on his back and standing on his
head.
In addition, with all the type decorations, it only serves to reinforce
the idea the static type checking means a lot of extra keystrokes.


Yes. C++ and Java are to blame for that, and it's a completely
justified observation. Static typechecking in C++ and Java just
sucks, and gets in the way all the time. If I had the choice between one
of them and a dynamically typed language, I would of course use the
dynamically typed one.

But there are other statically typechecked languages where you don't
have this kind of trouble. I don't know if this thread turns out to do
something useful for anyone, but if it does, I would hope it gets at
least this idea across.

- Dirk

Jul 18 '05 #277
Brian McNamara! <gt*****@prism.gatech.edu> wrote:
Pascal Costanza <co******@web.de> once said:
Brian McNamara! wrote:
But I suppose you really want this example
(defun f (x)
(unless (< x 200)
(cerror "Type another number"
"You have typed a wrong number")
(f (read)))
(* x 2))
statically typed, huh?
After quite some time, I think I have finally figured out what Pascal
meant with
The type system might test too many cases.
I have the impression that he is confusing dynamic type errors and
runtime errors. In Lisp and Smalltalk they are more or less the same,
and since dynamic type errors map to static type errors, he may think
by analogy that other runtime errors must necessarily also map to
compile errors somehow involved with static typing. Of course this is
nonsense; those two are completely different things. The "too many
cases" referred to some cases of runtime errors he didn't want to be
checked at compile time. As you cannot get "too many test cases" from
type annotations and static typing (which was the context in which he
made this comment), just as you cannot get "too many test cases" by
writing too many of them by hand, I really had a hard time figuring
this out.

To sum it up: Unit tests (some, not all!) correspond to type
annotations, static type checking is the same as running the test
suite, dynamic types correspond to data types, runtime errors
correspond to runtime errors (surprise :-).
Ok, I'll try. If I come up short, I expect it's because I'm fluent
in neither Haskell nor Lisp, not because it can't be done.


You don't really need runtime errors for the above example,
but here's a similar version to yours that throws an error in
'cerror' to return to the toplevel. No Maybe types.

cerror :: String -> String -> IO a -> IO a
cerror optmsg errmsg cont = do
    print errmsg
    print ("1: " ++ optmsg)
    print ("2: Fail")
    s <- getLine
    x <- readIO s
    let choice 1 = cont
        choice 2 = ioError (userError errmsg)
    choice x

f :: Integer -> IO (Integer)
f x =
    if (x < 200)
       then return (x * 2)
       else cerror
                "Type another number"
                "You have typed a wrong number"
                (getLine >>= readIO >>= f)
I don't want an "approximation of cerror". I want cerror!


And you got it, exactly as you wanted. Perfectly typeable.
(Please don't say now "but I want to use the Lisp code, and
it should run exactly as it is, without any changes in syntax").
You could even assign the very same types in Lisp if any of the
extensions of Lisp support that. (I didn't try.)
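For what it's worth, the dynamically typed version of this control flow
is just a loop; a Python sketch, where the `retries` iterator stands in
for interactive `read` (an assumption made so the example is
self-contained and testable):

```python
def f(x, retries):
    # retries: an iterator supplying replacement values, standing in
    # for reading corrected input from the user at a cerror prompt.
    while x >= 200:
        print("You have typed a wrong number")
        x = next(retries)
    return x * 2

print(f(5, iter([])))      # prints 10
print(f(666, iter([66])))  # prints 132
```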

- Dirk
Jul 18 '05 #278
"Matthew Danish" <md*****@andrew.cmu.edu> wrote:
It is not, because Lisp hasn't been designed with types in mind. It is
pretty much folklore that retrofitting a type system onto an arbitrary
language will not work properly. For example, Lisp makes no distinction
between tuples and lists, which is crucial for type inference.
Tell that to the hackers who worked on the "Python" compiler found in
CMUCL, SBCL, and some others. It does extensive type inference for both
efficiency and correctness. But it doesn't get in the way (except in
certain rare cases), it just (noisely) informs you what it thinks.


Clarification: I was talking about strong typing, i.e. full and precise
inference. As I wrote in another posting, I don't believe in soft typing,
since it has exactly the weaknesses that seems to make proponents of dynamic
typing judge type systems having marginal utility: its only use is to flag
more or less trivial errors, and unreliably so.

I would be very, very surprised to see a strong type system put on top of an
untyped language successfully, i.e. without changing or restricting the
language severely.
If you want to have extensible overloading then static types are the
only way I know for resolving it. Witness Haskell for example. It has a
very powerful overloading mechanism (for which the term 'overloading'
actually is an understatement). It could not possibly work without
static typing, which is obvious from the fact that Haskell does not
even have an untyped semantics.


Correction: it could not work without typing--dynamic typing does not
imply a lack of typing. I could be wrong, but it seems you would rule
out generic functions in the CLOS (and dynamic dispatch in general) with
the above statement.


See my example below.
Erasing type information from a program that uses type abstraction to
guarantee certain post conditions will invalidate those post conditions. So you get a program with a different meaning. It expresses something
different, so the types it contained obviously had some expressive power.
This doesn't sound right: erasing type information should not invalidate
the post conditions; it should simply make it more difficult
(impossible?) to check the validity of the post conditions.
It does, because it also invalidates the pre conditions, on which the post
conditions depend.
This program should still work, even if you fail to type-check it, if
said type-checking would have passed successfully.


This is only a meaningful observation if you look at a closed program. If
you simply erase types from some given piece of code it may or may not
continue to work, depending on how friendly client code will act. In that
sense it definitely changes meaning. You had to protect against that using
other, incomparable features. So there is a difference in expressiveness.

And making modular composition more reliable is exactly the main point about
the expressiveness types add!
Erasing type information from a program that uses overloading simply makes it ambiguous, i.e. takes away any meaning at all. So the types definitely expressed something relevant.


This statement is irrelevant because dynamic typing does not eliminate
type information.


Yes it does. With "dynamic typing" in all its incarnations I'm aware
of, type information is always bound to values. Proper type systems are
not restricted to this; they can express free-standing or relational
type information.

As an example of the kind of "overloading" (or type dispatch, if you want)
you cannot express in dynamically typed languages: in Haskell, you can
naturally define functions which are overloaded on their return type, i.e.
you don't need any value at all at runtime to do the "dispatch". For
example, consider an overloaded function fromString, that produces values of
potentially arbitrary types from strings.
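A dynamic language cannot dispatch on a nonexistent return type, so the
caller has to pass the desired type explicitly; a Python sketch of that
contrast (the names are illustrative, not a real API):

```python
def from_string(s, target):
    # In Haskell, fromString could pick its behaviour from the
    # expected return type alone; here the caller must supply the
    # target type as an ordinary runtime value.
    return target(s)

print(from_string("42", int))    # prints 42
print(from_string("42", float))  # prints 42.0
```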

- Andreas

Jul 18 '05 #279
"Andreas Rossberg" <ro******@ps.uni-sb.de> writes:
Clarification: I was talking about strong typing, i.e. full and precise
inference.
Please let us be careful with the terminology. Strong typing means
that type checking happens at some point, and that a type error is
never allowed to complete (and thus do something non-sensical).
Inference is a completely independent axis -- any type system may or
may not have type inference. If you happen to like strong
*dynamically* typed languages then it is extremely irritating when
people assume strong typing can only happen at compile time.

I don't know the exact word you are looking for, unfortunately.
"complete" doesn't seem right, because that seems to be a property of
an inferencer: an inferencer is complete if it finds a type assignment
for any program that can be type checked. "sound" doesn't seem right,
either. A sound type system produces *correct* types, but those types
may not be tight enough to predict an absence of type errors.

Maybe you should just say "static checking". People will assume you
probably mean that programs get rejected if they have type errors.
As an example of the kind of "overloading" (or type dispatch, if you want)
you cannot express in dynamically typed languages: in Haskell, you can
naturally define functions which are overloaded on their return type, i.e.
you don't need any value at all at runtime to do the "dispatch". For
example, consider an overloaded function fromString, that produces values of
potentially arbitrary types from strings.

You can do this kind of thing in dynamically typed languages, too,
though unfortunately it is not very common. Do a web search on
"object-oriented" and "roles" and a lot comes up. This is the same
thing: depending on the role that a requestor sees an object under,
the object can respond differently even to the same messages. For
example, someone may respond to #requestPurchase differently depending
on whether the request treats them like a father or treats them like
the president of a company. And it works for functional languages,
too, as is clearly exhibited by your fromString() overloaded function.
It could be viewed as acting differently depending on whether it is in
an int-producer role or a float-producer role.
(Incidentally, a lot of the results that turn up regard languages for
databases. I was surprised at how much interesting language research
happens in the database world. They take the view that long-lived
data is important. This is common to Smalltalk people, and perhaps to
people who like Lisp machines, but even Scheme and Lisp people don't
seem to think a whole lot about this idea. Database people certainly
do, however. You can't just repopulate a database all the time!)

Anyway, depending on the role that the requestor is talking to, a
responder (be it a function or an object) can act differently. In
some cases, the expected role can be figured out automatically,
instead of needing to be written explicitly. This is somewhat similar
to the behavior of C++ when you invoke a non-virtual method: the type
of the *variable* decides what will happen.
Getting aside from theory and research, Matlab and Perl both allow
this kind of context sensitivity. When you call a Matlab function,
the function knows how many return values you are expecting. If you
call a function in Perl, it can tell whether you want a scalar or a
vector back, and it can act accordingly. (At least, the built-in Perl
functions can do this.)

-Lex
Jul 18 '05 #280
pr***********@comcast.net wrote:

Yuck! Type declarations *everywhere*. Where's this famous inference?


Hey, what do you expect of C++ code?

No type declarations in Haskell.

Regards,
Jo

Jul 18 '05 #281
David Mertz wrote:
I would challenge any enthusiast of Haskell, SML, Mozart, Clean, or the
like to come up with something similarly direct.


Take Mozart out: it uses run-time typing.

Regards,
Jo

Jul 18 '05 #282
Dirk Thierbach wrote:
Brian McNamara! <gt*****@prism.gatech.edu> wrote:
Pascal Costanza <co******@web.de> once said:
Brian McNamara! wrote:
But I suppose you really want this example
(defun f (x)
(unless (< x 200)
(cerror "Type another number"
"You have typed a wrong number")
(f (read)))
(* x 2))
statically typed, huh?
Sidenote: The code above has a bug. Here is the correct version:

(defun f (x)
(if (< x 200)
(* x 2)
(progn
(cerror "Type another number"
"You have typed a wrong number")
(print '>)
(f (read)))))

(No one spotted this bug before. All transliterations I have seen so
far have silently corrected this bug. It is interesting to note that it
probably wasn't the type systems that corrected it.)
After quite some time, I think I have finally figured out what Pascal
meant with

The type system might test too many cases.

I have the impression that he is confusing dynamic type errors and
runtime errors. In Lisp and Smalltalk they are more or less the same,
and since dynamic type errors map to static type errors, he may think
by analogy that other runtime errors must necessarily also map to
compile errors somehow involved with static typing. Of course this is
nonsense; those two are completely different things. The "too many
cases" referred to some cases of runtime errors he didn't want to be
checked at compile time. As you cannot get "too many test cases" by
type annotations static typing (which was the context where he made
this comment), like you cannot get "too many test cases" by writing too
many of them by hand, I really had a hard time figuring this out.
I think you have a restricted view of what "type" means. Here is the
same program written in a "type-friendly" way. (Again, in standard ANSI
Common Lisp. Note that the type is defined inside of f and not visible
to the outside.)

(defun f (x)
(check-type x (real * 200))
(* x 2))

CL-USER 1 > (f 5)
10

CL-USER 2 > (f 666)

Error: The value 666 of X is not of type (REAL * 200).
1 (continue) Supply a new value of X.
2 (abort) Return to level 0.
3 Return to top loop level 0.

Type :b for backtrace, :c <option number> to proceed, or :? for other
options

CL-USER 3 : 1 > :c 1

Enter a form to be evaluated: 66
132
Ok, I'll try. If I come up short, I expect it's because I'm fluent
in neither Haskell nor Lisp, not because it can't be done.

You don't really need runtime errors for the above example,
but here's a similar version to yours that throws an error in
'cerror' to return to the toplevel. No Maybe types.

cerror :: String -> String -> IO a -> IO a
cerror optmsg errmsg cont = do
    print errmsg
    print ("1: " ++ optmsg)
    print ("2: Fail")
    s <- getLine
    x <- readIO s
    let choice 1 = cont
        choice 2 = ioError (userError errmsg)
    choice x

f :: Integer -> IO (Integer)
f x =
    if (x < 200)
       then return (x * 2)
       else cerror
                "Type another number"
                "You have typed a wrong number"
                (getLine >>= readIO >>= f)

I don't want an "approximation of cerror". I want cerror!

And you got it, exactly as you wanted. Perfectly typeable.


Nice.
(Please don't say now "but I want to use the Lisp code, and
it should run exactly as it is, without any changes in syntax").
No, I don't care about syntax.

You have used CPS - that's a nice solution. However, you obviously
needed to make the type declaration for f :: Integer -> IO (Integer)

Note that my code is not restricted to integers. (But I don't know
whether this is actually a serious restriction in Haskell. I am not a
Haskell programmer.)
You could even assign the very same types in Lisp if any of the
extensions of Lisp support that. (I didn't try.)


What you actually did is: you have assigned a "broad" type statically,
and then you revert to a manual dynamic check to make the fine-grained
distinction.

In both Lisp versions, you have exactly one place where you check the
(single) type. A static type system cannot provide this kind of
granularity. It has to make more or less distinctions.
I regard the distinction between dynamic type errors and runtime errors
to be an artificial one, and in fact a red herring. I would rather call
the former type _exceptions_. Exceptions are situations that occur at
runtime and that I have a chance to control and correct in one way or
the other. Errors are beyond control.

This might be terminological nitpicking, but I think it is a serious
confusion, and it probably stems from weakly typed languages that allow
for arbitrary memory access and jumps to arbitrary machine addresses.
The niche in which such languages are still useful is getting smaller
all the time. (Languages like Java, Python, Ruby, etc., did a fine job in
this regard, i.e. to promote writing safe programs, rather than using
languages for high-level purposes that are actually designed for writing
hardware drivers.)

Anything below real runtime errors (core dumps) has the chance to be
controlled and corrected at runtime. (Including endless loops - they
should never be unbreakable in the first place.)

This can give you a real advantage for long-running systems or for very
large applications that you don't want to, or cannot stop and restart on
a regular basis.

Both (correct) code snippets I have provided are considerably smaller
than your solution and both are more general. And my second solution
still provides a way to correct a wrong parameter at runtime at no
additional cost. I regard this a feature, not a problem.

Assume that you need to write a program that has only vague
specifications and requires a high level of such flexibility in its
specs. _I_ would fire the programmer who would insist to use a language
that requires him to program all this flexibility by hand, and
especially wants to see a specification for "unforeseen problems",
whether formal or not.

See http://www.dilbert.com/comics/dilber...-20031025.html ;)
Pascal

P.S.: Please always remember the original setting of this thread. The
original original questions was something along the lines of "why on
earth would one not want a static type system?" I don't want to prove
the general superiority of dynamic type checking over static type
checking. Heck, there isn't even a general superiority of typing over
no-typing-at-all. All these approaches have their respective advantages
and disadvantages, and should be used in the right circumstances.

Yes, you can clearly tell from my postings that I prefer dynamic typing
over static typing. This in turn means that I am probably not a good fit
for projects that require a strong static approach. And everyone who
prefers static typing is probably not a good fit for projects that
require a strong dynamic approach. So what? All of us who participate in
this discussion are probably not good fits for writing hardware drivers.
;)

Everyone should do what they are best at, and everyone should use the
tools that fit their style most.

But why on earth should anyone want to prove that their own preferred
_personal_ style is generally superior to all others?

"If I have seen farther than others, it is because I was
standing on the shoulder of giants."

--- Isaac Newton
"In computer science, we stand on each other's feet."

--- Brian K. Reed
P.P.S.: And I still think that soft typing is the best compromise. It's
the only approach I know of that has the potential to switch styles
during the course without the need to completely start from scratch.

Jul 18 '05 #283
Pascal Costanza <co******@web.de> wrote:
Dirk Thierbach wrote:

Sidenote: The code above has a bug. Here is the correct version:

(defun f (x)
(if (< x 200)
(* x 2)
(progn
(cerror "Type another number"
"You have typed a wrong number")
(print '>)
(f (read)))))

(No one spotted this bug before. All transliterations I have seen so far
have silently corrected this bug.
Because all people who did transliterations guessed what the program
does, instead of testing it directly. I don't have a Lisp system here,
so I couldn't try it out.
It is interesting to note that it probably weren't the type systems
that corrected it.)
While doing the transliteration, I actually had quite a few attempts
that didn't work, and all of them had type errors. Once the typing
was right, everything worked. (And I had to put the continuation in,
because, as I said, non-local control transfers don't translate
one-to-one.)
The type system might test too many cases.

I think you have a restricted view of what "type" means.
Maybe. It sure would help if you'd tell me your view instead of having
me guess :-) For me, a type is a certain class of values, and a static
type is given by a limited language describing such classes. A
dynamic type is a tag that is associated with a value. Arbitrary
classes of values (like "all reals less than 200") are not a type.
Here is the same program written in a "type-friendly" way.
I'm not sure what's "type-friendly" about it. It uses a dynamic check
with a mix of dynamic type checking and value checking, yes. (I guess
"(real * 200)" means "a real number below 200", does it?)
(Again, in standard ANSI Common Lisp. Note that the type is defined
inside of f and not visible to the outside.)
You do not define any type, you do a dynamic check.
(defun f (x)
(check-type x (real * 200))
(* x 2))
[...] You have used CPS - that's a nice solution. However, you obviously
needed to make the type declaration for f :: Integer -> IO (Integer)
In this case, you're right, but I am not sure if you know the reason
for it :-) (I usually make type annotations for the same reason you
write unit tests, i.e. almost everwhere unless the function is
trivial) So why do I have to make it here? Is the type annotation for
cerror also necessary?
What you actually did is: you have assigned a "broad" type statically,
and then you revert to a manual dynamic check to make the fine-grained
distinction.
If you mean by "broad" type the static integer type, yes. I have to
assign some type. If I don't want to restrict it to integers, I assign
a more general type (like the Number type I used in the other example).

Dynamic type checks either can be dropped because static type
checking makes them unnecessary, or they translate to similar dynamic
checks in the other language.

Since you cannot statically verify that a user-supplied value is less
than 200, this check becomes a dynamic check (what else should it
translate to?)
In both Lisp versions, you have exactly one place where you check the
(single) type.
You have one place where you do the dynamic check, I have one place where
I do the dynamic check. The static type is only there because there
has to be a type. It really doesn't matter which one I use statically.
The static type check does not play any role for the dynamic check.
A static type system cannot provide this kind of granularity.
I am sorry, but this is nonsense (and it isn't really useful either).

This is like saying "I don't write unit tests in Lisp, because I have
a single place inside my program where I can check everything that
can go wrong."
I regard the distinction between dynamic type errors and runtime errors
to be an artificial one, and in fact a red herring. I would rather call
the former type _exceptions_.
Fine with me. I would go further and say that without dynamic
type checking, there are in fact only exceptions.
Both (correct) code snippets I have provided are considerably smaller
than your solution and both are more general. And my second solution
still provides a way to correct a wrong parameter at runtime at no
additional cost. I regard this a feature, not a problem.
I would debate both points, but this is about static typing, not about
comparisons a la "in my language I can write programs that are
two lines shorter than yours."
P.S.: Please always remember the original setting of this thread. The
original original questions was something along the lines of "why on
earth would one not want a static type system?"
The answer to this is of course "You use what you like best. End of
story."

What I wanted to do was to show that the claim "There are things that
are easy to do in a dynamically typed language, but one cannot do them
conveniently in a statically typed language, because they won't
typecheck. Hence, statically typed languages are less expressive" is
wrong in all but very very rare situations. Of course the amount of
convenience with which you can do something varies from language to
language, and with the available libraries, but static typing (if done
properly) is never a show-stopper.
Yes, you can clearly tell from my postings that I prefer dynamic typing
over static typing.
And that's fine with me. What I am a bit allergic to is making
unjustified general claims about static typing (or any other things,
for that matter) that are just not true.
But why on earth should anyone want to prove that their own preferred
_personal_ style is generally superior to all others?


I don't know. As far as I am concerned, this thread was never about
"superiority", and I said so several times.

- Dirk
Jul 18 '05 #284
> pr***********@comcast.net reasonably noted:
If there are three values that can arise --- provable-mismatch,
provable-non-mismatch, and undecided --- then you cannot assume that
~provable-mismatch = provable-non-mismatch.

Dirk Thierbach <dt********@gmx.de> writes:

Hindley-Milner type inference always terminates. The result is either
a provable mismatch, or a provable-non-mismatch.
You're completely wrong, which can be easily demonstrated.

The fact that it terminates isn't the interesting part. Any inference
procedure can also "always terminate" simply by having a timeout, and reporting
"no proof" if it can't find one in time.

So what's interesting is whether the conclusions are correct.

Let's take as our ideal what a dynamic type system (say, a program in Lisp)
would report upon executing the program. The question is, can your type
inference system make exactly the same conclusions at compile time, and predict
all (and only!) the type errors that the dynamic type system would report at
run time?

The answer is no.
A provable mismatch means that there is an execution branch that will crash
if you ever get there. If for some reason this branch will never get
executed, either because it's (non-provably) dead code
That's one obvious case, so even you know that your claim of a "provable
mismatch" is incorrect. There are programs that will never have run-time
errors, but your static type inference will claim a type error.
or because you have an implicit restriction for possible arguments to this
expression the type system doesn't know about, than you could call it a
"valid program", but it will still be rejected, yes.


So haven't you just contradicted yourself? Perhaps you think this "implicit
restriction" is unfair, because you've kept information from the system.
But the real problem is that all the information might be there, but the
system isn't capable of making sufficient inferences to figure it out.

For example:
(defun foo (x)
  (check-type x (integer 0 10))
  (+ 1 x))

(defun fib (n)
  (check-type n (integer 0 *))
  (if (< n 2)
      1
      (+ (fib (- n 1)) (fib (- n 2)))))

(print (foo (fib 5)))

This program prints "9", and causes no run-time type errors. Will it be
successfully type-checked at compile time by a static system? Almost certainly
the answer is no.

In case the code isn't clear; FOO is a function that increments a number by
one. Its domain is [0,10], and its range is [1,11]. FIB is the Fibonacci
sequence, with domain [0,infinity] and range [1,infinity].
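A direct Python transliteration (with `assert` standing in for
CHECK-TYPE, a loose but faithful substitute) shows the program really
does run to completion without a type error:

```python
def foo(x):
    # domain [0, 10]: a runtime check in place of CHECK-TYPE
    assert 0 <= x <= 10
    return x + 1

def fib(n):
    # domain [0, infinity), range [1, infinity)
    assert n >= 0
    return 1 if n < 2 else fib(n - 1) + fib(n - 2)

print(foo(fib(5)))  # prints 9, with no runtime type error
```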

Are you allowed to call FOO with the output of FIB? In general, no, because
the range of FIB is much greater than the domain of FOO.

However, in the particular case of the particular code in this program, it
turns out that only (FIB 5) is called, which happens to compute to 8, well
within the domain of FOO. Hence, no run-time type error. Unfortunately, the
only way to figure this out is to actually compute the fifth Fibonacci number,
which surely no static type inference system is going to do. (And if you do
happen to find one, I'm sure I can come up with a version of the halting
problem that will break that system too.)

Do you now accept that your static type inference systems do NOT partition
all programs into "either a provable [type] mismatch, or a provable [type]
non-mismatch"?

Finally, to get back to the point of the dynamic typing fans: realizing that
type inference is not perfect, we're annoyed to be restricted to writing only
programs that can be successfully type checked at compile time. Nobody
objects to doing compile-time type inference (e.g. as the CMUCL compiler for
Common Lisp does) -- especially just to generate warnings -- but many people
object to refusing to compile programs that can not be proven type-safe at
compile time (by your limited type inference systems).

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ do*@geddis.org
Jul 18 '05 #285
Dirk Thierbach <dt********@gmx.de> writes:
pr***********@comcast.net wrote:
gt*****@prism.gatech.edu (Brian McNamara!) writes:

This one is so easy, we do it in C++ just for spite. :)
I'm using the FC++ library as a helper.

Yuck! Type declarations *everywhere*. Where's this famous inference?


You *did* notice that it is C++, which doesn't have type inference (and
has a lousy type system? :-) So what Brian is saying is that this is so
easy he can even do it with his hands tied behind his back, standing on
his head.


I did in fact notice that. It's *supposed* to be easy. It's
*supposed* to be something simple and useful.

As I mentioned earlier in this thread, the two things I object to in a
static type system are these:

1) The rejection of code I know to be useful, yet the system is
unable to prove correct.

2) The requirement that I decorate my program with type
information to give the type checker hints to enable it
to check things that are, to me, obviously correct.

These are personal preferences, but they are shared by many of my
fellow lisp hackers. To my knowledge, not one of my fellow lisp
hackers would mind a static type checker that noticed code fragments
that could be proven to never produce anything but an error provided
that said type checker never need hints and is never wrong. Many lisp
systems already have a weak form of this: try compiling a program that
invokes CAR on 2 arguments or attempts to use a literal string as a
function.

Being unable to prove code correct is not the same thing as being able
to prove it incorrect. This leaves three classes of code:

1) provably type safe for all inputs
2) provably erroneous for all inputs
3) neither of the above

Both dynamic and static type systems behave the same way for classes 1
and 2 except that static systems reject the code in section 2 upon
compilation while dynamic systems reject it upon execution. It is
section 3 that is under debate.

We can categorize class 3 by partitioning the inputs. Some inputs
can be shown to always produce a type error, some can be shown to
*never* produce a type error, and some are undecidable. All the class
3 programs contain inputs in at least two of these partitions.
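The three classes can be illustrated with tiny Python fragments
(contrived examples of my own, chosen only to make the partition
concrete):

```python
# Class 1: provably type-safe for all inputs -- nothing can go wrong.
def one():
    return 1

# Class 2: provably erroneous for all inputs -- len() of an integer
# always raises TypeError.
def always_fails():
    return len(3)

# Class 3: neither -- whether a type error occurs depends on data a
# static checker may not be able to see.
def sometimes(flag):
    return (3 + "a") if flag else 0
```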

The question remains over what to do with class 3 programs. Of course
what we do may depend on how often a class 3 program is encountered,
what the risk might be of running one, and possibly the compile time
cost of detecting one.

Those of us that are suspicious of static typing believe that there
are a large enough number of class 3 programs that they will be
regularly or frequently encountered. We believe that a large number
of these class 3 programs will have inputs that cannot be decided by a
static type checker but are nonetheless `obviously' correct. (By
obvious I mean fairly apparent with perhaps a little thought, but
certainly nothing so convoluted as to require serious reasoning.)

Static type check aficionados seem to believe that the number of class
3 programs is vanishingly small and they are encountered rarely. They
appear to believe that any program that is `obviously' correct can be
shown to be correct by a static type checking system. Conversely,
they seem to believe that programs that a static type checker find
undecidable will be extraordinarily convoluted and contrived, or
simply useless curiosities.

Some people in the static type check camp are making blanket
statements like:

Matthias Blume
Typical "Gödel" statements tend to be pretty contrived.

Dirk Thierbach <dt********@gmx.de> writes:
...the sane programs that are taken away are in most cases at
least questionable (they will be mostly of the sort: There is a
type error in some execution branch, but this branch will never be
reached)

Joachim Durchholz <jo***************@web.de> writes:
... given a good type system, there are few if any practical
programs that would be wrongly rejected.

Matthias Blume
`...humans do not write such programs, at least not
intentionally...'
No one has offered any empirical evidence one way or another, but the
static type people have said `if class 3 programs are so ubiquitous,
then it should be easy to demonstrate one'. I'll buy that. Some of
the people in here have offered programs that take user input as an
example. No sane person claims to read the future, so any statically
type-checked program would have to perform a runtime check.

But this is shooting fish in a barrel, so I'll give you a pass on it.

I (and Don Geddis) happen to believe that there are a huge number of
perfectly normal, boring, pedestrian programs that will stymie a
static type checker. I've been offering a few that I hope are small
enough to illustrate the problem and not so contrived as to fall into
the `pointless' class.

The first program I wrote (a CPS lookup routine) generally confuses
wimp-ass static type checking like in Java.

Stephan Bevan showed that SML/NJ had no problem showing it was type
safe without giving a single type declaration. I find this to be
very cool.
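For readers who haven't seen the routine: here is a reconstruction (my sketch, not the code that was actually posted) of a CPS-style lookup in Haskell. Like SML/NJ, GHC infers the full polymorphic type with no declarations at all; the inferred signature is shown only as a comment.

```haskell
-- Continuation-passing lookup: call the success continuation on a
-- hit, fall back to the failure value otherwise.  No annotations;
-- GHC infers:
--   lookupCps :: Eq k => k -> [(k, v)] -> (v -> r) -> r -> r
lookupCps _ [] _ fk = fk
lookupCps key ((k, v) : rest) sk fk
  | key == k  = sk v
  | otherwise = lookupCps key rest sk fk
```

For example, `lookupCps 'b' [('a',1),('b',2)] (+10) 0` evaluates to 12, and a missing key yields the failure value 0.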

I offered my `black hole' program and got these responses:

Remi Vanicat <va*************@labri.fr>
`I don't see how this function can be useful...'

Jesse Tov <to*@eecs.harvREMOVEard.edu>
`we don't need functions with those types.'

Dirk Thierbach <dt********@gmx.de>
`recursive types are very often a result of a real typing error.'

"Marshall Spight" <ms*****@dnai.com>
`I don't believe the feature this function illustrates could be
useful.'

Will this be the response for any program that does, in fact, fool a
number of static type checkers?

Marshall Spight wrote:
`I'm trying to see if anyone can come up with a program
that is small, useful, and not provably typesafe.'

to which Joachim Durchholz <jo***************@web.de> replied:
``Matthias' and my position is that such a program doesn't exist
(and cannot even exist)''
This appears to be getting very close to arguing that static type
checkers never reject useful programs because, by definition, a
program rejected by a static type checker is useless.

Another one was a CPS routine that traverses two trees in a lazy
manner and determines if they have the same fringe. The point was not
to find out if this could be done in SML or Haskell. I know it can.
The point was to find out if the static type checker could determine
the type safety of the program.
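For concreteness, the direct version in a lazy language is short (a sketch of mine, not the posted code; as the author notes, rewriting it this way is beside his point about typing the explicit CPS encoding):

```haskell
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Lazily flatten a tree to its fringe; under lazy evaluation the
-- second fringe is only demanded as far as the comparison needs it.
fringe :: Tree a -> [a]
fringe (Leaf x)   = [x]
fringe (Node l r) = fringe l ++ fringe r

sameFringe :: Eq a => Tree a -> Tree a -> Bool
sameFringe t1 t2 = fringe t1 == fringe t2
```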

There is a deliberate reason I did not create a `lazy cons'
abstraction. I wanted to see if the type checker, with no help
whatsoever, could determine the type of the lambda abstractions that
are *implementing* the lazy consing.

Re-writing the program in C++ with huge amounts of type decorations
and eliminating the lambdas by introducing parameterized classes does
not demonstrate the power of static type checking.

Re-writing the program in a lazy language also does nothing to
demonstrate the power of static type checking.

Jul 18 '05 #286
Don Geddis <do*@geddis.org> writes:
Let's take as our ideal what a dynamic type system (say, a program in
Lisp) would report upon executing the program. The question is, can
your type inference system make exactly the same conclusions at
compile time, and predict all (and only!) the type errors that the
dynamic type system would report at run time?

The answer is no.


Indeed not. But if you were using a statically typed language you'd
write the program in a slightly different style, one that does let
type errors be found at compile time. For example instead of a
function that takes 'either an integer, or a string' with that checked
at run time, you'd have to be more explicit:

data T = TI Integer | TS String

f :: T -> Bool
f (TI i) = ...
f (TS s) = ...

The question is whether having to change your program in this way is
so horribly cramping that it negates the advantages from having errors
found earlier. In my experience I haven't found that to be the case.
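A filled-in version of the sketch above, runnable as-is; the bodies of `f` are hypothetical, since the point is only that the Integer/String split is checked at compile time:

```haskell
-- Explicit tagged union: the compiler checks that both variants
-- are handled, and rejects passing anything that isn't a T.
data T = TI Integer | TS String

f :: T -> Bool
f (TI i) = i > 0          -- illustrative body
f (TS s) = not (null s)   -- illustrative body
```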

- Hmm, I wonder if I might make an analogy between type checking and
syntax checking. In the language Tcl almost everything, including
code, is a string, and the 'if' statement is just something that takes
three strings and evaluates the first, followed by either the second
or third. The syntax of these strings is not checked at compile time.
You can write code like

if $cond { puts "hello" ( } else { puts "goodbye" }

and the syntax error in the 'hello' branch won't be found unless you
run a test case setting $cond true.

The question is, can a static syntax checking system make exactly the
same conclusions at compile time, and predict all and only the syntax
errors that Tcl would report at run time?

The answer is no.

Should we then conclude that compile-time syntax checking is not worth
having?

--
Ed Avis <ed@membled.com>
Jul 18 '05 #287
pr***********@comcast.net writes:
As I mentioned earlier in this thread, the two things I object to in a
static type system are these:

1) The rejection of code I know to be useful, yet the system is
unable to prove correct.
This has never happened to me with Haskell, and I don't think it has
ever happened with C. With C++ I have often felt as though the type
system was being picky and getting in the way, but this was usually a
case of me finding it difficult to make my intentions clear to the
compiler - which is also a disadvantage of static typing, of course.
Languages like Haskell and ML don't tend to suffer that problem,
because the type system is a bit more understandable than C++ and will
infer things it's not told explicitly.
2) The requirement that I decorate my program with type
information to give the type checker hints to enable it
to check things that are, to me, obviously correct.
This is why type inference as used in the above-mentioned languages
and many others is so useful. You can still add explicit type
declarations where you want to, either to help find out where you have
made a mistake (analogously to putting in extra assertions to narrow
down the point where something goes wrong at run time) or to act as
documentation (which, unlike comments, is automatically checked by the
compiler to make sure it's up to date).
To my knowledge, not one of my fellow lisp hackers would mind a
static type checker that noticed code fragments that could be proven
to never produce anything but an error provided that said type
checker never need hints and is never wrong.
I believe this is what you get with standard Haskell (though some
language extensions may require explicit type annotations in some
places). Certainly the type checker is never wrong: it never flags a
type error in a correct program, and never gives the all-clear for a
program containing a type error.

But of course, the checker is checking Haskell programs, not Lisp
programs. Nobody is suggesting to use some form of full compile-time
type checking on Common Lisp, nor could such a thing exist.

[later]
This appears to be getting very close to arguing that static type
checkers never reject useful programs because, by definition, a
program rejected by a static type checker is useless.


Absolutely true - in a statically typed language! In Lisp or a
dynamically typed language, obviously not true - except that the
statement doesn't really make any sense for Lisp, because one cannot
write a full static type checker for that language.

--
Ed Avis <ed@membled.com>
Jul 18 '05 #288
Andreas Rossberg wrote:
As an example of the kind of "overloading" (or type dispatch, if you want)
you cannot express in dynamically typed lagnuages: in Haskell, you can
naturally define functions which are overloaded on their return type, i.e.
you don't need any value at all at runtime to do the "dispatch". For
example, consider an overloaded function fromString, that produces values of
potentially arbitrary types from strings.


Wrong.

(defmethod from-string-expansion (to-type string)
  (if (or (subtypep to-type 'sequence)
          (subtypep to-type 'character)
          (eq to-type t))
      `(coerce ,string ',to-type)
      `(coerce (read-from-string ,string) ',to-type)))

(defmacro set-from-string (x string &environment env)
  (multiple-value-bind (bound localp declarations)
      (variable-information x env)
    (declare (ignore bound localp))
    (let ((type (or (cdr (assoc 'type declarations)) t)))
      `(setf ,x ,(from-string-expansion type string)))))
Session transcript:
? (let (x)
    (declare (integer x))
    (set-from-string x "4711")
    (print x))

4711

? (let (x)
    (declare (string x))
    (set-from-string x "0815")
    (print x))

"0815"

? (defmethod from-string-expansion ((to-type (eql 'symbol)) string)
    `(intern ,string))
#<standard-method from-string-expansion ((eql symbol) t)>

? (let (x)
    (declare (symbol x))
    (set-from-string x "TEST")
    (print x))

test
The macro delegates the decision about which conversion function to use
to the generic function FROM-STRING-EXPANSION, but this is all executed at
compile time (as required for a compiled implementation of Common Lisp).

Pascal
P.S.: This is not ANSI Common Lisp, but uses a feature as defined in Guy
Steele's book "Common Lisp, The Language - 2nd Edition" (->
VARIABLE-INFORMATION). The code above works in Macintosh Common Lisp.

Jul 18 '05 #289
Ed Avis wrote:
- Hmm, I wonder if I might make an analogy between type checking and
syntax checking. In the language Tcl almost everything, including
code, is a string, and the 'if' statement is just something that takes
three strings and evaluates the first, followed by either the second
or third. The syntax of these strings is not checked at compile time.
You can write code like

if $cond { puts "hello" ( } else { puts "goodbye" }

and the syntax error in the 'hello' branch won't be found unless you
run a test case setting $cond true.

The question is, can a static syntax checking system make exactly the
same conclusions at compile time, and predict all and only the syntax
errors that Tcl would report at run time?

The answer is no.

Should we then conclude that compile-time syntax checking is not worth
having?


No. Syntax errors make the program fail, regardless whether this is
checked at compile-time or at runtime.

A type "error" detected at compile-time doesn't imply that the program
will fail.
Pascal

Jul 18 '05 #290
Dirk Thierbach wrote:
I think you have a restricted view of what "type" means.

Maybe. It sure would help if you'd tell me your view instead of having
me guess :-) For me, a type is a certain class of values, and a static
type is given by a limited language describing such classes. A
dynamic type is a tag that is associated with a value. Arbitrary
classes of values (like "all reals less than 200") are not a type.


What are "non-arbitrary classes of values"?

According to your criteria, (real * 200) is
- a certain class of values
- given in a limited language describing that class

It would be interesting to see a convincing definition of the term "type"
that precludes (real * 200), and similar type specifications.

Most of your other comments depend on this, so I don't comment on them
in detail.

(Maybe it helps to explain that CHECK-TYPE doesn't check a value per se.
(check-type 5 (real * 200)) doesn't work. CHECK-TYPE checks a property
of the variable it is being passed.)
P.S.: Please always remember the original setting of this thread. The

original original questions was something along the lines of "why on
earth would one not want a static type system?"


The answer to this is of course "You use what you like best. End of
story."


Fine. (Really.)
What I wanted to do was to show that the claim "There are things that
are easy to do in a dynamically typed language, but one cannot do them
conveniently in a statically typed language, because they won't
typecheck. Hence, statically typed languages are less expressive" is
wrong in all but very very rare situations. Of course the amount of
convenience with which you can do something varies from language to
language, and with the available libraries, but static typing (if done
properly) is never a show-stopper.


"very rare situations"
"never a show-stopper"

How do you know?

If you are only talking about your personal experiences, that's fine,
but you should say so.
Yes, you can clearly tell from my postings that I prefer dynamic typing
over static typing.


And that's fine with me. What I am a bit allergic to is making
unjustified general claims about static typing (or any other things,
for that matter) that are just not true.


The only claim I make is that static type systems need to reject
well-behaved programs. That's an objective truth.

This implies that there is a trade-off involved. That's also an
objective truth.

You choose to downplay the importance of dynamic type checking. All I
hear is that you (and many others) say that the disadvantages of static
typing are negligible. However, I haven't found any convincing arguments
for that claim. This claim is simply repeated again and again and again,
ad infinitum. But how do you actually _justify_ that claim?

"It doesn't matter in practice." is not a valid response! Why do think
it doesn't matter in practice? "That's my personal experience." is not a
valid response either. The claim suggests to be valid for a much broader
scale than just your personal experience. Why do you think your personal
experience translates well (or should translate well) to other people?

If it's a personal, subjective choice, that's fine with me. Great! Go
on, use what helps you most.

But I am interested in the question why you (or others) think that
almost all software should be developed like that. This is a very strong
claim, and definitely deserves more justification than "well, I guess
that's better".

I have chosen to illustrate examples in which a dynamic approach might
be considerably better. I am decidedly not trying to downplay static
typing. It can be a rational choice to use a statically typed language
in specific cases. But the claim that static typing is almost always the
better choice is irrational if it is not based on empirical evidence.

Again, to make this absolutely clear, it is my personal experience that
dynamic type checking is in many situations superior to static type
checking. But I don't ask anyone to unconditionally use dynamically
typed languages.
Pascal

Jul 18 '05 #291
pr***********@comcast.net wrote:
As I mentioned earlier in this thread, the two things I object to in a
static type system are these:

1) The rejection of code I know to be useful, yet the system is
unable to prove correct.
Er... for static type systems, it's just "type-correct". General
correctness isn't very well-defined and not what current main-stream
type systems are designed for anyway.
2) The requirement that I decorate my program with type
information to give the type checker hints to enable it
to check things that are, to me, obviously correct.
I'd agree with these points, and restate that both are addressed by
Hindley-Milner type systems.

1) Most useful code is type-correct. The code that isn't uses quite
advanced types, but these are borderline cases like "black-hole" above.
Besides, I don't know of any borderline case that cannot be written
using different techniques.

2) Decorating programs with type annotations is an absolute exception.
The only case where types are declared are those where you define a new
type (i.e. record types).
These are personal preferences, but they are shared by many of my
fellow lisp hackers. To my knowledge, not one of my fellow lisp
hackers would mind a static type checker that noticed code fragments
that could be proven to never produce anything but an error provided
that said type checker never need hints and is never wrong.
A HM checker flags some usages as "wrong" that would work in Lisp -
black-hole is an example.
I'm not sure that such usage serves a good purpose.
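(The black-hole function wants the infinite type t = a -> t, which the occurs check rejects. Naming the recursion with a newtype, a standard workaround rather than anything posted in this thread, recovers a typeable version, though this one is monomorphic in its argument, unlike the Lisp original:)

```haskell
-- The infinite type  t = a -> t  named explicitly: unwrapping with
-- `absorb` gives a function that eats one argument and returns the
-- wrapped black hole again.
newtype BlackHole a = BlackHole { absorb :: a -> BlackHole a }

blackHole :: BlackHole a
blackHole = BlackHole (\_ -> blackHole)
```

Usage is then `absorb (absorb blackHole 1) 2`, with an explicit unwrap per application.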
Many lisp
systems already have a weak form of this: try compiling a program that
invokes CAR on 2 arguments or attempts to use a literal string as a
function.
That's exceedingly weak indeed.
Distinguishing square from nonsquare matrices goes a bit beyond the
standard usage of HM type systems, but it is possible.
Being unable to prove code correct is the same thing as being able to
prove it incorrect.
This is true for HM typing but not true for inference systems in general.
But probably you wanted to say something else anyway.
This makes three classes of code:

1) provably type safe for all inputs
2) provably erroneous for all inputs
3) neither of the above

Both dynamic and static type systems behave the same way for classes 1
and 2 except that static systems reject the code in section 2 upon
compilation while dynamic systems reject it upon execution. It is
section 3 that is under debate.
Agreed.
Programs in class 2 are rarely encountered and quickly fixed, so I'm not
sure that this category is very interesting.
We can categorize class 3 by partitioning the inputs. Some inputs
can be shown to always produce a type error, some can be shown to
*never* produce a type error, and some are undecidable. All the class
3 programs contain inputs in at least two of these partitions.

The question remains over what to do with class 3 programs. Of course
what we do may depend on how often a class 3 program is encountered,
what the risk might be of running one, and possibly the compile time
cost of detecting one.
There's also the question how to decide whether a program is in class 1
or class 3.
Those of us that are suspicious of static typing believe that there
are a large enough number of class 3 programs that they will be
regularly or frequently encountered. We believe that a large number
of these class 3 programs will have inputs that cannot be decided by a
static type checker but are nonetheless `obviously' correct.
This belief is ill-founded. Just take a dynamically-typed program and
annotate every parameter with all the types that you expect it to take
on. Exclude possibilities if you hit a contradiction. Get suspicious if,
for some parameter, there is /no/ type left :-)
You'll find that most programs will type readily.
(HM typing indeed starts with allowing all types for all names, then
excluding all types that lead to a contradiction somewhere. The actual
wording of the algorithm is somewhat different, but that's the gist of it.)
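The exclusion step is driven by unification. A deliberately naive toy unifier (illustrative only; real implementations compose substitutions more carefully) shows both the "contradiction" case and the occurs check that rejects black-hole-style infinite types:

```haskell
-- Toy type language: type variables, a base type, and functions.
data Ty = TVar String | TInt | TFun Ty Ty deriving (Eq, Show)

type Subst = [(String, Ty)]

apply :: Subst -> Ty -> Ty
apply s (TVar v)   = maybe (TVar v) id (lookup v s)
apply s (TFun a b) = TFun (apply s a) (apply s b)
apply _ t          = t

-- Nothing = contradiction: no type is left for some variable.
unify :: Ty -> Ty -> Maybe Subst
unify (TVar v) t = bind v t
unify t (TVar v) = bind v t
unify TInt TInt  = Just []
unify (TFun a1 b1) (TFun a2 b2) = do
  s1 <- unify a1 a2
  s2 <- unify (apply s1 b1) (apply s1 b2)
  Just (s2 ++ s1)  -- naive composition, fine for this toy
unify _ _ = Nothing

bind :: String -> Ty -> Maybe Subst
bind v t
  | t == TVar v = Just []
  | occurs v t  = Nothing  -- occurs check: the black-hole case
  | otherwise   = Just [(v, t)]

occurs :: String -> Ty -> Bool
occurs v (TVar w)   = v == w
occurs v (TFun a b) = occurs v a || occurs v b
occurs _ _          = False
```

Here `unify (TVar "a") (TFun (TVar "a") TInt)` fails on the occurs check, which is exactly why a plain HM checker refuses black-hole.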
(By
obvious I mean fairly apparent with perhaps a little thought, but
certainly nothing so convoluted as to require serious reasoning.)

Static type check aficionados seem to believe that the number of class
3 programs is vanishingly small and they are encountered rarely. They
appear to believe that any program that is `obviously' correct can be
shown to be correct by a static type checking system. Conversely,
they seem to believe that programs that a static type checker find
undecidable will be extraordinarily convoluted and contrived, or
simply useless curiosities.

Some people in the static type check camp are making blanket
statements like:

Matthias Blume
Typical "Gödel" statements tend to be pretty contrived.

Dirk Thierbach <dt********@gmx.de> writes:
...the sane programs that are taken away are in most cases at
least questionable (they will be mostly of the sort: There is a
type error in some execution branch, but this branch will never be
reached)

Joachim Durchholz <jo***************@web.de> writes:
... given a good type system, there are few if any practical
programs that would be wrongly rejected.

Matthias Blume
`...humans do not write such programs, at least not
intentionally...'

No one has offered any empirical evidence one way or another, but the
static type people have said `if class 3 programs are so ubiquitous,
then it should be easy to demonstrate one'. I'll buy that. Some of
the people in here have offered programs that take user input as an
example. No sane person claims to read the future, so any statically
type-checked program would have to perform a runtime check.

But this is shooting fish in a barrel, so I'll give you a pass on it.

I (and Don Geddis) happen to believe that there are a huge number of
perfectly normal, boring, pedestrian programs that will stymie a
static type checker. I've been offering a few that I hope are small
enough to illustrate the problem and not so contrived as to fall into
the `pointless' class.

The first program I wrote (a CPS lookup routine) generally confuses
wimp-ass static type checking like in Java.

Stephan Bevan showed that SML/NJ had no problem showing it was type
safe without giving a single type declaration. I find this to be
very cool.

I offered my `black hole' program and got these responses:

Remi Vanicat <va*************@labri.fr>
`I don't see how this function can be useful...'

Jesse Tov <to*@eecs.harvREMOVEard.edu>
`we don't need functions with those types.'

Dirk Thierbach <dt********@gmx.de>
`recursive types are very often a result of a real typing error.'

"Marshall Spight" <ms*****@dnai.com>
`I don't believe the feature this function illustrates could be
useful.'

Will this be the response for any program that does, in fact, fool a
number of static type checkers?

Marshall Spight wrote:
`I'm trying to see if anyone can come up with a program
that is small, useful, and not provably typesafe.'

to which Joachim Durchholz <jo***************@web.de> replied:
``Matthias' and my position is that such a program doesn't exist
(and cannot even exist)''
Sorry, you're mixing up topics here - that was about correctness in
general, not about type correctness (which is a subset of general
correctness for the vast majority of type systems).
This appears to be getting very close to arguing that static type
checkers never reject useful programs because, by definition, a
program rejected by a static type checker is useless.


That's a misunderstanding, partly due to conflation of correctness and
type correctness, partly because people using a good static type checker
don't feel constrained by that type checking because "you just don't do
that" (similarly to "you just don't do some special things in Lisp",
like abusing MOP mechanisms etc.)

Regards,
Jo

Jul 18 '05 #292
Don Geddis wrote:
Dirk Thierbach <dt********@gmx.de> writes:
Hindley-Milner type inference always terminates. The result is either
a provable mismatch, or a provable-non-mismatch.
You're completely wrong, which can be easily demonstrated.

The fact that it terminates isn't the interesting part. Any inference
procedure can also "always terminate" simply by having a timeout, and reporting
"no proof" if it can't find one in time.


Now you are completely wrong.
Of course you can make any type checker terminate by such draconian
measures, but such a type system would be near-useless: code may
suddenly become incorrect if compiled on a smaller machine.

There are better ways of doing this, like cutting down on the size of
some intermediate result during type checking (such as C++, where
template nesting depth or something similar is cut off at a relatively
small, fixed number IIRC).
Standard type systems don't have, need or want such cut-offs though :-)
So what's interesting is whether the conclusions are correct.

Let's take as our ideal what a dynamic type system (say, a program in Lisp)
would report upon executing the program. The question is, can your type
inference system make exactly the same conclusions at compile time, and predict
all (and only!) the type errors that the dynamic type system would report at
run time?

The answer is no.


Not precisely.
The next question, however, is whether the programs where the answers
differ are interesting.
There's also a narrow and a broad sense here: obviously, it's not
possible to type check all Lisp idioms, but are we allowed to present
alternative idioms that do type check and serve the same purpose?

[Snipping the parts that are getting ad hominem-ish :-( ]

Regards,
Jo

Jul 18 '05 #293
Joachim Durchholz <jo***************@web.de> writes:
pr***********@comcast.net wrote:
Being unable to prove code correct is the same thing as being able to
prove it incorrect.


This is true for HM typing but not true for inference systems in
general. But probably you wanted to say something else anyway.


Um, yeah. I did:

Being unable to prove code correct is *not* the same thing as being
able to prove it incorrect.
Jul 18 '05 #294
Joachim Durchholz <jo***************@web.de> writes:
There's also a narrow and a broad sense here: obviously, it's not
possible to type check all Lisp idioms, but are we allowed to present
alternative idioms that do type check and serve the same purpose?


I don't have a problem with this, but I don't want to split hairs on
what constitutes an `idiom' vs. what constitutes a complete rewrite.

Presumably, an alternative idiom would involve only *local* changes,
not global ones, and could be performed incrementally, i.e., each
use of an idiom could be independently replaced and thus reduce the
type checking errors.

If a change involves pervasive edits, say, for instance, editing all
callers of some function to pass an extra argument, or wrapping a
conditional branch around all uses of an object, that would not be
an alternative idiom.

Jul 18 '05 #295
Joachim Durchholz wrote:
The next question, however, is whether the programs where the answers
differ are interesting.
There's also a narrow and a broad sense here: obviously, it's not
possible to type check all Lisp idioms, but are we allowed to present
alternative idioms that do type check and serve the same purpose?


No, and that's exactly the point. "We" can write completely idiomatic
and well-behaved Lisp code that works, can be understood and maintained
by other Lispers, and provides various useful additional behavior at
runtime.

Why should we rewrite it just to make a static type checker happy?
That's redundant work with no obvious advantages.

Why should we, on the other hand, accept statically type-checked code
that does less than our straightforward solutions?

We could _additionally_, if we wanted to, write our code in a style that
is acceptable by a static type checker whithout switching the language.
We could add a full-blown static type checker to our language and ask
for a different coding style for those parts of the code that would use
it. See Qi and ACL2 for two real-world examples.

This is the gist of unrestricted expressive power: we objectively have
more options! A language that forces your code to adhere to a static
type system has less options!

You may probably not care about the loss of options. You might even
think it's an advantage to have less options. But you objectively have
less options to write well-behaved code!
Pascal

Jul 18 '05 #296
Brian McNamara! wrote:
pr***********@comcast.net once said:
Are they happy with something like this?

(defun black-hole (x)
#'black-hole)

(for non lispers, the funny #' is a namespace operator.
The black-hole function gobbles an argument and returns
the black-hole function.)


Finally, an example that I don't think you can type in Haskell.
You score a point for that. :)

If we have a static type system which admits infinite types, then we
can assign black-hole a type. So it's still typeable, just not in any
common language I can name offhand. :)


You are making things a bit too complicated. I think you can write
blackHole in Haskell:

blackHole :: a
blackHole = error "black-hole"

*BH> :t blackHole 1 2 3 'a' "ho" (blackHole, 1.2)
blackHole 1 2 3 'a' "ho" (blackHole, 1.2) :: forall t. t

*BH> blackHole 1 2 3 'a' "ho" (blackHole, 1.2)
*** Exception: black-hole

*BH> let _ = blackHole 1 2 3 'a' "ho" (blackHole, 1.2) in "abcdef"
"abcdef"

Best regards,
Tom

--
..signature: Too many levels of symbolic links
Jul 18 '05 #297
pr***********@comcast.net wrote:
Joachim Durchholz <jo***************@web.de> writes:
There's also a narrow and a broad sense here: obviously, it's not
possible to type check all Lisp idioms, but are we allowed to present
alternative idioms that do type check and serve the same purpose?
I don't have a problem with this, but I don't want to split hairs on
what constitutes an `idiom' vs. what constitutes a complete rewrite.


When it comes to comparing whether static types are "getting in the
way", even a complete rewrite would be OK according to my book.
Sticking to the original code will, of course, make a more compelling
case - not because that is better, but because it's better understood.
If a change involves pervasive edits, say, for instance, editing all
callers of some function to pass an extra argument, or wrapping a
conditional branch around all uses of an object, that would not be
an alternative idiom.


If you mean that Lisp has a point if it can process an arbitrary number
of parameters in one line of code, while a non-Lisp would need an extra
handler for each parameter: then I agree.
Though that isn't usually needed for currying languages. (I know of a
single instance in the standard Haskell libraries where it's needed -
and in that case, it's about taking apart tuples of various arities, not
about taking apart parameter lists which is usually a snap.)

Regards,
Jo

Jul 18 '05 #298
Pascal Costanza <co******@web.de> writes:
Fergus Henderson wrote:
In my experience, people who have difficulties in getting their programs
to typecheck usually have an inconsistent design, not a design which is
consistent but which the type checker is too restrictive to support.
Have you made sure that this is not a circular argument?


Yes.
Does "consistent design" mean "acceptable by a type checker" in your book?


No.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #299
Don Geddis <do*@geddis.org> wrote:
pr***********@comcast.net reasonably noted:
> If there are three values that can arise --- provable-mismatch,
> provable-non-mismatch, and undecided --- then you cannot assume that
> ~provable-mismatch = provable-non-mismatch.
Dirk Thierbach <dt********@gmx.de> writes:
Hindley-Milner type inference always terminates. The result is either
a provable mismatch, or a provable-non-mismatch.
You're completely wrong, which can be easily demonstrated.


Unfortunately, I can *prove* that the HM-type inference always terminates,
without any timeout, by induction on the length of the expression.
The fact that it terminates isn't the interesting part. Any
inference procedure can also "always terminate" simply by having a
timeout, and reporting "no proof" if it can't find one in time.
Yes, you can do that. But it's not done during HM-type inference.
Let's take as our ideal what a dynamic type system (say, a program
in Lisp) would report upon executing the program. The question is,
can your type inference system make exactly the same conclusions at
compile time, and predict all (and only!) the type errors that the
dynamic type system would report at run time? The answer is no.
Right. It cannot, because that question is not decidable.
That's one obvious case, so even you know that your claim of a "provable
mismatch" is incorrect.
It's still a provable mismatch. Only that part of the code never gets
executed, so you don't have a dynamic type error. I would consider a
program which has a branch that contains an error, but fortunately
never executes that branch pretty bogus. I don't see any advantage
in admitting such a program. It's a bad program, and you should either
correct the error in the dead branch, or remove the dead branch
completely if it isn't going to be executed anyway.
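
The dead-branch situation is easy to sketch in Python. The following toy
function (my own illustration, not code from the thread) has a provable
type mismatch in its `else` branch, yet Python's dynamic checking never
reports it as long as that branch is never taken; a static checker would
reject the whole function:

```python
def describe(n):
    if n >= 0:
        return "non-negative"
    else:
        # Provable type mismatch (int + str), but this branch is dead
        # for the non-negative inputs actually used below.
        return 1 + "oops"

print(describe(5))  # prints "non-negative"; the bogus branch never runs
```

This is exactly the program class the two camps disagree about: a static
system rejects it outright, while a dynamic system silently accepts it
until (unless) the bad branch is reached.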
> There are programs that will never have run-time
> errors, but your static type inference will claim a type error.
Yes. But these programs will have problems, even without a run-time error.
Rewrite them to increase the quality of your software.
>> or because you have an implicit restriction for possible arguments to this
>> expression the type system doesn't know about, then you could call it a
>> "valid program", but it will still be rejected, yes.

> For example:
>
> (defun foo (x)
>   (check-type x (integer 0 10))
>   (+ 1 x))
>
> (defun fib (n)
>   (check-type n (integer 0 *))
>   (if (< n 2)
>       1
>       (+ (fib (- n 1)) (fib (- n 2)))))
>
> (print (foo (fib 5)))
>
> This program prints "9", and causes no run-time type errors. Will
> it be successfully type-checked at compile time by a static system?
> Almost certainly the answer is no.
It will be successfully type checked, because the static type system
does not allow you to express assumptions about value ranges of types.
These things have to be checked dynamically, as in your program. So the
type system "doesn't get in the way": It admits the program.
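
A rough Python rendition of the example (my own sketch; the names and
`assert`-based checks are mine) shows the same division of labor: a
static checker would only ever see plain integer-to-integer functions,
while the value-range restrictions live in dynamic checks:

```python
def foo(x):
    # Dynamic range check, analogous to (check-type x (integer 0 10))
    assert 0 <= x <= 10, "x must be in [0, 10]"
    return 1 + x

def fib(n):
    # Dynamic range check, analogous to (check-type n (integer 0 *))
    assert n >= 0, "n must be non-negative"
    if n < 2:
        return 1
    return fib(n - 1) + fib(n - 2)

print(foo(fib(5)))  # prints 9; no dynamic check fires, since fib(5) = 8
```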
> Unfortunately, the only way to figure this out is to actually
> compute the fifth Fibonacci number, which surely no static type
> inference system is going to do.
Yes. That's why you cannot express such restrictions statically.
> Do you now accept that your static type inference systems do NOT partition
> all programs into "either a provable [type] mismatch, or a provable [type]
> non-mismatch"?
No. But you probably don't understand what I mean by that. A provable
type mismatch means that the program contains a location where it can
be statically verified that executing the location will cause trouble.
It cannot statically check that this location will be indeed executed,
so it might (wrongly) reject the program. But this rejection is acceptable,
because the program is bogus. On the other hand, if there is no static
type mismatch, that doesn't mean that the program will not crash because
of runtime errors (division by zero, or dynamically checked restrictions).
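
A minimal sketch of that last point (my own example, not from the
thread): a function can be perfectly well-typed, say int to int, and
still crash at run time, because static type safety says nothing about
division by zero or other dynamically checked restrictions:

```python
def scaled_reciprocal(n):
    # Well-typed under any reasonable static system: int -> int.
    return 100 // n

print(scaled_reciprocal(4))  # prints 25
try:
    scaled_reciprocal(0)     # type-correct, but fails dynamically
except ZeroDivisionError:
    print("runtime error despite passing the type checker")
```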
> Finally, to get back to the point of the dynamic typing fans:
> realizing that type inference is not perfect, we're annoyed to be
> restricted to writing only programs that can be successfully type
> checked at compile time.


You may not believe it, but I perfectly understand that :-) The
problem is that this is just not true (with a good static type
system): It is not a real restriction. If your program is a good
program, it will type check. If it doesn't type check, then there is
something wrong with it.

I am annoyed in the very same way if I have to write programs in a
language with a bad type system (say, C++). I cannot express myself as
abstractly as I would want to, I have to write down lots of
unnecessary type annotations, and I have to invent tricks to please the
type checker and let it allow me do what I want to do. Really
horrible.

Again, try thinking of the static type system as an automatic testing
tool. Saying that you want to write programs that will be rejected by
static typing is like saying that you want to write programs that
will be rejected by your unit tests. It just doesn't make any sense;
the unit tests are there to guarantee that you write good quality
software, so why would you want to ignore them? The only case when you
want to do that is if they are bad tests; when they point to problems
that are not really there. But with a good static type system, that
doesn't happen.

- Dirk
Jul 18 '05 #300
