Python from Wise Guy's Viewpoint

THE GOOD:

1. pickle

2. simplicity and uniformity

3. big library (bigger would be even better)

THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?

2. The statements-vs-expressions business is very dumb. Try writing

a = if x:
        y
    else: z

3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

4. splintering of the language: you have the inefficient main language,
and you have a different dialect being developed that needs type
declarations. Why not allow type declarations in the main language
instead as an option (Lisp does it)

5. Why do you need "def" ? In Haskell, you'd write
square x = x * x

6. Requiring "return" is also dumb (see #5)

7. Syntax and semantics of "lambda" should be identical to
function definitions (for simplicity and uniformity)

8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]


P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching
Jul 18 '05
Matthias Blume <fi**@my.address.elsewhere> writes:
Declarations can take this further, such that a compiler as smart as
CMUCL can manipulate raw (unsigned-byte 32) values, for example.
Oh, so you're saying you want static declarations, checked and
enforced by the compiler?


CL declarations are nothing but hints to the compiler that it is
allowed to optimize. Sometimes this is useful.
Hmm, I've read this point of view in this thread somewhere.


Now you're conflating two readings of "want declarations" (i.e. "want
them whenever they're convenient for optimizing" vs. "want them
everywhere and always")
--
(espen)
Jul 18 '05 #401
Matthias Blume <fi**@my.address.elsewhere> writes:
Matthew Danish <md*****@andrew.cmu.edu> writes:
Declarations can take this further, such that a compiler as smart as
CMUCL can manipulate raw (unsigned-byte 32) values, for example.


Oh, so you're saying you want static declarations, checked and
enforced by the compiler? Hmm, I've read this point of view in this
thread somewhere.


The point is that you can use static typing when you want. It doesn't
stand in the way when you don't need it, which is most of the time.
Are the vast majority of your programs the type which behave properly
within machine-word integers?

idea that the only correct result of 20 * 30 has to be 600.)


(20 * 30) mod 256 is, of course, a completely different expression.


Undoubtedly, it is a different expression. But it might mean the
same, given a correspondingly chosen domain for 20 and 30, together
with a certain * operation.


Indeed. It could well be 42. Or 3.141592. Or maybe "hum". Who knows,
who knows.

Just change the type declarations and - voilà! - popcorn.
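
To make the preceding point concrete, a sketch in Haskell syntax (my
own illustration, not from the thread), choosing 8-bit words as the
domain for 20 and 30:

import Data.Word

x :: Word8
x = 20 * 30       -- Word8 arithmetic wraps modulo 256, so x == 88

main :: IO ()
main = print x    -- prints 88, i.e. (20 * 30) mod 256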
Jul 18 '05 #402
Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:

Can you show me an example of a program that doesn't make sense anymore
when you strip off the static type information?

Here's a C++ example:

x << y

Depending on the types of x and y, this might mean left shift, I/O,
or something entirely different.


And depending on the runtime types, this can be correctly dispatched at
runtime.
Here's another C++ example:

#include "foo.h"
main() {
auto Foo x; // constructor has side-effects
}

If you strip away the static type information here, i.e. change "auto Foo x;"
to just "auto x;", then there's no way to know which constructor to call!


But that's not type information, that's just a messy way to implicitly
call a function. (C++ confuses types and classes here.)
Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #403
Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:

Fergus Henderson wrote:

Pascal Costanza <co******@web.de> writes:
Well, the research that ultimately led to the HotSpot Virtual Machine
originated in virtual machines for Smalltalk and for Self. Self
especially is an "extremely" dynamic language, but they still managed
to make it execute reasonably fast.

Please correct me if I'm wrong, but as I understand it, iterating over a
collection of values is still going to require keeping some representation
of the type of each element around at runtime, and testing the type for
each element accessed, in case it is not the expected type. AFAIK neither
HotSpot nor the Self compiler do the kind of optimizations which would
be needed to avoid that.


You don't need to check the type on each access. If you only copy a
value from one place to another, and both places are untyped, you don't
need any check at all.

Great. I feel so much better now. Now my type errors are free to
propagate throughout my program's data structures, so that when they
are finally detected, it may be far from the true source of the problem.


Now, you have changed the topic from optimization to catching errors
again. Could you please focus what you want to talk about?

And guess what, "in 99% of all cases, such type errors don't occur in
practice, at least not in my experience". ;-P (Sorry, couldn't resist. I
sincerely hope you read this as a joke, and not as an attack. ;)
But the example that I was thinking of did actually want to access the
value, not just copy it.
Furthermore, if I remember correctly, dynamically compiled systems use
type inferencing at runtime to reduce the number of type checks.


In cases such as the one described above, they may reduce the number of
times that the type of the _collection_ is checked, but they won't be
able to avoid checking the element type at every element access.


Why? If the collection happens to contain only elements of a single type
(or this type at most), you only need to check write accesses if they
violate this condition. As long as they don't, you don't need to check
read accesses.

"In 99% of all cases, write accesses occur rarely, at least ..." - well,
you know the game. ;)
Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #404
Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:

Hmm, could a kind of "meta type system protocol" be feasible? I.e., a
language environment in which you could tweak the type system to your
concrete needs, without having to change the language completely?

Yes. But that is still a research issue at this stage.
For some work in this area, see the following references:

[1] Martin Sulzmann, "A General Type Inference Framework for
Hindley/Milner Style Systems." In 5th International Symposium
on Functional and Logic Programming (FLOPS), Tokyo, Japan,
March 2001.

[2] Sandra Alves and Mario Florido, "Type Inference using
Constraint Handling Rules", Electronic Notes in Theoretical
Computer Science volume 64, 2002.

[3] Kevin Glynn, Martin Sulzmann, Peter J. Stuckey,
"Type Classes and Constraint Handling Rules",
University of Melbourne Department of Computer Science
and Software Engineering Technical Report 2000/7, June 2000.

Also the work on dependent type systems, e.g. the language Cayenne,
could be considered to fit in this category.


Thanks a lot for the references!
Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #405
Espen Vestre wrote:
Now you're conflating two readings of "want declarations" (i.e. "want
them whenever they're convenient for optimizing" vs. "want them
everywhere and always")


Type inference is about "as much static checking as possible with as
few annotations as absolutely necessary".
HM typing is extremely far on the "few declarations" side: a handful
even in large systems.

It sounds unbelievable, but it really works.

Oh, there's one catch: Most functional programs have heaps of type
definitions similar to this one:
data Tree a = Leaf a
            | Node (Tree a) (Tree a)
            | Empty
However, these definitions don't only declare the type, they also
introduce constructors, which also serve as inspectors for pattern matching.

In other words, the above code is all that's needed to allow me to
construct arbitrary values of the new type (gratuitous angle brackets
inserted to make the code easier to recognize with a C++ background),
like this:
Leaf 5 -- Creates a Leaf <Integer> object that contains value 5
Node Empty Empty -- Creates a node that doesn't have leaves
-- Type is Tree <anything>, i.e. we can insert this object into
-- a tree of any type, since there's no way that this can ever
-- lead to type errors.
Node (Leaf 5) (Leaf 6) -- Creates a node with two leaves

It also allows me to use the constructor names as tags for pattern
matching. Note that every one of the following three definitions
consists of the name of the function being defined, a pattern that the
arguments must follow to select this particular definition, and the
actual function body (which is just an expression here). Calling a
function with parameters that match none of the patterns is considered an
error (which is usually caught at compile time but not in all cases -
not everything is static even in functional languages).
mapTree f (Leaf foo) = Leaf (f foo)
-- If mapTree is given a "something" called f,
-- and some piece of data that was constructed using Leaf foo,
-- then the result will be obtained by applying f as a function
-- to that foo, and making the result a Leaf.
-- The occurrence of f in a function position on the right side
-- makes type inference recognize it as a function.
-- The Leaf pattern (and, incidentally, the use of the Leaf
-- constructor on the right side) will make type inference
-- recognize that the second parameter of mapTree must be
-- of Tree <anything> type. Usage details will also make
-- type inference conclude that f must be a function from
-- one "anything" type to another, potentially different
-- "anything" type.
-- In other words, the type of mapTree is known to be
-- (a -> b) -> Tree a -> Tree b
-- without a single type declaration in mapTree's code!
mapTree f (Node left right) = Node (mapTree f left) (mapTree f right)
-- This code is structured in exact analogy to the Leaf case.
-- The only difference is that it uses recursion to descend
-- into the subtrees.
mapTree f Empty = Empty
-- The null value doesn't need to be mapped, it will look the
-- same on output.
-- Note that this definition of mapTree doesn't restrict the
-- type of f in the least.
-- In HM typing, you usually don't specify the types; every
-- usage of some object adds further restrictions on what that
-- type can be. If the set of types that a name can have becomes
-- empty, you have contradictory type usage and hence a type error.
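
For reference, the example above collected into one self-contained,
runnable Haskell module (the deriving clause and the demo main are my
own additions):

data Tree a = Leaf a
            | Node (Tree a) (Tree a)
            | Empty
            deriving Show

-- No annotation needed; inference yields
-- mapTree :: (a -> b) -> Tree a -> Tree b
mapTree f (Leaf foo)        = Leaf (f foo)
mapTree f (Node left right) = Node (mapTree f left) (mapTree f right)
mapTree _ Empty             = Empty

main :: IO ()
main = print (mapTree (* 2) (Node (Leaf 5) (Leaf 6)))
-- prints: Node (Leaf 10) (Leaf 12)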

Hope this helps
Jo

Jul 18 '05 #406
Pascal Costanza <co******@web.de> writes:
Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:
Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:
Furthermore, if I remember correctly, dynamically compiled systems use
type inferencing at runtime to reduce the number of type checks.


In cases such as the one described above, they may reduce the number of
times that the type of the _collection_ is checked, but they won't be
able to avoid checking the element type at every element access.


Why? If the collection happens to contain only elements of a single type
(or this type at most), you only need to check write accesses if they
violate this condition. As long as they don't, you don't need to check
read accesses.


So which, if any, implementations of dynamic languages actually perform such
optimizations?

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #407
Matthias Blume <fi**@my.address.elsewhere> writes:
Pascal Costanza <co******@web.de> writes:
Computers are fast enough and have enough memory nowadays. You are
talking about micro efficiency. That's not interesting anymore.


I have worked on projects where people worried about *every cycle*.
(Most of the time I agree with you, though. Still, using infinite
precision by default is, IMO, a mistake.


What are you writing about? Figments of your imagination or real
concrete systems?
[20]> (typep (fact 100) 'fixnum)
NIL
[21]> (typep (fact 100) 'bignum)
T
[22]> (typep (/ (fact 100) (fact 99)) 'fixnum)
T
[23]> (typep (/ (fact 100) (fact 99)) 'bignum)
NIL
[24]> (/ 1 3)
1/3
[25]> (/ 1.0 3)
0.33333334

Where do you see "infinite precision by default"?
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Jul 18 '05 #408
Joachim Durchholz <jo***************@web.de> writes:
Espen Vestre wrote:
Now you're conflating two readings of "want declarations" (i.e. "want
them whenever they're convenient for optimizing" vs. "want them
everywhere and always")
Type inference is about "as much static checking as possible with as
few annotations as absolutely necessary".
HM typing is extremely far on the "few declarations" side: a handful
even in large systems.


I certainly don't mind as much static checking as possible. I get a
little put off by *any* annotations that are *absolutely necessary*,
though. My preference is that all `lexically correct' code be
compilable, even if the object code ends up being the single
instruction `jmp error-handler'. Of course I'd like to get a
compilation warning in this case.

It sounds unbelievable, but it really works.


I believe you. I have trouble swallowing claims like `It is never
wrong, always completes, and the resulting code never has a run-time
error.' or `You will never need to run the kind of code it doesn't allow.'
Jul 18 '05 #409
Pascal Costanza <co******@web.de> writes:
No, for christ's sake! There are dynamically typed programs that you
cannot translate into statically typed ones!


You are really going to confuse the static typers here. Certainly
there is no program expressible in a dynamically typed language such
as Lisp that is not also expressible in a statically typed language
such as SML.

But it *is* the case that there are programs for which safe execution
*must* depend upon checks (type checks or pattern matching) that are
performed at run time. Static analysis will not remove the need for
these.

It is *also* the case that there are programs for which safe execution
requires *no* runtime checking, yet static analysis cannot prove that
this is the case.

A static analyzer that neither inserts the necessary run-time checks,
nor requires the user to do so will either fail to compile some correct
programs, or fail to correctly compile some programs.

I think the static typers will agree with (but probably not be happy
about) this statement: There exist programs that may dynamically admit
a correct solution for which static analyzers are unable to prove that
a correct solution exists.
Jul 18 '05 #410
Joe Marshall wrote:
Pascal Costanza <co******@web.de> writes:
No, for christ's sake! There are dynamically typed programs that you
cannot translate into statically typed ones!


You are really going to confuse the static typers here. Certainly
there is no program expressible in a dynamically typed language such
as Lisp that is not also expressible in a statically typed language
such as SML.


Yes, of course. Bad wording on my side.

Thanks for clarification.

I am not interested in Turing equivalence in the static vs. dynamic
typing debate. It's taken for granted that for every program written in
either kind of language you can write an equivalent program in the other
kind of language. I seriously don't intend to suggest that dynamically
typed languages beat Turing computability. ;-)

I am not interested in _what_ programs can be implemented, but in _how_
programs can be implemented.
Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #411
On Wed, 29 Oct 2003 19:53:12 +0100, Pascal Costanza <co******@web.de>
wrote:
Joe Marshall wrote:
Pascal Costanza <co******@web.de> writes:


Pascal


Do you ever do any real work? Or do you spend all your time constructing
replies here...

--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
Jul 18 '05 #412
ra*****@mediaone.net (Raffael Cavallaro) wrote:
With lisp, you only add as much type checking as you need, *when* you
need it.


if you knew how much you needed and when, you wouldn't need it.

----
Garry Hodgson, Technology Consultant, AT&T Labs

Be happy for this moment.
This moment is your life.

Jul 18 '05 #413
Raffael Cavallaro <ra*****@mediaone.net> wrote:
Matthias Blume <fi**@my.address.elsewhere> wrote in message
news:<m1************@tti5.uchicago.edu>...
[This whole discussion is entirely due to a mismatch of our notions of
what constitutes expressive power.]


No, it is due to your desire to be type constrained inappropriately
early in the development process. Lispers know that early on,


That's very arrogant. You presume to know what is appropriate for *him*.

And I could retort with ``Static typers know that even early in the
development process it is appropriate to have the additional safety that
static typing brings to a program'', but I won't, since I'm not that
arrogant :-).
Jul 18 '05 #414
John Thingstad <jo************@chello.no> writes:
Do you ever do any real work? Or do you spend all your time
constructing replies here...


Proves the point about programmer efficiency, eh? Use Lisp, and you
too can spend most of the day posting to Usenet! ;-)
Jul 18 '05 #415
Joe Marshall <jr*@ccs.neu.edu> wrote:
I have trouble swallowing claims like `It is never wrong, always
completes, and the resulting code never has a run-time error.'
I can understand this. Fortunately, you don't have to swallow it,
you can verify it for yourself.

A comprehensive book about the lambda calculus should contain the
Hindley-Milner type inference algorithm. The HM algorithm is fairly
old, so I don't know if there are any papers on the web that explain
it in detail. Unification of types and the HM algorithm together are
maybe two pages.

* Termination is easy to verify; the algorithm works by induction on
the structure of the lambda term.

* For "it never has a runtime error" look at the typing rules of
the lambda calculus and convince yourself that they express the
invariant that any function (including "constants", i.e. built-in
functions) will only be applied to types that match its own
type signature. Hence, no runtime errors.

A good book will also contain the proof (or a sketch of it) that
if the HM algorithm succeeds, the term in question can indeed be
typed by the typing rules.

* For the other case (i.e., there is a mismatch during unification; I
guess that's what you mean by "it is never wrong"), try to assign to
every variable a value of its type under the current type
environment, and reduce along every possible reduction path of the
subterm. One of those reductions will fail with a type error (though
this reduction may never happen if execution never reaches this part
of the subterm on the path that the evaluation strategy of your
language chooses).

Maybe it's best to do this for a few examples.
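
For instance, in Haskell syntax (my own illustrations):

-- Inference succeeds with no annotations; unification derives
-- the most general type:
compose f g x = f (g x)
-- inferred: compose :: (b -> c) -> (a -> b) -> a -> c

-- Unification fails here, because the two branches of the 'if'
-- would force the result to be both a number and a list:
--
-- bad b = if b then 1 else []    -- rejected at compile time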
or `You will never need to run the kind of code it doesn't allow.'


The last point should show that such code is at least problematic,
unless you can somehow make sure that the part that contains the
type error is never executed. In that case, this part is useless,
so the code should probably be rewritten. At least I cannot think
of a good reason why you would "need" that kind of code.

- Dirk

Jul 18 '05 #416
> > Do you ever do any real work? Or do you spend all your time
constructing replies here...


Proves the point about programmer efficiency, eh? Use Lisp, and you
too can spend most of the day posting to Usenet! ;-)


Because you're unemployed? ;-)

Jul 18 '05 #417
Joe Marshall wrote:
Joachim Durchholz <jo***************@web.de> writes:
Type inference is about "as much static checking as possible with as
few annotations as absolutely necessary".
HM typing is extremely far on the "few declarations" side: a handful
even in large systems.


I certainly don't mind as much static checking as possible. I get a
little put off by *any* annotations that are *absolutely necessary*,
though. My preference is that all `lexically correct' code be
compilable, even if the object code ends up being the single
instruction `jmp error-handler'. Of course I'd like to get a
compilation warning in this case.


Then static typing is probably not for you.
Mainstream FPLs tend to require an occasional type declaration. And
you'll have to know about the type machinery to interpret the type
errors that the compiler is going to spit at you - never mind that these
errors will always indicate a bug (unless one of those rare explicit
type annotations is involved, in which case it could be a bug or a
defective type annotation).
It sounds unbelievable, but it really works.


I believe you. I have trouble swallowing claims like `It is never
wrong, always completes, and the resulting code never has a run-time
error.' or `You will never need to run the kind of code it doesn't allow.'


This kind of claim is usually just a misunderstanding.
For example, the above claim indeed holds for HM typing - for the right
definitions of "never wrong" and "never has an error".

HM typing "is never wrong and never has a run-time error" in the
following sense: the algorithm will never allow an ill-typed program to
pass, and there will never be a type error at run-time. However, people
tend to overlook the "type" bit in the "type error" term, at which point
the discussion quickly degenerates into discourses of general correctness.
Adding to the confusion is the often-reported experience of functional
programmers that annotating your code with static type declarations can
be a very efficient way of finding design errors early.
The type correctness claims are backed by hard theory; the design
improvement claims are of a social nature and cannot be proved (they
could be verified by field studies at best).

Jul 18 '05 #418
Pascal Costanza <co******@web.de> writes:
These are both all or nothing solutions.

+ "all the tests for a particular feature in one place" - maybe that's
not what I want (and you have ignored my arguments in this regard)

and:
+ what if I want to run _some_ of the tests that my macro produces but
not _all_ of them?
Actually, that seems to be the typical reaction of static typing
fans.


The solutions may be all or nothing but IMHO they are simple and I
like simple things. I can't really say the same for scenarios which
involve running only some tests generated by macros that may or may
not be in the same files as other tests generated from the same
macros. Perhaps it all comes down to different approaches to the
programming process rather than languages per se, e.g. I don't do
either of the above even when writing Common Lisp.
Jul 18 '05 #419
Pascal Costanza <co******@web.de> writes:
Are these algorithms reason enough to have machine word sized
numerical data types as the default for a _general purpose_ language?


I've no idea, I don't care that much what the default is since I
prefer to specify what the type/size should be if the compiler fails
to infer the one I wanted :-)
Jul 18 '05 #420
Pascal Costanza <co******@web.de> writes:
Matthias Blume wrote:
No, it's not. There's a class of programs that exhibit a certain
behavior at runtime that you cannot write in a statically typed
language _directly in the language itself_.


This is simply not true. See above.


OK, let's try to distill this to some simple questions.

Assume you have a compiler ML->CL that translates an arbitrary ML
program with a main function into Common Lisp. The main function is a
distinguished function that starts the program (similar to main in C).
The result is a Common Lisp program that behaves exactly like its ML
counterpart, including the fact that it doesn't throw any type errors at
runtime.

Assume furthermore that ML->CL retains the explicit type annotations in
the result of the translation in the form of comments, so that another
compiler CL->ML can fully reconstruct the original ML program without
manual help.

Now we can modify the result of ML->CL for any ML program as follows. We
add a new function that is defined as follows:

(defun new-main ()
(loop (print (eval (read)))))

(We assume that NEW-MAIN is a name that isn't defined in the rest of the
original program. Otherwise, it's easy to automatically generate a
different unique name.)

Note that we haven't written an interpreter/compiler by ourselves here,
we just use what the language offers by default.

Furthermore, we add the following to the program: We write a function
RUN (again a unique name) that spawns two threads. The first thread
starts the original main function, the second thread opens a console
window and starts NEW-MAIN.

Now, RUN is a function that executes the original ML program (as
translated by ML->CL, with the same semantics, including the fact that
it doesn't throw any runtime type errors in its form as generated by
ML->CL), but furthermore executes a read-eval-print-loop that allows
modification of the internals of that original program in arbitrary
ways. For example, the console allows you to use DEFUN to redefine an
arbitrary function of the original program that runs in the first
thread, so that the original definition is not visible anymore and all
calls to the original definition within the first thread use the new
definition after the redefinition is completed. [1]

Now here come the questions.

Is it possible to modify CL->ML in a way that any program originally
written in ML, translated with ML->CL, and then modified as sketched
above (including NEW-MAIN and RUN) can be translated back to ML? For the
sake of simplicity we can assume an implementation of ML that already
offers multithreading. Again, for the sake of simplicity, it's
acceptable that the result of CL->ML accepts ML as an input language for
the read-eval-print-loop in RUN instead of Common Lisp. The important
thing here is that redefinitions issued in the second thread should
affect the internals of the program running in the first thread, as
described above.


You have an interesting point here, but it is only partly related to
static typing. It is more about static binding vs. dynamic binding.
That is, are you able to dynamically change the definition of any
identifier in the language.
One must recognize that dynamically typed systems, in particular Lisp
and Smalltalk, also give you full dynamic binding, and that this is an
incredibly powerful feature, at least for developers.

Now, would it be possible to do it in a statically typed language?
I don't see why not. Some parts are relatively easy: allowing you to
change function definitions, as long as you don't change the type. I
say only relatively, because polymorphism may get in your way: you
would have to replace any polymorphic function by a function at least
as polymorphic as it, whether this polymorphism is really needed by
the rest of the program or not.

Some parts are much more difficult: how to change the data
representation dynamically. This is already difficult in a dynamically
typed language (you must introduce lots of kludges to convert the data
on the fly, probably on an as-needed basis), but in a statically typed
language this must be expressed at the level of types. Matthias'
argument would be that you need to do some formal reasoning anyway to
prove that, knowing the dataflow of your program, you have indeed
introduced all the needed conversions and kludges in your dynamically
typed program, and that this reasoning could be converted to some
kind of types, but this is going to be very hard. The most I know
about this is some work on data versioning, but it doesn't consider
modifications inside a running program.

I'm willing to concede you the point: there may be applications where
you want this ability to dynamically modify the internals of your
program, and, while knowing this is just going to be damned dangerous,
a fully dynamic (both types and binding) language is your only way to
be sure that you will be able to do all and every possible changes.
But these applications strike me as being of the high availability
kind, so that the very fact this is so dangerous may be a major
concern.

On the other hand, in the huge majority of cases, this feature is only
used during program development, and once you're done you compile and
optimize, and optimizing actually means losing most of the dynamic
binding to allow inlining.
In those cases, clever statically typed languages like ML compensate
for their staticness in various ways, for instance by allowing
efficient separate compilation as long as interfaces do not change.
You may have to restart your program, but you do not lose much time in
doing so. And since more bugs are caught by the type system, the need to
correct them is less frequent. You are also provided with an
interactive toplevel, which lets you change some of the definitions at
runtime, at least those functions and variables you have explicitly
declared as mutable. Static typing does not prevent you from running
and modifying interactively a GUI application, it just restricts the
extent of the modifications you can do.

(Contrary to Matthias I'm a purely static guy, but I've always been
attracted by those fancy dynamic development environments.)

---------------------------------------------------------------------------
Jacques Garrigue Kyoto University garrigue at kurims.kyoto-u.ac.jp
<A HREF=http://wwwfun.kurims.kyoto-u.ac.jp/~garrigue/>JG</A>
Jul 18 '05 #421

Joe Marshall <jr*@ccs.neu.edu> writes:

I think the static typers will be agree (but probably not be happy
with) this statement: There exist programs that may dynamically admit
a correct solution for which static analyzers are unable to prove that
a correct solution exists.


Agreed. However, if you allow the programmer to explicitly guide the
static analyzer with hints, I think the set of correct programs that
cannot be proved correct by a static analyzer with explicit programmer
hints is very small.

Type inference and type checking are different things. Inference will always
be incomplete or undecidable in ways that are probably quite annoying. Type
checking may be incomplete, but no more incomplete than modern
mathematics.
Jul 18 '05 #422
Jacques Garrigue <se*@my.signature> writes:

[ ... ]

Thanks for the detailed reply, Jacques! (I was contemplating a reply
of my own, but I think I have to start behaving again and cut down on
the time I waste on netnews. :-)
(Contrary to Matthias I'm a purely static guy, but I've always been
attracted by those fancy dynamic development environments.)


Do you mean that you don't have any dynamically typed skeletons in
your closet? My excuse is that I have been attracted by the static
side of the force all along, but for a long time I didn't understand
that this was the case... :-)

Matthias
Jul 18 '05 #423
"Marshall Spight" <ms*****@dnai.com> wrote in message news:<YBhmb.19492$Tr4.40240@attbi_s03>...
"Pascal Costanza" <co******@web.de> wrote in message news:bn**********@f1node01.rhrz.uni-bonn.de...

And that's all I wanted from the very beginning - static typing as an
additional tool, not as one that I don't have any other choice than use
by default.


I can get behind that idea! It strikes me as being better
than what one has now with either statically typed
languages or dynamically typed languages.

Then the addition of parameterized types to goo might interest you.

<http://www.csail.mit.edu/research/abstracts/abstracts03/dynamic-languages/03knight.pdf>

Also see the main goo page:
<http://www.ai.mit.edu/~jrb/goo/>
Jul 18 '05 #424
Pascal Bourguignon <sp**@thalassa.informatimago.com> writes:
Matthias Blume <fi**@my.address.elsewhere> writes:
Pascal Costanza <co******@web.de> writes:
> Computers are fast enough and have enough memory nowadays. You are
> talking about micro efficiency. That's not interesting anymore.
I have worked on projects where people worried about *every cycle*.
(Most of the time I agree with you, though. Still, using infinite
precision by default is, IMO, a mistake.


What are you writing about? Figments of your imagination or real
concrete systems?


He is talking about real concrete systems, e.g. Haskell or Lisp.
Where do you see "infinite precision by default" [in Lisp]?


We know that implementations of dynamically typed languages such as Lisp
represent small integers efficiently. So do good implementations of
statically typed languages in which arbitrary precision arithmetic is
the default. But in both cases, these implementations pay a price --
compared to statically typed languages using fixed precision arithmetic --
because of the possibility that adding two unknown word-sized values
may generate a result which no longer fits in a single word. The compiler
needs to generate extra code to cater for that possibility. Then in turn,
each subsequent operation needs to cater for the possibility that the input
will not fit in a word. The extra tests slow the code down, and the
extra code size reduces locality.

In a dynamically typed language, this price is normally not considered
to be much of an issue, because it has already been paid for up-front:
the cost is just the same cost as you would normally pay for *every*
data access in a dynamically typed language.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #425
Stephen J. Bevan wrote:
Pascal Costanza <co******@web.de> writes:
Are these algorithms reason enough to have machine word sized
numerical data types as the default for a _general purpose_ language?

I've no idea, I don't care that much what the default is since I
prefer to specify what the type/size should be if the compiler fails
to infer the one I wanted :-)


:-)
Pascal

Jul 18 '05 #426
Matthias Blume <fi**@my.address.elsewhere> writes:
(Contrary to Matthias I'm a purely static guy, but I've always been
attracted by those fancy dynamic development environments.)


Do you mean that you don't have any dynamically typed skeletons in
your closet? My excuse is that I have been attracted by the static
side of the force all along, but for a long time I didn't understand
that this was the case... :-)


To be honest, I have been for a long time a fan of Prolog. To choose
an untyped language, I prefer it (pseudo) intelligent! And you can
also do plenty of fun stuff with meta-programming in Prolog.

Maybe the switch came when I was (as an undergrad) assigned a project
to write a lazy Prolog interpreter in ML. This was so easy that I
didn't see the point of using Prolog afterwards...
Not that I pretend that good compilers for untyped languages are easy
to write. But at least type inference (even trivial) is a bit harder
than interpreters, which gives you this warm feeling that you're doing
some real work.

---------------------------------------------------------------------------
Jacques Garrigue Kyoto University garrigue at kurims.kyoto-u.ac.jp
<A HREF=http://wwwfun.kurims.kyoto-u.ac.jp/~garrigue/>JG</A>
Jul 18 '05 #427
Pascal Costanza <co******@web.de> writes:
...and it requires you to go to all the places where they are defined.

Yes, I know the answer: "But they should be all in one place." No,
they shouldn't need to be all in one place. For example, I might want
to place test code close to the definitions that they test. Or I might
want to organize them according to some other criteria.

No, it's not hard to find them all, then. I can use grep or my IDE to
find them. But that's still more work than just commenting them
out. If I seldom need to find all test cases, I can trade locality
of all test cases for some other possible advantages.

With a good IDE, "distance" should be the same as "semantic nearness",
if that term makes sense. In a good IDE, there already is an existing
way to browse all the tests, or it is easy to extend the IDE to allow
it. So things are in the "same place" whenever they have a semantic
attribute that the tools can index on. No matter how you lay out
tests, there is sure to be a way for a decent IDE to show you all the
tests.
-Lex
Jul 18 '05 #428
Fergus Henderson <fj*@cs.mu.oz.au> writes:
Why? If the collection happens to contain only elements of a single type
(or this type at most), you only need to check write accesses if they
violate this condition. As long as they don't, you don't need to check
read accesses.


So which, if any, implementations of dynamic languages actually perform such
optimizations?

I'm sure every implementation does this "optimization", because it is
simply less code. The only time you get a dynamic type error are:

1. You try to call a method, but the object has no such method.

2. You call a primitive function or method, and the primitive
balks. (This would include trying to write a string into an
array-of-byte.)
I suppose you could add this one, which also applies to statically
typed languages:

3. The code explicitly checks for type information.
If you are simply doing "x := y" then there is no checking required.
Regarding your earlier question, though, the great trick in Self was
to remember the result of a check and thus avoid doing it again
whenever possible. If you do "y := x + 1", and you determine that "x"
is a floating point number, then you know that "y" will also be a
floating point number immediately afterwards.
This points to a general observation. Dealing with the #1 style
dynamic type errors is a subset of dealing with dynamic dispatch in
general. If you try to execute "x + 1" or "(foo data-structure)", you
will need to locate which "+" method or which branch of foo's case
statement to execute. A dynamic type error means that you decide
to use method "typeError" or branch "type error". Furthermore, any
optimizations that get rid of these dynamic lookups, will also get
rid of type checks just by their nature. If "x + 1" always uses the
floating-point "+", then clearly it cannot ever use the "typeError"
version of "+".
-Lex

Jul 18 '05 #429
Lex Spoon wrote:
Pascal Costanza <co******@web.de> writes:
...and it requires you to go to all the places where they are defined.

Yes, I know the answer: "But they should be all in one place." No,
they shouldn't need to be all in one place. For example, I might want
to place test code close to the definitions that they test. Or I might
want to organize them according to some other criteria.

No, it's not hard to find them all, then. I can use grep or my IDE to
find them. But that's still more work than just commenting them
out. If I seldom need to find all test cases, I can trade locality
of all test cases for some other possible advantages.


With a good IDE, "distance" should be the same as "semantic nearness",
if that term makes sense. In a good IDE, there already is an existing
way to browse all the tests, or it is easy to extend the IDE to allow
it. So things are in the "same place" whenever they have a semantic
attribute that the tools can index on. No matter how you lay out
tests, there is sure to be a way for a decent IDE to show you all the
tests.


...and if the IDE is already that smart, why should it still require me
to comment out code just to make some other part of the program run?

A programming language environment should make programming as convenient
as possible, not in some areas convenient and in some other arbitrary
areas less convenient.

(And if you think that static type checking makes programming more
convenient, then yes, why not? Add that as an additional option! But
make it possible to switch it on or off on demand!)
Pascal

Jul 18 '05 #430
Lex Spoon <le*@cc.gatech.edu> writes:
Fergus Henderson <fj*@cs.mu.oz.au> writes:
[someone wrote:]
Why? If the collection happens to contain only elements of a single type
(or this type at most), you only need to check write accesses if they
violate this condition. As long as they don't, you don't need to check
read accesses.
So which, if any, implementations of dynamic languages actually perform such
optimizations?


I'm sure every implementation does this "optimization", because it is
simply less code.


You're wrong. I think you misunderstood what optimization I'm talking about.
The only time you get a dynamic type error are:

1. You try to call a method, but the object has no such method.
Calling a method is a read access. We were discussing optimizations that
ensure that you *don't* need to do a dynamic check for each read access.
If you are simply doing "x := y" then there is no checking required.
Yes, we covered that already. But that's not what is happening in
the scenario that I was describing. The scenario that I'm describing is

Collection c;

...
foreach x in c do
use(x);

where use(x) might be a method call, a field access, or similar.
For example, perhaps the collection is a set of integers, and you
are computing their sum, so use(x) would be "sum += x".

I think that in these situations, dynamically typed language
implementations will do O(N) dynamic type checks, where N is the number
of elements in "c". In theory it is possible to optimize these away,
but I don't know of any such implementations that do, and I would be
surprised if there are any (except perhaps in limited cases, e.g. when
the collection is an array or is implemented using an array).
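
For contrast, a Haskell sketch of my own showing the statically typed
version of such a loop: the list type fixes every element to Int, so
the compiled code needs no per-element tag checks.

sumInts :: [Int] -> Int
sumInts = foldl (+) 0
-- Every element is known to be an Int at compile time; the only
-- runtime tests left are the end-of-list checks.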
Regarding your earlier question, though, the great trick in Self was
to remember the result of a check and thus avoid doing it again
whenever possible. If you do "y := x + 1", and you determine that "x"
is a floating point number, then you know that "y" will also be a
floating point number immediately afterwards.
Sure. That helps an implementation avoid checking the type of the collection
"c" at every element access. But it doesn't help the implementation avoid
checking the type of the element "x" at each iteration of the loop.
This points to a general observation. Dealing with the #1 style
dynamic type errors is a subset of dealing with dynamic dispatch in
general. [....] any optimizations that get rid of these dynamic lookups,
will also get rid of type checks just by their nature.


That's true. But the difference between dynamically typed languages and
statically typed languages is that in dynamically typed languages, *every*
data access (other than just copying data around) involves a dynamic dispatch.
Sure, implementations can optimize a lot of them away. But generally you're
still left lots that your implementation can't optimize away, but which
would not be present in a statically typed language, such as the O(N)
dynamic type checks in the example above.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #431
Hi Fergus Henderson,
Yes, we covered that already. But that's not what is happening in
the scenario that I was describing. The scenario that I'm describing is

Collection c;

...
foreach x in c do
use(x);

where use(x) might be a method call, a field access, or similar.
For example, perhaps the collection is a set of integers, and you
are computing their sum, so use(x) would be "sum += x".

I think that in these situations, dynamically typed language
implementations will do O(N) dynamic type checks, where N is the number
of elements in "c". In theory it is possible to optimize these away,
but I don't know of any such implementations that do, and I would be
surprised if there are any (except perhaps in limited cases, e.g. when
the collection is an array or is implemented using an array).


I've implemented a collection of integers as a list:

* (disassemble
(compile nil
(lambda ()
(declare (optimize (safety 0)))
(let ((c '(1 2 3 4 5 6 7 8 9 10)))
(loop for x of-type (integer 1 10) in c
sum x of-type fixnum)))))

; Compiling lambda nil:
; Compiling Top-Level Form:

482C1160: .entry "lambda nil"() ; (function nil fixnum)
78: pop dword ptr [ebp-8]
7B: lea esp, [ebp-32]
7E: xor eax, eax ; No-arg-parsing entry point
80: mov ecx, [#x482C1158] ; '(1 2 3 4 ...)
86: xor edx, edx
88: jmp L1
8A: L0: mov eax, [ecx-3]
8D: mov ecx, [ecx+1]
90: add edx, eax
92: L1: cmp ecx, #x2800000B ; nil
98: jne L0
9A: mov ecx, [ebp-8]
9D: mov eax, [ebp-4]
A0: add ecx, 2
A3: mov esp, ebp
A5: mov ebp, eax
A7: jmp ecx

No type checks, and 32-bit assembly arithmetic (but the type bits are
still present: most-positive-fixnum is 536,870,911).
Regarding your earlier question, though, the great trick in Self was
to remember the result of a check and thus avoid doing it again
whenever possible. If you do "y := x + 1", and you determine that "x"
is a floating point number, then you know that "y" will also be a
floating point number immediately afterwards.


Sure. That helps an implementation avoid checking the type of the collection
"c" at every element access. But it doesn't help the implementation avoid
checking the type of the element "x" at each iteration of the loop.


Lisp provides standard ways to supply declarations. Some implementations
will trust those declarations in order to optimise the code at compile
time.
This points to a general observation. Dealing with the #1 style
dynamic type errors is a subset of dealing with dynamic dispatch in
general. [....] any optimizations that get rid of these dynamic lookups,
will also get rid of type checks just by their nature.


That's true. But the difference between dynamically typed languages and
statically typed languages is that in dynamically typed languages, *every*
data access (other than just copying data around) involves a dynamic dispatch.
Sure, implementations can optimize a lot of them away. But generally you're
still left lots that your implementation can't optimize away, but which
would not be present in a statically typed language, such as the O(N)
dynamic type checks in the example above.


Again, Lisp provides standard ways to supply declarations. Some
implementations will trust those declarations in order to optimise the
code at compile time, and use them for type inference.

Followup-To is set as comp.lang.lisp.

Regards,
Adam
Jul 18 '05 #432
Fergus Henderson <fj*@cs.mu.oz.au> writes:
But the difference between dynamically typed languages and
statically typed languages is that in dynamically typed languages, *every*
data access (other than just copying data around) involves a dynamic dispatch.
Sure, implementations can optimize a lot of them away. But generally you're
still left lots that your implementation can't optimize away, but which
would not be present in a statically typed language, such as the O(N)
dynamic type checks in the example above.


That's what the type-checking hardware is for.
Jul 18 '05 #433
Pascal Costanza <co******@web.de> writes:
A type "error" detected at compile-time doesn't imply that the
program will fail.
Actually it does, in a statically typed language.


No, it doesn't.


Well, it's rather a meaningless question since all statically-typed
languages define their semantics only for well-typed expressions, and
so something like

f :: Int -> Int
x :: String
x = "hello"
f x

has no semantics, and so cannot be said to 'fail' or 'not fail'. But
it seems pretty clear that if you did bypass the type checking and
compile such code, it would go wrong as soon as it was run.
If you write a function which expects a Boolean and you pass it a
string instead, it's going to fail one way or another.


Yes, that's one example. This doesn't mean that this implication
always holds. What part of "doesn't imply" is the one you don't
understand?


I just gave one example - 'Boolean' and 'string' in the above are just
examples of one possible type mismatch. But the same holds for any
other type mismatch. You cannot call a function defined for type X
and pass a value of type Y, where Y is not an instance of X.
I don't care for unreachable code in this specific context. A part of
the program that passes a value of type "don't know" to a variable of
type "don't know" might be unacceptable to a static type sytem,
Actually, an expressive static type system will allow this:

f :: a -> a

f takes a parameter 'don't know' and returns a result of the same
type. Or you can have the even more general a -> b, any type to any
type. Such a function isn't especially useful however, since if you
know nothing at all about what the type supports (eg, not even
equality might be defined) then you can't promise much about the
return value.
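
Concretely, a Haskell sketch of my own:

f :: a -> a
f x = x       -- essentially the only total function of this type

g :: a -> b
g x = g x     -- type-checks, but can only loop forever: knowing
              -- nothing about b, it cannot produce a value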
but might still not throw any exceptions at all at runtime.

(I do mean a type error and not a type 'error' - obviously if you
have some mechanism to catch exceptions caused by passing the wrong
type, you wouldn't want this to be checked at compile time.)


Exactly.


Some statically-typed languages do support this, for example Haskell's
Dynamic library, but you have to ask for it explicitly. For me, the
common case of a type error is when I've simply made a mistake, and I
would like as much help as possible from the computer to catch the
mistake as early as possible. But one might want to suppress the
checking occasionally.

(However, with a more expressive type system you don't so often feel
the need to suppress it altogether.)
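
For reference, a small sketch of that kind of facility in use, based on
GHC's Data.Dynamic (the example values are mine):

import Data.Dynamic

-- Values of different types wrapped in one Dynamic type; the
-- type check is deferred to runtime via fromDynamic.
mixed :: [Dynamic]
mixed = [toDyn (42 :: Int), toDyn "hello"]

ints :: [Int]
ints = [i | Just i <- map fromDynamic mixed]

main :: IO ()
main = print ints   -- prints [42]; the String is silently skipped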

--
Ed Avis <ed@membled.com>
Jul 18 '05 #434
pr***********@comcast.net writes:
What would you like to call those `syntactic constructs' that do not
currently type check in Haskell, yet *may* belong to a class of
syntactic constructs that *could* conceivably be type checked in a
successor to Haskell that has a better type inferencing engine?


Well of course, if you program in a different language you'd need a
different type checking system.

--
Ed Avis <ed@membled.com>
Jul 18 '05 #435
Ed Avis <ed@membled.com> writes:
pr***********@comcast.net writes:
What would you like to call those `syntactic constructs' that do not
currently type check in Haskell, yet *may* belong to a class of
syntactic constructs that *could* conceivably be type checked in a
successor to Haskell that has a better type inferencing engine?


Well of course, if you program in a different language you'd need a
different type checking system.


Obviously.

But let us suppose that someone improved the type system of Haskell
such that some useful complicated constructs that did not pass the
type checker were now able to be verified as correct. Didn't this
happen when Haskell was extended with second-order polymorphic types?
(when the restriction on forall was lifted?)

You could say that lifting this restriction created a `new' language
and refuse to admit the notion that two are related, or you might take
the viewpoint that some programs that were invalid before are now
valid. The former point becomes rather strained if you have some sort
of switch on the implementation.

For instance, OCaml allows recursive function types if you specify
that you want them. Most people seem to view this as

`OCaml with recursive function types'

instead of

`completely new language unrelated to OCaml, but sharing the exact
same syntax in all places except for declaration of recursive function
types.'

And most people seem to think that my `black hole' `syntactic
construct', which does not type check under OCaml without the flag,
but *does* under OCaml with the flag, can be unambiguously called a
`program'.
Jul 18 '05 #436
pr***********@comcast.net writes:
But let us suppose that someone improved the type system of Haskell
such that some useful complicated constructs that did not pass the
type checker were now able to be verified as correct.


Wouldn't you need to define the semantics for these constructs too?
And perhaps extend the compiler to generate code for them?

My original point was that the type-checker won't reject programs
which are valid Haskell, so it makes no sense to talk about the
checker being too strict or not allowing enough flexibility. A
type-checker for some other language such as Lisp would obviously have
to not flag errors for any legal Lisp program. (That would probably
mean not checking anything at all, with the programmer having to
explicitly state 'yes I don't want to wait until runtime to catch this
error'.)

--
Ed Avis <ed@membled.com>
Jul 18 '05 #437
Ed Avis <ed@membled.com> writes:
pr***********@comcast.net writes:
But let us suppose that someone improved the type system of Haskell
such that some useful complicated constructs that did not pass the
type checker were now able to be verified as correct.


Wouldn't you need to define the semantics for these constructs too?
And perhaps extend the compiler to generate code for them?

My original point was that the type-checker won't reject programs
which are valid Haskell, so it makes no sense to talk about the
checker being too strict or not allowing enough flexibility.


So any program that currently runs on Haskell will run on the very
first version of Haskell? No? But Haskell won't reject programs that
are valid Haskell, so is the later version wrong?
Jul 18 '05 #438
pr***********@comcast.net writes:
Fergus Henderson <fj*@cs.mu.oz.au> writes:
But the difference between dynamically typed languages and
statically typed languages is that in dynamically typed languages, *every*
data access (other than just copying data around) involves a dynamic dispatch.
Sure, implementations can optimize a lot of them away. But generally you're
still left lots that your implementation can't optimize away, but which
would not be present in a statically typed language, such as the O(N)
dynamic type checks in the example above.


That's what the type-checking hardware is for.


Did you forget a smiley?

In case not: type-checking hardware has been tried already, and failed.

(Anyway, type-checking hardware would only solve part of the problem, I think.
Dynamic type checking imposes two costs: one is the time cost of performing
the checks, and the other is the locality cost due to the code size increase.
Type-checking hardware avoids the code size increases, but I don't think it
helps with the time cost of performing the checks.)

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #439
pr***********@comcast.net wrote:

And most people seem to think that my `black hole' `syntactic
construct', which does not type check under OCaml without the flag,
but *does* under OCaml with the flag, can be unambiguously called a
`program'.


I'm still hoping for an explanation on the practical relevance of that
black hole device.
I don't insist that the black hole function as given does anything
useful; if the black-hole function was just a condensed example of a
general usage pattern, I'd like to know what that pattern is. It would
help me to find out where in the expressivity spectrum the various
languages lie.

Regards,
Jo

Jul 18 '05 #440
Fergus Henderson <fj*@cs.mu.oz.au> writes:
pr***********@comcast.net writes:
Fergus Henderson <fj*@cs.mu.oz.au> writes:
But the difference between dynamically typed languages and
statically typed languages is that in dynamically typed languages, *every*
data access (other than just copying data around) involves a dynamic dispatch.
Sure, implementations can optimize a lot of them away. But generally you're
still left lots that your implementation can't optimize away, but which
would not be present in a statically typed language, such as the O(N)
dynamic type checks in the example above.
That's what the type-checking hardware is for.


Did you forget a smiley?


No, I never use smileys.
In case not: type-checking hardware has been tried already, and failed.
News to me. I've used type checking hardware and it works like a charm.
(Anyway, type-checking hardware would only solve part of the problem, I think.
Dynamic type checking imposes two costs: one is the time cost of performing
the checks, and the other is the locality cost due to the code size increase.
Type-checking hardware avoids the code size increases, but I don't think it
helps with the time cost of performing the checks.)


Actually it handles performing the checks quite well. In general,
type checking is much quicker than computation, and in general it can
be performed in parallel with computation (you simply discard the
bogus result if it fails). You don't need very much hardware, either.


Jul 18 '05 #441
Joachim Durchholz <jo***************@web.de> writes:
pr***********@comcast.net wrote:
And most people seem to think that my `black hole' `syntactic
construct', which does not type check under OCaml without the flag,
but *does* under OCaml with the flag, can be unambiguously called a
`program'.


I'm still hoping for an explanation on the practical relevance of that
black hole device.


Neel Krishnaswami had a wonderful explanation in article
<sl******************@gs3106.sp.cs.cmu.edu>

Jul 18 '05 #442
pr***********@comcast.net writes:
My original point was that the type-checker won't reject programs
which are valid Haskell, so it makes no sense to talk about the
checker being too strict or not allowing enough flexibility.


So any program that currently runs on Haskell will run on the very
first version of Haskell?


I should have said 'Haskell 98' and 'a type-checker in a working
Haskell 98 implementation'.

The same applies to any language, of course - Haskell 98 is just an
example. What I mean is don't shoot the messenger if the type checker
tells you your program is invalid. You might however want to change
to a different language (such as a later version of Haskell, or one
with nonstandard extensions) where your program is valid (and of
course the typechecker for that implementation will be happy).

--
Ed Avis <ed@membled.com>
Jul 18 '05 #443

pr***********@comcast.net writes:
Fergus Henderson <fj*@cs.mu.oz.au> writes:
pr***********@comcast.net writes:
Fergus Henderson <fj*@cs.mu.oz.au> writes:

(Anyway, type-checking hardware would only solve part of the problem, I think.
Dynamic type checking imposes two costs: one is the time cost of performing
the checks, and the other is the locality cost due to the code size increase.
Type-checking hardware avoids the code size increases, but I don't think it
helps with the time cost of performing the checks.)


Actually it handles performing the checks quite well. In general,
type checking is much quicker than computation, and in general it can
be performed in parallel with computation (you simply discard the
bogus result if it fails). You don't need very much hardware, either.


As is also amply demonstrated by ECC hardware that operates right
alongside the memory, which is considerably easier than doing it in software.

Gregm
Jul 18 '05 #444
>>>It sounds unbelievable, but it really works.
I believe you. I have trouble swallowing claims like `It is never
wrong, always completes, and the resulting code never has a run-time
error.' or `You will never need to run the kind of code it doesn't allow.'


This kind of claim is usually just a misunderstanding.
For example, the above claim indeed holds for HM typing - for the
right definitions of "never wrong" and "never has an error".

HM typing "is never wrong and never has a run-time error" in the
following sense: the algorithm will never allow an ill-typed program
to pass, and there will never be a type error at run-time. However,
people tend to overlook the "type" bit in the "type error" term, at
which point the discussion quickly degenerates into discourses of
general correctness.

It is misleading to make this claim without a lot of qualification.
It requires careful, technical definitions of "type" and "type error"
that are different from what an unprepared audience will expect.
For example, it is not considered a type error if you get the wrong
branch of a datatype. So if you define:

datatype sexp = Atom of string | Cons of sexp * sexp

fun car (Cons (a,b)) = a

then the following would not be considered a "type error":

car (Atom "hello")
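
(The same point restated in Haskell, as an illustrative sketch: the partial
function passes the type checker, and applying it to the wrong branch fails
at run time with a pattern-match error, not a type error.)

data Sexp = Atom String | Cons Sexp Sexp

car :: Sexp -> Sexp
car (Cons a _) = a   -- accepted; the match is merely non-exhaustive

-- car (Atom "hello") type checks, but raises a pattern-match
-- failure when evaluated.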

To add to the situation, HM flags extra errors, too, that many people
would not consider "type errors" but which count as such for HM's purposes. For
example, it is considered a type error if two branches of an "if" do
not match, even if one branch is impossible or if the later code can
remember which branch was followed. For example:

val myint : int =
if true
then 0
else "hello"
or more interestingly:

val (tag, thingie) =
if (whatever)
then (0, 1)
else (1, 1.0)

val myotherstuff =
if tag = 0
then (real thingie) + 1.5
else thingie + 1.5

In common parlance, as opposed to the formal definitions of "type
error", HM both overlooks some type errors and adds some others. It
is extremely misleading to claim, in a non-technical discussion, that
HM rejects precisely those programs that have a type error. The
statement is actually false if you use the expected meanings of "type
error" and "type".
All this said, I agree that HM type inference is a beautiful thing and
that it has significant benefits. But the benefit of removing type
errors is a red herring--both in practice and, as described in this
post, in theory as well.

-Lex
Jul 18 '05 #445
pr***********@comcast.net wrote:
Neel Krishnaswami had a wonderful explanation in article
<sl******************@gs3106.sp.cs.cmu.edu>


Sorry, that link doesn't work for me, I don't know the proper syntax for
news: links, and I couldn't type one in even if I knew it.

Anybody got a full reference? This thread is too long for searching...

Regards,
Jo

Jul 18 '05 #446
"Joachim Durchholz" asks:
pr***********@comcast.net wrote:
Neel Krishnaswami had a wonderful explanation in article
<sl******************@gs3106.sp.cs.cmu.edu>


Sorry, that link doesn't work for me, I don't know the proper syntax for
news: links, and I couldn't type one in even if I knew it.


I prepend:

http://groups.google.com/groups?selm=
in this case yielding

http://groups.google.com/groups?selm....sp.cs.cmu.edu

Emile van Sebille
em***@fenx.com
Jul 18 '05 #447
Lex Spoon <le*@cc.gatech.edu> wrote:
To add to the situation, HM flags extra errors, too, that many people
would not consider "type errors" but which are for HM's purposes. For
example, it is considered a type error if two branches of an "if" do
not match, even if one branch is impossible or if the later code can
remember which branch was followed. [...]

val (tag, thingie) =
if (whatever)
then (0, 1)
else (1, 1.0)

val myotherstuff =
if tag = 0
then (tofloat thingie) + 1.5
else thingie + 1.5


The point here is of course that you "glue together" the tag
and the value, with the additional side effect that this documents
your intention. So you would write in this case

data Thingie = Tag0 Integer | Tag1 Float

and then you can write

myfirststuff whatever = if whatever then Tag0 1 else Tag1 1.0

myotherstuff (Tag0 thingie) = (fromInteger thingie) + 1.5
myotherstuff (Tag1 thingie) = thingie + 1.5

Then the type checker will happily infer that

myfirststuff :: Bool -> Thingie
myotherstuff :: Thingie -> Float
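
For example, a small test harness for the definitions above (hypothetical,
but it type checks and runs):

main :: IO ()
main = do
  print (myotherstuff (myfirststuff True))   -- 2.5, via Tag0 1
  print (myotherstuff (myfirststuff False))  -- 2.5, via Tag1 1.0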

So you indeed have to express your tags a bit differently. Is this
asking too much? Is it so inconvenient, especially when you get
good documentation of your intentions for free?

- Dirk
Jul 18 '05 #448
Emile van Sebille wrote:
"Joachim Durchholz" asks:
pr***********@comcast.net wrote:
Neel Krishnaswami had a wonderful explanation in article
<sl******************@gs3106.sp.cs.cmu.edu>


Sorry, that link doesn't work for me, I don't know the proper syntax for
news: links, and I couldn't type one in even if I knew it.

I prepend:

http://groups.google.com/groups?selm=


Got it - thanks!

Regards,
Jo

Jul 18 '05 #449
Pascal Costanza <co******@web.de> writes:
- The important thing here is that the EMPLOYED mixin works on any class,
even one that is added later on to a running program. So even if you
want to hire martians some time in the future you can still do this.


What happens if the existing class defines a slot named "salary" or "company",
but with a different meaning? Are slot names global, or is there some sort
of namespace control to prevent this kind of accidental name capture?

Anyway, regarding how to write this example in a statically typed
language: you can do this in a quite straightforward manner,
by just keeping a separate table of employees.
For example, here it is in Java.

import java.util.*;
public class Employed {
static String default_company = "constanz-inc";

static class Employee {
public Object obj;
public String company;
public int salary;
public Employee(Object o, String c, int s) {
company = c;
salary = s;
obj = o;
}
}

static Hashtable employees = new Hashtable();

static void hire(Object obj, int salary) {
hire(obj, salary, default_company);
}
static void hire(Object obj, int salary, String company) {
employees.put(obj, new Employee(obj, company, salary));
}
static void fire(Object obj) {
employees.remove(obj);
}

static void test_employed() {
class Person {
public String name;
Person(String n) { name = n; }
};
Person joe = new Person("joe");
System.out.println("-> hire joe");
hire(joe, 60000);
System.out.println("name: " + joe.name);
System.out.println("class: "
+ joe.getClass().getName());
Employee e = (Employee) employees.get(joe);
System.out.println("employed: " +
(e != null ? "yes" : "no"));
System.out.println("company: " + e.company);
System.out.println("salary: " + e.salary);
System.out.println("-> fire joe");
fire(joe);
if (employees.containsKey(joe)) {
System.out.println("joe is still employed.");
} else {
System.out.println(
"joe is not employed anymore.");
}
}

public static void main(String args[]) {
test_employed();
}
}

As you can see, there's no need here for dynamically changing the types of
objects at runtime or for creating classes at runtime. But you can employ
Martians or any other object.

This example makes use of one dynamic cast; that's because the Java
type system doesn't support generics / parametric polymorphism. It would
be a little nicer to do this in a language which supported generics, then
we could use `Hashtable<Object, Employee>' rather than just `Hashtable',
and there wouldn't be any need for the dynamic cast to `(Employee)'.
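
The same separate-table idea is just as direct in a language that does have
parametric polymorphism. Here is a minimal Haskell sketch (keying employees
by name is an assumption made for the sketch, since some key type is needed
in place of Java's object identity):

import qualified Data.Map as Map

data Employee = Employee { company :: String, salary :: Int }

-- The table is statically typed, so no dynamic cast is ever needed.
type Employees = Map.Map String Employee

hire :: String -> Int -> String -> Employees -> Employees
hire name sal comp = Map.insert name (Employee comp sal)

fire :: String -> Employees -> Employees
fire = Map.delete

-- Map.lookup "joe" table :: Maybe Employee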

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #450
